Abstract
The human action observation network (AON) encompasses brain areas consistently engaged when we observe others' actions. Although the core nodes of the AON are present from childhood, it is not known to what extent they are sensitive to different action features during development. Because social cognitive abilities continue to mature during adolescence, the AON response to socially-oriented actions, but not to object-related actions, may differ between adolescents and adults. To test this hypothesis, we used functional magnetic resonance imaging (fMRI) to scan male and female typically-developing teenagers (n = 28; 13 females) and adults (n = 25; 14 females) while they passively watched videos of manual actions varying along two dimensions: sociality (i.e., directed toward another person or not) and transitivity (i.e., involving an object or not). We found that action observation recruited the same fronto-parietal and occipito-temporal regions in adults and adolescents. The modulation of voxel-wise activity according to the social or transitive nature of the action was similar in both groups of participants. Multivariate pattern analysis, however, revealed that decoding accuracies in the intraparietal sulcus (IPS)/superior parietal lobe (SPL) for both sociality and transitivity were lower in adolescents than in adults. In addition, in the lateral occipital temporal cortex (LOTC), generalization of decoding across the orthogonal dimension was lower in adolescents for sociality only. These findings indicate that the representation of the content of others' actions in the adolescent AON, and in particular of their social dimension, is not yet as robust as in adults.
SIGNIFICANCE STATEMENT The activity of the action observation network (AON) in the human brain is modulated according to the purpose of the observed action, in particular the extent to which it involves interaction with an object or with another person. How this conceptual representation of actions is implemented during development is largely unknown. Here, using multivoxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, we discovered that, while the action observation network is in place in adolescence, the fine-grained organization of its posterior regions is less robust than that of adults in decoding the abstract social dimension of an action. This finding highlights the late maturation of social processing in the human brain.
Introduction
When we observe other people's actions, a set of brain areas is consistently engaged, contributing to our social interaction capabilities. The action observation network (AON) comprises premotor, inferior frontal and parietal regions, which are predominantly associated with action planning, as well as the superior temporal sulcus (STS) and occipito-temporal areas, which are predominantly associated with high-level visual processing (Buccino et al., 2001; Downing et al., 2001; Carr et al., 2003; for meta-analyses, see Grosbras et al., 2012). The AON supports not only the representation of low-level aspects of actions (e.g., kinematics) but also high-level aspects (e.g., goal, intention) indexing abstract or conceptual knowledge about the observed action. Notably, watching instrumental (transitive) actions that involve manipulating an object, compared with watching intransitive actions that involve gestures (communicative, symbolic, or meaningless), is associated with enhanced activity in ventral premotor and anterior parietal regions, which are considered homologs of mirror-neuron regions in the monkey (Grosbras et al., 2012; Kilner and Lemon, 2013). Likewise, activity in different parts of the AON is modulated depending on whether the observed actions involve another agent (Centelles et al., 2011; Becchio et al., 2012; Saggar et al., 2014; Isik et al., 2017; Walbrin et al., 2018; Wurm and Caramazza, 2019; Tarhan and Konkle, 2020). Wurm et al. (2017) examined the cortical representations of the social and transitive dimensions of actions within a single experiment and reported involvement, for both dimensions, of the occipito-temporal and superior temporal cortices, albeit with some topographical specificities, that generalized across a variety of perceptually divergent actions. Thus, abstract knowledge of actions seems to be encoded in various parts of the AON, with specificities as a function of whether this knowledge concerns objects (transitivity) or persons (sociality).
This raises the question of how this organization comes about during development. Responses of sensorimotor and temporal areas of the AON are present as early as three to five months of age (Shimada and Hiraki, 2006; Kosakowski et al., 2022). Functional magnetic resonance imaging (fMRI) studies in children (seven years and older) and adolescents have shown that all AON nodes can be identified (Ohnishi et al., 2004; Shaw et al., 2011, 2012; Pokorny et al., 2015). Some results suggest weaker activation in children relative to adults (Biagi et al., 2016; Morales et al., 2019). Yet, it is uncertain whether adolescents engage the AON to the same extent as adults. More importantly, to date, no study has investigated whether the representation of the content of observed actions is the same as in adults. This question is particularly relevant for adolescents, given the ongoing maturation of psychological and brain processes for perceiving others (Scherf et al., 2007; Mills et al., 2014; Ross et al., 2014).
Furthermore, since knowledge of objects and of actions related to them seems to develop faster than knowledge of social relationships (Hunnius and Bekkering, 2010; Clay et al., 2018), and since the brain representation of objects matures earlier than that of people (Golarai et al., 2007; Scherf et al., 2007), one may hypothesize that, in adolescence, the representation of the social, but not the transitive, dimension of actions in the AON is immature.
We designed an fMRI experiment in which adolescents (13–17 years old) and adults passively watched short videos of actions that varied in their social or transitive nature. We chose to focus on this age range, considered mid-adolescence, because it is an age at which action control, including predictive adaptation of actions to object properties (Martel et al., 2020), and basic biological-motion processing (Hadad et al., 2011) have reached adult level.
Our goals were (1) to fill the gap in the developmental brain imaging literature on the adolescent AON and test whether regional activity for action observation in general, and its difference between action categories, would be weaker in adolescents than in adults; and (2) to probe the maturity of local representations of conceptual dimensions of action, and test whether the information carried by the local functional organization of neurons is as discriminative in adolescents as it is in adults.
Materials and Methods
Participants
Twenty-eight typically developing adolescents aged 13–17 years (Mage = 15.1, SD = 1.26; 13 females; 27 right-handed) were enrolled in the study. They completed the Pubertal Development Scale (PDS; Petersen et al., 1988), a sex-specific eight-item self-report measure of physical development based on Tanner stages (Tanner, 1962). Adolescents answered questions concerning their physical development (e.g., growth in stature, breast development, pubic hair) and, on the basis of their answers, were assigned to one of the following categories of pubertal status: mid-pubertal (Tanner stage 3, n = 9), advanced pubertal (stage 4, n = 13), and postpubertal (stage 5, n = 6). Twenty-five adult volunteers (Mage = 26.6, SD = 2.02, range = 24–33 years old; 14 females; 22 right-handed) were also recruited for the study. We determined these sample sizes in advance based on results of previous fMRI studies (including our own) that used similar or smaller numbers of participants per group during passive viewing of social stimuli and identified significant task-related activity in adolescents (and adults) as well as age-group differences (Golarai et al., 2007; Morales et al., 2019; Ross et al., 2019; Afyouni et al., 2021).
All participants reported being healthy and typically developing, had normal or corrected-to-normal vision, and reported no history of neurologic or psychiatric disorder. All participants signed a written information and consent form. Written consent was also obtained from the adolescents' parents. The study was in line with the Declaration of Helsinki and was approved by the local Ethics Committee.
Inclusion in the final sample required that head motion during scanning did not exceed 2-mm displacement between consecutive volumes on 90% of volumes for each run. Data from one male adolescent were excluded based on this criterion. Data from one adult were also excluded following technical problems during fMRI scanning.
Stimuli
The stimuli consisted of 128 dynamic scenes, each representing the same background with two persons, among four possible actors, facing each other across a table, seen from the side (i.e., one actor on each side of the screen). Only the arms and hands of the persons were visible. Different objects were placed on the table. Only one of the two persons produced an action with her/his right or left arm. There was no physical contact between the two persons. To further increase perceptual variability, the actions could be performed by the actor sitting on the left or right side of the table and the film could be taken from two slightly different angles.
The stimuli were grouped into four classes, based on whether the action depicted involved the other person or not (Social or Nonsocial) and whether it involved an object or not (Transitive or Intransitive). Each class included four types of action (total 16 actions): (1) Social Transitive (ST): give pen, take pen, give book, and take book; (2) Nonsocial Transitive (NT): write with pencil, erase with pencil end, open book, and close book; (3) Social Intransitive (SI): thumb up “agree” gesture, thumb down gesture, “come” hand gesture, and “go away” hand gesture; and (4) Nonsocial Intransitive (NI): stroke own arm with finger, scratch arm with finger, stroke arm with hand, and scratch arm with hand (see Fig. 1A).
A, Stimuli used in the present study. They vary along two dimensions: sociality (social, nonsocial) and transitivity (transitive, intransitive), leading to four distinct categories of actions: Social Transitive (ST), Social Intransitive (SI), Nonsocial Transitive (NT), and Nonsocial Intransitive (NI). Classes of actions are defined as follows: Give: the actor moves an object from his/her peri-personal space toward the peri-personal space of the passive actor; Take: the reverse of Give; Open: the actor opens the notebook; Close: the reverse of Open; Rub: the actor moves the eraser on the notebook with rapid oscillatory movements; Write: the actor writes on the notebook with the pencil; Agree: the actor indicates with a gesture (i.e., thumb up) to the passive actor that he agrees with him/her; Disagree: the same with thumb down, to signify disagreement; Come: the actor indicates with his/her hand to the passive actor to come closer; Go away: the reverse of Come; Stroke: the actor strokes his/her forearm with his/her opposite hand; Scratch: the actor scratches his/her forearm with his/her opposite hand. B, Schematic depiction of the sequence of events in a representative session. C, Behavioral ratings obtained during the fMRI sessions according to the action dimension (sociality and transitivity) and the age group (adults and adolescents). D, Motion magnitude mean values for each class of action.
To construct the stimuli, we recorded original videos of ∼3 s (30 frames per second) with a resolution of 640 × 480 pixels. All videos were visually inspected with the mpv media player (available from https://mpv.io/) to determine the onset and duration of each action (from beginning to end of hand movement). Action duration was homogenized by slightly speeding up or slowing down individual videos (by a factor between 0% and 20%, mean 7%, i.e., duplicating or removing five frames), so that on average actions lasted 2.83 s with an SD of 197 ms and no differences between action classes. To ensure that this manipulation was not perceived subjectively, the original and modified videos were presented randomly to an independent sample of 17 subjects (Mage = 22.41, SD = 5.67, 13 females) who were instructed to judge whether each video had been edited or not. We found that the status of the videos (modified or not) and the estimation made by the participants (modified or not) were independent (χ2 = 0.34, df = 1, p = 0.56, Yates' correction), indicating that participants perceived modified and nonmodified videos similarly. Total video duration was 3.16 s, including frames with the still background and actors before and after the execution of any action. Three videos of the same action class, but with different actors and perspectives, were combined to create final trial stimuli lasting 9.5 s, which were used in a block design. All video editing was performed using ffmpeg (version 3.2, available from http://ffmpeg.org/) and in-house Python scripts.
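For illustration, a minimal sketch of this independence check, assuming a 2 × 2 contingency table of counts (video status × participant judgment); the counts below are placeholders, not the study's data.

```python
# Independence check between video status (modified or not) and participants'
# judgments, using a chi-square test with Yates' correction.
# The counts below are placeholders, not the study's actual data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: video actually modified / not modified
# Columns: judged as modified / judged as not modified
contingency = np.array([[34, 30],
                        [32, 31]])

chi2, p, dof, expected = chi2_contingency(contingency, correction=True)  # Yates' correction
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```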
The spatial amplitude of motion was inevitably different for each class of action. For instance, the social action "thumb down" implies a large gesture of the arm, whereas the nonsocial action "scratch" implies a local gesture with low arm amplitude. As a consequence, the global and local visual motion differed across classes of action. In order to quantify, and control for in subsequent analyses, the potential effects of these interclass differences, we used a program developed in-house in Python with the library OpenCV (Open Source Computer Vision Library; https://opencv.org/) to compute, for each video frame, the number of pixels that changed intensity relative to the preceding frame (i.e., optical flow with the Farneback method). The total number of changing pixels was then divided by the total number of frames to obtain a score of motion magnitude. Videos of social actions involved more visual motion than videos of nonsocial actions (see Fig. 1D). We thus used this motion magnitude score as a regressor of noninterest in the analysis of brain activity, to account for this potential confound (for more details, see below, Univariate fMRI analysis).
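The following is a hedged sketch of how such a per-video motion-magnitude score could be computed with OpenCV's Farneback optical flow; the flow parameters, threshold, and file name are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch of a per-video motion-magnitude score: count pixels whose Farneback
# optical-flow magnitude exceeds a small threshold on each frame, then divide
# by the number of frames. Threshold and file name are illustrative only.
import cv2
import numpy as np

def motion_magnitude(video_path, threshold=1.0):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    changed_pixels, n_frames = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5, poly_sigma=1.2,
                                            flags=0)
        magnitude = np.linalg.norm(flow, axis=2)      # per-pixel displacement
        changed_pixels += int(np.sum(magnitude > threshold))
        prev_gray = gray
        n_frames += 1
    cap.release()
    return changed_pixels / n_frames

# score = motion_magnitude("ST_give_pen_actor1.mp4")  # hypothetical file name
```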
All videos were tested in a separate online experiment using the platform Testable (https://www.testable.org/). We created eight subsets of 64 videos in which all classes of actions were equally represented. For this experiment, we recruited 126 participants (M = 33.9 years, SD = 10.2; 77 females) who were randomly assigned to one of the eight subsets of videos and were asked to rate each video using visual analog scales (from 0 = not at all to 100 = very much), along two dimensions introduced with the following questions: for sociality, "To what extent does the action concern the passive person?"; for transitivity, "To what extent does the action involve interaction with a physical object?". As expected, the four categories were well-discriminated (see Fig. 2).
Scatter plots of individual video ratings along the Sociality and Transitivity dimensions. These data were obtained during the pretest phase, where 126 participants (M = 33.9 years, SD = 10.2; 77 females) were recruited to rate the videos. As can be seen, the four categories of videos were well-discriminated across the two dimensions.
In addition, we added control items consisting of eight modified action videos derived from the four action classes (two control videos per action class). In these videos, the actors were absent, and a pink disk moved within the scene. The trajectory and kinematics of the disk were matched with those of the gesture from the original video (see Fig. 3).
Example of the control condition. Representation of several frames of a control video showing upward and downward movement of a pink disk. In this example, the trajectory and kinematics of the disk are matched with those of the gesture "agree" (i.e., a Social Intransitive video).
fMRI experiment
Each participant was scanned in a single session with: (1) a T1-weighted anatomic scan, (2) one practice functional run to ensure that participants felt comfortable with the task, and (3) eight functional runs. Each functional run contained 20 trials (16 action trials plus four control conditions; see Fig. 1B). Each trial started with a fixation cross (variable duration from 1 to 3 s) followed by a trial video (9.5 s), which was then immediately followed by a blank screen (variable duration from 0.5 to 1.5 s) and a subsequent rating screen (5 s). The trial duration thus varied from 16 to 19 s. Each run ended with a 10-s fixation period. A genetic algorithm was used to optimize the experimental design with regard to contrast estimation and to minimize the variance of the design matrix (Wager and Nichols, 2003; Kao et al., 2009) using the toolbox NeuroDesign (https://neurodesign.readthedocs.io/en/latest/index.html). We thereby created eight different schedules of sequences of conditions and intertrial intervals. The assignment of these schedules to the eight runs was counterbalanced across participants.
In the scanner, stimuli were back-projected onto a screen (60-Hz frame rate, 1024 × 768 pixels screen resolution) via a liquid crystal projector (OC EMP 7900, Epson) and viewed through a mirror mounted on the head coil. The image on the screen measured 40 × 30 cm, covering a 20° angle of view. Participants' gaze position on the projection mirror was recorded using an infrared video-based eye tracker (EyeLink 1000 system, SR Research). Before each functional run, the spatial accuracy of the eye-tracker calibration was validated using nine points. If the average deviation exceeded 1° of visual angle, the spatial calibration was redone. Stimulus presentation, response collection, and synchronization with the fMRI acquisition triggers and the eye tracker were implemented in a custom-built program using the LabVIEW (National Instruments) environment. After each functional run, participants were allowed self-determined breaks.
Task
Participants were first asked to watch each trial video attentively. Immediately after a trial video, a response screen showing a question and a slider was presented, and participants had to indicate, depending on the question, either the degree of sociality or the degree of transitivity of the action depicted in the three videos they had just seen. We used the same questions as in the preliminary independent experiment. Participants gave their response by moving a trackball with their right index finger along an analog scale (from 0 = not at all to 100 = very much) and validated their choice by clicking with their right thumb. Only one question was displayed for each trial. As each trial video was presented twice during the experiment, both social and transitive ratings were collected for each specific video. The order of presentation of the questions was counterbalanced across subjects. Ratings were used to ensure that adolescents and adults were able to discriminate the items along sociality and transitivity. Importantly, as participants did not know in advance which question would be asked, they were not biased toward attending to one or the other dimension. Two questions were also asked for the control videos, one concerning the distance covered and the other concerning the velocity of the pink disk. To ensure that participants understood and correctly followed the instructions during the fMRI session, they completed a practice run before scanning, outside the scanner. No information about the exact aim of the study was given before the experiment.
Data acquisition
Imaging data were acquired on a 3T Siemens Prisma Scanner (Siemens, Erlangen, Germany) using a 64-channel head coil. Blood-oxygen level-dependent (BOLD) images were recorded with T2*-weighted echo-planar images acquired with the multi-band sequence (version R016a for Syngo VE11B) provided by the University of Minnesota Center for Magnetic Resonance Research (https://www.cmrr.umn.edu/multiband/). Functional images were all collected as oblique-axial scans aligned with the anterior commissure–posterior commissure (AC–PC) line with the following parameters: 287 volumes per run, 54 slices, TR/TE = 1224/30 ms, flip angle = 66°, field of view = 210 × 210 mm2, slice thickness = 2.5 mm, voxel size = 2.5 × 2.5 × 2.5 mm3, multiband factor = 3. To correct for magnetic field inhomogeneity during data preprocessing, we also acquired a pair of spin-echo images with reversed phase encoding direction (TR/TE = 7.060/59 ms, flip angle = 90°, voxel size = 2.5 × 2.5 × 2.5 mm3). Structural T1-weighted images were collected using a T1-weighted Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes (MP2RAGE) sequence (176 sagittal slices, TR/TE = 5000/2.98 ms, TI1/TI2 = 757/2500 ms, α1/α2 = 4°/5°, bandwidth = 240 Hz/pix, field of view = 256 × 256 × 176 mm3, slice thickness = 1 mm, voxel size = 1 × 1 × 1 mm3).
Preprocessing
Structural T1-weighted images were derived from the MP2RAGE images by removing the noisy background and were skull-stripped and segmented into tissue types (gray matter, GM; white matter, WM; and cerebrospinal fluid, CSF) using the Computational Anatomy Toolbox (CAT12; http://dbm.neuro.uni-jena.de/cat12/). Functional data were analyzed using SPM12 (Wellcome Department of Cognitive Neurology, http://www.fil.ion.ucl.ac.uk/spm) implemented in MATLAB (MathWorks). Preprocessing for univariate analyses included the following steps: (1) realignment to the mean EPI image with six head-motion correction parameters; (2) coregistration of the individual functional and anatomic images; (3) normalization to the MNI template; (4) spatial smoothing of functional images (Gaussian kernel with 5-mm FWHM).
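As an illustration, the same four preprocessing steps could be scripted from Python through Nipype's SPM12 interfaces; the study ran SPM12 directly in MATLAB, so this is only a sketch, and the file names and parameter values are hypothetical.

```python
# Hedged sketch of the four SPM12 preprocessing steps driven from Python via
# Nipype's SPM interfaces. File names are hypothetical; each interface would be
# executed with .run().
from nipype.interfaces import spm

realign = spm.Realign(in_files='sub-01_run-01_bold.nii',        # step 1: motion correction,
                      register_to_mean=True)                    # realign to the mean EPI
coreg = spm.Coregister(target='meanbold.nii',                   # step 2: coregister T1 to mean EPI
                       source='sub-01_T1w.nii')
norm = spm.Normalize12(image_to_align='sub-01_T1w.nii',         # step 3: MNI normalization,
                       apply_to_files=['rsub-01_run-01_bold.nii'])  # applied to realigned EPIs
smooth = spm.Smooth(in_files='wrsub-01_run-01_bold.nii',        # step 4: 5-mm FWHM smoothing
                    fwhm=[5, 5, 5])
```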
Univariate fMRI analysis
A general linear model (GLM) was created using a design matrix containing one regressor (explanatory variable) for each condition of interest (i.e., social transitive, social intransitive, nonsocial transitive, and nonsocial intransitive), modeled as a boxcar function (with onsets and durations corresponding to the start of each trial video of that condition) convolved with the canonical hemodynamic response function (HRF) of SPM; one regressor for the control condition, built the same way; one regressor accounting for judgment and motor response (HRF-convolved boxcar function containing all the periods during which the rating screen was presented and responses given); and six regressors of noninterest resulting from 3D head motion estimation (x, y, z translations and three axes of rotation). As the quantity and spatial amplitude of visual motion differed for each class of action, we also included one regressor controlling for unequal motion quantity. This regressor was modeled as a boxcar function with the onsets and durations of each trial video, convolved with the canonical HRF, and parametrically modulated by motion quantity values (z-scored for each run). A regressor accounting for eye movements (see Results) was also included, with each saccade modeled according to its onset and duration and convolved with the canonical HRF. In addition, to estimate and remove the variance corresponding to physiological noise, we used the PhysIO toolbox (Kasper et al., 2017). We extracted the time course of the signal from all voxels in the CSF and, separately, in the white matter. A principal component analysis (PCA) was performed (i.e., CompCor; Behzadi et al., 2007), and fourteen physiological components related to non-BOLD activity were extracted: the first six PCs plus the mean signal in the normalized WM, and the first six PCs plus the mean signal in the normalized CSF. We included these fourteen components as confound regressors in the GLM. The model was estimated voxelwise for each participant, also taking into account the average signal in each run. The contrasts of parameter estimates for each condition, computed at the individual level, were entered into a three-way repeated-measures ANOVA with Age group (adolescents vs adults) as between-subjects factor and Sociality (social vs nonsocial) and Transitivity (transitive vs intransitive) as within-subjects factors. We present result maps with a significance threshold set at pFWE < 0.05, with family-wise error (FWE) correction applied at the cluster level (cluster-defining uncorrected threshold at p < 0.001).
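For readers more familiar with Python, a minimal sketch of an analogous first-level design matrix can be built with nilearn; this approximates, rather than reproduces, the SPM model (in particular, the "modulation" column only scales event amplitudes, whereas SPM adds a mean-centered parametric modulator alongside the main regressor), and all onsets, modulation values, and confounds are placeholders.

```python
# Sketch of a first-level design matrix with condition regressors, a
# motion-quantity modulated regressor, and head-motion confounds (placeholders).
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr, n_scans = 1.224, 287
frame_times = np.arange(n_scans) * tr

onsets = [10.0, 28.5, 47.0, 65.5]                      # placeholder trial-video onsets (s)
events = pd.DataFrame({
    "onset":      onsets + onsets,
    "duration":   [9.5] * 8,
    "trial_type": ["ST", "SI", "NT", "NI"] + ["motion_quantity"] * 4,
    "modulation": [1, 1, 1, 1] + [0.3, -1.2, 0.8, 0.1],  # z-scored motion scores per video
})

# Stand-in for the six head-motion parameters estimated during realignment.
confounds = pd.DataFrame(np.random.randn(n_scans, 6),
                         columns=[f"motion_{i}" for i in range(6)])

design = make_first_level_design_matrix(frame_times, events,
                                        hrf_model="spm", add_regs=confounds)
```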
Multivoxel pattern analysis (MVPA)
Regions of interest (ROIs) definition
We focused on regions of the action observation network (see Introduction). We defined six ROIs: bilateral lateral occipital temporal cortex (LOTC; including the posterior part of STS), bilateral intraparietal sulcus (IPS)/superior parietal lobe (SPL), and bilateral ventral premotor cortex (PMv). We first derived these ROIs from an independent meta-analysis of fMRI and PET data (Grosbras et al., 2012) that took the conjunction of activated voxels reported in a set of studies contrasting the observation of hand movements (with or without object) to control conditions (convergence threshold: p < 0.001 uncorrected). Then, in each participant, we identified the voxel showing the highest Z-value in the GLM contrast "all actions–control" in LOTC (left: x = −48 ± 3, y = −70 ± 5, z = 5 ± 5; right: x = 50 ± 4, y = −66 ± 6, z = 3 ± 6), in IPS/SPL (left: x = −33 ± 5, y = −48 ± 9, z = 55 ± 5; right: x = 35 ± 4, y = −44 ± 8, z = 56 ± 5), and in PMv (left: x = −52 ± 4, y = 6 ± 5, z = 28 ± 5; right: x = 44 ± 4, y = 7 ± 6, z = 39 ± 8; Fig. 4). We defined individual ROIs by taking a sphere of 12-mm radius centered on these peak voxels within each participant's brain mask (i.e., excluding voxels outside the brain contour), resulting in a mean of 475 voxels per ROI. Since previous studies have reported that brain size still changes in this age range (Mills et al., 2016), we wanted to ensure that this could not be an issue when delineating ROIs of the same size in adolescents and in adults. We compared the total intracranial volume (TIV), obtained using CAT12, between the two groups. We found that the TIV was similar in adolescents (M = 1460 cm3, SD = 109) and adults (M = 1426 cm3, SD = 136; Mann–Whitney test: U = 344.5, p = 0.71). Thus, the ratio between ROI size and brain volume was equivalent in our two groups.
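A minimal sketch of how such a 12-mm spherical ROI could be constructed around an individual peak and restricted to the brain mask, using nibabel and NumPy; the peak coordinate and file names are hypothetical.

```python
# Build a 12-mm spherical ROI mask around an individual peak voxel, restricted
# to the participant's brain mask (coordinates and file names are hypothetical).
import nibabel as nib
import numpy as np
from nibabel.affines import apply_affine

brain_mask = nib.load("sub-01_brainmask.nii")          # hypothetical brain mask in MNI space
affine, in_brain = brain_mask.affine, brain_mask.get_fdata() > 0

peak_mni = np.array([-48, -70, 5])                     # e.g., one left-LOTC peak (mm)
ijk = np.indices(in_brain.shape).reshape(3, -1).T      # all voxel indices
xyz = apply_affine(affine, ijk)                        # voxel indices -> mm coordinates
sphere = np.linalg.norm(xyz - peak_mni, axis=1) <= 12  # within 12 mm of the peak

roi = (sphere.reshape(in_brain.shape) & in_brain).astype(np.uint8)  # drop out-of-brain voxels
nib.save(nib.Nifti1Image(roi, affine), "sub-01_LOTC_left_roi.nii")
```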
Graphical representation of the ROIs used in the MVPA. Individual MNI coordinates (red: adults; blue: adolescents) used as sphere centers to construct the ROIs are mapped on PALS-B12 atlas surface configurations (Van Essen, 2005). Regions colored in white represent the union of all individual ROIs projected on the flat map for LOTC, IPS/SPL, and PMv. PrCS: precentral sulcus; CS: central sulcus; PoCS: postcentral sulcus; IPS: intraparietal sulcus; STS: superior temporal sulcus; ITS: inferior temporal sulcus; OTS: occipital temporal sulcus.
ROI-based MVPA
We performed MVPA for each participant within each individual ROI. At the individual level, we computed a new GLM using the realigned and unwarped images in MNI space, estimating single-trial activity (i.e., using 20 regressors per run). The new GLM included the same covariates used in the univariate analysis. In total, this procedure resulted in 32 maps of parameter estimates (β) per action condition (four action exemplars × eight runs) for each subject (128 maps in total). MVPA was performed using nilearn (Abraham et al., 2014) for Python 3.7. For voxels within each ROI, we trained, on a subset of the data, a linear support vector machine classification algorithm (regularization hyperparameter C = 1) to distinguish the patterns of parameter estimates associated with each condition. We then tested the classifier's ability to decode the conditions associated with the patterns of parameter estimates in the remaining data. We used an eight-fold leave-one-run-out cross-validation scheme, training on data from seven runs and testing on data from the remaining run. We averaged the classification accuracies (percent correct) across the eight iterations.
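The core of this ROI decoding scheme can be sketched with scikit-learn as follows; `betas`, `labels`, and `runs` are placeholder arrays standing in for the single-trial parameter estimates, condition labels, and run indices.

```python
# Linear SVM (C = 1) on single-trial beta patterns with leave-one-run-out
# cross-validation. All arrays are placeholders for one ROI of one participant.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
betas = rng.standard_normal((128, 475))                       # 16 trials x 8 runs, ~475 ROI voxels
labels = np.tile(np.repeat(["social", "nonsocial"], 8), 8)    # placeholder condition labels
runs = np.repeat(np.arange(8), 16)                            # run index for each trial

clf = SVC(kernel="linear", C=1.0)
cv = LeaveOneGroupOut()                                       # train on 7 runs, test on the held-out run
accuracies = cross_val_score(clf, betas, labels, groups=runs, cv=cv)
print(f"mean decoding accuracy: {accuracies.mean():.2%}")
```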
We conducted within-category and across-category decoding analyses separately (see Fig. 5). Significant within-category decoding (e.g., decoding sociality within transitive actions, i.e., ST vs NT) indicates that a brain region contains information about action categories at a low level of abstraction. Significant across-category decoding indicates that a brain region represents information about action categories at a higher level of abstraction. For within-category decoding along sociality, we trained the classifier to discriminate social from nonsocial actions within the transitive actions using data from seven runs (56 patterns: 28 ST vs 28 NT) and tested it on the eight patterns from the remaining run (four ST and four NT). We repeated this leave-one-run-out scheme eight times and averaged the results. We then repeated the procedure to discriminate social from nonsocial actions within the intransitive category. To decode transitivity, the classifier was trained to discriminate transitive from intransitive actions within the social actions (56 patterns: 28 ST vs 28 SI) and tested on the remaining eight patterns (four ST and four SI); the procedure was then repeated within the nonsocial actions. The resulting accuracies were averaged across the two within-category decoding directions (i.e., ST vs NT mean accuracies were averaged with SI vs NI mean accuracies).
Schematic representation of the MVPA. A parameter estimate (β) was first extracted for each trial using a GLM. SVM classification was performed using a leave-one-run-out cross-validation scheme. For within-category decoding along sociality, the SVM classifier was trained to discriminate ST versus NT (56 β) and tested on ST versus NT (8 β); in another step, the classifier was trained to discriminate SI versus NI (56 β) and tested on SI versus NI (8 β), and the mean accuracies were averaged. For across-category decoding along sociality, the SVM classifier was trained to discriminate ST versus NT (56 β) and tested on SI versus NI (8 β). Classification accuracies were averaged across iterations (eight iterations) and across the two generalization directions (e.g., Transitive to Intransitive, and vice versa). Mean classification accuracies were then entered into a three-way ANOVA with Age group (adolescents and adults) as between-subjects factor and Hemisphere (left and right) and Dimension category (sociality and transitivity) as within-subjects factors, for each level of decoding separately.
In addition, we used across-category decoding to test to what extent decoding of one action dimension generalized across the orthogonal dimension. More specifically, to decode actions along transitivity, we trained the classifier to discriminate between transitive and intransitive actions within the social actions (ST vs SI) and tested the classifier on the nonsocial actions (NT vs NI), and vice versa. To decode along sociality, we trained the classifier to discriminate between social and nonsocial actions within the transitive actions (ST vs NT) and tested the classifier on the intransitive actions (SI vs NI), and vice versa. We used the same leave-one-run-out scheme as described above: for each action category and each iteration (n = 8 folds), the classifier was trained on 56 patterns (seven runs × eight patterns) and tested on the remaining eight patterns (one run). The resulting accuracies were averaged across the two generalization directions.
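A sketch of this across-dimension (cross-decoding) scheme, here for decoding sociality across the transitivity dimension; the arrays are placeholders and the helper function `across_decoding` is introduced only for illustration.

```python
# Train on one pair of classes (e.g., ST vs NT) and test on the orthogonal pair
# (SI vs NI) within a leave-one-run-out scheme, then average both directions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
betas = rng.standard_normal((128, 475))                      # placeholder single-trial patterns
classes = np.tile(["ST", "SI", "NT", "NI"], 32)              # placeholder trial classes
runs = np.repeat(np.arange(8), 16)                           # run index per trial

def across_decoding(train_pair, test_pair):
    """Train on `train_pair`, test on the orthogonal `test_pair` (leave-one-run-out)."""
    social = np.isin(classes, ["ST", "SI"])                  # label to predict: social vs nonsocial
    accs = []
    for held_out in np.unique(runs):
        train = np.isin(classes, train_pair) & (runs != held_out)
        test = np.isin(classes, test_pair) & (runs == held_out)
        clf = SVC(kernel="linear", C=1.0).fit(betas[train], social[train])
        accs.append(clf.score(betas[test], social[test]))
    return np.mean(accs)

# Decode sociality across the transitivity dimension, averaging both directions.
acc_sociality = 0.5 * (across_decoding(("ST", "NT"), ("SI", "NI"))
                       + across_decoding(("SI", "NI"), ("ST", "NT")))
```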
For each analysis, to make group-level inferences, we compared the averaged accuracies per ROI to chance level (50%) using a one-tailed one-sample Student's t test. We also entered classification accuracies into a mixed ANOVA with Age group (adolescents and adults) as between-subjects factor and Hemisphere (left and right) and Dimension category (sociality and transitivity) as within-subjects factors. Separate ANOVAs were conducted for each ROI and for within-dimension and across-dimension decoding. Significant main and interaction effects were broken down into lower-order simple effects for further analysis using post hoc comparisons. Statistical results (i.e., Student's t tests and post hoc comparisons) were FDR-corrected for the number of tests performed (Benjamini and Yekutieli, 2001). As our main question was about functional immaturity in adolescents, we tested whether the information carried by patterns of activity allowed classifiers to discriminate between categories better in adults than in adolescents. We thus performed one-sided t tests when comparing adults versus adolescents. We also assessed the significance of decoding at the individual level with a dataset-wise permutation scheme (Etzel and Braver, 2013), in which both the training and the testing sets were relabeled before the cross-validation, while maintaining the dataset's structure. To do so, the classification was repeated 1000 times after randomizing the labels, to construct a null distribution per subject, ROI, action dimension, and level of decoding. The p-value was then computed as the proportion of permutations for which the classification accuracy obtained with permuted labels was greater than or equal to the observed mean classification accuracy.
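A sketch of the individual-level permutation test, assuming the same placeholder arrays as above; labels are shuffled within runs to preserve the dataset structure, and the p-value is computed here as the standard proportion of permuted accuracies at or above the observed one.

```python
# Dataset-wise permutation test: shuffle labels (within runs, to keep the run
# structure), re-run the full cross-validated decoding, and compare the null
# distribution of accuracies with the observed accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def permutation_pvalue(betas, labels, runs, observed_acc, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    cv = LeaveOneGroupOut()
    null_accs = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = labels.copy()
        for r in np.unique(runs):                     # shuffle within each run
            idx = np.where(runs == r)[0]
            shuffled[idx] = rng.permutation(shuffled[idx])
        null_accs[i] = cross_val_score(SVC(kernel="linear", C=1.0),
                                       betas, shuffled, groups=runs, cv=cv).mean()
    # Proportion of permuted accuracies at or above the observed accuracy.
    return np.mean(null_accs >= observed_acc)

# p = permutation_pvalue(betas, labels, runs, observed_acc=accuracies.mean())
```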
Searchlight MVPA
To confirm the results obtained with the ROI-based decoding and to test for additional brain areas able to decode Social versus Nonsocial and Transitive versus Intransitive actions, we conducted whole-brain searchlight analyses for within- and across-dimension decoding in adolescents and adults. The whole-brain searchlight decoding analysis was implemented using a 12-mm radius (∼475 voxels per searchlight sphere). The data from the voxels within a sphere were classified and the classification accuracy was stored at the central voxel, yielding a 3D brain map of classification accuracy (Haynes, 2015). MVPA classification was conducted with the same parameters and procedure as the ROI-based MVPA. To identify regions where classification accuracy was significantly above chance (i.e., 50%) in adults and adolescents, the chance level was subtracted from the classification maps and these maps were smoothed (FWHM = 5 mm). We then conducted one-sample t tests, and corrections for multiple comparisons were applied at the voxel level (FWE, p < 0.05). We also contrasted whole-brain searchlight accuracy maps between adolescents and adults for within- and across-dimension decoding using one-sided two-sample t tests.
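A hedged sketch of such a searchlight analysis with nilearn's SearchLight estimator, reusing the same linear SVM and leave-one-run-out scheme; the 4D image of single-trial betas, the brain mask, labels, and run indices are placeholders.

```python
# Whole-brain searchlight: a 12-mm sphere is swept across the brain, the linear
# SVM is cross-validated within each sphere, and the accuracy is stored at the
# central voxel. File names and label arrays are placeholders.
import numpy as np
from nilearn.decoding import SearchLight
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

labels = np.tile(np.repeat(["social", "nonsocial"], 8), 8)    # placeholder trial labels
runs = np.repeat(np.arange(8), 16)                            # placeholder run indices

searchlight = SearchLight(
    mask_img="sub-01_brainmask.nii",          # hypothetical whole-brain mask
    radius=12.0,                              # ~475 voxels per sphere at 2.5 mm isotropic
    estimator=SVC(kernel="linear", C=1.0),
    cv=LeaveOneGroupOut(),
    n_jobs=-1,
)
# 4D image stacking the 128 single-trial beta maps (hypothetical file name).
searchlight.fit("sub-01_trial_betas.nii", labels, groups=runs)
# searchlight.scores_ holds the 3D map of classification accuracies (chance = 0.5).
```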
Data and code availability statement
Unthresholded statistical maps for the main contrasts of interest can be visualized on NeuroVault (https://neurovault.org/collections/13305/). Behavioral and preprocessed neuroimaging data will be posted on a public repository (OpenfMRI) after publication of the research article. Stimulus materials and code are available on reasonable request.
Results
Behavioral ratings
We conducted a mixed ANOVA with Sociality (social vs nonsocial) and Transitivity (transitive vs intransitive) as within-subjects factors and Age group (adults vs adolescents) as between-subjects factors, separately for each rating task (i.e., rating sociality and rating transitivity; see Fig. 1C).
Concerning the rating of the transitive dimension, we found a main effect of Transitivity, F(1,49) = 176.88, p < 0.001, ηp2 = 0.97: unsurprisingly, transitive action videos (M = 0.93, SD = 0.32) were rated as more transitive than intransitive actions (M = −0.93, SD = 0.24). No other main effect or interaction was found.
Concerning the rating of the social dimension, we found a main effect of Sociality, F(1,49) = 623.82, p < 0.001, ηp2 = 0.93: social videos (M = 0.79, SD = 0.51) were rated as more social than nonsocial actions (M = −0.79, SD = 0.49). We also found a main effect of Transitivity, F(1,49) = 11.32, p < 0.01, ηp2 = 0.19: intransitive actions (M = 0.14, SD = 1.10) were rated as more social than transitive actions (M = −0.14, SD = 0.71). Finally, the ANOVA revealed an interaction between Sociality and Transitivity, F(1,49) = 42.50, p < 0.001, ηp2 = 0.46: there was no difference between nonsocial transitive (M = −0.72, SD = 0.38) and nonsocial intransitive actions (M = −0.86, SD = 0.58, p = 0.47), whereas social intransitive actions (M = 1.14, SD = 0.29) were rated as more social than social transitive videos (M = 0.44, SD = 0.44, p < 0.001). No other main effect or interaction including the factor Age group was significant.
Gaze data
We identified saccades from the eye-tracking data to account for potential confounds due to eye-movement-related brain activity. We also tested for differences in eye movement parameters between conditions. For this, we conducted mixed ANOVAs on the mean number of saccades and the mean saccade amplitude per trial video, with Sociality (social vs nonsocial) and Transitivity (transitive vs intransitive) as within-subjects factors and Age group (adults vs adolescents) as between-subjects factor.
Altogether, participants made slightly more saccades for social than nonsocial trial videos (M = 14.7, SE = 0.50 vs M = 13.2, SE = 0.45; main effect of Sociality, F(1,49) = 83.75, p < 0.001) as well as more saccades for trial videos of intransitive than transitive actions (M = 15.0, SE = 0.51 vs M = 12.9, SE = 0.42; main effect of Transitivity, F(1,49) = 109.9, p < 0.001). Adults made more saccades than adolescents (M = 15.1, SE = 0.63 vs M = 12.7, SE = 0.67; main effect of Age group, F(1,49) = 6.67, p = 0.013), but there was no interaction with the effects of Sociality (F(1,49) = 0.17, p = 0.68) or Transitivity (F(1,49) = 1.10, p = 0.30).
The exploration pattern, as indexed by saccade amplitude, did not differ between groups or conditions, apart from a small difference between the Intransitive and Transitive conditions (average amplitude 2.32° vs 2.03°, F(1,49) = 1.36, p < 0.001); this is unlikely to have affected brain activity.
Univariate fMRI results
We entered the individual maps of parameter estimates for the four action conditions (NI, NT, SI, ST) into a repeated-measures ANOVA with Sociality and Transitivity as within-subjects factors and Age group as between-subjects factor. The results are displayed in Table 1 and Figure 6.
Brain regions activated in the whole-brain analysis for the main effect of age group, sociality, and transitivity
Brain activation associated with main effect of (A) Sociality, (B) Transitivity, and (C) Age group. Activations are projected on PALS-B12 atlas surface configurations (Van Essen, 2005): lateral fiducial surfaces. Statistical maps are FWE-corrected for multiple comparisons across the whole brain at the cluster level (FWE, p < 0.05). AIPS: anterior intraparietal sulcus; SPL: superior parietal lobe; pSTS: posterior superior temporal sulcus; MTG: middle temporal gyrus; STS: superior temporal sulcus; iLOC: inferior lateral occipital cortex; Occ fusif G: occipital fusiform gyrus; Intracal: intracalcarine cortex; SMG: supramarginal gyrus; AG: angular gyrus; PostG: postcentral gyrus; dPMC: dorsal premotor cortex; LOC: lateral occipital cortex; TP: temporal pole; TOf: temporo-occipital fusiform gyrus; Lingual G: Lingual gyrus; EBA: extrastriate body area; FBA: fusiform body area; EVC: extrastriate visual cortex; vMPFC: ventral medial prefrontal cortex; TPJ: temporoparietal junction.
The ANOVA revealed a main effect of Sociality (see Fig. 6A): observing social compared with nonsocial actions induced stronger activity in the bilateral posterior superior temporal sulcus and bilateral middle temporal gyrus, bilateral supramarginal gyrus, bilateral precentral gyrus, left superior parietal lobe, and left inferior frontal gyrus, as well as bilaterally in the superior frontal gyrus, SMA, precuneus, and visual cortices (intracalcarine cortex and lingual gyrus). The reverse contrast yielded significant activation in the left anterior parietal cortex (AIPS/SPL), left inferior occipital cortex, and right precentral gyrus, as well as in the occipital pole and lateral occipital cortex.
We found a main effect of Transitivity (see Fig. 6B): observing transitive actions was associated with stronger activity in the bilateral medial occipital cortex, bilateral precentral cortex, right superior frontal sulcus, left parieto-occipital cortex, right inferior temporal cortex, bilateral cerebellum (lobule VIII/IX), left angular gyrus, and right posterior cingulate cortex. The reverse contrast revealed significant activations in bilateral early visual cortices (cuneus), right lateral occipital temporal cortex (overlapping with EBA/FBA), right supramarginal gyrus and posterior superior temporal cortex (SMG/pSTS), bilateral temporal poles, and right pericentral cortex (central sulcus and postcentral cortex).
There was also a main effect of Age group in the left ventral medial prefrontal cortex and left temporoparietal junction, reflecting an increased BOLD signal when observing actions in general in adolescents compared with adults (see Fig. 6C); the reverse contrast yielded no significant activation.
We did not observe any significant interaction between Sociality and Transitivity, nor any interaction between these factors and Age group, nor a three-way interaction.
ROI MVPA
In the ROI MVPA, we tested whether AON regions contain specific or general representations of the social and transitive dimensions (Fig. 7). When training and testing the classifier within the same dimension, significant above-chance decoding was found in all regions of the AON, for both adolescents and adults. When testing for generalization of decoding across the orthogonal dimension, we found significant above-chance decoding in both groups in LOTC for both dimensions and in PMv for the transitive dimension only. Finally, IPS/SPL did not show any significant across-dimension decoding for either adults or adolescents. We also conducted a four-way classification in each ROI and each participant (i.e., testing the ability of the classifier to discriminate one action class against all others). We found above-chance decoding in all ROIs for all action classes (chance level = 25%; see Fig. 8).
ROI MVPA results. Bar graphs show group averaged decoding accuracies for within (top) and across (bottom) category decoding for social versus nonsocial actions (blue) and transitive versus intransitive actions (red) for the two groups (adolescents = dark and adults = light). Error bars indicate SD. Asterisks represent statistical significance (FDR-corrected for the number of tests). Dotted line indicates decoding accuracy at chance-level (50%). ***p < 0.001, **p < 0.01, *p < 0.05.
Four-way mean classification accuracies for each action class (NI, NT, SI, and ST) in adolescents (dark green) and adults (light green) for each ROI (LOTC, PMv, and IPS/SPL). Error bars indicate SD. Asterisks indicate statistical significance with one-tailed t tests (different from chance level = 25%) after FDR correction for multiple comparisons. ***p < 0.001.
In a second step, we compared classification performance for adolescents and adults in LOTC, PMv, and IPS/SPL, by entering mean classification accuracies in three-way ANOVAs with Hemisphere (left and right) and Dimension category (sociality and transitivity) as within-subjects factors and Age group (adolescents, adults) as between-subjects factor. These analyses were performed for each type of decoding (i.e., within and across) separately.
Concerning within-dimension decoding, the ANOVAs revealed a main effect of Dimension category in LOTC, F(1,49) = 24.72, p < 0.001, with higher accuracies for decoding transitive versus intransitive actions than for decoding social versus nonsocial actions. There was a main effect of Hemisphere in IPS/SPL, F(1,45) = 7.40, p < 0.01, with higher decoding accuracies in the left hemisphere, and a main effect of Age group in IPS/SPL, F(1,45) = 9.09, p < 0.01, with higher decoding accuracies for adults compared with adolescents. We also found an interaction between Hemisphere and Dimension category in LOTC, F(1,45) = 7.50, p < 0.01: whereas transitivity was better decoded than sociality in the left hemisphere, t = −5.57, p < 0.001, there was no difference in the right hemisphere, t = −2.06, p = 0.13. Finally, the ANOVAs revealed an interaction between Age group and Hemisphere in PMv, F(1,42) = 4.16, p = 0.048, but no post hoc comparisons survived FDR correction.
Concerning across-dimension decoding, the ANOVAs revealed a main effect of Dimension category in LOTC, F(1,49) = 27.77, p < 0.001, and in PMv, F(1,49) = 15.94, p < 0.001, with higher decoding accuracies for distinguishing transitive from intransitive actions than social from nonsocial actions. There was an interaction between Dimension category and Age group in LOTC, F(1,45) = 4.70, p = 0.035: while sociality was better decoded in adults than in adolescents, t = −2.69, p = 0.017, there was no group difference for decoding transitivity, t = 0.21, p = 0.46. We also found an interaction between Dimension category and Hemisphere in LOTC, F(1,45) = 12.27, p ≤ 0.001: the social and transitive dimensions of actions were decoded similarly in the right hemisphere, t = −1.17, p = 0.25, but the transitive dimension was decoded better than the social dimension in the left hemisphere, t = −6.08, p < 0.001.
To gain deeper insight, we also assessed the significance of decoding in LOTC, IPS/SPL, and PMv at the individual level using permutations (Etzel and Braver, 2013; see Fig. 9; Table 2), with a cutoff of p < 0.05. For within-dimension decoding, the decoding accuracy was significant for almost all adults and adolescents in LOTC for both social versus nonsocial (left: [adolescents = 96%, adults = 96%]; right: [adolescents = 89%, adults = 100%]) and transitive versus intransitive actions (left: [adolescents = 100%, adults = 100%]; right: [adolescents = 96%, adults = 96%]). In IPS/SPL, the proportion of adults in whom decoding was significantly above chance was higher than the proportion of adolescents for both action dimensions (Sociality, left: [adolescents = 81%, adults = 96%]; right: [adolescents = 69%, adults = 91%]; Transitivity, left: [adolescents = 78%, adults = 96%]; right: [adolescents = 80%, adults = 91%]). This proportion decreased in PMv (adults: range = 46–76%; adolescents: range = 29–60%). For LOTC, among individuals for whom within-dimension decoding accuracy was significant, most also showed significant across-dimension decoding of Transitivity, indicating a more abstract representation (left: [adolescents = 85%, adults = 92%]; right: [adolescents = 67%, adults = 75%]). For Sociality, however, this was the case mainly for adults, with fewer adolescents showing generalization (left: [adolescents = 41%, adults = 71%]; right: [adolescents = 59%, adults = 79%]). The proportion of participants showing significant across-dimension decoding was much lower for both groups in IPS/SPL (Sociality, left: [adolescents = 4%, adults = 17%]; right: [adolescents = 19%, adults = 23%]; Transitivity, left: [adolescents = 8%, adults = 9%]; right: [adolescents = 7%, adults = 18%]) and in PMv (Sociality, left: [adolescents = 8%, adults = 14%]; right: [adolescents = 0%, adults = 17%]; Transitivity, left: [adolescents = 17%, adults = 14%]; right: [adolescents = 24%, adults = 25%]).
Percentage of subjects showing classification accuracies significantly above chance following permutation testing
Representation of individual mean classification accuracies for within and across decoding levels for decoding social versus nonsocial and transitive versus intransitive actions, in each ROI. Significant classification accuracies following permutation testing (p < 0.05) are indicated in green and nonsignificant classification accuracies following permutation testing (p > 0.05) are indicated in red. Dashed lines represent chance level (50%).
Searchlight MVPA
To confirm the results of the ROI MVPA and to look for additional brain areas that may represent the social and/or transitive dimension of actions in both adolescents and adults, we conducted a searchlight MVPA for the within- and across-dimension decoding schemes, using the same classification procedure as the ROI MVPA (see Materials and Methods, ROI-based MVPA). The searchlight analysis confirmed what we observed in the ROI MVPA (see Fig. 10; Table 3). Concerning within-dimension decoding, significant decoding was found for social versus nonsocial and for transitive versus intransitive actions bilaterally in brain areas of the AON overlapping with our ROIs, in both groups of participants. Concerning across-dimension decoding, significant decoding was found for social versus nonsocial and for transitive versus intransitive actions in adults and adolescents, mainly in bilateral LOTC. We also observed significant decoding in the middle frontal gyrus and precentral gyrus for transitive actions in adults.
Clusters identified in searchlight within and across-category decoding of sociality and transitivity in adolescents and adults
MVPA searchlight analyses. Mean accuracy maps and statistical maps of the searchlight within and across decoding for Social versus Nonsocial actions (chance level = 50%) and for Transitive versus Intransitive actions (chance level = 50%) for adults and adolescents. Corrections for multiple comparisons were applied at the voxel level (FWE, p < 0.05).
We also contrasted whole-brain searchlight accuracy maps between adults and adolescents for within- and across-dimension decoding separately (see Fig. 11; Table 4). For within-dimension decoding, a significant cluster was found in the left SPL/IPS for sociality when contrasting adults versus adolescents, but no difference was observed for transitivity. Concerning across-dimension decoding, a significant cluster was observed in the right LOTC for sociality when contrasting adults versus adolescents, but no difference was observed for transitivity. Contrasting adolescents versus adults yielded no significant clusters, regardless of the level of decoding or the dimension category.
Clusters identified in within and across-category decoding of sociality when contrasting searchlight maps for adults and adolescents
Contrasts of searchlight accuracy maps between adults and adolescents for within- and across-dimension decoding of sociality and transitivity. Corrections for multiple comparisons were applied at the voxel level (FWE, p < 0.05).
Discussion
We examined the local neural representation of conceptual dimensions of action in the AON of adolescents. Given the protracted development of social cognition relative to object-related knowledge, we hypothesized that the representation of socially-directed but not object-directed actions would differ between adults and adolescents. First, we report that the adolescent AON is engaged at an adult-like level and contains fine-grained representations of the social and transitive dimensions of actions. Second, MVPA revealed age-related representational differences in specific parts of the AON, as a function of the level of abstraction. In the LOTC, sociality and transitivity were decoded from brain activity at a low level of abstraction to the same extent in adolescents and adults; at a higher level of abstraction, lower decoding accuracies were observed in adolescents for the social dimension. In the IPS/SPL region, adolescents showed lower decoding accuracies compared with adults for both action dimensions at a low level of abstraction.
The AON is in place and contains fine-grained representations about actions in adolescence
Our univariate analyses indicate that the main components of the AON are engaged to a comparable level in adolescents and adults. This extends previous reports of adult-like AON engagement in childhood and early adolescence (Ohnishi et al., 2004; Pokorny et al., 2015; Biagi et al., 2016; Morales et al., 2019) by testing advanced and postpubertal adolescents (14–17 years old). Age differences were observed only outside the AON, in the vMPFC and TPJ, regardless of action category. These two regions are associated with the mentalizing network (Frith and Frith, 2007; Van Overwalle and Baetens, 2009), which is involved in attributing mental states to others and whose activity is enhanced during adolescence (for review, see Blakemore, 2008). It may be that, during our task, adolescents inferred thoughts and intentions, independently of the transitive or social nature of the actions. Future studies should investigate behavioral correlates of viewing these actions as well as links between the AON and mentalizing areas.
In addition, we show for the first time that, as in adults, the adolescents' AON activity is modulated by the transitive and social dimensions of the actions. Social actions induced higher activity than nonsocial actions in the pSTS, the supramarginal gyrus, and the precentral cortex. This complements previous studies, in adults, that investigated either object-directed actions with a social intent or communicative symbolic actions or interactions (Iacoboni et al., 2004; Montgomery et al., 2007; Centelles et al., 2011; Saggar et al., 2014; Walbrin et al., 2018). Nonsocial actions more strongly engaged the most posterior part of the temporal occipital cortex, as well as anterior parietal/postcentral areas, perhaps because they drew attention to somato-sensation in the actor, e.g., in the stroking or rubbing videos. Observing transitive, relative to intransitive, actions yielded significant activation in the bilateral medial fusiform gyrus. This region is involved in processing information about objects (Mahon et al., 2007) and object-directed actions (Chen et al., 2016). We also observed bilateral activation of IPS/SPL and dPMC, recurrently associated with grasping and reaching (Daprati and Sirigu, 2006), as well as with observing others performing manipulative actions (Lanzilotto et al., 2019; Orban et al., 2021). Observing intransitive versus transitive actions revealed activation in bilateral pSTS/STS and a large cluster of the lateral occipitotemporal cortex (extending into the fusiform gyrus). The latter region is likely to encompass the extrastriate body area (EBA) and the fusiform body area (FBA), which selectively process visual features of human bodies (Downing and Peelen, 2011). It could be that these regions are relatively more engaged when observing actions directed toward the body, whether one's own or another's, as in scratching or signaling.
The multivariate results corroborate the univariate analyses while, in addition, revealing differences between adolescents and adults. This indicates that the pattern of activity in regional neuronal assemblies carries information that distinguishes the different action categories more reliably in adults than in adolescents.
Weaker generalization for decoding social actions in the adolescents' LOTC
The LOTC contains a mosaic of overlapping regions selective for particular types of information that participate in action understanding and social interpretation (for a discussion, see Lingnau and Downing, 2015; Wurm and Caramazza, 2019). Here, we found that the transitive and the social dimensions of actions could be decoded from brain activity when tested within as well as across categories, replicating the results obtained in adults by Wurm et al. (2017). This indicates a representation at a high level of conceptual abstraction. While no significant age difference was observed for within-category decoding, across-category decoding was lower in adolescents than in adults, specifically for the social dimension. This suggests that, in the LOTC, distinguishing social from nonsocial actions is adult-like in adolescents at relatively low levels of abstraction, for example, manipulating versus moving an object or scratching versus waving. In contrast, at higher levels of abstraction, that is, being able to distinguish actions on the basis that they are directed toward another person, independently of their spatial features or manipulative aspects, the neural representation in adolescents does not yet seem to be as robust as in adults.
As adolescence is a period of major development in the processing of social stimuli, from both a behavioral and a neural point of view, it is perhaps not surprising to observe differences in the representation of the social dimension of actions. However, while it is well established that the LOTC encodes salient perceptual components important for interpreting social signals, whether it contains abstract representations that capture interpretational descriptions or intentional aspects of social actions remains debated (Wurm and Caramazza, 2019). Some authors have suggested that the LOTC forms the anchor of a pathway that extends into the superior temporal cortex and temporoparietal junction, where increasingly rich representations of postures, movements, actions, and mental states of other people are constructed (Carter and Huettel, 2013). The LOTC contains representations of body parts (Bracci et al., 2015), of the interaction of body parts with external objects (Bracci and Peelen, 2013), of human-human interactions (Walbrin et al., 2018), and of the presence of another person in the scene (Wurm and Caramazza, 2019). Thus, the LOTC may form an intermediate processing stage during action recognition, which may still be maturing in adolescence. At the individual level, we observed significantly above-chance decoding of social versus nonsocial actions in the LOTC in only half of the adolescents. Such higher intersubject variability has been described in other domains and may explain some developmental differences (Bray et al., 2009).
Interestingly, the transitive dimension was better decoded than the social dimension in the left LOTC, partly confirming previously published results (Wurm et al., 2017). This may be related to reports that the tool-use observation network is lateralized to the left hemisphere (Reynaud et al., 2019).
Age-related difference for decoding action in IPS/SPL
Lower decoding accuracies for the social and the transitive dimensions were observed in the IPS/SPL of adolescents. In contrast, the univariate analyses revealed no age effect, contrary to what Morales et al. (2019) reported when comparing children with adults during object-directed action observation. Our data thus speak in favor of an ongoing maturation of the IPS/SPL region with respect to the specific action features that are represented. This region, consistently involved in the observation and execution of manipulative actions (Dinstein et al., 2007; Reynaud et al., 2016, 2019; Lanzilotto et al., 2019; Orban et al., 2019), has been proposed to play a role in action understanding by representing actor-object interactions (Tunik et al., 2007; Ramsey and Hamilton, 2010).
Here, we found strong within-category decoding in IPS/SPL in adults and adolescents, but no significant decoding at the higher level of abstraction. This suggests that this region does not support high-level conceptual representations during action observation, as already pointed out by previous studies (Wurm and Lingnau, 2015; Wurm et al., 2016). One possibility is that the anterior part of this region stores semantic representations of actions (Wurm and Lingnau, 2015; Garcea et al., 2018), whereas the posterior/dorsal part is more related to online visuomotor guidance (Binkofski and Buxbaum, 2013) that can be activated for imitation. In agreement with this, the highest decoding accuracies were located in the more dorsal part of this region.
Behavioral relevance
In the judgment task performed in the scanner, we did not observe any significant difference between adolescents and adults: both groups were able to explicitly recognize whether the action involved an object and whether it concerned the other person, and rated the importance of the object or the person similarly. This is not surprising given that this task is relatively crude. It also reassures us that the adolescents were paying attention and categorizing the stimuli as well as adults. This raises the question, however, of whether the differences in MVPA results have implications at the behavioral level. Boria et al. (2009) reported that nine-year-old children, unlike adults, make more errors when asked questions about the intentionality of actions that do not rely on the functional properties of objects than when asked simply what the person is doing. It is possible that, even in adolescence, the ability to correctly identify abstract features of actions is still immature. This would need to be tested with more subtle questions or implicit tasks.
In conclusion, in line with our hypothesis, we revealed age differences in the local patterns of activation representing abstract social features of actions in the LOTC, but we also found differences in the representation of actions at a lower level of abstraction in the IPS/SPL. This underscores the ongoing development, during adolescence, of the functional organization of the posterior parts of the AON. These differences between adolescents and adults were captured by multivariate but not univariate analyses, suggesting that multivoxel patterns of activity capture subtle differences between adolescents and adults that are not apparent at the single-voxel level.
Footnotes
This research was supported by Agence Nationale de la Recherche (France) Grants ANR-14-ACHN-0023 and ANR-16-CONV-0002 (ILCB) and the Excellence Initiative of Aix-Marseille University Grant A*MIDEX; ANR-11-IDEX-0001-02.
The authors declare no competing financial interests.
Correspondence should be addressed to Mathieu Lesourd at mathieu.lesourd@univ-fcomte.fr or Marie-Hélène Grosbras at marie-helene.grosbras@univ-amu.fr