Articles, Behavioral/Cognitive

Distinct Regions of Right Temporal Cortex Are Associated with Biological and Human–Agent Motion: Functional Magnetic Resonance Imaging and Neuropsychological Evidence

Zaizhu Han,1 Yanchao Bi,1 Jing Chen,2 Quanjing Chen,1 Yong He1 and Alfonso Caramazza3,4
Journal of Neuroscience 25 September 2013, 33 (39) 15442-15453; https://doi.org/10.1523/JNEUROSCI.5868-12.2013
1State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
2Department of Psychology, Emory University, Atlanta, Georgia 30322
3Department of Psychology, Harvard University, Cambridge, Massachusetts 02138
4Center of Mind/Brain Sciences (CIMeC), University of Trento, Rovereto (TN) 38068, Italy

Abstract

In human lateral temporal cortex, some regions show specific sensitivity to human motion. Here we examine whether such effects reflect a general biological–nonbiological organizational principle or a process specific to human–agent processing by comparing processing of human, animal, and tool motion in a functional magnetic resonance imaging (fMRI) experiment with healthy participants and a voxel-based lesion-symptom mapping (VLSM) study of patients with brain damage (77 stroke patients). The fMRI experiment revealed that in the lateral temporal cortex, the posterior superior temporal sulcus shows a preference for human and animal motion, whereas the middle part of the right superior temporal sulcus/gyrus (mSTS/STG) shows a preference for human and functional tool motion. VLSM analyses also revealed that damage to this right mSTS/STG region led to more severe impairment in the recognition of human and functional tool motion relative to animal motion, indicating the causal role of this brain area in human–agent motion processing. The findings for the right mSTS/STG cannot be reduced to a preference for articulated motion or processing of social variables since neither factor is involved in functional tool motion recognition. We conclude that a unidimensional biological–nonbiological distinction cannot fully explain the visual motion effects in lateral temporal cortex. Instead, the results suggest the existence of distinct components in right posterior temporal cortex and mSTS/STG that are associated, respectively, with biological motion and human–agent motion processing.

Introduction

Compelling findings in cognitive neuroscience show that the animate–inanimate distinction plays a fundamental role in the neural organization of perceptual and cognitive processes in both humans and monkeys (Martin et al., 1996; Caramazza and Shelton, 1998; Kriegeskorte et al., 2008; Haxby et al., 2011). In the visual motion processing stream in human lateral temporal cortex, some regions have been found to be differentially sensitive to motion of biological entities relative to other types of motion, with the posterior superior temporal sulcus (pSTS) seemingly specifically involved in the recognition of (articulated) biological motion (Beauchamp et al., 2002, 2003; Grossman et al., 2005; Saxe et al., 2004; for review, see Blake and Shiffrar, 2007; Grosbras et al., 2012).

The precise role of the pSTS and nearby regions in the recognition of biological motion remains unclear. One common caveat in biological motion research is that the critical evidence is typically based on stronger activation for human motion relative to other stimuli, most commonly scrambled motion (Grossman and Blake, 2002; but see Pelphrey et al., 2003; Gobbini et al., 2007). Thus, it is not clear whether the observed effects reflect a human–nonhuman distinction or a more general biological–nonbiological distinction along which the visual motion processing stream is organized (Chouchourelou et al., 2013). A further debate concerns whether the potential human–nonhuman distinction should be explained by the effect of socially relevant motion (Kaiser et al., 2012). Human motion is intrinsically volitional and is naturally interpreted as intentional (Lahnakoski et al., 2012). However, agency need not be social: solitary, instrument-directed (e.g., hammering), and intransitive (e.g., walking) acts need not have a social dimension. Thus, the question remains whether there are brain regions specifically involved in human (conspecific) motion recognition, as distinct from more general biological motion recognition, and whether such regions also show specialization for nonsocial human agency.

To further clarify the organization of visual motion processing regions, we distinguish among three categories of object motion: human motion, animal motion, and functional tool motion. The latter depicts a tool moving in the manner typical of its use by humans. Such motion is not articulated biological motion and has no apparent social valence, but it implies a human agent: typical tools do not move by themselves, and their patterned, systematic motion is usually the result of human manipulation. Thus, a brain region's preferential response to both human and functional tool motion compared with animal motion would reflect a human–agent effect that cannot be attributed to articulated motion or to social human agency. In contrast, a preference for human and animal motion would indicate a more general biological effect. We tested neural responses to these three types of motion in a functional magnetic resonance imaging (fMRI) experiment with healthy participants and further examined the causal role of specific brain regions in the processing of the three types of motion stimuli using the voxel-based lesion-symptom mapping (VLSM) approach (Bates et al., 2003) in a group of stroke patients.

Materials and Methods

Experiment 1: fMRI experiment with healthy participants

Participants

Sixteen college students (six males) from Beijing Normal University participated in the study for pay. All were native Mandarin Chinese speakers and right handed [Edinburgh Handedness Inventory (Oldfield, 1971)], with a mean age of 20.7 years (range, 18–23) and a mean of 14.4 years of education (range, 13–16). All had normal or corrected-to-normal vision. None had a history of psychiatric or neurological disorders or head injury, and none was taking psychoactive medication. All participants gave written informed consent under a protocol approved by the Institutional Review Board of the Beijing Normal University (BNU) Imaging Center for Brain Research.

Materials and procedures

Participants performed a delayed matching-to-sample task on motion stimuli. The stimuli were point-light animations of human (e.g., running), animal (e.g., jumping), and tool (e.g., hammering) movements. The point-light animations of human motion were created following the procedure described by Johansson (1973): thirteen small light-sensitive dots were attached to the major joints of an actor (head, shoulders, elbows, wrists, hips, knees, and ankles), who then performed the actions. A 3D motion analysis-capture system with eight cameras (Eagle-4 model; Motion Analysis Corporation, www.motionanalysis.com) recorded each action and transformed it into a point-light stimulus. The point-light animations of tool motion were created with a similar method: light-sensitive dots were attached to a set of tools (e.g., a pair of scissors had five dots: two on the blades, two on the handles, and one on the joint; mean number of dots, 8.1), so that only the motion of the tools was captured while the actor manipulated them in their typical manner. The point-light animal motion was created by scanning the Muybridge photography collection of animals in motion, sequencing the scans to obtain the main joint positions, and scripting the animations in Matlab (e.g., an elephant had 12 dots: head, nose, neck, shoulders, hip, knees, forefeet, and hindfeet; mean number of dots, 9.9; courtesy of Emily Grossman, University of California Irvine, Irvine, CA). Ten items of each category were selected, and mirror-reversed versions were constructed to increase the number of items. We also included a nonobject point-light "global motion" condition as a baseline, in which all but one of the dots in each item moved in the same direction (mean number of dots, 10); the odd dot was included to encourage sustained attention. There were 48 global motion stimuli, created with Psychtoolbox version 3.0.9 (Brainard, 1997; Pelli, 1997) in Matlab 2009b (MathWorks). For familiarization, participants were shown the entire stimulus list before entering the scanner.

In the scanner, participants viewed the stimuli binocularly through a mirror attached to the head coil, adjusted to allow foveal viewing of a back-projected monitor (refresh rate, 60 Hz; spatial resolution, 1024 × 768). The distance between the screen and participants was 110 cm. The point-light stimuli subtended ∼13.5 × 10.1° (width × height) on the screen, and each dot subtended ∼0.16°. The stimuli were presented in blocks of four items from the same condition (human, animal, tool, or global motion). For each block, participants judged whether the last item was identical to any of the first three in terms of item identity (human, animal, and tool conditions) or overall movement direction (global motion condition). Note that for the human condition the judgment was whether the items depicted the same action, whereas for tools and animals it was whether the items depicted the same object. This task was used to prevent judgments based merely on low-level perceptual features. Within a block, each of the four stimuli lasted 2.5 s, with a 1 s fixation cross between stimuli. The fixation crosses before and after the last trial were colored green to cue the beginning and end of each block; the final fixation lasted an additional 1 s to allow for the response. Participants responded "yes" or "no" by pressing a button with the right or left thumb after seeing the last fixation cross. Each block was thus 15 s long. A 6 s fixation cross occurred between blocks, as well as before the first block and after the last block. Each run included 16 blocks, 4 per category (human, tool, animal, global motion), and lasted 5 min, 42 s. There were three runs, for a total of 17 min, 6 s. Each condition comprised 12 blocks (48 trials) in total, and each motion item of the three critical conditions was repeated four to five times. Block order across runs was assigned in a Latin-square fashion; run order was pseudo-randomized across participants.
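
As a sanity check on the timing arithmetic above, the durations can be reproduced with a few lines (our own back-of-the-envelope script, not part of the original study):

```python
# Verify the block-design timing described above (our arithmetic,
# not code from the original study).

STIM_DUR = 2.5        # each point-light stimulus, in seconds
ISI = 1.0             # fixation cross between stimuli
EXTRA_RESPONSE = 1.0  # final fixation lasts an additional 1 s for the response
STIMS_PER_BLOCK = 4

# 4 stimuli, 3 fixations between them, 1 fixation after the last trial,
# plus the extra response second: 10 + 3 + 1 + 1 = 15 s
block_dur = (STIMS_PER_BLOCK * STIM_DUR
             + (STIMS_PER_BLOCK - 1) * ISI
             + ISI + EXTRA_RESPONSE)
assert block_dur == 15.0

BLOCKS_PER_RUN = 16
REST = 6.0  # fixation between blocks, and before the first/after the last block
run_dur = BLOCKS_PER_RUN * block_dur + (BLOCKS_PER_RUN + 1) * REST
assert run_dur == 342.0        # 5 min, 42 s per run

assert 3 * run_dur == 1026.0   # 17 min, 6 s across the three runs
```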

MRI data acquisition

Structural and functional MRI data were collected with a 3T Siemens Trio Tim scanner at the BNU imaging center. A high-resolution 3D structural data set was acquired with a 3D magnetization-prepared rapid gradient echo (MPRAGE) sequence in the sagittal plane [repetition time (TR), 2530 ms; echo time (TE), 3.39 ms; inversion time (TI), 1100 ms; field of view (FOV), 200 × 200 mm2; flip angle (FA), 7°; matrix size, 256 × 256; voxel size, 1 × 1 × 1.33 mm3; 144 slices; slice thickness, 1.33 mm]. BOLD signals were measured with an EPI sequence (TR, 2000 ms; TE, 30 ms; FOV, 200 × 200 mm2; FA, 90°; matrix size, 64 × 64; voxel size, 3.125 × 3.125 × 4 mm3; 33 axial slices; slice thickness, 4 mm). E-prime 2.0 was used for stimulus presentation and response recording. Total scanning time for each participant was about 30 min.

fMRI data analysis

fMRI data were analyzed using SPM8 (Wellcome Trust Centre for Neuroimaging, http://www.fil.ion.ucl.ac.uk/spm/) and Matlab 7.9 (MathWorks, http://www.mathworks.com). The first 6 s (3 volumes) of each functional run were discarded so that only data collected after the scanner had reached magnetic steady state were included. Preprocessing of the functional data included 3D motion correction with respect to the mean functional image, coregistration of the 3D structural image to that mean image, normalization of the functional images to Montreal Neurological Institute (MNI) standard space using unified segmentation of the structural image, and spatial smoothing (Gaussian filter, 6 mm full-width at half-maximum). During normalization to MNI space, all functional images were resampled to 3 × 3 × 3 mm3 resolution.

All functional data were then analyzed using the general linear model (GLM). We included four regressors of interest corresponding to the four conditions (three critical categories and one global motion condition) and six head motion parameters as regressors of no interest.
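
To make the model concrete, here is a minimal numpy sketch of how such a design matrix could be assembled. The HRF shape, block onsets, and condition ordering below are illustrative placeholders (the study used SPM8's canonical machinery), not the actual experimental values:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0       # repetition time (s), as in the acquisition above
N_VOLS = 171   # one 342 s run at TR = 2 s (before discarding dummies)
DT = 0.1       # high-resolution time grid for convolution (s)

def hrf(t):
    """Double-gamma hemodynamic response approximation (illustrative shape only)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def block_regressor(onsets, duration=15.0):
    """Boxcar over one condition's blocks, convolved with the HRF, sampled at the TR."""
    t_hires = np.arange(0, N_VOLS * TR, DT)
    boxcar = np.zeros_like(t_hires)
    for onset in onsets:
        boxcar[(t_hires >= onset) & (t_hires < onset + duration)] = 1.0
    conv = np.convolve(boxcar, hrf(np.arange(0, 32, DT)))[: len(t_hires)]
    return conv[:: int(TR / DT)]

# Hypothetical onsets: 16 blocks of 15 s separated by 6 s fixation, first at 6 s
onsets = 6.0 + 21.0 * np.arange(16)
labels = np.array(["human", "animal", "tool", "global"] * 4)  # placeholder order

X = np.column_stack(
    [block_regressor(onsets[labels == c]) for c in ("human", "animal", "tool", "global")]
)
motion = np.zeros((N_VOLS, 6))  # six head-motion nuisance regressors (placeholders)
design = np.column_stack([X, motion, np.ones(N_VOLS)])  # conditions + motion + intercept
```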

In the whole-brain analyses, random-effects GLM analyses were conducted on the group data. We examined three effects of interest: human motion, human–agent, and biological motion effects. Regions showing a human motion effect were derived from the contrast of human motion versus global motion, with the threshold set at p < 0.05 corrected with the false discovery rate (FDR) and a cluster size (k) of >20 voxels (540 mm3). Regions showing a human–agent effect were computed from the conjunction of human motion > animal motion and functional tool motion > animal motion. Regions showing a biological motion effect were computed from two kinds of conjunctions: (1) the conjunction of human motion > tool motion and animal motion > tool motion and (2) the conjunction of human motion > global motion and animal motion > global motion. We used these two baselines for the biological motion effects because the pSTS has been shown to be activated by implied motion stimuli (Beauchamp et al., 2002; Peuskens et al., 2005), and functional tool motion may imply biological motion (human hand motion) to some extent. The threshold for each individual contrast in the conjunction analyses was set at FDR p < 0.05 and k > 20 voxels. All results are shown in MNI space and projected onto the MNI brain surface using BrainNet Viewer (http://www.nitrc.org/projects/bnv/) (Xia et al., 2013).
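
A conjunction of this kind can be expressed as the voxelwise intersection of the individually thresholded contrast maps. A minimal sketch, assuming each contrast yields a 3D map of uncorrected p values (the function names and FDR/cluster utilities here are our illustration, not SPM8 code):

```python
import numpy as np
from scipy import ndimage
from statsmodels.stats.multitest import multipletests

def fdr_mask(p_map, q=0.05):
    """Boolean mask of voxels surviving Benjamini-Hochberg FDR at level q."""
    reject, *_ = multipletests(p_map.ravel(), alpha=q, method="fdr_bh")
    return reject.reshape(p_map.shape)

def cluster_filter(mask, k=20):
    """Keep only connected clusters larger than k voxels."""
    labeled, n_clusters = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n_clusters + 1))
    keep_labels = 1 + np.flatnonzero(np.asarray(sizes) > k)
    return np.isin(labeled, keep_labels)

def conjunction(p_map_a, p_map_b, q=0.05, k=20):
    """Voxels significant in BOTH contrasts, e.g., human > animal AND tool > animal."""
    joint = fdr_mask(p_map_a, q) & fdr_mask(p_map_b, q)
    return cluster_filter(joint, k)
```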

Experiment 2: VLSM experiment with patients

Participants

The stroke patients and healthy controls who participated in this behavioral and imaging study were all Mandarin Chinese native speakers with normal vision and hearing (without or with correction), and all provided written informed consent. The study was approved by the Institutional Review Board of the BNU Imaging Center for Brain Research. More detailed information about the two groups of participants is presented below.

Patients.

Seventy-seven individuals (63 males) who had suffered a stroke were recruited from the China Rehabilitation Research Center. Their mean age was 48 years (SD, 12; range, 20–76), and their mean length of formal education was 13 years (SD, 3; range, 2–19). None had previous neurological disorders, and none had other neurological or psychiatric illnesses, such as alcohol abuse or severe depression. The behavioral and imaging data were collected no earlier than 1 month after stroke onset. All patients could understand oral and/or written task instructions. The Edinburgh Handedness Inventory was used to assess handedness (74 right handed, 3 left handed).

Healthy participants.

Fifty healthy participants (26 males) took part in the present study. They had a mean age of 50 years (SD, 11; range, 26–72) and a mean education of 13 years (SD, 4; range, 6–22). All but two participants were right handed. Patients and healthy participants did not differ significantly in age (t(125) < 1), education level (t(125) < 1), or handedness (χ2(1) < 1), but the groups differed in gender distribution (χ2(1) = 12.85, p < 0.001).

Behavioral tasks

Motion–picture verification.

A motion–picture verification task was developed for three types of stimuli: humans (20 motion items constructed into 40 trials, half requiring "yes" responses and half "no" responses), animals (15 motion items, 30 trials), and tools (22 motion items, 44 trials). Each trial consisted of a point-light motion animation (e.g., human, a person walking; animal, a bird flying; tool, a hammer hammering) presented above a black-and-white picture (e.g., human, a picture depicting a person kicking; animal, a bird; tool, a hammer). The point-light animations were created using the same procedure as in Experiment 1. For the human motion items, subjects judged whether the animated action matched the action depicted in the picture by pressing the "yes" or "no" button on the touch screen; for the animal and tool motion conditions, they judged whether the animated action was a typical action associated with the object in the picture. Animations lasted about 1–2 s, and the picture was displayed until the subject's response or until a 6 s deadline. Responses were scored 1 if correct and 0 if wrong. If no response was given within the deadline, 0.5 points were assigned, because a random guess on a yes/no trial has a 0.5 chance of being correct; scoring these trials as 0 would bias against patients who tended to be cautious relative to those who relied more on guessing. Such cases occurred in <1% of trials (6 of 8778, 0.07%).
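
The scoring rule reduces to a simple three-way case; a tiny sketch of it (our restatement, with hypothetical names):

```python
def score_trial(response, correct_answer):
    """Score one verification trial as described above: 1 if correct, 0 if wrong,
    and 0.5 for a timeout, since a random yes/no guess is correct half the time."""
    if response is None:  # no button press within the 6 s deadline
        return 0.5
    return 1.0 if response == correct_answer else 0.0
```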

Picture–word verification.

A picture–word verification task was performed to control for any effects in the motion–picture verification task that might originate from picture processing rather than motion identification. This task had the same trial structure as the motion–picture verification task except that a visual word, rather than a point-light animation, was presented along with the picture. Participants judged whether the word and picture matched. There were 20 human action trials (e.g., the word "kick" with the picture of a person kicking), 20 tool trials (e.g., the word "hammer" with the picture of a hammer), and 20 animal trials (e.g., the word "dog" with the picture of a dog). No trials timed out without a response in any patient.

Imaging data acquisition

Each patient was scanned at the Imaging Center of China Rehabilitation Research Center (Signa Excite 1.5T; GE Healthcare). We obtained two types of whole-brain structural images: 3D T1 and FLAIR T2 images. The 3D T1 image is a dedicated high-resolution T1-weighted, three-dimensional MPRAGE image acquired in the sagittal plane with the following parameters: TR, 12.26 ms; TE, 4.2 ms; TI, 400 ms; FOV, 250 × 250 mm2; FA, 15°; matrix size, 512 × 512; voxel size, 0.49 × 0.49 × 0.70 mm3; 248 slices. The FLAIR T2 image is a fluid-attenuated inversion recovery T2-weighted image acquired in the axial plane with the following parameters: TR, 8002 ms; TE, 127.57 ms; TI, 2000 ms; FOV, 250 × 250 mm2; FA, 90°; matrix size, 512 × 512; voxel size, 0.49 × 0.49 × 5 mm3; 28 slices. Two identical sequences of the 3D T1 image were collected and averaged to improve the signal-to-noise ratio. We performed all analyses on the 3D T1 image and used the FLAIR T2 image for visual reference when manually drawing patients' lesions on the 3D T1 image (Rorden et al., 2007).

Imaging data preprocessing

The two sequences of 3D T1 images of each patient were first coregistered in native space and then averaged using SPM5. The FLAIR T2 images were coregistered and resliced to the native-space averaged 3D images with SPM5. Using MRIcroN software (Rorden et al., 2007), two experimenters manually drew each patient's lesion contour on the native-space averaged 3D T1 image slice by slice, visually referring to the FLAIR T2 images. The procedure was supervised by an experienced radiologist. The reliability of the two experimenters, calculated on the same four patients [mean percentage volume difference, 9 ± 8 and 4 ± 3; mean percentage discrepant voxels, 7 ± 4 and 6 ± 2 (a voxel was counted as discrepant if it lay more than 2 voxels from the other experimenter's lesion volume)], was comparable with the inter-rater reliability measures reported previously (Fiez et al., 2000). The structural images of each patient were resliced to 1 × 1 × 1 mm3 voxel size and registered to Talairach space via BrainVoyager QX version 2.0 (www.brainvoyager.com). We used the ANTS software package (www.ants.com) to extract the affine transformation matrix between native and Talairach spaces, which was then used to transform the lesion descriptions into Talairach space.

Behavioral performance standardization

Compared with case studies (Battelli et al., 2003) or lesion overlap analysis (Heberlein et al., 2004), which display the lesion (overlap) maps of behaviorally impaired patients with descriptive statistics, VLSM performs inferential statistical comparisons across voxels, making use of continuous behavioral and lesion information (Bates et al., 2003). Previous studies on biological motion using the VLSM approach (Saygin, 2007), however, tended to use patients' behavioral scores directly, without considering the distribution of performance in healthy controls. The "raw" behavioral scores in those studies may be contaminated by demographic factors (e.g., age, education, gender) and may not accurately reflect the severity of impairment (Crawford and Garthwaite, 2006). An ideal behavioral measure should take into account the performance distribution in the reference healthy population. We therefore adopted the method developed by Crawford and Garthwaite (2006), which does so, and transformed raw accuracies into standardized t scores on the basis of the healthy population's distribution for each behavioral task and each patient. For each task, we first established a regression model on the basis of the 50 healthy control subjects (the dependent variable was accuracy; the predictors were age, education, and gender). A predicted value for each patient was then obtained by entering his or her demographic information into the regression model. A discrepancy value (Discrepancy_patient) was calculated as the difference between the observed and predicted values. We then computed the corrected SE of estimate for the patient (SE_patient) using the following formula:

\[
SE_{\mathrm{patient}} = S_{yx}\,\sqrt{1 + \frac{1}{N} + \frac{\sum_{i=1}^{k} z_{i0}^{2}\,r_{ii} + 2\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} z_{i0}\,z_{j0}\,r_{ij}}{N-1}}
\]

where Syx and N are the SE of estimate and the number of subjects for the control group, respectively; rii and rij are the main-diagonal and off-diagonal elements of the inverted correlation matrix for the k predictor variables (k = 3; i.e., age, education, gender), respectively; and z0 (z10, …, zk0) gives the patient's scores on the predictor variables in z-score form. The patient's t score was then calculated as t-score_patient = Discrepancy_patient/SE_patient (for details, see Crawford and Garthwaite, 2006). In this way, each patient had a t score on each task, which was used in the subsequent analyses as the behavioral performance index. Note that we also performed VLSM analyses using patients' raw accuracies directly as behavioral measures for comparison purposes; the results were highly consistent with those using the normalized t scores and are not presented here for simplicity.
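
A compact numpy sketch of this standardization, written from the formula above (our illustration, not the authors' code; function and variable names are ours):

```python
import numpy as np

def crawford_t(controls_X, controls_y, patient_x, patient_y):
    """Crawford & Garthwaite (2006)-style t score for one patient on one task.

    controls_X : (N, k) control predictors (age, education, gender coded 0/1)
    controls_y : (N,)   control accuracies
    patient_x  : (k,)   the patient's predictor values
    patient_y  : float  the patient's accuracy
    """
    N, k = controls_X.shape
    # Regression of accuracy on the demographic predictors in controls
    A = np.column_stack([np.ones(N), controls_X])
    beta, *_ = np.linalg.lstsq(A, controls_y, rcond=None)
    resid = controls_y - A @ beta
    s_yx = np.sqrt(resid @ resid / (N - k - 1))  # SE of estimate

    predicted = np.concatenate([[1.0], patient_x]) @ beta
    discrepancy = patient_y - predicted

    # Patient's predictors in z-score form relative to the controls
    z0 = (patient_x - controls_X.mean(0)) / controls_X.std(0, ddof=1)
    R_inv = np.linalg.inv(np.corrcoef(controls_X, rowvar=False))
    # z0 @ R_inv @ z0 = sum_i z_i0^2 r_ii + 2 * sum_{i<j} z_i0 z_j0 r_ij
    se_patient = s_yx * np.sqrt(1 + 1 / N + (z0 @ R_inv @ z0) / (N - 1))
    return discrepancy / se_patient
```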

Lesion-symptom mapping

A VLSM analysis (Bates et al., 2003; Rorden et al., 2007) was conducted on the data of the 77 patients using the NPM (nonparametric mapping) program in MRIcroN and the Voxbo brain-imaging package (www.voxbo.org). Voxels in which fewer than five patients had lesions were excluded from the analysis. For each voxel entered in the analysis, the patients were divided into a lesion group and an intact group. For each of the three motion–picture verification tasks (human motion, animal motion, and tool motion), the behavioral performance index (the t scores described above) was compared between these two groups, while controlling for picture recognition effects either by regressing out the performance indices (t scores) on the corresponding control tasks (picture–word verification) or by excluding patients who were impaired in the picture–word verification task. Given that the behavioral index scores for lesioned and intact groups across voxels may not comply with the assumptions of the t test, a nonparametric Brunner–Munzel (BM) test (Brunner and Munzel, 2000) was used for the statistical comparison in VLSM (Rorden et al., 2007; Medina et al., 2010). An independent-samples t test was also performed in the main analyses for comparison purposes. To correct for multiple comparisons, the significance threshold was set at FDR-corrected p < 0.005 for all analyses unless otherwise noted. A whole-brain VLSM z-map (BM test) or t-map (t test) was then obtained for each task of interest. To further consolidate our findings while excluding the potential confounding influence of multiple lesions, the same VLSM analysis was also performed on the 38 stroke patients with unilateral, single, focal lesions. The overall data pattern was highly similar for the entire group and for this subset.
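
In outline, the voxelwise procedure amounts to the following sketch (a minimal illustration of the approach, not the NPM/Voxbo implementation; regressing out the control task is omitted for brevity):

```python
import numpy as np
from scipy.stats import brunnermunzel

def vlsm_map(lesion_masks, t_scores, min_patients=5):
    """Voxelwise lesion-symptom map via the Brunner-Munzel test.

    lesion_masks : (n_patients, n_voxels) boolean lesion status per voxel
    t_scores     : (n_patients,) standardized behavioral scores for one task
    """
    n_patients, n_voxels = lesion_masks.shape
    stat_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = lesion_masks[:, v]
        if lesioned.sum() < min_patients:  # skip sparsely lesioned voxels
            continue
        stat, p = brunnermunzel(t_scores[lesioned], t_scores[~lesioned])
        stat_map[v] = stat                 # compare the two groups at this voxel
    return stat_map
```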

Human motion, human–agent, and biological motion effects.

Separate VLSM maps were obtained for human, animal, and tool motion. Each map was transformed into a binary map, in which each significant voxel (FDR p < 0.005) was scored as 1 and all others as 0. Conjunction maps were then obtained using a rationale similar to that of the fMRI experiment. For the human–agent map, we first created a human > animal map by removing the binary animal motion map from the binary human motion map, and a tool > animal map by removing the animal map from the tool map; the human > animal and tool > animal maps were then overlaid, indicating regions that were significant in both the human and tool maps but not in the animal map. Similarly, two biological motion maps were obtained: the first by subtracting the binary tool map from the binary human map and from the binary animal map, respectively, and then overlaying the resulting human > tool and animal > tool maps; the second by overlaying the binary human and animal maps directly, without comparison with the tool map.
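
The map algebra described above reduces to simple boolean operations on the binarized maps; a minimal sketch (our notation; the inputs are boolean arrays marking voxels significant at FDR p < 0.005):

```python
def human_agent_map(human, animal, tool):
    """Significant for human AND tool motion, excluding animal-motion voxels."""
    return human & tool & ~animal

def biological_map_stringent(human, animal, tool):
    """Significant for human AND animal motion, excluding tool-motion voxels."""
    return human & animal & ~tool

def biological_map_conservative(human, animal):
    """Significant for both human and animal motion (no comparison with tool)."""
    return human & animal
```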

Results

Experiment 1

One run of one participant was discarded from the behavioral and imaging data analyses because data collection for that run was incomplete, owing to an unexpected pause in stimulus presentation by the E-prime program.

Behavioral results

The mean accuracy of 16 participants in the motion judgment task for each condition was the following: humans, 0.95 ± 0.09 (SD); animals, 0.95 ± 0.06; tools, 0.93 ± 0.09; global motion, 0.94 ± 0.10. There was no significant difference among the four conditions (F(3,45) = 0.21, p = 0.89). Because the subjects were asked to respond only after seeing the fixation cue, response times were not meaningful and were not analyzed.

fMRI results

Given our focus on the lateral temporal cortex, below we present and discuss the results in right and left lateral temporal cortex and adjacent regions, including the following regions of the automated anatomical labeling (AAL) template (Tzourio-Mazoyer et al., 2002): right superior temporal gyrus (STG), middle temporal gyrus (MTG), inferior temporal gyrus (ITG), superior temporal pole, middle temporal pole, and angular gyrus. Whole-brain results are listed in Table 1.

Table 1. Whole-brain results for Experiments 1 (fMRI) and 2 (VLSM with nonparametric test)

Human motion effect.

To replicate previous findings on human motion recognition, we compared the human motion and global motion conditions. The contrast of human motion > global motion revealed highly significant effects in bilateral occipital and posterior temporal cortex (Fig. 1A), encompassing the posterior STG, MTG, and ITG (peak MNI coordinates in the lateral temporal cortex: 57, −63, 3). A significant cluster was also observed in the right middle STS/STG (mSTS/STG) (coordinates: 54, −3, −9). These results are in line with biological motion effects in the literature, in which posterior and middle clusters of temporal cortex have been reported (Allison et al., 2000; Grosbras et al., 2012).

Figure 1.

Functional MRI results in the lateral temporal lobe for healthy participants in Experiment 1. A, Human motion effect: the contrast of human motion > global motion. B, Human–agent motion effect: the conjunction of human motion > animal motion and tool motion > animal motion. C, Biological motion effect 1: the conjunction of human motion > tool motion and animal motion > tool motion. D, Biological motion effect 2: the conjunction of human motion > global motion and animal motion > global motion. The threshold for each individual contrast was set at p < 0.05 (FDR). H, Human motion; T, tool motion; A, animal motion; G, global motion.

Human–agent motion effect.

To explore the regions showing responses to human–agent motion stimuli, we computed whole-brain conjunction analyses of human motion > animal motion and tool motion > animal motion. The results (Fig. 1B) revealed significant clusters in the middle portion of the right STS/STG (center coordinates: 51, −25, 0) and a small cluster in left MTG (center coordinates: −62, −51, 5).

Biological motion effect.

To identify the brain regions showing responses to biological motion stimuli, we calculated whole-brain conjunction analyses of human motion > functional tool motion and animal motion > functional tool motion. We observed a highly significant cluster in the right occipital-temporal cortex (OTC) (center coordinates: 48, −68, 8), close to the well-documented extrastriate body area (EBA) (Fig. 1C). This activation also extended into right pSTS when global motion was used as the baseline, i.e., in the conjunction analysis of human motion > global motion and animal motion > global motion (Fig. 1D). This latter contrast also revealed a significant cluster in left OTC, encompassing left MT.

Summary.

We replicated the classical finding that bilateral occipital-temporal cortex, pSTS, and mSTS/STG are sensitive to human motion stimuli. Furthermore, in two areas of the right temporal cortex, different patterns of results were obtained: right mSTS/STG showed a human–agent motion effect and, more posteriorly, the right OTC showed a biological motion effect.

Experiment 2

Behavioral results

Mean accuracies and SDs in the motion–picture and picture–word verification tasks for the 77 patients and 50 healthy participants were as follows. In the patient group, for the motion–picture verification task, tool motion (0.75 ± 0.12) was recognized more accurately than human (0.67 ± 0.13) and animal (0.68 ± 0.14) motion (ps < 0.001), with no significant difference between human and animal motion (t < 1); for the picture–word task, there was no significant difference among the three categories (human, 0.95 ± 0.08; animal, 0.94 ± 0.08; tool, 0.94 ± 0.09; ps > 0.05). Similar patterns were found in healthy participants (motion–picture verification: human motion, 0.81 ± 0.11; animal motion, 0.79 ± 0.13; tool motion, 0.87 ± 0.08; picture–word verification: human, 0.99 ± 0.02; animal, 0.98 ± 0.03; tool, 0.98 ± 0.03), except that performance on the human stimuli in the picture–word verification task was better than that for the other two categories (ps < 0.001). The patient group's performance for all categories in both tasks was significantly worse than that of the healthy controls (F(1,125) = 23, p < 0.001). The patient group's performance also varied more, with greater coefficients of variation (SD/mean) than the control group even in the picture–word verification task, where both groups showed high mean accuracies (patients: human, 8%; animal, 8%; tool, 10%; controls: human, 2%; animal, 3%; tool, 3%). We further observed that performance on the three categories of motion stimuli in the motion–picture verification task was correlated in both the patient group (rhuman–animal = 0.65, rhuman–tool = 0.65, ranimal–tool = 0.60; ps < 0.001) and the healthy participant group (rhuman–animal = 0.61, rhuman–tool = 0.69, ranimal–tool = 0.55; ps < 0.001). These between-category r values did not differ significantly for either group (ps > 0.05).

VLSM results

Among the 77 stroke patients, 26 had a left-hemisphere lesion, 15 had a right-hemisphere lesion, and 36 had bilateral lesions. Thirty-eight patients had unilateral, single, focal lesions (22 left and 16 right) and are referred to below as the single-lesion group. VLSM analyses were performed separately for all 77 stroke patients and for the 38 single-lesion patients (Fig. 2). The lesion distribution patterns for the whole group (n = 77) and for the single-lesion patients (n = 38) are presented in Figure 2E. The analyses included voxels that were lesioned in at least five patients, which resulted in coverage of a substantial portion of the bilateral temporal and frontal lobes, parts of the parietal and occipital lobes, and many subcortical and cerebellar regions. Figure 2F shows power maps reflecting the probability of each voxel reaching statistical significance with α set to p < 0.05 (Cohen, 1977). Given the variation in power across brain regions, negative results should be interpreted in the context of such variation.

Figure 2.

VLSM results in the right lateral temporal lobe for brain-damaged patients in Experiment 2. A–D, Results comparing performance between the lesioned and intact groups at each voxel. The first three columns show results with the control task performance (picture–word verification task) regressed out. Columns 1 and 2 show the VLSM results for all 77 patients with the BM test and the independent-samples t test, respectively. Column 3 shows the results of the BM test for the 38 stroke patients with unilateral, single, focal lesions. Column 4 shows the results of the BM test with a different way of controlling for the picture recognition effect: excluding from the whole group of 77 patients those who were impaired in picture recognition. The threshold was set at an FDR-corrected p < 0.005 for all columns except the third (FDR-corrected p < 0.05). A, Human motion effect. B, Human–agent motion effect: the regions common to the human motion and tool motion effects, excluding areas with an animal motion effect. Human motion maps are those in A; maps of tool motion and animal motion were obtained using the same method as for human motion. C, Biological motion effect 1: the regions common to the human motion and animal motion effects, excluding areas with a tool motion effect. D, Biological motion effect 2: the regions common to the human motion and animal motion effects. E, Lesion overlap maps of all 77 patients and the 38 single-lesion patients. The n value of each voxel denotes the number of patients with a lesion there; voxels with n ≥ 5 (all colored voxels except cyan) entered the VLSM analysis. F, Power maps of all 77 patients and the 38 single-lesion patients. The value of each voxel reflects the probability of that voxel reaching statistical significance with α set to p < 0.05; voxels with power below 0.50 are displayed in cyan. H, Human motion; T, tool motion; A, animal motion; L, left; R, right.

As was done for the fMRI study, here we present and discuss the results for lateral temporal cortex and adjacent regions; whole-brain results are listed in Table 1. Figure 2 displays the VLSM results for the human motion effect (Fig. 2A), the human–agent motion effect (Fig. 2B), and the biological motion effect (Fig. 2C,D). For each type of motion effect, results of four analyses are shown. The first three columns show results with the control task performance (picture–word verification task) regressed out: columns 1 and 2 show the VLSM results for all 77 patients with the BM test and the t test, respectively, and column 3 shows the results of the BM test for the 38 single-lesion patients. The last column shows a different way of controlling for the picture recognition effect: we excluded from the whole patient group those patients whose performance on the picture–word verification task was more than 2 SDs below that of controls in any of the three motion categories. This procedure excluded 39 patients, leaving 38 for the analyses; the resulting maps with the BM test are shown in column 4. As can be seen in Figure 2, highly consistent patterns were obtained across these different analyses. For simplicity, only the detailed coordinate and cluster-size information for column 1 (BM test with all patients) and column 3 (BM test with the 38 single-lesion patients) is presented below (and in Table 1).

The VLSM results for the human motion recognition task are presented in Figure 2A. In all analyses, a strong right lateralization was apparent, covering a large portion of the right mSTS/STG (extending into MTG). No significant clusters were obtained in the left temporal cortex in any of the analyses. A large cluster was obtained at the FDR p < 0.005 threshold in the whole-group (77 patients) analysis (center coordinates: 51, −11, −9; 8452 mm3) and at the FDR p < 0.05 threshold in the 38 single-lesion patients (center coordinates: 48, −16, −3; 20,696 mm3).

A human–agent motion effect was found in the right mSTS/STG in all analyses (Fig. 2B). No significant cluster was observed in the left temporal cortex. A large cluster was obtained at the FDR p < 0.005 threshold in the whole group analysis (center coordinates: 47, −11, −9; 4697 mm3) and at the FDR p < 0.05 threshold in the subgroup of 38 single-lesion patients (center coordinates: 51, −15, −4; 17,438 mm3).

For the biological motion contrasts, a small cluster in the right STG was observed only in the conservative contrast (human and animal; Fig. 2D) for the whole group analysis (center coordinates: 45, −12, −10; 108 mm3). No significant cluster in the right temporal lobe was found in other analyses, including the stringent contrast (human > tool and animal > tool; Fig. 2C) for the whole group or the conservative and stringent contrasts for the 38 single-lesion patient group (Fig. 2C,D).

Results across Experiments 1 and 2

In both the fMRI experiment and the patient VLSM experiment, we found a region in right mSTS/STG that is sensitive to human–agent motion (stronger effects for human motion and tool motion relative to animal motion), whereas the regions showing the biological motion effect differed across the two experiments.

To quantify the convergence between the two experiments, we first assessed the degree of their overlap by calculating an overlap index (Bracci et al., 2012). We treated the significant regions in right temporal cortex for human–agent motion obtained in the fMRI map (Fig. 1B) and the VLSM map (Fig. 2B) as regions of interest (ROIs) and divided the volume common to the two ROIs by the volume of the smaller of the two ROIs. The left temporal cortex was not included in this analysis because no voxels were obtained in the VLSM map. When the whole group of 77 patients was considered, the common volume between fMRI human–agent ROI (1620 mm3) and VLSM human–agent ROI (4697 mm3) was 343 mm3 (center coordinates: 49, −23, 0) and the overlap index was 21%. When the 38 single-lesion patients were considered, the common volume was 1155 mm3 (center coordinates: 48, −14, −5) and the overlap index was 71%. In contrast, there was zero overlap between the fMRI biological motion ROI and the VLSM biological motion ROI for both the stringent contrast (biological motion 1; Figs. 1C, 2C) and the conservative contrast (biological motion 2; Figs. 1D, 2D) in both kinds of patient group analyses. Thus, the overlap index of biological motion was 0%.
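
The overlap index used here has a one-line definition; a minimal sketch with boolean ROI masks (our notation):

```python
import numpy as np

def overlap_index(roi_a, roi_b):
    """Shared volume of two boolean ROI masks divided by the smaller ROI's volume,
    following the index of Bracci et al. (2012) as described above."""
    common = np.logical_and(roi_a, roi_b).sum()
    return common / min(roi_a.sum(), roi_b.sum())
```

For example, with the single-lesion group, 1155 mm3 of common volume over the smaller (fMRI) ROI of 1620 mm3 gives 1155/1620 ≈ 71%, matching the value reported above.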

We also assessed the effects of each experiment in the ROIs defined by the other experiment. We first considered the bilateral STS/STG clusters showing human–agent motion effects and biological motion effects defined by the fMRI experiment (Fig. 1B,D) and extracted the number of lesioned voxels in each ROI as the lesion volume index for each patient. We then correlated the lesion volume in each ROI with the behavioral performance index on the motion–picture verification task in each category (human, animal, tool) across patients, while regressing out the scores on the picture–word verification task in the corresponding category and whole-brain lesion volume. In this way, we obtained a correlation coefficient (r) for each category in each ROI. The reverse analyses were also performed: for the VLSM-defined human–agent effect ROI and biological motion ROI obtained in Experiment 2 (Fig. 2B,D, BM test), we extracted the mean BOLD β values of each motion category for each healthy participant in Experiment 1 and plotted the effects of the three critical categories, indexed by the β differences between each category and global motion. Figure 3 presents the results with all 77 patients. The results with the 38 single-lesion patients showed similar overall patterns, especially for the human–agent effects, and are described below.
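
The lesion–behavior correlations with covariates regressed out amount to correlating regression residuals; a generic sketch (our code, with hypothetical variable names):

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing both on the covariates.

    x, y       : (n,) arrays, e.g., ROI lesion volume and motion-task t scores
    covariates : (n, c) array, e.g., control-task t scores and total lesion volume
    """
    A = np.column_stack([np.ones(len(x)), covariates])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]  # residualize x
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]  # residualize y
    return np.corrcoef(rx, ry)[0, 1]
```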

Figure 3.

Convergence analysis across Experiments 1 and 2. A, Convergence analysis of the human–agent effect across experiments. The top row shows the ROIs in the left and right temporal lobes with significant human–agent motion effects in the fMRI experiment; the bar plots represent the correlation between lesion volume in each ROI and behavioral performance in each category across patients, regressing out performance on the corresponding control task (picture–word verification) and total lesion volume. The bottom row shows results for the healthy participants in the human–agent ROI defined by the patient VLSM experiment: we extracted the mean BOLD β values of each motion condition in Experiment 1 in the ROI and plotted the β differences between each of the three motion categories and the global motion baseline. B, Convergence analysis of the biological effect across experiments. The top row displays the lesion–behavior correlations of the three motion categories in the fMRI-defined biological motion effect ROI (human motion > global motion and animal motion > global motion). The bottom row shows the fMRI results in the VLSM-defined biological motion ROI (not reflected on the surface). #p < 0.10; *p < 0.05; **p < 0.01. L, Left; R, right.

For the human–agent ROIs, the results showed that the effects converged well across the two experiments. Specifically, when all 77 patients were considered (Fig. 3A), in the ROIs defined in the healthy participant fMRI experiment (top row), for the patient data there was a significant correlation of the lesion volume in the rmSTS/STG ROI with human motion (r = −0.35, p < 0.01) and tool motion (r = −0.24, p < 0.05) performance but not with animal motion performance (r = −0.15, p = 0.21). The difference between human motion and animal motion was significant (p < 0.05), but not the difference between tool motion and animal motion (p = 0.21). The left pMTG ROI showed significant effects for all three types of motion stimuli (ps < 0.05), with no statistically significant differences among them (ps > 0.10). In the VLSM-defined human–agent ROI, the effects of human motion and tool motion in the fMRI experiment were significantly or marginally significantly stronger than animal motion (t(15) = 3.15, p < 0.01 and t(15) = 1.87, p = 0.08, respectively).

When the patient data for 38 single-lesion patients were considered, in the ROIs defined in the fMRI experiment, there was a significant correlation of the lesion volume in the rmSTS/STG ROI with human motion (r = −0.50, p < 0.01) and tool motion (r = −0.39, p < 0.05) performance but not with animal motion performance (r = −0.22, p = 0.20). The difference between human motion and animal motion was significant (p < 0.05), but not the difference between tool motion and animal motion (p = 0.20). For the left pMTG ROI, there was a significant correlation of the lesion volume in this ROI with animal motion performance (r = −0.38, p < 0.05) but not with human motion (r = −0.31, p = 0.07) or tool motion (r = −0.21, p = 0.21) performance. There was no significant difference among the three categories (ps > 0.05). In the VLSM-defined human–agent ROI, the effects of human motion and tool motion in the fMRI experiment were both significantly stronger than animal motion (t(15) = 3.50, p < 0.01 and t(15) = 2.96, p < 0.01, respectively).

For the biological motion ROIs, the results were rather divergent across the two experiments (Fig. 3B). When all 77 patients were considered, for the ROI obtained in the conservative biological motion contrast (human > baseline and animal > baseline) in the fMRI experiment, lesion volume tended to be more strongly correlated with human motion and animal motion performance than tool motion. The right-hemisphere ROI reached marginal significance for animal motion but not for the other two motion categories (Fig. 3B, top row; human motion: r = −0.17, p = 0.14; animal motion: r = −0.20, p = 0.09; tool motion: r = −0.16, p = 0.18). No significant effect was observed for any category in the left temporal ROI (rHuman = 0.19; rAnimal = 0.17; rTool = 0.08; ps > 0.10). For the VLSM-defined biological motion ROI (conservative contrast), the effects in the healthy participant fMRI experiment did not reach significance for any category either (rHuman = 0.16; rAnimal = 0.10; rTool = 0.07; ps > 0.20).

The pattern with the subgroup of the 38 single-lesion patients was again similar to that with the whole patient group analysis. For the ROI obtained in the conservative biological motion contrast in the fMRI experiment, lesion volume tended to be more strongly correlated with human motion and animal motion performance than tool motion. The right-hemisphere ROI reached significance for human motion (r = −0.35, p < 0.05) but not for the other two motion categories (animal motion: r = −0.30, p = 0.08; tool motion: r = −0.26, p = 0.12). For the left temporal ROI, no significant effect was obtained for any category (human motion, r = −0.07; animal motion, r = −0.24; tool motion, r = −0.18; ps > 0.14). No significant clusters were obtained in the VLSM analyses to allow for the reverse analysis.

Discussion

We examined whether human (conspecific) motion recognition and biological motion recognition are associated with distinct brain regions and, if so, what characterizes them. We compared the effects of three types of motion, human, animal, and functional tool motion, in a neuroimaging experiment and in a VLSM experiment. In the fMRI experiment with healthy participants, we observed two regions in the temporal lobe that are involved in different aspects of motion recognition: the middle portion of the right superior temporal gyrus/sulcus is more important for processing motion produced by human agents (human motion and functional tool motion) relative to animal motion, whereas the more posterior region in the bilateral occipital-temporal cortex responded more strongly to biological motion (human motion and animal motion) relative to nonbiological object motion (tool motion). The finding of a region seemingly specialized for the processing of human agency was confirmed in our VLSM study with brain-damaged patients, in which damage to right mSTG/STS was associated with impairment in the recognition of both human motion and functional tool motion, relative to animal motion. Highly consistent results were obtained across analyses when considering all stroke patients and only stroke patients with single, unilateral lesions. One important caveat is that the low-level motion properties of our stimuli and the task requirements were not perfectly matched across conditions. However, the common effects of human and tool motion over animal motion cannot readily be explained by such differences: human and animal motion were more similar in motion properties (e.g., number of dots presented and articulated manner), whereas animal and tool motion had more similar task requirements (i.e., identification of object-specific actions). Furthermore, behavioral accuracies in Experiment 1 were comparable across the three categories, and in Experiment 2 they were more similar between the human and animal motion conditions. Together, these results show that right mSTG/STS is not only activated during human–agent motion recognition but is necessary for processing such motion.

The fMRI and VLSM finding that right mSTS/STG shows a preference for human and tool motion stimuli relative to animal motion demonstrates that this region is not tuned to biological motion properties but rather is selectively involved in processing the movement of human agents. Tools, as inanimate objects, do not have intrinsic, self-initiated motion; their motion is usually induced by human agents. These considerations encourage the view that it is the processing of the implied agent of functional tool motion that is responsible for the observed association of human and tool motion in right mSTG/STS.

Previous functional imaging studies have reported that right mSTS/STG is more strongly activated by human motion (Howard et al., 1996; Allison et al., 2000; Iacoboni et al., 2004; Redcay, 2008; Grosbras et al., 2012) than by scrambled motion and by the same types of motion produced by nonhuman agents such as robots or cartoon figures (Mar et al., 2007; Gobbini et al., 2011). Such human-related effects have generally been interpreted as reflecting a social information processing system. However, our results show that right mSTS/STG is both more sensitive (fMRI) and necessary (VLSM) for tool motion processing, whereas it is neither sensitive to nor necessary for animal motion processing, suggesting that the engagement of social properties is not necessary to drive such human agency effects. Assuming that social processing is defined by the interaction between two or more humans (and, by extension, between two or more animals), the motion of tools does not automatically involve such processing. That is, we have shown a human (agency) effect that cannot be reduced to general social or biological effects. Although it is undoubtedly the case that the human motion recognition system is at the service of social cognition (Lahnakoski et al., 2012; Pavlova, 2012; Simmons and Martin, 2012), it is not dependent on it and is not fully subsumed within it. The results reported here suggest that the right mSTS is involved in computing human agency in its most general form, including object-directed agency, and independently of social valence.

Our finding that in the posterior portion of the temporal lobe (the occipital-temporal cortex) human motion and animal motion elicited stronger activation relative to global motion and tool motion provides direct evidence for the common assumption that this region is sensitive to biological entities (see also Kaiser et al., 2012). This finding is consistent with the hypothesis that the human motion effect observed here is attributable to this region's sensitivity to a more general type of motion property, articulated motion, that is associated with biological entities (Beauchamp et al., 2002, 2003; Pelphrey et al., 2003). However, such findings were not supported by the VLSM study, perhaps because of low statistical power resulting from the small number of patients with lesions in the occipital-temporal cortex in our patient group. This issue remains to be explored.

The region showing biological motion selectivity in our fMRI study included a large cluster in the OTC that is inferior to the pSTS region commonly implicated in human motion research and close to the well-documented EBA (Grosbras et al., 2012), which has been shown to be more responsive to human and animal bodies (Haxby et al., 2000; Downing et al., 2001; Peelen and Downing, 2007). Previous studies have found that point-light displays of human movement activate the EBA along with right pSTS, and such effects might be driven by the body form information derived from the point-light display (Peelen et al., 2006). Our results might reflect effects from either or both of these two regions: the right pSTS for biological motion and the EBA for biological form. It is worth noting that the right OTC cluster observed in our study was obtained in the contrast of human motion > tool motion and animal motion > tool motion, whereas the more classical right pSTS was obtained for the contrast of human motion > global motion and animal motion > global motion. Although tool motion is not articulated, it shares other visual properties with biological motion, in that it can be seen as an extension of the effector causally involved in the object's motion (i.e., the hand; Bracci et al., 2012). If so, contrasting human and animal motion with functional tool motion might have the effect of subtracting out the biological "motion" component and leaving behind the shape dimension of the biological entities.

Compared with previous neuropsychological investigations, our results revealing the critical role of right mSTS/STG in human–agent motion recognition are better aligned with fMRI studies of healthy participants. Although most fMRI studies have reported right pSTS as the peak of human motion effects, right mSTS/STG has also been implicated (Grosbras et al., 2012). In contrast, previous patient studies have reported effects in left STS, premotor area, right superior parietal lobe, V5/MT, inferior temporal gyrus, medial frontal lobe, and right anterior temporal lobe, with little evidence for right middle/posterior STS/STG. There are several possible reasons why previous patient studies did not find right STS/STG effects in human motion processing. The brain regions of interest in those studies were limited to focal areas, such as the left hemisphere (Saygin, 2007), anterior temporal lobe (Vaina and Gross, 2004), and parietal lobes (Battelli et al., 2003), and the sample sizes were relatively small, with the exception of the study by Saygin (2007), which included a larger number of patients but could not examine the role of right STS/STG because only left-hemisphere lesion patients were included. Our study was performed on a much larger sample of patients (n = 77) with lesions covering a wide range of bilateral regions, allowing greater power to detect the contribution of right STS/STG to human motion recognition.

In conclusion, we have shown the existence of distinct functional components in the motion recognition stream in right lateral temporal cortex. One component, in the right pSTS and bilateral OTC, is most likely driven by bottom-up visual motion and shape properties shared by biological entities. This component provides the initial interpretation of biological motion, which, together with the contribution of nearby areas, may then serve as the basis for the interpretation of observed actions. More important in the context of the present study, a second component, lying more anteriorly in the middle part of the right superior temporal region, is involved in the recognition of human–agent motion. This component provides a more abstract interpretation of the agent, explicit or implicit, that performs the motion; it shows selectivity for the motion of a human agent even when the stimulus itself is not biological and contains neither articulated motion properties nor social valence. These findings suggest that the organization of lateral temporal cortex is not guided by a unidimensional biological–nonbiological principle but is hierarchically organized, from undifferentiated biological motion processing to more complex (agency) and more specific (human) dimensions. The careful distinction among different components of motion perception and interpretation is a necessary step toward understanding the neural basis of human motion recognition in the service of both social and nonsocial cognition.

Footnotes

  • This work was supported by the 973 Program (Grant 2013CB837300), the Major Project of the National Social Science Foundation (Grant 11&ZD186), NSFC (Grants 31171073, 31222024, 31271115, 81030028, and 31221003), NCET (Grants 12-0055 and 12-0065), the National Science Fund for Distinguished Young Scholars (Grant 81225012 to Y.H.), and BJNSF (Grant 7122089). A.C. was supported by the Fondazione Cassa di Risparmio di Trento e Rovereto. We thank Emily D. Grossman for sharing the animal motion stimuli; Myrna Schwartz, Daniel Kimberg, and Grant Walker for help with Voxbo software; Xueming Lu for fMRI data analyses; and Alex Martin, Marius Peelen, and Lorella Battelli for comments on a previous version of this manuscript. We are also grateful to all research participants.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Yanchao Bi, State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China. ybi@bnu.edu.cn

References

  1. Allison T, Puce A, McCarthy G (2000) Social perception from visual cues: role of the STS region. Trends Cogn Sci 4:267–278. doi:10.1016/S1364-6613(00)01501-1, pmid:10859571.
  2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, Dronkers NF (2003) Voxel-based lesion-symptom mapping. Nat Neurosci 6:448–450. doi:10.1038/nn1050, pmid:12704393.
  3. Battelli L, Cavanagh P, Thornton IM (2003) Perception of biological motion in parietal patients. Neuropsychologia 41:1808–1816. doi:10.1016/S0028-3932(03)00182-9, pmid:14527544.
  4. Beauchamp MS, Lee KE, Haxby JV, Martin A (2002) Parallel visual motion processing streams for manipulable objects and human movements. Neuron 34:149–159. doi:10.1016/S0896-6273(02)00642-6, pmid:11931749.
  5. Beauchamp MS, Lee KE, Haxby JV, Martin A (2003) fMRI responses to video and point-light displays of moving humans and manipulable objects. J Cogn Neurosci 15:991–1001. doi:10.1162/089892903770007380, pmid:14614810.
  6. Blake R, Shiffrar M (2007) Perception of human motion. Annu Rev Psychol 58:47–73. doi:10.1146/annurev.psych.57.102904.190152, pmid:16903802.
  7. Bracci S, Cavina-Pratesi C, Ietswaart M, Caramazza A, Peelen MV (2012) Closely overlapping responses to tools and hands in left lateral occipitotemporal cortex. J Neurophysiol 107:1443–1456. doi:10.1152/jn.00619.2011, pmid:22131379.
  8. Brainard DH (1997) The Psychophysics Toolbox. Spat Vis 10:433–436. doi:10.1163/156856897X00357, pmid:9176952.
  9. Brunner E, Munzel U (2000) The nonparametric Behrens–Fisher problem: asymptotic theory and a small-sample approximation. Biom J 42:17–25. doi:10.1002/(SICI)1521-4036(200001)42:1<17::AID-BIMJ17>3.0.CO;2-U.
  10. Caramazza A, Shelton JR (1998) Domain-specific knowledge systems in the brain: the animate-inanimate distinction. J Cogn Neurosci 10:1–34. doi:10.1162/089892998563752, pmid:9526080.
  11. Chouchourelou A, Jacobs A, Shiffrar M (2013) What does "biological motion" really mean? Differentiating visual percepts of human, animal, and non-biological motions. In: Visual perception of the human body in motion (Johnson K, Shiffrar M, eds). New York: Oxford UP. In press.
  12. Cohen J (1977) Statistical power analysis for the behavioral sciences. New York: Academic.
  13. Crawford JR, Garthwaite PH (2006) Comparing patients' predicted test scores from a regression equation with their obtained scores: a significance test and point estimate of abnormality with accompanying confidence limits. Neuropsychology 20:259–271. doi:10.1037/0894-4105.20.3.259, pmid:16719619.
  14. Downing PE, Jiang Y, Shuman M, Kanwisher N (2001) A cortical area selective for visual processing of the human body. Science 293:2470–2473. doi:10.1126/science.1063414, pmid:11577239.
  15. Fiez JA, Damasio H, Grabowski TJ (2000) Lesion segmentation and manual warping to a reference brain: intra- and interobserver reliability. Hum Brain Mapp 9:192–211. doi:10.1002/(SICI)1097-0193(200004)9:4<192::AID-HBM2>3.0.CO;2-Y, pmid:10770229.
  16. Gobbini MI, Koralek AC, Bryan RE, Montgomery KJ, Haxby JV (2007) Two takes on the social brain: a comparison of theory of mind tasks. J Cogn Neurosci 19:1803–1814. doi:10.1162/jocn.2007.19.11.1803, pmid:17958483.
  17. Gobbini MI, Gentili C, Ricciardi E, Bellucci C, Salvini P, Laschi C, Guazzelli M, Pietrini P (2011) Distinct neural systems involved in agency and animacy detection. J Cogn Neurosci 23:1911–1920. doi:10.1162/jocn.2010.21574, pmid:20849234.
  18. Grosbras MH, Beaton S, Eickhoff SB (2012) Brain regions involved in human movement perception: a quantitative voxel-based meta-analysis. Hum Brain Mapp 33:431–454. doi:10.1002/hbm.21222, pmid:21391275.
  19. Grossman ED, Blake R (2002) Brain areas active during visual perception of biological motion. Neuron 35:1167–1175. doi:10.1016/S0896-6273(02)00897-8, pmid:12354405.
  20. Grossman ED, Battelli L, Pascual-Leone A (2005) Repetitive TMS over posterior STS disrupts perception of biological motion. Vision Res 45:2847–2853. doi:10.1016/j.visres.2005.05.027, pmid:16039692.
  21. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4:223–233. doi:10.1016/S1364-6613(00)01482-0, pmid:10827445.
  22. Haxby JV, Guntupalli JS, Connolly AC, Halchenko YO, Conroy BR, Gobbini MI, Hanke M, Ramadge PJ (2011) A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron 72:404–416. doi:10.1016/j.neuron.2011.08.026, pmid:22017997.
  23. Heberlein AS, Adolphs R, Tranel D, Damasio H (2004) Cortical regions for judgments of emotions and personality traits from point-light walkers. J Cogn Neurosci 16:1143–1158. doi:10.1162/0898929041920423, pmid:15453970.
  24. Howard RJ, Brammer M, Wright I, Woodruff PW, Bullmore ET, Zeki S (1996) A direct demonstration of functional specialization within motion-related visual and auditory cortex of the human brain. Curr Biol 6:1015–1019. doi:10.1016/S0960-9822(02)00646-2, pmid:8805334.
  25. Iacoboni M, Lieberman MD, Knowlton BJ, Molnar-Szakacs I, Moritz M, Throop CJ, Fiske AP (2004) Watching social interactions produces dorsomedial prefrontal and medial parietal BOLD fMRI signal increases compared to a resting baseline. Neuroimage 21:1167–1173. doi:10.1016/j.neuroimage.2003.11.013, pmid:15006683.
  26. Johansson G (1973) Visual perception of biological motion and a model for its analysis. Percept Psychophys 14:201–211. doi:10.3758/BF03212378.
  27. Kaiser MD, Shiffrar M, Pelphrey KA (2012) Socially tuned: brain responses differentiating human and animal motion. Soc Neurosci 7:301–310. doi:10.1080/17470919.2011.614003, pmid:21943047.
  28. Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, Tanaka K, Bandettini PA (2008) Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron 60:1126–1141. doi:10.1016/j.neuron.2008.10.043, pmid:19109916.
  29. Lahnakoski JM, Glerean E, Salmi J, Jääskeläinen IP, Sams M, Hari R, Nummenmaa L (2012) Naturalistic fMRI mapping reveals superior temporal sulcus as the hub for the distributed brain network for social perception. Front Hum Neurosci 6:1–14. doi:10.3389/fnhum.2012.00233, pmid:22905026.
  30. Mar RA, Kelley WM, Heatherton TF, Macrae CN (2007) Detecting agency from the biological motion of veridical vs animated agents. Soc Cogn Affect Neurosci 2:199–205. doi:10.1093/scan/nsm011, pmid:18985141.
  31. Martin A, Wiggs CL, Ungerleider LG, Haxby JV (1996) Neural correlates of category-specific knowledge. Nature 379:649–652. doi:10.1038/379649a0, pmid:8628399.
  32. Medina J, Kimberg DY, Chatterjee A, Coslett HB (2010) Inappropriate usage of the Brunner–Munzel test in recent voxel-based lesion-symptom mapping studies. Neuropsychologia 48:341–343. doi:10.1016/j.neuropsychologia.2009.09.016, pmid:19766664.
  33. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113. doi:10.1016/0028-3932(71)90067-4, pmid:5146491.
  34. Pavlova MA (2012) Biological motion processing as a hallmark of social cognition. Cereb Cortex 22:981–995. doi:10.1093/cercor/bhr156, pmid:21775676.
  35. Peelen MV, Downing PE (2007) The neural basis of visual body perception. Nat Rev Neurosci 8:636–648. doi:10.1038/nrn2195, pmid:17643089.
  36. Peelen MV, Wiggett AJ, Downing PE (2006) Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion. Neuron 49:815–822. doi:10.1016/j.neuron.2006.02.004, pmid:16543130.
  37. Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442. doi:10.1163/156856897X00366, pmid:9176953.
  38. Pelphrey KA, Mitchell TV, McKeown MJ, Goldstein J, Allison T, McCarthy G (2003) Brain activity evoked by the perception of human walking: controlling for meaningful coherent motion. J Neurosci 23:6819–6825. pmid:12890776.
  39. Peuskens H, Vanrie J, Verfaillie K, Orban GA (2005) Specificity of regions processing biological motion. Eur J Neurosci 21:2864–2875. doi:10.1111/j.1460-9568.2005.04106.x, pmid:15926934.
  40. Redcay E (2008) The superior temporal sulcus performs a common function for social and speech perception: implications for the emergence of autism. Neurosci Biobehav Rev 32:123–142. doi:10.1016/j.neubiorev.2007.06.004, pmid:17706781.
  41. Rorden C, Karnath HO, Bonilha L (2007) Improving lesion-symptom mapping. J Cogn Neurosci 19:1081–1088. doi:10.1162/jocn.2007.19.7.1081, pmid:17583985.
  42. Saxe R, Xiao DK, Kovacs G, Perrett DI, Kanwisher N (2004) A region of right posterior superior temporal sulcus responds to observed intentional actions. Neuropsychologia 42:1435–1446. doi:10.1016/j.neuropsychologia.2004.04.015, pmid:15246282.
  43. Saygin AP (2007) Superior temporal and premotor brain areas necessary for biological motion perception. Brain 130:2452–2461. doi:10.1093/brain/awm162, pmid:17660183.
  44. Simmons WK, Martin A (2012) Spontaneous resting-state BOLD fluctuations reveal persistent domain-specific neural networks. Soc Cogn Affect Neurosci 7:467–475. doi:10.1093/scan/nsr018, pmid:21586527.
  45. Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002) Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15:273–289. doi:10.1006/nimg.2001.0978, pmid:11771995.
  46. Vaina LM, Gross CG (2004) Perceptual deficits in patients with impaired recognition of biological motion after temporal lobe lesions. Proc Natl Acad Sci U S A 101:16947–16951. doi:10.1073/pnas.0407668101, pmid:15556997.
  47. Xia M, Wang J, He Y (2013) BrainNet Viewer: a network visualization tool for human brain connectomics. PLoS One 8:e68910. doi:10.1371/journal.pone.0068910, pmid:23861951.