Research Articles, Behavioral/Cognitive

TMS Reveals Dynamic Interaction between Inferior Frontal Gyrus and Posterior Middle Temporal Gyrus in Gesture-Speech Semantic Integration

Wanying Zhao, Yanchang Li and Yi Du
Journal of Neuroscience 15 December 2021, 41 (50) 10356-10364; DOI: https://doi.org/10.1523/JNEUROSCI.1355-21.2021
Wanying Zhao
1CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
Yanchang Li
1CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
Yi Du
1CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
2Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
3CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China
4Chinese Institute for Brain Research, Beijing, 102206, China

Abstract

Semantic processing is an amodal process in which modality-specific information is integrated in supramodal "convergence zones" or a "semantic hub," with executive mechanisms that tailor semantic representations in a task-appropriate way. One unsolved question is how the frontal control region dynamically interacts with the temporal representation region in semantic integration. The present study addressed this issue by applying inhibitory double-pulse transcranial magnetic stimulation over the left inferior frontal gyrus (IFG) or left posterior middle temporal gyrus (pMTG) in one of eight 40 ms time windows (TWs; 3 TWs before and 5 TWs after the identification point of speech), while human participants (12 females, 14 males) were presented with semantically congruent or incongruent gesture-speech pairs but merely identified the gender of the speech. We found a TW-selective disruption of gesture-speech integration, indexed by the semantic congruency effect (i.e., a reaction time cost because of semantic conflict), when stimulating the left pMTG in TW1, TW2, and TW7, but in TW3 and TW6 when stimulating the left IFG. Based on this timing relationship, we hypothesize a two-stage gesture-speech integration circuit with a pMTG-to-IFG sequential involvement in the prelexical stage, for activating gesture semantics and top-down constraining the phonological processing of speech. In the postlexical stage, an IFG-to-pMTG feedback signal might be implicated in the control of goal-directed representations and multimodal semantic unification. Our findings provide new insights into the dynamic brain network of multimodal semantic processing by causally revealing the temporal dynamics of the frontal control and temporal representation regions.

SIGNIFICANCE STATEMENT Previous research has identified differential functions of the left inferior frontal gyrus (IFG) and posterior middle temporal gyrus (pMTG) in semantic control and semantic representation, respectively, and a causal contribution of both regions to gesture-speech integration. However, it remains largely unclear how the two regions dynamically interact in semantic processing. By using double-pulse transcranial magnetic stimulation to disrupt regional activity at specific times, this study for the first time revealed the critical time windows in which the two areas were causally involved in integrating gesture and speech semantics. The findings suggest a pMTG-IFG-pMTG neurocircuit loop in gesture-speech integration, which deepens current knowledge and motivates future investigation of the temporal dynamics and cognitive processes of the amodal semantic network.

  • gesture
  • inferior frontal gyrus
  • posterior middle temporal gyrus
  • semantic integration
  • speech
  • TMS

Introduction

Semantic processing, the cognitive act of accessing stored conceptual knowledge acquired from multimodal verbal and nonverbal experience, is believed to be a general process abstracted away from modality-specific attributes (Caramazza et al., 1990). Convergent evidence indicates that semantic cognition relies on two principal interacting neural systems (for review, see Binder et al., 2016; Ralph et al., 2017). One is the representation system, in which modality-specific sensory, motor, and affective information is integrated in temporal and inferior parietal "convergence zones" (Damasio et al., 1996) and in the "semantic hub" of the anterior temporal lobe, which stores increasingly abstract concepts and knowledge (Patterson et al., 2007). The other is the control system, mainly comprising the inferior frontal gyrus (IFG), which computes and manipulates activation in the representation system to suit the current context or goals (Whitney et al., 2011; Davey et al., 2015). Yet the time course of the interaction between these two systems in semantic processing remains largely unclear; resolving it would certainly deepen our understanding of multisensory semantic processing.

Among the multimodal extralinguistic information, gestures are of particular importance in that they often co-occur with speech and convey not only relevant information but also additional information not present in the accompanying speech, as pantomimes can stand on their own (Kelly and Church, 1998; Goldin-Meadow and Sandhofer, 1999). Gestures and speech are believed to interact bidirectionally not only in external form, such as voice spectra and gesture kinematics (Bernardis and Gentilucci, 2006), but also at the semantic level (Kelly et al., 1999; Kita and Ozyurek, 2003), with an N400 effect being triggered when gestures and speech have incongruent meanings (Kelly et al., 2004; Holle and Gunter, 2007). Neuroimaging studies have identified two areas that are consistently activated in gesture-speech integration: the left IFG, which has been implicated in controlled retrieval, selection, and unification of semantic representations; and the posterior superior temporal sulcus (pSTS)/middle temporal gyrus (MTG), which is implicated in mapping multimodal inputs onto a stored common representation (Holle et al., 2008, 2010; Dick et al., 2012). In our prior study (Zhao et al., 2018), offline theta-burst transcranial magnetic stimulation (TMS) and online repetitive TMS over the left IFG or the left posterior middle temporal gyrus (pMTG) significantly reduced the difference in reaction time (RT) between semantically incongruent and semantically congruent gesture-speech pairs, providing causal evidence for the involvement of both areas in gesture-speech semantic processing. Although neurophysiological studies have uncovered earlier activity in the pMTG than in the IFG during gesture-speech integration (Drijvers et al., 2018a; He et al., 2018), a clear picture of the dynamic interplay between the two regions in integrating spatial-motoric gestures with linear-analytic speech is still lacking.

The present study aimed to unpack this question using double-pulse TMS. By targeting short time periods, double-pulse TMS benefits from the summation effect of the two pulses in transiently inhibiting a specific brain region, providing an ideal protocol for causally probing the timing of a brain area's contribution to a cognitive process (Pitcher et al., 2007). Naturally, gestures precede the onset of the relevant speech (Morrel-Samuels and Krauss, 1992; Holler and Levinson, 2019). It has been hypothesized that gestures serve as primes in conceptualization (Kita et al., 2017) and as cues that constrain the perception and lexical representation of the unfolding speech (Smith et al., 2017). Hence, the present study adopted a semantic priming paradigm (Wu and Coulson, 2007; Yap et al., 2011; So et al., 2013) by presenting the speech onset at the point when the gesture started to provide a clear meaning (i.e., the discrimination point [DP] of the gesture). We further targeted the lexical retrieval and unification process of speech by segmenting the possible gesture-speech integration window (Ozyurek et al., 2007; Zhao et al., 2018) into eight 40 ms time windows (TWs) surrounding the identification point (IP) of speech (i.e., the first time point at which speech becomes semantically identified). We then applied double-pulse TMS in each TW to briefly inhibit the left IFG, the left pMTG, or the vertex (which served as a control site) in a time-locked manner (Fig. 1A). By doing so, we could directly investigate the time courses and functional roles of the left IFG and the left pMTG in gesture-speech semantic integration, provided a dynamic interplay between the two regions indeed exists.

Figure 1.

Stimuli and experimental design. A, Illustration of the 8 TWs (duration = 40 ms) relative to the IP of speech in which TMS was applied. Speech onset coincided with the DP of the gesture. Top, Example of a speech IP >120 ms. Bottom, Example of a speech IP ≤120 ms. B, A total of 1280 gesture-speech pairs were split into 24 blocks that were completed on 4 different days, 5-7 d apart. There were 8 blocks for each stimulation site, and the order of blocks was counterbalanced using a Latin square across participants. In each block, double-pulse TMS was delivered to one stimulation site [the left IFG (red spot), the left pMTG (blue spot), or the vertex (black spot)] in all 8 TWs in random order. In each trial, a fixation cross was first presented at the center of the screen for 0.5-1.5 s, followed by a video of gesture and speech, and participants were asked to respond to the gender of the voice within 2 s. Feedback was given only if the response was incorrect.

Materials and Methods

Participants

A total of 148 native Chinese speakers signed written informed consent forms approved by the Institute of Psychology, Chinese Academy of Sciences, and took part in the experiment. Thirty-two subjects (17 females, aged 19-28 years, SD = 8.19 years) participated in Pretest 1 for semantic congruency rating. Thirty participants (17 females, aged 19-28 years, SD = 9.92 years) took part in Pretest 2 to validate the stimuli. Another 2 sets of 30 subjects participated in Pretest 3 (16 females, aged 19-35 years, SD = 11.03 years) and Pretest 4 (15 females, aged 19-30 years, SD = 10.26 years) for the gating experiments of gesture and speech stimuli. Twenty-six participants (12 females, aged 18-28 years, mean = 22, SD = 7.83) who did not participate in any pretest took part in the TMS study. All participants were right-handed, had normal hearing, had normal or corrected-to-normal vision, and were paid ¥100 per hour for their participation.

Stimuli

The stimuli were revised from our previous study (Zhao et al., 2018). Forty-four common action videos, originally in English, were selected and translated into Mandarin Chinese, yielding 28 qualified actions. Two native Chinese speakers (1 male, 1 female) produced each action while uttering the corresponding Chinese word. The Chinese audio recordings were then combined with the relevant videos produced by the English speakers to generate the two experimental manipulations: the gender congruency factor (e.g., a man performing a gesture combined with a male voice or a woman performing a gesture combined with a male voice) and the semantic congruency factor (e.g., a man or a woman performing a "cut" gesture while the Mandarin word "剪jian3 (cut)" is spoken, or a man or a woman performing a "spray" gesture while "剪jian3 (cut)" is spoken). To counterbalance across all the stimulus sets, the reverse combination was also used (e.g., a man or a woman performing a "cut" gesture paired with "喷pen1 (spray)") (for details, see Zhao et al., 2018). A total of 56 gesture-speech pairs were used in the following four pretests.

Pretest 1: semantic congruency rating

To verify that the semantically congruent or incongruent combinations of gestures and speech were indeed perceived as such, 32 participants rated the relationship between the 56 pairs of gestures and speech on a 5-point scale (1 = "no relation," 5 = "very strong relation"). Based on the rating results, eight pairs of stimuli were moved from the main stimulus set to the practice set as examples of ambiguous relationships. The 48 remaining stimuli were used for further pretests. The mean rating for the remaining set of congruent pairs was 4.48 (SD = 0.40), and the mean rating for the incongruent pairs was 1.44 (SD = 0.42).

Pretest 2: stimulus set validation

Another 30 participants validated the stimulus set by replicating the semantic congruency effect reported in previous studies (Kelly et al., 2010). Participants were informed that the gender they saw on the screen and the gender of the voice they heard might be the same or different. They were asked to attend only to the gender of the voice and to press one of two buttons on the keyboard (key "F" for male and key "J" for female in half of the participants; reversed in the other half) as quickly and accurately as possible. The video started at the stroke of the gesture. Speech onset occurred 200 ms after the onset of the video. RT was recorded relative to the onset of speech, and responses could be made until the video stopped.

Participants made very few errors in the task (overall accuracy = 96.49%); therefore, accuracy data were not analyzed, and incorrect trials were excluded. The remaining correct responses were trimmed to within 2 SDs of each subject's mean RT. Overall, this resulted in 4.2% of trials being excluded as outliers. To maximize the semantic congruency effect, eight further pairs of stimuli were deleted. The remaining 40 pairs of stimuli constituted the 2 (Semantic congruency) × 2 (Gender congruency) conditions, with 160 trials in total. Consistent with previous findings, a 2 (Semantic congruency) × 2 (Gender congruency) repeated-measures ANOVA revealed a significant main effect of Gender congruency (F(1,29) = 45.46, p < 0.001, ηp2 = 0.611), with gender-incongruent trials (mean = 556.64 ms, SE = 12.11) eliciting slower RTs than gender-congruent trials (mean = 531.78 ms, SE = 11.67). Importantly, a significant main effect of Semantic congruency (F(1,29) = 51.12, p < 0.001, ηp2 = 0.638) was replicated: participants were slower to judge the gender of the speaker when speech and gesture were semantically incongruent (mean = 554.51 ms, SE = 11.65) than when they were semantically congruent (mean = 533.90 ms, SE = 12.02). There was no significant interaction of Semantic congruency and Gender congruency (F(1,29) = 0.542, p = 0.468, ηp2 = 0.018).
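For illustration, this kind of 2 × 2 repeated-measures ANOVA can be run in a few lines of Python. The sketch below is not the authors' code, and the long-format column names are assumptions; it uses statsmodels' AnovaRM on one mean RT per subject and condition cell.

```python
# Minimal sketch (assumed column names, not the authors' code): a 2 x 2
# repeated-measures ANOVA with Semantic congruency and Gender congruency
# as within-subject factors.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def congruency_anova(df: pd.DataFrame):
    """df: one row per subject x condition cell, with columns
    'subject', 'semantic' (congruent/incongruent),
    'gender' (congruent/incongruent), and mean 'rt' in ms."""
    return AnovaRM(df, depvar="rt", subject="subject",
                   within=["semantic", "gender"]).fit()

# print(congruency_anova(df))  # F, df, and p for main effects and interaction
```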

Pretest 3: gating study of gesture stimuli

We used the gating paradigm (Obermeier et al., 2011; Obermeier and Gunter, 2015) to define the minimal length of each gesture required for semantic identification, namely, the DP of the gesture (Table 1). To do so, the remaining 20 gestures (length = 1771.00 ms, SD = 307.98 ms), performed by either a male or a female, were presented to 30 participants. Each gesture was presented without speech and in segments of increasing duration in steps of 40 ms (the first segment was 40 ms long). Participants were told that they would be presented with a number of videos of someone performing an action without holding the object, and that for each action there were several videos of various durations. Participants were asked to name what was depicted in the action with a single action word. There was no time limit for responding. Participants moved on to the next action either after giving no correct answer by the end of the action or after giving the same answer for that action 3-6 times in a row (the number varied to prevent a learning effect). The DP of a gesture was defined as the first time point at which the participant gave the final answer.

Table 1.

The mean DP of each action gesture and mean IP of each speech word

To eliminate the influence of outliers, DPs outside 2 SDs of the mean for each gesture were excluded (5.5% of trials). On average, the DP of gestures was 183.78 ms (SD = 84.82 ms). Paired-sample t tests showed no significant difference in DP between actions performed by a male and those performed by a female (t(20) = 0.21, p = 0.84).
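As a minimal sketch of the definition above (assumed data layout, not the authors' code), the DP of one gesture for one participant can be computed as the duration of the first gate from which the final answer never changes again:

```python
# Minimal sketch (assumed data layout): 'answers' holds one response per
# gate; gate i has duration first_gate_ms + i * step_ms. The DP is the
# duration of the earliest gate from which the final (stable) answer
# is given on every subsequent gate.
def discrimination_point(answers, step_ms=40, first_gate_ms=40):
    final = answers[-1]
    dp_index = len(answers) - 1
    for i in range(len(answers) - 1, -1, -1):
        if answers[i] != final:
            break
        dp_index = i
    return first_gate_ms + dp_index * step_ms

print(discrimination_point(["?", "?", "cut", "cut", "cut"]))  # -> 120 ms
```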

Pretest 4: gating study of speech stimuli

To locate the IP of speech, the 20 action verbs pronounced by either a male or a female speaker (length = 447.08 ms, SD = 93.48 ms) were presented to another set of 30 participants. Each sound was presented in segments of increasing duration, starting at 80 ms and increasing in steps of 40 ms. Participants were asked to listen carefully, infer what was presented in the audio, and write down the word they heard. All other procedures were the same as in Pretest 3. Analogous to the DP of gestures, the IP of speech was defined as the first time point at which participants gave the final answer across the speech segments presented.

After removing outliers (>2 SDs of the mean, 4.6% of trials) for each speech item, the IP of speech was on average 176.40 ms (SD = 66.21 ms, Table 1). Paired-sample t tests showed no significant difference in IP between words pronounced by a male and those pronounced by a female (t(20) = 0.52, p = 0.61).

Experimental procedure

We used double-pulse TMS to investigate the temporal specificity of the involvement of the left IFG and the left pMTG in gesture-speech integration. Previous studies have shown that double pulses with a 40 ms interpulse interval are sufficient to induce a transient, time-locked "virtual lesion" of the normal neural firing pattern and thereby generate an inhibitory effect on cortical function (O'Shea et al., 2004; Pitcher et al., 2007, 2008). Therefore, the present study applied double-pulse TMS at the boundaries of each 40 ms TW over either the left IFG or the left pMTG to examine the precise timing of the two regions in gesture-speech integration. Notably, carryover effects between stimulation sites were not a concern, given the lack of aftereffects of double-pulse TMS and the counterbalanced order of stimulation sites across participants.

Eight 40 ms TWs were segmented relative to the speech IP. There were 3 TWs before the speech IP (TW1: −120 to −80 ms; TW2: −80 to −40 ms; and TW3: −40 to 0 ms) and 5 TWs after the speech IP (TW4: 0-40 ms; TW5: 40-80 ms; TW6: 80-120 ms; TW7: 120-160 ms; and TW8: 160-200 ms). To ensure that all TWs were located after the onset of speech, for those action verbs with IP <120 ms, the first TW was defined as running from the onset of speech to 40 ms after onset (Fig. 1A).
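As a minimal sketch of this segmentation (not the authors' code, and assuming that all eight windows shift together to stay contiguous when the IP falls within 120 ms of speech onset):

```python
# Minimal sketch: derive the eight 40 ms TMS time windows for one speech
# item, in ms relative to speech onset. TW1-TW3 precede the IP and
# TW4-TW8 follow it; if the IP falls within 120 ms of onset, TW1 is
# anchored at speech onset so that no window precedes the speech signal
# (assumed here to shift all eight windows together).
def tms_time_windows(ip_ms, width=40, n_before=3, n_after=5):
    start = max(ip_ms - n_before * width, 0)
    return {f"TW{i + 1}": (start + i * width, start + (i + 1) * width)
            for i in range(n_before + n_after)}

print(tms_time_windows(176.4))  # e.g., the mean speech IP from Pretest 4
```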

To eliminate effects driven by particular stimuli, each of the 160 gesture-speech pairs underwent TMS in each of the 8 TWs, yielding 1280 trials in total. The 1280 trials were split into 24 blocks (54 trials in each of the first 20 blocks and 50 trials in each of the last 4 blocks), with 8 blocks for each stimulation site, and the order of blocks was counterbalanced using a Latin square design across participants. Participants completed the 24 blocks on 4 different days, 5-7 d apart, to avoid fatigue and learning effects. In each block, one area was stimulated by double-pulse TMS in all 8 TWs in a random order (Fig. 1B). In each trial, a fixation cross was first presented at the center of the screen for 0.5-1.5 s, followed by a video of gesture and speech, and participants were asked to look at the screen but respond only to the gender of the voice within 2 s. Feedback was given only if the response was incorrect. RT was recorded from the onset of speech. A 4 (Day) × 2 (Semantic congruency) × 2 (Gender congruency) repeated-measures ANOVA revealed a significant main effect of Day (F(1.648,41.192) = 19.590, p < 0.001, ηp2 = 0.439), with RTs gradually decreasing as participants became more familiar with the task. However, Day did not modulate either Semantic congruency (F(2.031,50.775) = 2.110, p = 0.131, ηp2 = 0.078) or Gender congruency (F(2.172,54.300) = 0.532, p = 0.605, ηp2 = 0.021).

The stimuli were presented using Presentation software (version 17.2, www.neurobs.com). All other procedures were the same as those described for Pretest 2. Before the formal experiment, participants performed 16 training trials to become familiar with the experimental procedure.

TMS protocol

The stimulation sites for the left IFG (MNI coordinates: −62, 16, 22) and the left pMTG (−50, −56, 10) were identified from a quantitative meta-analysis of fMRI studies on iconic gesture-speech integration (for details, see Zhao et al., 2018). The vertex was used as a control site.

To enable image-guided TMS navigation, high-resolution (1 × 1 × 0.6 mm) T1-weighted anatomic MRI scans of each participant were acquired at the Beijing MRI Center for Brain Research using a Siemens 3T Trio/Tim Scanner. Frameless stereotaxic procedures (BrainSight 2; Rogue Research) were used for online checking of stimulation during navigation. To ensure precise stimulation of each target region in each participant, individual anatomic images were manually registered by identifying the anterior and posterior commissures. Subject-specific target regions were defined by trajectory markers using the MNI coordinate system. The angles of the markers were checked and adjusted to be orthogonal to the skull during neuronavigation.

A Magstim Rapid2 stimulator (Magstim) was used to deliver the double-pulse TMS via a 70 mm figure-eight coil. Double-pulse TMS at an intensity of 50% of maximum stimulator output was delivered "online" in each TW. For example, in a trial where stimulation took place in TW1, a participant would receive a pulse at −120 ms relative to the speech IP and a second pulse 40 ms later (at −80 ms relative to the speech IP). All other apparatus details were the same as those described for Pretest 2.
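The trial-level pulse timing can be summarized in a short sketch (an illustration only; the trigger function is a hypothetical stand-in for whatever hardware call the presentation software exposes):

```python
# Minimal sketch: schedule the two pulses of a trial. The first pulse
# falls at the start of the chosen TW and the second 40 ms later, both
# expressed relative to the speech IP of the current item.
def send_trigger_at(t_ms: float) -> None:
    """Hypothetical stand-in for the stimulator trigger call."""
    print(f"TMS pulse at {t_ms:+.0f} ms relative to the speech IP")

def schedule_double_pulse(tw_start_ms: float, pulse_gap_ms: float = 40.0) -> None:
    send_trigger_at(tw_start_ms)                 # first pulse at TW onset
    send_trigger_at(tw_start_ms + pulse_gap_ms)  # second pulse 40 ms later

schedule_double_pulse(-120.0)  # TW1: pulses at -120 ms and -80 ms
```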

Data analyses

All incorrect responses (1471 of 33,280 trials, 4.42%) were excluded. To eliminate the influence of outliers, RTs beyond 2 SDs of each participant's mean in each session were trimmed. First, we conducted a 3 (Site) × 2 (Semantic congruency) × 2 (Gender congruency) repeated-measures ANOVA to examine the general effects of Site, Semantic congruency, and Gender congruency on RTs. Next, we implemented an 8 (TW) × 2 (Semantic congruency) repeated-measures ANOVA on the vertex condition alone to ensure that vertex stimulation in different TWs did not significantly change RT or the semantic congruency effect. A similar 8 (TW) × 2 (Gender congruency) ANOVA on the vertex condition was also conducted. By doing so, we could average RTs across TWs in the vertex condition to serve as the RT baselines of the four stimulus conditions when testing the TMS effect.
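The exclusion and trimming step can be expressed compactly; the sketch below is an assumed implementation (hypothetical long-format columns 'subject', 'session', 'correct', and 'rt'), not the authors' code:

```python
# Minimal sketch: drop incorrect trials, then trim RTs beyond 2 SDs of
# each participant's mean within each session.
import pandas as pd

def trim_rts(df: pd.DataFrame, n_sd: float = 2.0) -> pd.DataFrame:
    df = df[df["correct"]]                               # correct trials only
    grp = df.groupby(["subject", "session"])["rt"]
    mean, sd = grp.transform("mean"), grp.transform("std")
    return df[(df["rt"] - mean).abs() <= n_sd * sd]      # keep within +/- 2 SD
```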

Then, we focused our analysis on the TMS effect (active-TMS minus vertex-TMS) and its interaction with TW for the semantic congruency factor, to determine in which TWs the semantically congruent and semantically incongruent conditions were differentially affected when activity in the IFG or pMTG was disrupted relative to vertex stimulation. Accordingly, we implemented a 2 (Site: pMTG-vertex, IFG-vertex) × 8 (TW) repeated-measures ANOVA directly on the semantic congruency effect (RTsemantically incongruent − RTsemantically congruent), followed by one-sample t tests with false discovery rate (FDR) correction to identify TWs in which the semantic congruency effect was significantly disrupted by TMS at each site.
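As a minimal sketch of the follow-up tests (assumed array layout, not the authors' code), the per-TW TMS effect and FDR-corrected one-sample t tests could look like this:

```python
# Minimal sketch: compute the TMS effect on the semantic congruency
# effect (active-site minus vertex baseline) and test it against zero
# in each TW, correcting across the 8 TWs with Benjamini-Hochberg FDR.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def tms_effect_tests(active: np.ndarray, vertex: np.ndarray):
    """active: (n_subjects, 8) congruency effects under active-site TMS,
    one column per TW; vertex: (n_subjects,) congruency effect averaged
    over the vertex TWs (the baseline)."""
    effect = active - vertex[:, None]              # TMS effect per subject, TW
    t, p = stats.ttest_1samp(effect, 0.0, axis=0)  # one-sample t test per TW
    reject, p_fdr, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
    return t, p_fdr, reject
```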

We also used the TMS effect on the gender congruency factor as a control, under the assumption that double-pulse TMS would selectively impact semantic congruency but not gender congruency, as reflected by a nonsignificant two-way interaction in a 2 (Site) × 8 (TW) repeated-measures ANOVA. In all ANOVAs, the Greenhouse–Geisser adjustment was applied to correct for violations of the sphericity assumption where necessary, and multiple comparisons were Bonferroni corrected.

Results

RTs in each experimental condition are illustrated in Figure 2. RT was generally longer in the vertex condition (mean = 512.65 ms, SE = 12.43 ms) than in the pMTG (mean = 500.55 ms, SE = 14.93 ms) and IFG (mean = 500.90 ms, SE = 13.96 ms) stimulation conditions. However, the difference was not significant: a 3 (Site) × 2 (Semantic congruency) × 2 (Gender congruency) repeated-measures ANOVA revealed a nonsignificant main effect of Site (F(1.831,45.778) = 0.944, p = 0.396, ηp2 = 0.036). Consistent with previous studies (Kelly et al., 2010; Zhao et al., 2018), there was a significant main effect of Semantic congruency (F(1,25) = 255.40, p < 0.001, ηp2 = 0.911), with longer RTs in semantically incongruent trials (mean = 513.97 ms, SE = 12.80 ms) than in congruent trials (mean = 495.44 ms, SE = 12.31 ms). There was also a significant main effect of Gender congruency (F(1,25) = 71.00, p < 0.001, ηp2 = 0.740), with longer RTs when speech and gestures were produced by conflicting genders (mean = 514.14 ms, SE = 12.98 ms) than when they were produced by the same gender (mean = 495.27 ms, SE = 12.20 ms). Furthermore, there was no significant interaction between Site and Semantic congruency (F(1.940,48.509) = 2.93, p = 0.065, ηp2 = 0.105) or between Site and Gender congruency (F(1.951,48.785) = 0.37, p = 0.69, ηp2 = 0.015).

Figure 2.

RTs of the four experimental conditions (bar graphs) and of the semantically congruent and incongruent conditions (line graphs) for the three stimulation sites. Sc, Semantically congruent; Si, semantically incongruent; Gc, gender congruent; Gi, gender incongruent. Rt_mean represents the mean RT across the Sc and Si conditions. Error bars in the bar graphs and shaded areas in the line graphs indicate SEM.

An 8 (TW) × 2 (Semantic congruency) repeated-measures ANOVA on the vertex data revealed neither a main effect of TW (F(5.126,128.152) = 0.992, p = 0.439, ηp2 = 0.038) nor a TW × Semantic congruency interaction (F(5.556,138.907) = 2.03, p = 0.07, ηp2 = 0.075). An analogous 8 (TW) × 2 (Gender congruency) ANOVA showed a similar pattern: neither a main effect of TW (F(5.096,127.410) = 1.049, p = 0.399, ηp2 = 0.040) nor a TW × Gender congruency interaction (F(4.796,119.897) = 0.359, p = 0.869, ηp2 = 0.014) was found. These results indicate that double-pulse TMS on the vertex in different TWs did not significantly change RTs or the semantic or gender congruency effects. To simplify the analyses and obtain reliable measures, we therefore averaged RTs across TWs in the vertex condition to generate the baseline RTs of the four stimulus conditions for examining the TMS effect.

To directly assess the Site- and TW-specific TMS effect on the semantic congruency effect (i.e., the RT cost because of semantic conflict), a 2 (Site: pMTG-vertex, IFG-vertex) × 8 (TW) repeated-measures ANOVA on the semantic congruency effect (RTsemantically incongruent − RTsemantically congruent) revealed a significant Site × TW interaction (F(5.247,131.175) = 2.252, p = 0.034, ηp2 = 0.083) (Fig. 3A). Follow-up one-sample t tests showed a significant TMS disruption of the semantic congruency effect (negative values in Fig. 3A indicate a decreased semantic congruency effect relative to baseline) when stimulating the pMTG in TW1 (t(25) = 3.337, FDR-corrected p = 0.015, Cohen's d = 0.645), TW2 (t(25) = 3.019, FDR-corrected p = 0.015, Cohen's d = 0.592), and TW7 (t(25) = 3.063, FDR-corrected p = 0.015, Cohen's d = 0.601). Similar TMS impairment of the semantic congruency effect was found when stimulating the IFG in TW3 (t(25) = 3.299, FDR-corrected p = 0.012, Cohen's d = 0.647) and TW6 (t(25) = 4.348, FDR-corrected p = 0.002, Cohen's d = 0.853). To further depict how the semantically congruent and incongruent conditions were differentially affected by TMS, the TMS effects on each condition are shown in Figure 4. The significant TMS disruption of the semantic congruency effect when stimulating the pMTG in TW1, TW2, and TW7 was caused mainly by a TMS-induced decrease of RT in the semantically incongruent condition, without influence on the semantically congruent condition. In contrast, the significant TMS impairment of the semantic congruency effect when stimulating the IFG in TW3 and TW6 was because of a TMS-induced increase of RT in the semantically congruent condition, without impact on the semantically incongruent trials.

Figure 3.

TMS effects on the semantic congruency effect (A) and the gender congruency effect (B). TMS effect was defined as active-TMS minus vertex-TMS. The semantic congruency effect was calculated as the RT difference between semantically incongruent and semantically congruent pairs, whereas the gender congruency effect was calculated as the RT difference between gender incongruent and gender congruent pairs. *p < 0.05; **p < 0.01; one-sample t tests after FDR correction. Error bars represent SEM.

Figure 4.

TMS effects on semantically congruent and incongruent conditions. TMS effects, defined as active-TMS minus vertex-TMS, on RTs of the semantically congruent (red) and semantically incongruent (black) conditions are shown for stimulating the left pMTG and left IFG in 8 TWs. *p < 0.05; **p < 0.01; one-sample t tests after FDR correction. Shadows represent SEM.

As a control analysis, a 2 (Site) × 8 (TW) repeated-measures ANOVA on the gender congruency effect showed a nonsignificant two-way interaction (F(4.999,124.970) = 0.750, p = 0.587, ηp2 = 0.029), indicating no TW-specific TMS modulation of gender congruency (Fig. 3B).

Discussion

By splitting the integration process of gestures and speech into 8 TWs and applying double-pulse TMS in each TW over either the left IFG or the left pMTG, brain areas considered the neural underpinnings of semantic control and cross-modal representation, respectively, during gesture-speech integration (Willems et al., 2009; Zhao et al., 2018), we created a novel paradigm to investigate how the two regions temporally interact in gesture-speech semantic processing. Crucially, our results for the first time revealed a causal involvement of both areas, with differential time courses, in automatic semantic processing. As summarized in Figure 5A, before speech reached its semantic IP, the semantic congruency effect (i.e., the RT cost induced by semantic conflict between gestures and speech) was significantly disrupted by double-pulse TMS over the left pMTG in TW1 and TW2 and over the left IFG in TW3, relative to vertex stimulation. After speech reached its semantic IP, significant TMS impairment of the semantic congruency effect was found when stimulating the left IFG in TW6 and the left pMTG in TW7. These findings provide causal evidence for a sequential engagement in gesture-speech semantic integration, from the left pMTG to the left IFG at the prelexical stage and from the left IFG to the left pMTG at the postlexical stage; a two-stage gesture-speech integration circuit is thus proposed (Fig. 5B). Extending our previous finding (Zhao et al., 2018) that both the left pMTG and the left IFG causally contribute to semantic integration of gestures and speech, the present study fills a gap in understanding how the frontal control region and the temporal storage node dynamically interact in integrating multimodal semantic information.

Figure 5.

Summary of results and the proposed two-stage integration circuit in semantic processing of gestures and speech. A, Summary of the critical time periods in which the left IFG (red spot) and left pMTG (blue spot) are involved in gesture-speech semantic integration. B, The posited gesture-speech semantic integration circuit, with a pMTG-to-IFG sequential involvement at the prelexical stage (before speech reaches its IP) for accessing and activating gesture semantics and for top-down modulation of phonological processing in the STG/STS, and an IFG-to-pMTG feedback at the postlexical stage (after the speech IP) for controlled retrieval and multimodal semantic unification.

Bidirectional neural pathways connecting the left IFG and pMTG in semantic processing have been proposed extensively (Hickok and Poeppel, 2007; Hagoort, 2013; Friederici et al., 2017). For instance, the Memory-Unification-Control model (Hagoort, 2013) posits a recurrent neurocircuit connecting the left IFG and pMTG for semantic unification. In this circuit, semantic features and lexical information are activated in posterior temporal regions and relayed to the IFG; IFG neurons then send feedback signals to the temporal cortex to manipulate the activation level of semantic representations, so as to maintain the context and unify the lexical information within it. Using double-pulse TMS with high temporal resolution, the present study offers the first causal evidence for the dynamic interplay between the left IFG and the left pMTG, echoing such a recurrent neurocircuit in the context of multimodal semantic processing. Specifically, we observed a pMTG-to-IFG timing sequence at the stage of mapping acoustic speech input onto its lexical-conceptual semantic representation, as indexed by the involvement of the left pMTG in TW1 and TW2 (120 to 40 ms before the speech IP) and of the left IFG in TW3 (40 to 0 ms before the speech IP). There was also an IFG-to-pMTG timing relationship after the semantic retrieval of speech, as shown by the involvement of the left IFG in TW6 (80-120 ms after the speech IP) and of the left pMTG in TW7 (120-160 ms after the speech IP).

In naturalistic settings, multisensory information is not strictly aligned in time, and integration is thought to take place at various stages of bottom-up and top-down cortical interplay (for review, see Talsma, 2015; Xu et al., 2020). By presenting the onset of speech at the DP of the gesture, the present study created a paradigm in which gestures semantically primed speech. We therefore interpret the results as crosstalk between the top-down modulation by gestures and the bottom-up processing of speech. In the first, prelexical stage, semantic information extracted from the gesture constrains the phonological encoding of speech; in the second, postlexical stage, the most feasible lexical candidate is selected, retrieved, and unified with the gesture semantics to form context-appropriate semantic representations.

Furthermore, there seems to be a division of labor between the left pMTG and the left IFG at each stage of gesture-speech integration. Although TMS over both areas disrupted the semantic congruency effect, stimulation of the pMTG led to shorter RTs in the semantically incongruent condition, whereas stimulation of the IFG led to longer RTs in the semantically congruent condition (Fig. 4). The pMTG is thought to mediate the long-term storage of, and access to, supramodal semantic representations, while the left IFG has been reliably associated with the controlled retrieval and selection of lexical representations to fit the current context or goal, a process that is independent of modality (for review, see Lau et al., 2008; Binder and Desai, 2011; Ralph et al., 2017). Consistent with those claims, we interpret the functional roles of the two regions in the proposed two-stage gesture-speech integration circuit (Fig. 5B) as follows. In the first stage, disrupting activity in the left pMTG may preclude both the top-down modulation, by semantic information extracted from the gesture, of lower-level phonological processing of speech in the superior temporal gyrus and sulcus (STG/STS) (Bizley et al., 2016), and the bottom-up mapping of phonological speech input onto lexical representations in the pMTG. This would likely cause a failure in monitoring the semantic conflict between gestures and speech, as reflected by decreased RTs only in semantically incongruent trials. In the second stage, although speech started at the point when gestures became semantically clear, we cannot rule out that perturbing pMTG activity interfered with the reanalysis of the semantic information of the observed iconic gesture to make it compatible with the accompanying speech context (Fritz et al., 2021). Since semantically incongruent pairs triggered an increased need for strategic recovery, inhibition of the pMTG in the second stage would dampen such a process, thus reducing the RT cost in the semantically incongruent condition. In contrast, disturbing activity in the left IFG likely impeded the controlled selection of the lexical semantics most appropriate for the current context, which may impair the top-down constraining of phonological processing in the STG/STS in the first stage and the manipulation and unification of context-appropriate supramodal semantic representations in the pMTG in the second stage. Accordingly, the TMS effect was reflected as increased RTs in the semantically congruent condition but no substantial effect in the semantically incongruent condition following IFG stimulation at both stages. The stimulation effect at both sites was specific to semantic processing rather than general cognitive processing, because no effect on the task-relevant gender congruency effect was observed.

The two-stage gesture-speech integration loop proposed here is consistent with recent findings on gesture and speech processing. In a simultaneous EEG-fMRI study (He et al., 2018), an α-pSTS/MTG correlation was found in an earlier TW of gesture-speech integration and an α-IFG correlation in a later TW. Similarly, an MEG study revealed early α power suppression in the right STS and late α suppression in the left IFG when gestures semantically disambiguated degraded speech (Drijvers et al., 2018b). Going beyond this prior knowledge, the current study used double-pulse TMS to offer direct evidence for the distinct engagement and temporal dynamics of the left IFG and the left pMTG in the two stages of gesture-speech semantic processing.

Nonetheless, conclusions about such a neurocircuit need to be drawn with caution. The fact that TMS affects not only activity in the perturbed brain area but also activity in areas functionally connected with it (Jackson et al., 2016; Hartwigsen et al., 2017) makes the cause–effect relationship between TMS and behavioral performance more complex than previously thought (Bergmann and Hartwigsen, 2021). On one hand, the present results cannot distinguish whether the sequential involvement of the left pMTG and the left IFG reflects parallel modulations by the frontal cortex (Jacklin et al., 2016) and the temporal cortex (Noesselt et al., 2007) of the phonological processing of speech, or information flow from the pMTG to the IFG before acting on the auditory region. On the other hand, whether other brain areas, such as the primary motor cortex (Marco et al., 2015) and the anterior temporal lobe (Patterson et al., 2007), sequentially modulate activity in the pMTG and the IFG and are involved in gesture-speech integration should also be clarified in future work. Follow-up studies are encouraged to combine neuroimaging measures with TMS to truly unravel the functional roles and dynamic interaction of the IFG and the pMTG, as well as the rapid reorganization of a wider semantic network, in gesture-speech integration.

In conclusion, by applying double-pulse TMS across the whole integration process, the present study is the first to provide causal evidence for the temporal dynamics of the left IFG and the left pMTG in gesture-speech integration. Our findings suggest a two-stage gesture-speech semantic integration circuit. In the early, prelexical stage, semantic information extracted from gestures may exert top-down modulation over the phonological processing of speech, with the left pMTG acting ahead of the left IFG. In the late, postlexical stage, speech may be unified with gestures to form context-appropriate semantic representations through a feedback signal from the left IFG to the left pMTG. This study paves the way for a deeper understanding of the dynamic interaction between frontal control and temporal representation regions in multimodal semantic processing.

Footnotes

  • The authors declare no competing financial interests.

  • This work was supported by National Natural Science Foundation of China Grants 31800964 and 31822024; Scientific Foundation of Institute of Psychology, Chinese Academy of Sciences Grant Y8CX382005; and Strategic Priority Research Program of Chinese Academy of Sciences Grant XDB32010300.

  • Correspondence should be addressed to Yi Du at duyi@psych.ac.cn

SfN exclusive license.

References

  1. Bergmann TO, Hartwigsen G (2021) Inferring causality from noninvasive brain stimulation in cognitive neuroscience. J Cogn Neurosci 33:195–225. doi:10.1162/jocn_a_01591 pmid:32530381
  2. Bernardis P, Gentilucci M (2006) Speech and gesture share the same communication system. Neuropsychologia 44:178–190. doi:10.1016/j.neuropsychologia.2005.05.007 pmid:16005477
  3. Binder JR, Desai RH (2011) The neurobiology of semantic memory. Trends Cogn Sci 15:527–536. doi:10.1016/j.tics.2011.10.001 pmid:22001867
  4. Binder JR, Conant LL, Humphries CJ, Fernandino L, Simons SB, Aguilar M, Desai RH (2016) Toward a brain-based componential semantic representation. Cogn Neuropsychol 33:130–174. doi:10.1080/02643294.2016.1147426 pmid:27310469
  5. Bizley JK, Maddox RK, Lee AK (2016) Defining auditory-visual objects: behavioral tests and physiological mechanisms. Trends Neurosci 39:74–85. doi:10.1016/j.tins.2015.12.007 pmid:26775728
  6. Caramazza A, Hillis AE, Rapp BC, Romani C (1990) The multiple semantics hypothesis: multiple confusions? Cogn Neuropsychol 7:161–189. doi:10.1080/02643299008253441
  7. Damasio H, Grabowski TJ, Tranel D, Hichwa RD, Damasio AR (1996) A neural basis for lexical retrieval. Nature 380:499–505. doi:10.1038/380499a0 pmid:8606767
  8. Davey J, Rueschemeyer SA, Costigan A, Murphy N, Krieger-Redwood K, Hallam G, Jefferies E (2015) Shared neural processes support semantic control and action understanding. Brain Lang 142:24–35. doi:10.1016/j.bandl.2015.01.002 pmid:25658631
  9. Dick AS, Goldin-Meadow S, Solodkin A, Small SL (2012) Gesture in the developing brain. Dev Sci 15:165–180. doi:10.1111/j.1467-7687.2011.01100.x pmid:22356173
  10. Drijvers L, Ozyurek A, Jensen O (2018a) Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. J Cogn Neurosci 30:1086–1097. doi:10.1162/jocn_a_01301 pmid:29916792
  11. Drijvers L, Ozyurek A, Jensen O (2018b) Hearing and seeing meaning in noise: alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension. Hum Brain Mapp 39:2075–2087. doi:10.1002/hbm.23987 pmid:29380945
  12. Friederici AD, Chomsky N, Berwick RC, Moro A, Bolhuis JJ (2017) Language, mind and brain. Nat Hum Behav 1:713–722. doi:10.1038/s41562-017-0184-4 pmid:31024099
  13. Fritz I, Kita S, Littlemore J, Krott A (2021) Multimodal language processing: how preceding discourse constrains gesture interpretation and affects gesture integration when gestures do not synchronise with semantic affiliates. J Mem Lang 117:104191. doi:10.1016/j.jml.2020.104191
  14. Goldin-Meadow S, Sandhofer CM (1999) Gestures convey substantive information about a child's thoughts to ordinary listeners. Dev Sci 2:67–74. doi:10.1111/1467-7687.00056
  15. Hagoort P (2013) MUC (Memory, Unification, Control) and beyond. Front Psychol 4:416. doi:10.3389/fpsyg.2013.00416 pmid:23874313
  16. Hartwigsen G, Bzdok D, Klein M, Wawrzyniak M, Stockert A, Wrede K, Classen J, Saur D (2017) Rapid short-term reorganization in the language network. Elife 6:e25964. doi:10.7554/eLife.25964
  17. He YF, Steines M, Sommer J, Gebhardt H, Nagels A, Sammer G, Kircher T, Straube B (2018) Spatial-temporal dynamics of gesture-speech integration: a simultaneous EEG-fMRI study. Brain Struct Funct 223:3073–3089. doi:10.1007/s00429-018-1674-5 pmid:29737415
  18. Hickok G, Poeppel D (2007) The cortical organization of speech processing. Nat Rev Neurosci 8:393–402. doi:10.1038/nrn2113 pmid:17431404
  19. Holle H, Gunter TC (2007) The role of iconic gestures in speech disambiguation: ERP evidence. J Cogn Neurosci 19:1175–1192. doi:10.1162/jocn.2007.19.7.1175 pmid:17583993
  20. Holle H, Gunter TC, Rüschemeyer SA, Hennenlotter A, Iacoboni M (2008) Neural correlates of the processing of co-speech gestures. Neuroimage 39:2010–2024. doi:10.1016/j.neuroimage.2007.10.055 pmid:18093845
  21. Holle H, Obleser J, Rueschemeyer SA, Gunter TC (2010) Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions. Neuroimage 49:875–884. doi:10.1016/j.neuroimage.2009.08.058 pmid:19733670
  22. Holler J, Levinson SC (2019) Multimodal language processing in human communication. Trends Cogn Sci 23:639–652. doi:10.1016/j.tics.2019.05.006 pmid:31235320
  23. Jacklin DL, Cloke JM, Potvin A, Garrett I, Winters BD (2016) The dynamic multisensory engram: neural circuitry underlying crossmodal object recognition in rats changes with the nature of object experience. J Neurosci 36:1273–1289. doi:10.1523/JNEUROSCI.3043-15.2016 pmid:26818515
  24. Jackson RL, Hoffman P, Pobric G, Ralph MA (2016) The semantic network at work and rest: differential connectivity of anterior temporal lobe subregions. J Neurosci 36:1490–1501. doi:10.1523/JNEUROSCI.2999-15.2016 pmid:26843633
  25. Kelly SD, Church RB (1998) A comparison between children's and adults' ability to detect conceptual information conveyed through representational gestures. Child Dev 69:85–93. doi:10.2307/1132072 pmid:9499559
  26. Kelly SD, Barr DJ, Church RB, Lynch K (1999) Offering a hand to pragmatic understanding: the role of speech and gesture in comprehension and memory. J Mem Lang 40:577–592. doi:10.1006/jmla.1999.2634
  27. Kelly SD, Kravitz C, Hopkins M (2004) Neural correlates of bimodal speech and gesture comprehension. Brain Lang 89:253–260. doi:10.1016/S0093-934X(03)00335-3 pmid:15010257
  28. Kelly SD, Creigh P, Bartolotti J (2010) Integrating speech and iconic gestures in a Stroop-like task: evidence for automatic processing. J Cogn Neurosci 22:683–694. doi:10.1162/jocn.2009.21254 pmid:19413483
  29. Kita S, Ozyurek A (2003) What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. J Mem Lang 48:16–32. doi:10.1016/S0749-596X(02)00505-3
  30. Kita S, Alibali MW, Chu MY (2017) How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis. Psychol Rev 124:245–266. doi:10.1037/rev0000059 pmid:28240923
  31. Lau EF, Phillips C, Poeppel D (2008) A cortical network for semantics: (de)constructing the N400. Nat Rev Neurosci 9:920–933. doi:10.1038/nrn2532 pmid:19020511
  32. Marco DD, De Stefani E, Gentilucci M (2015) Gesture and word analysis: the same or different processes? Neuroimage 117:375–385. doi:10.1016/j.neuroimage.2015.05.080 pmid:26044859
  33. Morrel-Samuels P, Krauss RM (1992) Word familiarity predicts temporal asynchrony of hand gestures and speech. J Exp Psychol Learn Mem Cogn 18:615–622. doi:10.1037/0278-7393.18.3.615
  34. Noesselt T, Rieger JW, Schoenfeld MA, Kanowski M, Hinrichs H, Heinze HJ, Driver J (2007) Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. J Neurosci 27:11431–11441. doi:10.1523/JNEUROSCI.2252-07.2007 pmid:17942738
  35. O'Shea J, Muggleton NG, Cowey A, Walsh V (2004) Timing of target discrimination in human frontal eye fields. J Cogn Neurosci 16:1060–1067. doi:10.1162/0898929041502634
  36. Obermeier C, Gunter TC (2015) Multisensory integration: the case of a time window of gesture-speech integration. J Cogn Neurosci 27:292–307. doi:10.1162/jocn_a_00688 pmid:25061929
  37. Obermeier C, Holle H, Gunter TC (2011) What iconic gesture fragments reveal about gesture-speech integration: when synchrony is lost, memory can help. J Cogn Neurosci 23:1648–1663. doi:10.1162/jocn.2010.21498 pmid:20350188
  38. Ozyurek A, Willems RM, Kita S, Hagoort P (2007) On-line integration of semantic information from speech and gesture: insights from event-related brain potentials. J Cogn Neurosci 19:605–616. doi:10.1162/jocn.2007.19.4.605 pmid:17381252
  39. Patterson K, Nestor PJ, Rogers TT (2007) Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci 8:976–987. doi:10.1038/nrn2277 pmid:18026167
  40. Pitcher D, Walsh V, Yovel G, Duchaine B (2007) TMS evidence for the involvement of the right occipital face area in early face processing. Curr Biol 17:1568–1573. doi:10.1016/j.cub.2007.07.063 pmid:17764942
  41. Pitcher D, Garrido L, Walsh V, Duchaine BC (2008) Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions. J Neurosci 28:8929–8933. doi:10.1523/JNEUROSCI.1450-08.2008 pmid:18768686
  42. Ralph MA, Jefferies E, Patterson K, Rogers TT (2017) The neural and computational bases of semantic cognition. Nat Rev Neurosci 18:42–55. doi:10.1038/nrn.2016.150 pmid:27881854
  43. Smith AC, Monaghan P, Huettig F (2017) The multimodal nature of spoken word processing in the visual world: testing the predictions of alternative models of multimodal integration. J Mem Lang 93:276–303. doi:10.1016/j.jml.2016.08.005
  44. So WC, Yi-Feng AL, Yap DF, Kheng E, Yap JM (2013) Iconic gestures prime words: comparison of priming effects when gestures are presented alone and when they are accompanying speech. Front Psychol 4:779. doi:10.3389/fpsyg.2013.00779 pmid:24155738
  45. Talsma D (2015) Predictive coding and multisensory integration: an attentional account of the multisensory mind. Front Integr Neurosci 9:19. doi:10.3389/fnint.2015.00019 pmid:25859192
  46. Whitney C, Kirk M, O'Sullivan J, Lambon Ralph MA, Jefferies E (2011) The neural organization of semantic control: TMS evidence for a distributed network in left inferior frontal and posterior middle temporal gyrus. Cereb Cortex 21:1066–1075. doi:10.1093/cercor/bhq180 pmid:20851853
  47. Willems RM, Ozyurek A, Hagoort P (2009) Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage 47:1992–2004. doi:10.1016/j.neuroimage.2009.05.066 pmid:19497376
  48. Wu YC, Coulson S (2007) Iconic gestures prime related concepts: an ERP study. Psychon Bull Rev 14:57–63. doi:10.3758/bf03194028 pmid:17546731
  49. Xu X, Hanganu-Opatz IL, Bieler M (2020) Cross-talk of low-level sensory and high-level cognitive processing: development, mechanisms, and relevance for cross-modal abilities of the brain. Front Neurorobot 14:7. doi:10.3389/fnbot.2020.00007 pmid:32116637
  50. Yap DF, So WC, Yap JM, Tan YQ, Teoh RL (2011) Iconic gestures prime words. Cogn Sci 35:171–183. doi:10.1111/j.1551-6709.2010.01141.x pmid:21428996
  51. Zhao WY, Riggs K, Schindler I, Holle H (2018) Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. J Neurosci 38:1891–1900. doi:10.1523/JNEUROSCI.1748-17.2017 pmid:29358361