Research Articles, Behavioral/Cognitive

How Auditory Experience Differentially Influences the Function of Left and Right Superior Temporal Cortices

Tae Twomey, Dafydd Waters, Cathy J. Price, Samuel Evans and Mairéad MacSweeney
Journal of Neuroscience 27 September 2017, 37 (39) 9564-9573; DOI: https://doi.org/10.1523/JNEUROSCI.0846-17.2017
Author affiliations: 1ESRC Deafness, Cognition and Language Research Centre, University College London, WC1H 0PD, United Kingdom (T. Twomey, D. Waters, M. MacSweeney); 2Institute of Cognitive Neuroscience, University College London, WC1N 3AR, United Kingdom (T. Twomey, S. Evans, M. MacSweeney); 3Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, WC1N 3BG, United Kingdom (C. J. Price); 4Psychology Department, University of Westminster, 115 New Cavendish Street, London, W1W 6UW (S. Evans)

Abstract

To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female) who acquired sign language either early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands across three tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects.

Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task, whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task, which, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left STC and more task-independent in the right STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants.

SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function.

  • deaf
  • language
  • plasticity
  • sign language
  • superior temporal cortex
  • visuo-spatial working memory

Introduction

The brain is capable of considerable experience-dependent plasticity. Unique insight into the extent of this plasticity in the human brain is provided by those born severely or profoundly deaf. A robust and replicated finding is that when congenitally deaf people process visual stimuli, they show enhanced activation, relative to hearing participants, in regions of the superior temporal cortex (STC) that respond to auditory input in hearing people. The aim of the current study was to investigate how auditory experience influences the function of the left and right STC.

Prior studies have shown stronger activation in the right STC in deaf than hearing participants in response to a wide range of nonverbal visual stimuli such as moving dot arrays (Finney et al., 2001; Fine et al., 2005; Vachon et al., 2013), arrows (Ding et al., 2015), flashes (Bola et al., 2017), and static and moving sinusoidal gratings (Shiell et al., 2014). In contrast, in left STC, enhanced activation in deaf compared with hearing participants appears to be highly stimulus- and task-dependent. For example, it is observed in response to sign language stimuli during sign target detection (Capek et al., 2010; Cardin et al., 2013) and semantic anomaly detection, even when sign language experience is matched across deaf and hearing groups (MacSweeney et al., 2002, 2004). However, it has not been observed during spoken language tasks on written words (Waters et al., 2007; Emmorey et al., 2013), pictures (MacSweeney et al., 2008, 2009), or speechreading (Capek et al., 2010; but see Capek et al., 2008), even though speechreading, like sign language, involves the perception of linguistically complex, moving visual stimuli.

Plausibly, the enhanced left STC activation in deaf participants in response to sign language could reflect the demands on visuospatial working memory that are made during sign language processing but not when performing speech-based tasks. In addition to right STC activation, Ding et al. (2015) reported a contribution of the left STC to visuospatial working memory in deaf participants during a visuospatial working memory task with colored arrows (i.e., nonverbal visual stimuli). Importantly, this left STC activation was observed only during the maintenance and recognition phases of the task, not during the encoding phase when the visual stimulus was present (for commentary, see MacSweeney and Cardin, 2015). This account can also explain why Bola et al. (2017) reported increased left (and right) STC activation in deaf participants performing a visual rhythm working-memory task involving sequences of flashes.

To dissociate sensory, visuospatial, semantic, and phonological processing in left and right STC, we engaged deaf and hearing signers in three different tasks, each performed on pairs of object pictures. Visual imagery and visuospatial working memory were engaged during a British Sign Language (BSL) phonological judgment task (MacSweeney et al., 2008). This task required participants to decide whether the BSL signs for the two depicted objects shared a BSL phonological parameter (handshape or location); such parameters describe the sublexical structure of signs (Stokoe, 1960; Brentari, 1998; Sandler and Lillo-Martin, 2006b). In addition, the same participants performed semantic and perceptual tasks that placed minimal demands on visual imagery and visuospatial working memory while keeping the stimulus presentation constant.

To dissociate auditory experience from sign language experience, and to examine any possible interactions between hearing and sign language experience, we included two groups of deaf participants who were either early or late sign language learners and two groups of hearing participants who were also either early or late sign language learners. In line with previous studies, we predicted greater activation in deaf than hearing participants in right STC, regardless of task. In contrast, in the left STC we expected task-specific effects of deafness, with a stronger effect on the BSL phonological task than the semantic or visual tasks.

Materials and Methods

Participants.

Sixty participants were scanned. All participants knew BSL. All had normal or corrected-to-normal vision, and all gave informed, written consent to participate in the study, which was approved by the University College London Research Ethics Committee. One participant was excluded due to a data acquisition problem. A further 11 participants were excluded because of excessive head motion in the scanner (i.e., motion exceeding one voxel, 3 mm, in translation, or the equivalent in rotation calculated with 65 mm as the cortical distance; Wilke, 2014). Thus, data from 48 participants were included in the analyses. All participants were right-handed (measured by the Edinburgh inventory; Oldfield, 1971) and without any known neurological abnormality.
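This criterion puts rotation and translation on a common scale by expressing each rotation as the arc length it traces at 65 mm from the rotation center (arc length = angle × radius). The sketch below is a minimal illustration of that logic, not the authors' code; the SPM-style parameter layout and the use of peak-to-peak excursion are our assumptions.

```python
# Hedged sketch (not the authors' code): flag a participant for exclusion when
# head motion exceeds one voxel (3 mm), expressing rotation as the arc length
# traced at 65 mm from the rotation center (Wilke, 2014): s = r * theta.
import numpy as np

CORTICAL_DISTANCE_MM = 65.0  # radius used to convert rotation to displacement
VOXEL_SIZE_MM = 3.0          # exclusion threshold used in this study

def exceeds_motion_threshold(params: np.ndarray) -> bool:
    """params: (n_volumes, 6) SPM-style realignment parameters; columns 0-2
    are translations in mm, columns 3-5 rotations in radians (assumed layout)."""
    translations = params[:, :3]
    rotations_as_mm = params[:, 3:] * CORTICAL_DISTANCE_MM  # arc length s = r * theta
    displacement = np.hstack([translations, rotations_as_mm])
    # Assumption: peak-to-peak excursion on any axis beyond one voxel excludes.
    return bool((np.ptp(displacement, axis=0) > VOXEL_SIZE_MM).any())
```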

Four participant groups were tested: (1) deaf native signers who learnt BSL from birth [henceforth DE (deaf early); n = 11 (male = 4)]; (2) deaf non-native signers who began to learn BSL aged 15 or older [henceforth DL (deaf late); n = 12 (male = 6)]; (3) hearing native signers who learnt BSL from birth [henceforth HE (hearing early); n = 13 (male = 1)]; (4) hearing non-native signers who began to learn BSL aged 15 or older [henceforth HL (hearing late); n = 12 (male = 5)]. The mean age of each of the groups was as follows: DE: 35.03 years (range: 26.11–59.10 years); DL: 39.06 years (range: 29.01–55.05 years); HE: 36.01 years (range: 20.03–60.00 years); HL: 41.10 years (range: 25.10–56.02 years). There were no significant age differences between groups (F(3,44) = 1.168, p = 0.333, η2 = 0.074).

To facilitate group matching, participants were tested on a BSL grammaticality judgment task (Cormier et al., 2012), on performance IQ (PIQ; block design subtest of the WAIS-R), on reading attainment (Vernon-Warden, 1996), and on English vocabulary (shortened version of the Boston naming test; Kaplan et al., 1983). BSL grammaticality judgment data were missing for two DE participants and one DL participant; reading attainment data were missing for two HE participants and one DL participant; and English vocabulary data were missing for one HL participant. There were no significant differences among the groups on the BSL grammaticality judgment task (F(3,41) = 1.322, p = 0.280, η2 = 0.088), PIQ (F(3,44) = 1.086, p = 0.365, η2 = 0.069), or English vocabulary (F(3,43) = 1.363, p = 0.267, η2 = 0.087). However, there were group differences in reading attainment (F(3,41) = 8.989, p < 0.001, η2 = 0.397), such that HL scored significantly better than HE (t(21) = 3.433, p = 0.002, d = 1.433), DE (t(21) = 4.610, p < 0.001, d = 1.924), and DL (t(21) = 4.397, p < 0.001, d = 1.835). There were no significant differences in reading attainment between the HE, DE, and DL groups.

All deaf participants reported being born severely or profoundly deaf. Past audiogram data were available for only half of the participants (DE: 5/11; DL: 6/12). Mean hearing loss in the better ear was 91.2 dB (range: 81–105) for the DE participants and 102.0 dB (range: 91–116) for the DL participants. See Table 1 for a summary of participant characteristics. The use of hearing aids varied across deaf participants. The preferred language at the time of the experiment was BSL for all deaf participants except one. Hearing aid use, language experience while growing up, and preferred language in adulthood are detailed in Table 2.

Table 1. Participant characteristics

Table 2. The use of hearing aids and the experience of language use in deaf participants

Experimental design.

Two between-subject factors were included: hearing status (deaf vs hearing) and age of sign language acquisition (age of acquisition: early vs late). In addition, a within-subject factor, task, was included with three levels (BSL phonological, semantic, visual judgment). This resulted in a balanced, 2 × 2 × 3 (hearing status × age of acquisition × task) factorial design.

Stimuli and task.

The stimuli consisted of 200 pictures, which were recombined to form 300 different picture pairs. Three picture-pair sets were established such that 100 pairs were used in each of the three tasks: phonological, semantic, and visual judgment. Within each picture set, 50 pairs formed “yes” trials and 50 formed “no” trials. Overall, this design ensured that the same pictures were used across all three tasks. All 200 pictures were used in the phonological and semantic tasks, whereas only 150 of the pictures were used in the visual task due to the nature of the “same picture?” task (see Visual task).

Of the 200 pictures, 194 were black and white line drawings depicting high-familiarity nouns, all but one of which (“dream”) were concrete. The remaining six pictures were colored squares representing color names. Half of the pictures were from the Snodgrass and Vanderwart (1980) normed picture set. The other half were sourced from a range of picture-naming projects and were selected or adapted to match the visual characteristics of the Snodgrass and Vanderwart (1980) set.

Phonological judgment task.

Twenty-five picture pairs were established in which the BSL labels for the two pictures overlapped in handshape, and 25 in which they overlapped in hand location. These are two of the phonological parameters of signed languages (Sandler and Lillo-Martin, 2006a). A further 50 picture pairs were established as “no” trials, in which the BSL labels did not overlap in any phonological parameter and the items were not semantically related.

Semantic judgment task.

The 200 picture stimuli were recombined to form 50 category-related pairs (e.g., “pear–banana”, “drum–guitar”, “sun–moon”) and 50 unrelated pairs. These stimuli were piloted with 15 hearing native speakers of English. Only pairs in which 12 or more of the pilot participants reported a category relationship were used as “yes” stimuli in the fMRI study. Similarly, “no” trials were only used if a minimum of 14 of 15 pilot participants agreed that the pictures were unrelated.

Visual task.

In the visual matching (“same?”) condition, 50 of the 200 pictures appeared in 50 same-picture pairs (e.g., “sun–sun”) and 100 appeared in 50 different-picture pairs (e.g., “sun–pear”). Examples of the stimuli are shown in Figure 1.
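As a quick consistency check on these counts (our arithmetic, not the authors'): each “same” pair uses one unique picture and each “different” pair uses two, reproducing the 150 pictures stated above.

```python
same_pairs, different_pairs = 50, 50
pictures_used = same_pairs * 1 + different_pairs * 2  # unique pictures per pair type
print(pictures_used)  # 150, matching the visual task's reduced picture set
```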

Figure 1. Stimulus examples. Top, BSL phonological task (“Same handshape?”). Middle, Semantic task (“Same category?”). Bottom, Visual task (“Same picture?”).

Due to lexical variation in BSL (Schembri et al., 2010), it was important to show participants all experimental pictures before the fMRI experiment, to ensure that they used the BSL labels required for the BSL phonological task. For each participant, there were only a few pictures for which it was necessary to ask them to base their decisions on signs that, although part of the BSL lexicon, were not the signs they usually used for the item.

Procedure.

Participants performed three judgment tasks: BSL phonological, semantic, and visual. In the BSL phonological task, participants were required to press a button when the BSL labels for the two pictures shared a sign phonological parameter. In separate blocks participants were required to detect shared handshape or shared location. In the current study, data are combined to form the “BSL phonological judgment” condition. The data contrasting handshape and location decisions will be reported separately. In the semantic task, participants were required to press a button when the picture pairs came from the same category (e.g., elephant/donkey). In the visual task participants judged whether the pictures presented were the same or different.

For all participants, the right index finger was used to respond to “yes” trials. “No” trials did not require a response. Half the trials in each condition were “yes” trials and half were “no” trials. Participants practiced the tasks, on stimuli not presented in the scanner, immediately before the fMRI experiment.

Each participant completed four fMRI runs (7 min each). Each run consisted of 15 blocks of 21 s, of which five were BSL phonological decision blocks, five were semantic decision blocks, and five were visual matching blocks. The order of presentation of conditions was pseudorandomized across runs. Each block began with a 1 s printed English task prompt (either “handshape?” or “location?” for the BSL phonological decision, “related?” for the semantic decision, or “same?” for the visual decision). This was followed by five picture-pair presentations, each with a 3.5 s exposure duration and an interstimulus interval of 500 ms. Task blocks were separated by baseline blocks of crosshair fixation: thirteen 6 s blocks, plus two longer 13.5 s fixation blocks positioned in the middle and toward the end of the run. Stimuli were projected onto a screen positioned at the top of the scanner bore. Participants viewed the stimuli via a mirror placed on the MRI head coil.
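As a sanity check (our arithmetic, not the authors'), the stated block structure sums exactly to the reported 7 min run length:

```python
prompt = 1.0                      # s, printed English task cue
trial = 3.5 + 0.5                 # s, picture-pair exposure + interstimulus interval
task_block = prompt + 5 * trial   # = 21 s, matching the stated block length
task_time = 15 * task_block       # 15 task blocks per run
fixation_time = 13 * 6.0 + 2 * 13.5
print(task_time + fixation_time)  # 420.0 s = 7 min
```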

MRI acquisition.

Anatomical and functional images were acquired from all participants using a Siemens 1.5 T Sonata scanner. Anatomical T1-weighted images were acquired using a 3-D MDEFT (modified driven equilibrium Fourier transform) sequence. One hundred seventy-six sagittal partitions with an image matrix of 256 × 224 and a final resolution of 1 mm³ were acquired [repetition time (TR): 12.24 ms; echo time (TE): 3.5 ms; inversion time (TI): 530 ms]. Structural scans indicated that our participants were free from gross neurological abnormalities.

Functional T2*-weighted echo-planar images with BOLD contrast comprised 38 axial slices of 2 mm thickness (1 mm gap), with 3 × 3 mm in-plane resolution. One hundred thirty-four volumes were acquired per run (TR: 3.42 s; TE: 50 ms; flip angle: 90°). TR and stimulus onset asynchrony were mismatched, allowing for distributed sampling of slice acquisition across the experiment (Veltman et al., 2002), which obviates the need for explicit “jittering”. To avoid Nyquist ghost artifacts, a generalized (trajectory-based) reconstruction algorithm was used for data processing. After reconstruction, the first six volumes of each session were discarded to ensure tissue steady-state magnetization.
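To see why this mismatch distributes sampling, note that successive stimulus onsets land at different phases of the 3.42 s TR. The toy calculation below is ours: it takes the SOA as the 3.5 s exposure plus 0.5 s interstimulus interval and, for illustration, treats onsets as a regular 4 s train, ignoring block boundaries.

```python
import numpy as np

tr, soa = 3.42, 4.0                    # s; SOA = 3.5 s exposure + 0.5 s ISI
onsets = np.arange(20) * soa           # idealized regular train of onsets
phases = onsets % tr                   # onset phase relative to volume acquisition
print(np.round(np.unique(phases), 2))  # many distinct phases: no jitter needed
```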

Statistical analysis.

Behavioral data were analyzed in a 2 × 2 × 3 ANOVA with hearing status (deaf, hearing) and age of BSL acquisition (early, late) as between-subject factors and task (BSL phonological, semantic, visual) as a within-subject factor. The d′ scores, accuracy, and reaction times (RTs) were the dependent measures. Where Mauchly's test indicated significant non-sphericity in the data, a Greenhouse–Geisser correction was applied. When there was a main effect of task or an interaction involving task, planned comparisons were performed using paired t tests to evaluate differences between (1) the BSL phonological and semantic tasks, (2) the semantic and visual tasks, and (3) the BSL phonological and visual tasks. For the calculation of the d′ scores, corrections of ±0.01 were made because some subjects had a hit rate of 1 and/or a false alarm rate of 0. RTs were measured for go trials only and were recorded from the onset of the stimulus. Anticipatory responses (<200 ms) were trimmed (n = 9; 0.05% of all trials across participants).
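A minimal sketch of the d′ computation as described (variable names are ours; we assume the ±0.01 correction is applied only to the extreme rates):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with the +/-0.01 correction
    described above for perfect hit rates or zero false-alarm rates."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    if hit_rate == 1.0:
        hit_rate -= 0.01
    if fa_rate == 0.0:
        fa_rate += 0.01
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```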

The imaging data were processed using SPM12 (Wellcome Trust Centre for Neuroimaging, London, UK; http://www.fil.ion.ucl.ac.uk/spm/). All functional volumes were spatially realigned and unwarped to adjust for minor distortions in the B0 field due to head movement (Andersson et al., 2001). All functional images were normalized to Montreal Neurological Institute (MNI) space (maintaining the original 3 × 3 × 3 mm resolution). Functional images were then smoothed using an isotropic 6 mm full-width at half-maximum Gaussian kernel.

First-level fixed-effects analyses were based on a least-squares regression analysis using the general linear model in each voxel across the whole brain. Low-frequency noise and signal drift were removed from the time series in each voxel with high-pass filtering (1/128 Hz cutoff). Residual temporal autocorrelations were approximated by an AR(1) model and removed. At the first level, the onsets of stimuli (3.5 s) were modeled as epoch-related responses (for the exact duration of the stimuli) and convolved with a canonical hemodynamic response function. Correct trials for each of the three conditions over four sessions and the errors were modeled separately. Button press manual responses were modeled as event-related responses and convolved with a canonical hemodynamic response function. Fixation was not modeled and served as an implicit baseline. The contrasts of interest were each experimental condition (BSL phonological, semantic, and visual) relative to fixation, averaged over sessions.
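For readers unfamiliar with this step, the sketch below shows, from scratch, how one such regressor is formed: a boxcar over the 3.5 s stimulus epochs convolved with a canonical double-gamma HRF. This is an illustration using SPM's default HRF shape parameters, not the authors' SPM code; the unnormalized HRF and the simple per-scan sampling are simplifications of our choosing.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt, length=32.0):
    """SPM-style double-gamma HRF (unnormalized): peak near 6 s, undershoot near 16 s."""
    t = np.arange(0, length, dt)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def epoch_regressor(onsets, duration, n_scans, tr=3.42, dt=0.1):
    """Boxcar at the given onsets/duration (s), convolved with the HRF and
    sampled once per scan (TR = 3.42 s in this study)."""
    t = np.arange(0, n_scans * tr, dt)
    boxcar = np.zeros_like(t)
    for onset in onsets:
        boxcar[(t >= onset) & (t < onset + duration)] = 1.0
    bold = np.convolve(boxcar, canonical_hrf(dt))[: t.size]
    scan_idx = (np.arange(n_scans) * tr / dt).astype(int)
    return bold[scan_idx]
```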

At the second level, a random-effects analysis included the contrast images for the three task conditions relative to fixation (within-subject) for each of the four (2 × 2) groups (between-subject), resulting in a 2 × 2 × 3 ANOVA with hearing status (deaf, hearing) and age of BSL acquisition (early, late) as between-subject factors and task (BSL phonological, semantic, visual) as a within-subject factor, with a correction for non-sphericity. RTs, which may have contributed to the task effects, were not included as covariates in the imaging analyses because task differences were of interest.

We identified the effects in the left STC and the right STC separately. We first identified the effects of task modulation. Given the stepwise increase in linguistic task demands, we specifically tested for BSL phonological task > semantic task, and semantic task > visual task. We then established whether deaf signers activated these regions more than hearing signers across tasks (i.e., the effect of deafness). Finally, we identified whether the effect of deafness was dependent on task and on age of BSL acquisition. We report activation as significant at voxel-level inference of p < 0.05, familywise error (FWE) corrected for multiple comparisons at the whole-brain level (Z > 4.76). For effects within the left or right STC, we also report activation at an uncorrected level of p < 0.001 because we had a priori hypotheses regarding the function of these regions.

Lateralization was assessed using the bootstrapping procedure implemented in the LI toolbox (Wilke and Schmithorst, 2006; Wilke and Lidzba, 2007) in SPM. This is a robust tool that deals with the threshold dependency of assessing laterality from neuroimaging data (Bradshaw et al., 2017). We assessed lateralization for a main effect of group and for interactions of group and task. The contrasts used were as follows: (1) deaf > hearing, (2) deaf > hearing by phonological task > semantic task, and (3) deaf > hearing by phonological task > visual task. Ten thousand lateralization indices (LIs) were calculated from 100 bootstrapped resamples of voxel values in each hemisphere, at multiple thresholds. This analysis does not require a fixed threshold or correction for multiple comparisons because it is based on a bootstrapping procedure. Resulting LIs were plotted, and the weighted mean, which gives greater weighting to higher thresholds, was calculated. A built-in temporal mask, which covers the entire temporal cortices, was selected as an inclusive mask. No exclusion mask was used. Analyses were conducted without clustering or variance weighting. Weighted laterality values ≥0.2 (left) or ≤−0.2 (right) indicate significant lateralization (Wilke and Schmithorst, 2006; Wilke et al., 2006; Lebel and Beaulieu, 2009; Lidzba et al., 2011; Badcock et al., 2012; Nagel et al., 2013; Pahs et al., 2013; Gelinas et al., 2014; Norrelgen et al., 2015; Evans et al., 2016). We also report the trimmed mean, which is calculated from the central 50% of all LIs, for completeness.
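The core of the procedure is LI = (L − R)/(L + R) over suprathreshold voxel values, resampled at multiple thresholds. The following is a simplified re-implementation of that idea (the study used the LI toolbox itself); the resampling counts, the positive-threshold grid, and the threshold-proportional weighting are our assumptions.

```python
import numpy as np

def bootstrapped_li(left, right, thresholds, n_boot=100, seed=0):
    """left, right: 1-D arrays of voxel values within each hemisphere's mask.
    Returns a threshold-weighted mean LI; >= 0.2 left-, <= -0.2 right-lateralized."""
    rng = np.random.default_rng(seed)
    lis, weights = [], []
    for thr in thresholds:            # assumed: a grid of positive thresholds
        l = left[left > thr]
        r = right[right > thr]
        if l.size == 0 or r.size == 0:
            continue                  # no suprathreshold voxels at this level
        for _ in range(n_boot):
            ls = rng.choice(l, size=l.size, replace=True).sum()
            rs = rng.choice(r, size=r.size, replace=True).sum()
            lis.append((ls - rs) / (ls + rs))
            weights.append(thr)       # higher thresholds weighted more
    return float(np.average(lis, weights=weights))
```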

Results

Behavioral data

The d′ scores showed a significant difference in response sensitivity as a function of task (F(2,88) = 397.189, p < 0.001, η2 = 0.900). Planned t tests confirmed that d′ for the BSL phonological task was significantly lower than for the semantic task (t(47) = 20.386, p < 0.001, d = 2.943) and the visual task (t(47) = 26.924, p < 0.001, d = 3.885). In addition, d′ for the semantic task was significantly lower than for the visual task (t(47) = 7.334, p < 0.001, d = 1.059). However, response sensitivity did not differ by hearing status (F(1,44) = 0.665, p = 0.419, η2 = 0.015) or age of acquisition (F(1,44) = 0.137, p = 0.713, η2 = 0.003), and the interaction of these two factors was not significant (F(1,44) = 3.243, p = 0.079, η2 = 0.069). Other interactions were also nonsignificant (all p > 0.267).

A main effect of task was also significant for RTs (F(1.559,68.601) = 1530.809, p < 0.001, η2 = 0.972). RTs were longer for the BSL phonological task than the semantic task (t(47) = 34.920, p < 0.001, d = 5.042) and the visual task (t(47) = 42.766, p < 0.001, d = 6.174), and longer for the semantic task than the visual task (t(47) = 24.457, p < 0.001, d = 3.532). There were no main effects of hearing status (F(1,44) = 1.362, p = 0.249, η2 = 0.030) or age of acquisition (F(1,44) = 3.205, p = 0.080, η2 = 0.068). In the RT data, however, there was a significant task × age of acquisition interaction (F(1.56,68.60) = 3.828, p = 0.036, η2 = 0.080). Post hoc t tests confirmed that participants who learnt BSL late (HL and DL) were significantly slower than those who learnt BSL early (HE and DE) on the BSL phonological task (2129.92 vs 1979.25 ms, t(46) = 2.136, p = 0.038, d = 0.617) but not on the semantic task (1201.17 vs 1127.75 ms, t(46) = 1.227, p = 0.226, d = 0.354) or the visual task (744.38 vs 720.33 ms, t(46) = 0.637, p = 0.527, d = 0.184). The behavioral data are illustrated in Figure 2. Although Figure 2 suggests that this interaction might be driven by the deaf participants, there was no significant three-way interaction (F(1.559,68.601) = 2.343, p = 0.116, η2 = 0.051). The interaction of hearing status and age of acquisition was also not significant (F(1,44) = 2.381, p = 0.130, η2 = 0.051).

Figure 2. Behavioral results. Left, Response sensitivity (d′). Right, RTs (ms). Both show a main effect of task; the task × age of acquisition interaction was significant for RTs only. PHON, BSL phonological task; SEM, semantic task; VIS, visual task.

In summary, the behavioral data suggest that the BSL phonological task was more demanding than the semantic task, which in turn was more demanding than the visual task. Moreover, the effect of learning BSL late was evident in reaction times during the BSL phonological task only. There was no effect of hearing status on behavioral performance on the tasks or interaction between hearing status and any other factors.

fMRI data

Left STC

There were group by task interactions in the left STC, significant at p < 0.05 FWE-corrected (for details, see Table 3). These indicated enhanced activation in deaf relative to hearing signers only for the BSL phonological task (Fig. 3). The location of the enhanced left STC activation was in the posterior superior temporal gyrus and sulcus and did not include Heschl's gyrus. Rather, activation was within the higher-order auditory cortex Te 3, defined by the SPM Anatomy Toolbox v2.2b (Eickhoff et al., 2005, 2006, 2007). Within the deaf participants, left STC activation was significantly greater for the BSL phonological task than the semantic task or the visual task. The difference in activation during the semantic and visual tasks was also significant (Table 3). The main effect of deafness, across the three tasks, was only significant in the left STC at the p < 0.001 uncorrected level (x = −66, y = −34, z = +5; Z = 3.55, k = 5).

Table 3. Statistical details for hearing status and task interactions in left STC

Figure 3. The main effect of deafness and the interaction of deafness and task at p < 0.05 FWE-corrected (red to yellow). At the FWE-corrected level, these effects in STC were task-independent on the right (top) and task-dependent on the left (bottom). Bar plots of parameter estimates at these peaks are also shown. Error bars indicate SE. PHON, phonological task; SEM, semantic task; VIS, visual task.

A very different response pattern was observed in the left STC in hearing signers. During the BSL phonological task, hearing signers showed deactivation, although this was only significant at the p < 0.001 uncorrected level (x = −66, y = −31, z = +5; Z = −3.47, k = 1104). Although deactivation for the BSL phonological task was numerically greater than for the semantic task, which in turn was numerically greater than for the visual task, there was no significant difference across tasks (Table 3).

There was no main effect of age of acquisition in left STC (p > 0.001 uncorrected). There were no significant age of acquisition by task interactions and no three-way interactions between age of acquisition, group, and task.

Right STC

Across tasks, the right STC showed significantly greater activation in deaf than hearing signers (x = +66, y = −34, z = +8; Z = 5.35, p = 0.002, k = 14 FWE-corrected). This task-independent effect of deafness in the right STC was observed in the homolog of the region showing a task-dependent effect of deafness in the left STC (Fig. 3).

There were no significant group by task interactions at p < 0.05 FWE-corrected. However, these interactions were present at a lower threshold of p < 0.001 uncorrected (Table 4). The effect of age of acquisition (late > early) in the right STC was significant only at p < 0.001 uncorrected (x = +57, y = −34, z = +11; Z = 3.19, k = 3). Late learners showed greater activation (deaf) or reduced deactivation (hearing) than early learners. None of the interactions between age of acquisition and task; age of acquisition and group; or age of acquisition, group, and task reached significance (p > 0.001 uncorrected).

Table 4. Statistical details for hearing status and task interactions in right STC

Hemispheric differences

At the corrected level (p < 0.05 FWE), the data demonstrated significant group by task interactions in the left STC (deaf > hearing in the phonological task only) and a significant group effect in the right STC (deaf > hearing in all three tasks). However, assessing laterality effects is, among other things, dependent on the statistical threshold used. Indeed, at the lower threshold of p < 0.001 uncorrected, we found group by task interactions in the right STC and a main effect of group in the left STC. To determine whether auditory experience differentially influences the function of left and right STC regardless of statistical threshold, we performed additional analyses to directly test for hemispheric differences in STC. Bootstrapped laterality analyses (Wilke and Schmithorst, 2006; Wilke and Lidzba, 2007) confirmed that the main effect of group was right lateralized (weighted mean = −0.53; trimmed mean = −0.35), whereas both interaction effects involving group and task were left lateralized (phonological > semantic: weighted mean = 0.49, trimmed mean = 0.27; phonological > visual: weighted mean = 0.53, trimmed mean = 0.32). Lateralization index values are plotted in Figure 4.

Figure 4. LI values within temporal cortices for deaf > hearing (top), deaf > hearing × BSL phonological task > semantic task (middle), and deaf > hearing × BSL phonological task > visual task (bottom).

Other regions

Deaf signers also showed greater activation than hearing signers, across all tasks, in visual processing regions (Table 5; Fig. 3) even though the stimuli, accuracy and response times did not differ for deaf and hearing participants. No regions were activated significantly more in hearing than deaf participants.

Table 5. Statistical details for the regions in which activation was greater for deaf than hearing signers across all tasks at p < 0.05 FWE-corrected (Z > 4.76)

Summary

Deaf participants showed increased activation relative to hearing participants in both left and right STC. This effect was greatest during the BSL phonological task in left STC. In contrast, enhanced activation in the deaf group was not task dependent in the right STC. Analyses directly testing the hemispheric differences confirmed that the interaction of deafness and task was more left lateralized, whereas the main effect of deafness was more right lateralized.

Discussion

Understanding how biological and environmental constraints influence neural plasticity is fundamental to a complete understanding of the brain. Unique insights into these questions can be gained from working with those who are born profoundly deaf. Unlike research with deaf animal models (Lomber et al., 2010; Kral et al., 2016), research with deaf humans must take into account the influence of accessing language primarily through the visual modality and the age of acquisition of that visuospatial language to fully understand experience-dependent neural plasticity (Campbell et al., 2014). Prior studies have shown that activation in the STC in response to sign language stimuli is significantly greater in deaf native signers than hearing native signers (MacSweeney et al., 2002, 2004). Here we investigated the functional role of the left and right STC in deaf signers by manipulating task demands and the age at which sign language was acquired.

Our results reveal that deaf and hearing signers show contrasting effects in the STC during BSL phonological decisions on pictures of objects. The region showing differential effects included the posterior superior temporal gyrus and sulcus but excluded Heschl's gyrus. Deaf signers showed STC activation, which was absent in hearing signers. These contrasting effects were observed even though the stimuli and task instructions were identical for all participants, and even though there was no significant difference in response times for the deaf and hearing participants, all of whom had similar sign language experience.

Our results also differentiate responses in the left and right STC. Specifically, left STC was more sensitive to task than deafness, whereas right STC was more sensitive to deafness regardless of task. We consider whether and how the left and right STC contribute to visual cognition, in those born deaf and in those born hearing.

Left STC function in those born deaf

The task-dependent effects in left STC provide clues to its computational function. Activation increases were strongest when the demands on visual imagery and visuospatial working memory were highest. This observation (x = −66, y = −31, z = +5 in MNI space) is consistent with prior evidence that deaf participants show increased activation in a similar part of STC (x = −51, y = −33, z = +6 in MNI space) during the maintenance and recognition phases of a visuospatial working memory task with nonverbal stimuli (Ding et al., 2015). It also falls within the cytoarchitectonic region (Te 3) where Bola et al. (2017) found enhanced STC activation in deaf participants during a visual rhythm working memory task involving sequences of flashes. The contribution of left STC to visuospatial processing in deaf participants might therefore explain responses observed to both verbal and nonverbal stimuli. In hearing people, in addition to speech recognition and phonological processing (Hickok, 2009; Okada et al., 2010; Evans et al., 2014), this part of the left STC has been implicated in auditory working memory (Leff et al., 2009) and auditory imagery (McNorgan, 2012). Demonstrating the involvement of the left STC in visuospatial processing in those born deaf complements what has been observed in congenitally deaf cats. For example, Lomber et al. (2010) have shown that parts of auditory cortex that are usually involved in identifying auditory location in hearing cats are recruited to identify visual location in deaf cats, whereas regions involved in identifying auditory movement in hearing cats are recruited to process visual motion in deaf cats.

We found no evidence for an influence of age of acquisition on left STC activation. At first glance, this may appear to be inconsistent with prior studies showing that early sign language acquisition can improve nonverbal working memory (Marshall et al., 2015) and sign language processing, particularly grammaticality judgments (Mayberry et al., 2011; Cormier et al., 2012; Henner et al., 2016). Earlier sign language acquisition has also been reported to be related to increased left STC activation (Mayberry et al., 2011). However, the effect of age of acquisition on both behavior and brain activation is highly task-dependent. For example, Mayberry et al. (2011) did not see an advantage of early sign language acquisition in behavioral performance when their participants were engaged in a phonemic-hand judgment task, nor an effect on brain activation during passive viewing of a still image of the signer. In addition, age of acquisition is often correlated with proficiency. In our study, we matched sign language proficiency across those who learnt sign language early versus late, and this might explain why left STC activation was not influenced by age of acquisition in our participants. Future studies will need to dissociate effects that are related to age of sign language exposure and, separately, to sign language proficiency.

Left STC function in those born hearing

Although deaf signers showed enhanced left STC activation during the BSL phonological task relative to the other tasks, hearing signers did not activate this region. This contrasting pattern was observed even though the two groups had the same sign language experience and equivalent task performance.

We propose that our hearing participants may have been suppressing distracting auditory information from the environment. Indeed, deactivation in sensory cortices when attending to another sensory input is a well-documented phenomenon (Laurienti et al., 2002; but see Ding et al., 2015). For example, hearing non-signers have been shown to deactivate STC when performing a visual rhythm task (Bola et al., 2017) and a visual imagery task (Zvyagintsev et al., 2013). Participants have also been shown to deactivate visual cortex while performing auditory spatial and pitch judgment tasks (Collignon et al., 2011). This modality-specific deactivation allows the downregulation of potentially distracting sensory activity in other modalities, for example, scanner noise in hearing participants doing a visually demanding task. Although deactivation in hearing signers in the current study did not reach the threshold for statistical significance, a similar mechanism may account for the pattern observed in this group.

It is interesting that although hearing signers in the current study and hearing non-signers in Bola et al. (2017) did not activate the STC, hearing non-signers tested by Ding et al. (2015) showed positive activation. The cause of this between-study discrepancy in STC responses in hearing participants is unclear and requires further investigation.

Right STC function in those born deaf and those born hearing

Unlike the left STC, deaf participants activated right STC regardless of the task demands. This activation is therefore more likely to reflect bottom-up, perceptual processing of visual stimuli than linguistic processing, visuospatial imagery, or working memory demands. This is consistent with prior literature showing deafness-related increases in right STC activation to a range of nonverbal visual stimuli such as moving dot arrays (Finney et al., 2001; Fine et al., 2005; Vachon et al., 2013) and static and moving sinusoidal gratings (Shiell et al., 2014). In contrast, hearing participants did not activate STC in response to any of the tasks.

There was also a main effect of age of sign language acquisition in the right STC (late > early). However, this had not been predicted and was significant only at an uncorrected level. Further studies are necessary to examine this potential effect.

Hemispheric differences in STC in deaf signers

Finally, we found that the main effect of group was right lateralized, with deaf signers demonstrating significantly greater activation than hearing signers. In contrast, interactions of group and task (deaf > hearing by BSL phonological task > semantic task; deaf > hearing by BSL phonological task > visual task) were left lateralized. These hemispheric differences were not reported in the Bola et al. (2017) study and were reported only during the encoding phase of a visual memory task in the Ding et al. (2015) study. Because neither study used linguistic stimuli, it is likely that the hemispheric differences identified in the current study reflect the additional contribution of the left STC to the increased visuospatial processing demands of the BSL phonological task.

Conclusions

Together, our results from deaf and hearing signers suggest that the function of posterior STC, which includes the posterior superior temporal gyrus and sulcus but excludes Heschl's gyrus, changes with auditory experience. In those born hearing, left and right STC primarily respond to auditory stimuli and are suppressed, to some extent, during visual tasks. In contrast, when the STCs do not receive auditory input, left STC participates in cognitive tasks including those that require visuospatial processing, and right STC participates in low-level visual processing, regardless of visuospatial demands. As all our participants were proficient signers, future studies are now required to determine how sign language knowledge and, importantly, sign language proficiency influence the strong effect of deafness on visuospatial processing in the STCs that we have described here.

Footnotes

  • This work was supported by Wellcome Trust Fellowships to M.M. (100229/Z/12/Z) and C.P. (097720/Z/11/Z) and the Economic and Social Research Council (Deafness Cognition and Language Research Centre; RES-620-28-0002) to T.T.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Mairéad MacSweeney, Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, WC1N 3AZ. m.macsweeney@ucl.ac.uk

This is an open-access article distributed under the terms of the Creative Commons Attribution License Creative Commons Attribution 4.0 International, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. ↵
    1. Andersson JL,
    2. Hutton C,
    3. Ashburner J,
    4. Turner R,
    5. Friston K
    (2001) Modeling geometric deformations in EPI time series. Neuroimage 13:903–919. doi:10.1006/nimg.2001.0746 pmid:11304086
    OpenUrlCrossRefPubMed
  2. ↵
    1. Badcock NA,
    2. Bishop DV,
    3. Hardiman MJ,
    4. Barry JG,
    5. Watkins KE
    (2012) Co-localisation of abnormal brain structure and function in specific language impairment. Brain Lang 120:310–320. doi:10.1016/j.bandl.2011.10.006 pmid:22137677
    OpenUrlCrossRefPubMed
  3. ↵
    1. Bola Ł,
    2. Zimmermann M,
    3. Mostowski P,
    4. Jednoróg K,
    5. Marchewka A,
    6. Rutkowski P,
    7. Szwed M
    (2017) Task-specific reorganization of the auditory cortex in deaf humans. Proc Natl Acad Sci U S A 114:E600–E609. doi:10.1073/pnas.1609000114 pmid:28069964
    OpenUrlAbstract/FREE Full Text
  4. ↵
    1. Bradshaw AR,
    2. Bishop DVM,
    3. Woodhead ZVJ
    (2017) Methodological considerations in assessment of language lateralisation with fMRI: a systematic review. PeerJ 5:e3557. doi:10.7717/peerj.3557 pmid:28713656
    OpenUrlCrossRefPubMed
  5. ↵
    1. Brentari D
    (1998) A prosodic model of sign language phonology. Cambridge, MA: MIT.
  6. ↵
    1. Campbell R,
    2. MacSweeney M,
    3. Woll B
    (2014) Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success. Front Hum Neurosci 8:834. doi:10.3389/fnhum.2014.00834 pmid:25368567
    OpenUrlCrossRefPubMed
  7. ↵
    1. Capek CM,
    2. Macsweeney M,
    3. Woll B,
    4. Waters D,
    5. McGuire PK,
    6. David AS,
    7. Brammer MJ,
    8. Campbell R
    (2008) Cortical circuits for silent speechreading in deaf and hearing people. Neuropsychologia 46:1233–1241. doi:10.1016/j.neuropsychologia.2007.11.026 pmid:18249420
    OpenUrlCrossRefPubMed
  8. ↵
    1. Capek CM,
    2. Woll B,
    3. MacSweeney M,
    4. Waters D,
    5. McGuire PK,
    6. David AS,
    7. Brammer MJ,
    8. Campbell R
    (2010) Superior temporal activation as a function of linguistic knowledge: insights from deaf native signers who speechread. Brain Lang 112:129–134. doi:10.1016/j.bandl.2009.10.004 pmid:20042233
    OpenUrlCrossRefPubMed
  9. ↵
    1. Cardin V,
    2. Orfanidou E,
    3. Rönnberg J,
    4. Capek CM,
    5. Rudner M,
    6. Woll B
    (2013) Dissociating cognitive and sensory neural plasticity in human superior temporal cortex. Nat Commun 4:1473. doi:10.1038/ncomms2463 pmid:23403574
    OpenUrlCrossRefPubMed
  10. ↵
    1. Collignon O,
    2. Vandewalle G,
    3. Voss P,
    4. Albouy G,
    5. Charbonneau G,
    6. Lassonde M,
    7. Lepore F
    (2011) Functional specialization for auditory–spatial processing in the occipital cortex of congenitally blind humans. Proc Natl Acad Sci U S A 108:4435–4440. doi:10.1073/pnas.1013928108 pmid:21368198
    OpenUrlAbstract/FREE Full Text
  11. ↵
    1. Cormier K,
    2. Schembri A,
    3. Vinson D,
    4. Orfanidou E
    (2012) First language acquisition differs from second language acquisition in prelingually deaf signers: evidence from sensitivity to grammaticality judgement in British sign language. Cognition 124:50–65. doi:10.1016/j.cognition.2012.04.003 pmid:22578601
    OpenUrlCrossRefPubMed
  12. ↵
    1. Ding H,
    2. Qin W,
    3. Liang M,
    4. Ming D,
    5. Wan B,
    6. Li Q,
    7. Yu C
    (2015) Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness. Brain 138:2750–2765. doi:10.1093/brain/awv165 pmid:26070981
    OpenUrlCrossRefPubMed
  13. ↵
    1. Eickhoff SB,
    2. Stephan KE,
    3. Mohlberg H,
    4. Grefkes C,
    5. Fink GR,
    6. Amunts K,
    7. Zilles K
    (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335. doi:10.1016/j.neuroimage.2004.12.034 pmid:15850749
    OpenUrlCrossRefPubMed
  14. ↵
    1. Eickhoff SB,
    2. Heim S,
    3. Zilles K,
    4. Amunts K
    (2006) Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. Neuroimage 32:570–582. doi:10.1016/j.neuroimage.2006.04.204 pmid:16781166
    OpenUrlCrossRefPubMed
  15. ↵
    1. Eickhoff SB,
    2. Paus T,
    3. Caspers S,
    4. Grosbras MH,
    5. Evans AC,
    6. Zilles K,
    7. Amunts K
    (2007) Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage 36:511–521. doi:10.1016/j.neuroimage.2007.03.060 pmid:17499520
    OpenUrlCrossRefPubMed
  16. ↵
    1. Emmorey K,
    2. Weisberg J,
    3. McCullough S,
    4. Petrich JA
    (2013) Mapping the reading circuitry for skilled deaf readers: an fMRI study of semantic and phonological processing. Brain Lang 126:169–180. doi:10.1016/j.bandl.2013.05.001 pmid:23747332
    OpenUrlCrossRefPubMed
  17. ↵
    1. Evans S,
    2. Kyong JS,
    3. Rosen S,
    4. Golestani N,
    5. Warren JE,
    6. McGettigan C,
    7. Mourão-Miranda J,
    8. Wise RJ,
    9. Scott SK
    (2014) The pathways for intelligible speech: multivariate and univariate perspectives. Cereb Cortex 24:2350–2361. doi:10.1093/cercor/bht083 pmid:23585519
    OpenUrlCrossRefPubMed
  18. ↵
    1. Evans S,
    2. McGettigan C,
    3. Agnew ZK,
    4. Rosen S,
    5. Scott SK
    (2016) Getting the cocktail party started: masking effects in speech perception. J Cogn Neurosci 28:483–500. doi:10.1162/jocn_a_00913 pmid:26696297
    OpenUrlCrossRefPubMed
  19. ↵
    1. Fine I,
    2. Finney EM,
    3. Boynton GM,
    4. Dobkins KR
    (2005) Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex. J Cogn Neurosci 17:1621–1637. doi:10.1162/089892905774597173 pmid:16269101
    OpenUrlCrossRefPubMed
  20. ↵
    1. Finney EM,
    2. Fine I,
    3. Dobkins KR
    (2001) Visual stimuli activate auditory cortex in the deaf. Nat Neurosci 4:1171–1173. doi:10.1038/nn763 pmid:11704763
    OpenUrlCrossRefPubMed
  21. ↵
    1. Gelinas JN,
    2. Fitzpatrick KP,
    3. Kim HC,
    4. Bjornson BH
    (2014) Cerebellar language mapping and cerebral language dominance in pediatric epilepsy surgery patients. Neuroimage 6:296–306. doi:10.1016/j.nicl.2014.06.016 pmid:25379442
    OpenUrlCrossRefPubMed
  22. ↵
    1. Henner J,
    2. Caldwell-Harris CL,
    3. Novogrodsky R,
    4. Hoffmeister R
    (2016) American sign language syntax and analogical reasoning skills are influenced by early acquisition and age of entry to signing schools for the deaf. Front Psychol 7:1982. doi:10.3389/fpsyg.2016.01982 pmid:28082932
    OpenUrlCrossRefPubMed
  23. ↵
    1. Hickok G
    (2009) The functional neuroanatomy of language. Phys Life Rev 6:121–143. doi:10.1016/j.plrev.2009.06.001 pmid:20161054
    OpenUrlCrossRefPubMed
  24. ↵
    1. Kaplan E,
    2. Goodglass H,
    3. Weintraub S
    (1983) Boston Naming Test. Philadelphia, PA: Lea and Febiger.
  25. ↵
    1. Kral A,
    2. Kronenberger WG,
    3. Pisoni DB,
    4. O'Donoghue GM
    (2016) Neurocognitive factors in sensory restoration of early deafness: a connectome model. Lancet Neurol 15:610–621. doi:10.1016/S1474-4422(16)00034-X pmid:26976647
    OpenUrlCrossRefPubMed
  26. ↵
    1. Laurienti PJ,
    2. Burdette JH,
    3. Wallace MT,
    4. Yen YF,
    5. Field AS,
    6. Stein BE
    (2002) Deactivation of sensory-specific cortex by cross-modal stimuli. J Cogn Neurosci 14:420–429. doi:10.1162/089892902317361930 pmid:11970801
    OpenUrlCrossRefPubMed
  27. ↵
    1. Lebel C,
    2. Beaulieu C
    (2009) Lateralization of the arcuate fasciculus from childhood to adulthood and its relation to cognitive abilities in children. Hum Brain Mapp 30:3563–3573. doi:10.1002/hbm.20779 pmid:19365801
    OpenUrlCrossRefPubMed
  28. ↵
    1. Leff AP,
    2. Schofield TM,
    3. Crinion JT,
    4. Seghier ML,
    5. Grogan A,
    6. Green DW,
    7. Price CJ
    (2009) The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke. Brain 132:3401–3410. doi:10.1093/brain/awp273 pmid:19892765
    OpenUrlCrossRefPubMed
  29. ↵
    1. Lidzba K,
    2. Schwilling E,
    3. Grodd W,
    4. Krägeloh-Mann I,
    5. Wilke M
    (2011) Language comprehension vs. language production: age effects on fMRI activation. Brain Lang 119:6–15. doi:10.1016/j.bandl.2011.02.003 pmid:21450336
    OpenUrlCrossRefPubMed
  30. ↵
    1. Lomber SG,
    2. Meredith MA,
    3. Kral A
    (2010) Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci 13:1421–1427. doi:10.1038/nn.2653 pmid:20935644
    OpenUrlCrossRefPubMed
  31. MacSweeney M, Cardin V (2015) What is the function of auditory cortex without auditory input? Brain 138:2468–2470. doi:10.1093/brain/awv197 pmid:26304150
  32. MacSweeney M, Woll B, Campbell R, McGuire PK, David AS, Williams SC, Suckling J, Calvert GA, Brammer MJ (2002) Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain 125:1583–1593. doi:10.1093/brain/awf153 pmid:12077007
  33. MacSweeney M, Campbell R, Woll B, Giampietro V, David AS, McGuire PK, Calvert GA, Brammer MJ (2004) Dissociating linguistic and nonlinguistic gestural communication in the brain. Neuroimage 22:1605–1618. doi:10.1016/j.neuroimage.2004.03.015 pmid:15275917
  34. MacSweeney M, Waters D, Brammer MJ, Woll B, Goswami U (2008) Phonological processing in deaf signers and the impact of age of first language acquisition. Neuroimage 40:1369–1379. doi:10.1016/j.neuroimage.2007.12.047 pmid:18282770
  35. MacSweeney M, Brammer MJ, Waters D, Goswami U (2009) Enhanced activation of the left inferior frontal gyrus in deaf and dyslexic adults during rhyming. Brain 132:1928–1940. doi:10.1093/brain/awp129 pmid:19467990
  36. Marshall C, Jones A, Denmark T, Mason K, Atkinson J, Botting N, Morgan G (2015) Deaf children's non-verbal working memory is impacted by their language experience. Front Psychol 6:527. doi:10.3389/fpsyg.2015.00527 pmid:25999875
  37. Mayberry RI, Chen JK, Witcher P, Klein D (2011) Age of acquisition effects on the functional organization of language in the adult brain. Brain Lang 119:16–29. doi:10.1016/j.bandl.2011.05.007 pmid:21705060
  38. McNorgan C (2012) A meta-analytic review of multisensory imagery identifies the neural correlates of modality-specific and modality-general imagery. Front Hum Neurosci 6:285. doi:10.3389/fnhum.2012.00285 pmid:23087637
  39. Nagel BJ, Herting MM, Maxwell EC, Bruno R, Fair D (2013) Hemispheric lateralization of verbal and spatial working memory during adolescence. Brain Cogn 82:58–68. doi:10.1016/j.bandc.2013.02.007 pmid:23511846
  40. Norrelgen F, Lilja A, Ingvar M, Åmark P, Fransson P (2015) Presurgical language lateralization assessment by fMRI and dichotic listening of pediatric patients with intractable epilepsy. Neuroimage Clin 7:230–239. doi:10.1016/j.nicl.2014.12.011 pmid:25610785
  41. Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, Hickok G (2010) Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex 20:2486–2495. doi:10.1093/cercor/bhp318 pmid:20100898
  42. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113. doi:10.1016/0028-3932(71)90067-4 pmid:5146491
  43. Pahs G, Rankin P, Helen Cross J, Croft L, Northam GB, Liegeois F, Greenway S, Harrison S, Vargha-Khadem F, Baldeweg T (2013) Asymmetry of planum temporale constrains interhemispheric language plasticity in children with focal epilepsy. Brain 136:3163–3175. doi:10.1093/brain/awt225 pmid:24022474
  44. Sandler W, Lillo-Martin D (2006a) Sign language and linguistic universals. Cambridge, UK: Cambridge UP.
  45. Sandler W, Lillo-Martin D (2006b) Derivational morphology. In: Sign language and linguistic universals, p 55. Cambridge, UK: Cambridge UP.
  46. Schembri A, Cormier K, Johnston T, McKee D, McKee R, Woll B (2010) Sociolinguistic variation in British, Australian and New Zealand sign languages. In: Sign languages (Brentari D, ed), pp 479–501. Cambridge, UK: Cambridge UP.
  47. Shiell MM, Champoux F, Zatorre RJ (2014) Enhancement of visual motion detection thresholds in early deaf people. PLoS One 9:e90498. doi:10.1371/journal.pone.0090498 pmid:24587381
  48. Snodgrass JG, Vanderwart M (1980) A standardized set of 260 pictures: norms for name agreement, image agreement, familiarity, and visual complexity. J Exp Psychol Hum Learn 6:174–215. doi:10.1037/0278-7393.6.2.174 pmid:7373248
  49. Stokoe WC (1960) Sign language structure: an outline of the visual communication systems of the American deaf. Buffalo, NY: University of Buffalo.
  50. Vachon P, Voss P, Lassonde M, Leroux JM, Mensour B, Beaudoin G, Bourgouin P, Lepore F (2013) Reorganization of the auditory, visual and multimodal areas in early deaf individuals. Neuroscience 245:50–60. doi:10.1016/j.neuroscience.2013.04.004 pmid:23590908
  51. Vernon-Warden Reading Comprehension Test Revised (1996) Dyslexia Review 7:11–16.
  52. Veltman DJ, Mechelli A, Friston KJ, Price CJ (2002) The importance of distributed sampling in blocked functional magnetic resonance imaging designs. Neuroimage 17:1203–1206. doi:10.1006/nimg.2002.1242
  53. Waters D, Campbell R, Capek CM, Woll B, David AS, McGuire PK, Brammer MJ, MacSweeney M (2007) Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus. Neuroimage 35:1287–1302. doi:10.1016/j.neuroimage.2007.01.025 pmid:17363278
  54. Wilke M (2014) Isolated assessment of translation or rotation severely underestimates the effects of subject motion in fMRI data. PLoS One 9:e106498. doi:10.1371/journal.pone.0106498 pmid:25333359
  55. Wilke M, Lidzba K (2007) LI-tool: a new toolbox to assess lateralization in functional MR-data. J Neurosci Methods 163:128–136. doi:10.1016/j.jneumeth.2007.01.026 pmid:17386945
  56. Wilke M, Schmithorst VJ (2006) A combined bootstrap/histogram analysis approach for computing a lateralization index from neuroimaging data. Neuroimage 33:522–530. doi:10.1016/j.neuroimage.2006.07.010 pmid:16938470
  57. Wilke M, Lidzba K, Staudt M, Buchenau K, Grodd W, Krägeloh-Mann I (2006) An fMRI task battery for assessing hemispheric language dominance in children. Neuroimage 32:400–410. doi:10.1016/j.neuroimage.2006.03.012 pmid:16651012
  58. Zvyagintsev M, Clemens B, Chechko N, Mathiak KA, Sack AT, Mathiak K (2013) Brain networks underlying mental imagery of auditory and visual information. Eur J Neurosci 37:1421–1434. doi:10.1111/ejn.12140 pmid:23383863
Keywords

  • deaf
  • language
  • plasticity
  • sign language
  • superior temporal cortex
  • visuo-spatial working memory
