Articles, Behavioral/Systems/Cognitive

Signed Words in the Congenitally Deaf Evoke Typical Late Lexicosemantic Responses with No Early Visual Responses in Left Superior Temporal Cortex

Matthew K. Leonard, Naja Ferjan Ramirez, Christina Torres, Katherine E. Travis, Marla Hatrak, Rachel I. Mayberry and Eric Halgren
Journal of Neuroscience 11 July 2012, 32 (28) 9700-9705; DOI: https://doi.org/10.1523/JNEUROSCI.1002-12.2012

Abstract

Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.

Introduction

Neuropsychological and neuroimaging studies generally show that, when acquired as a native language from birth in congenitally deaf individuals, sign language is processed in a primarily left frontotemporal brain network, remarkably similar to the network used by hearing subjects to understand spoken words (Petitto et al., 2000; MacSweeney et al., 2008; Mayberry et al., 2011). Similarly, the N400, an event-related component correlated with lexicosemantic processing (Kutas and Federmeier, 2011), is similar when evoked by signs in deaf individuals and spoken or written words in hearing individuals (Kutas et al., 1987; Neville et al., 1997; Capek et al., 2009). Language deficits in deafness are more pronounced after lesions in the left hemisphere (Klima and Bellugi, 1979; Poizner et al., 1987; Hickok et al., 1996). Finally, direct cortical stimulation in left inferior frontal and posterior superior temporal regions in a deaf signer disrupted sign language production similar to speech disruptions in hearing individuals (Ojemann, 1983; Corina et al., 1999).

Left frontotemporal language areas include the cortex surrounding primary auditory cortex (Price, 2010), which is functionally deafferented in congenitally deaf individuals. In animal models, it has been demonstrated that afferent connections from the retina can be induced to connect with the medial geniculate nucleus of the thalamus (Sur et al., 1988), resulting in maps of visual space within primary auditory cortex (Roe et al., 1990; Barnes and Finnerty, 2010). Likewise, in congenitally deaf humans, auditory regions have been shown to exhibit hemodynamic and neurophysiological activation to low-level moving visual stimuli, particularly in the right hemisphere (Finney et al., 2001, 2003) and even to sign language narratives more than in hearing controls (Lambertz et al., 2005). However, other studies have not found such responses (Hickok et al., 1997) or have found extensive interindividual variability (Bavelier and Neville, 2002).

If auditory cortex is actually rewired in deaf individuals to receive visual input directly, then the similar activation patterns evoked by signed words in deaf signers and spoken words in hearing individuals would be a natural consequence of neural plasticity: in both groups, low-level sensory processing in auditory cortex should be projected to adjacent superior temporal areas, and thence to the broader left frontotemporal language network for lexicosemantic processing. Alternatively, activity in the region surrounding auditory cortex to signed words in deaf individuals and to spoken words in hearing individuals may reflect higher-level semantic encoding rather than sensory analysis. In this scenario, common activations in superior temporal cortex occur only after distinct modality-specific sensory processing for sign or speech. These alternatives can be dissociated based on the timing of the activity in superior temporal regions, information that is not available from hemodynamic measures, but can be obtained using MEG. Here we show that this activity is semantic, not sensory. Only speech in hearing individuals activates auditory areas during early sensory processing. However, both speech in hearing individuals and sign in deaf native signers activate similar temporal and frontal regions in the classical language network during later semantic processing stages.

Materials and Methods

Participants.

Twelve healthy right-handed congenitally deaf native signers (6 female; age range, 17–36 years) with no history of neurological or psychological impairment were recruited for participation (Table 1). All had profound hearing loss from birth and acquired American Sign Language (ASL) as their native language from their deaf parents. In addition, data from eight hearing controls (5 female; age range, 21–29 years) who performed an analogous task with spoken English were included for comparison.

Table 1. Deaf and hearing participant information and task performance

Procedures.

Each deaf participant viewed single signs that were either congruously or incongruously paired with a preceding picture (Fig. 1). Stimuli were high-frequency concrete nouns in ASL presented as short video clips (range, 340–700 ms; mean, 515.3 ms). Since no frequency norms exist for ASL, the stimuli were selected from ASL developmental inventories (Schick, 1997; Anderson and Reilly, 2002) and picture naming data (Bates et al., 2003; Ferjan Ramirez et al., 2012). The signs were all concrete nouns representing highly imageable objects, and were reviewed by a panel of six deaf and hearing fluent signers to ensure they were accurately produced and highly familiar from an early age. Words that are typically fingerspelled or are compound signs were excluded. Each sign video began when all phonological parameters (handshape, location, movement, and orientation) were in place, and ended when the movement was completed. Each sign appeared in both the congruent and incongruent conditions, and if a trial from one condition was rejected due to artifacts in the MEG signal, the corresponding trial from the other condition was also rejected to ensure that sensory processing across congruent and incongruent trials was identical. Subjects were instructed to press a button when the sign matched the preceding picture in meaning; the response hand was counterbalanced across six blocks of 102 trials each. The hearing participants performed the same task, except that instead of viewing pictures and signs, subjects saw photos and then heard single auditory English words through earphones and pressed a button when they matched. The picture remained on the screen throughout the duration of the auditory word. Word duration ranged from 304 to 637 ms, with a mean of 445 ms. To analyze the response to pictures, we compared the deaf group to a different group of hearing participants who saw the same white-on-black line drawings in a separate but similar task.

Figure 1.

Schematic diagram of task design. Each picture and sign appeared in both the congruent and incongruent conditions. Trials were presented pseudorandomly so that repetition of a given stimulus did not occur with fewer than eight intervening trials. Incongruent pairs were not related semantically or phonologically in ASL.
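The yoked artifact rejection described above (dropping a trial's partner in the other condition whenever one member of a congruent/incongruent pair is rejected) can be summarized in a few lines of code. The sketch below is illustrative only; the pairing scheme, array shapes, and peak-to-peak threshold are assumptions rather than the authors' actual preprocessing code.

```python
import numpy as np

def reject_yoked_trials(congruent, incongruent, ptp_threshold=4000e-13):
    """Keep only pairs of trials in which BOTH members survive artifact rejection.

    congruent, incongruent : arrays of shape (n_pairs, n_channels, n_times),
        where trial i in one condition uses the same sign video as trial i in
        the other (hypothetical pairing; the gradiometer peak-to-peak threshold
        in tesla/meter is an illustrative value).
    """
    ptp_c = np.ptp(congruent, axis=-1).max(axis=-1)    # worst-channel peak-to-peak, congruent
    ptp_i = np.ptp(incongruent, axis=-1).max(axis=-1)  # worst-channel peak-to-peak, incongruent
    keep = (ptp_c < ptp_threshold) & (ptp_i < ptp_threshold)
    return congruent[keep], incongruent[keep]
```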

Neuroimaging.

While subjects performed the task, we recorded MEG from 204 planar gradiometer channels distributed over the scalp, at 1000 Hz with minimal filtering (0.1–200 Hz). Following the MEG session, each subject's structural MRI was acquired as a T1-weighted image. Sources were estimated by coregistering MEG and MRI data and using a linear minimum-norm approach, noise normalized to a prestimulus period, according to previously published procedures (Dale et al., 2000; Leonard et al., 2010; McDonald et al., 2010). Random-effects statistical analysis on the dynamic statistical parametric maps was performed using a cluster thresholding approach (Hagler et al., 2006; McDonald et al., 2010). Table 2 shows surface Talairach coordinates for peak vertices in the clusters. Two time windows were selected for analysis based on a grand average of the activity to signs and speech across both groups of participants. For the early (80–120 ms) time window, a grand average of all signed or spoken words was displayed on an average brain, and for the later time window (300–350 ms), a subtraction of congruous–incongruous words was displayed on the average brain. Regions with significant clusters (cluster threshold for signs 80–120 ms = 208.58 mm2, 300–350 ms = 212.32 mm2; cluster threshold for speech 80–120 ms = 238.60 mm2, 300–350 ms = 206.63 mm2) were selected for time course extraction (Fig. 2C,D, graphs).

Table 2. Talairach surface coordinates for selected ROIs shown in Figure 2
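For readers unfamiliar with the approach, the sketch below shows how noise-normalized minimum-norm source estimates (dSPM; Dale et al., 2000) can be computed from epoched MEG data with the open-source MNE-Python package. File names, event codes, and parameters are placeholders; the authors' own analysis followed previously published in-house procedures rather than this exact code.

```python
import mne

# Epoch the gradiometer data around word onset (hypothetical file and event coding).
raw = mne.io.read_raw_fif("subject01_task_raw.fif", preload=True)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"congruent": 1, "incongruent": 2},
                    tmin=-0.1, tmax=0.6, baseline=(-0.1, 0.0), preload=True)

# Noise covariance from the prestimulus period, used for noise normalization.
noise_cov = mne.compute_covariance(epochs, tmax=0.0)

# Forward solution from the individual's MRI-based source space and coregistration.
fwd = mne.read_forward_solution("subject01-fwd.fif")  # hypothetical file

# Linear minimum-norm inverse operator; dSPM yields noise-normalized maps.
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc = mne.minimum_norm.apply_inverse(epochs["congruent"].average(), inv, method="dSPM")

# Mean activity per vertex in the early (80-120 ms) and late (300-350 ms) windows.
early = stc.copy().crop(0.080, 0.120).data.mean(axis=1)
late = stc.copy().crop(0.300, 0.350).data.mean(axis=1)
```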

Results

Behavioral responses

Both groups of participants performed the task with high accuracy and fast reaction times (Table 1). Deaf participants responded correctly on 94.3% of trials (SD = 3.93%) with a mean reaction time of 619.10 ms (SD = 97.5 ms). Hearing participants responded correctly on 98.25% of trials (SD = 3.01%) with a mean reaction time of 561.23 ms (SD = 94.28 ms). The between-group reaction time difference was not significant (t test; p > 0.1).
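The between-group comparison is a standard independent-samples t test on per-subject mean reaction times. The sketch below uses values simulated from the reported group means and SDs purely for illustration; the real per-subject data are in Table 1 and are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-subject mean reaction times (ms) drawn from the reported group statistics.
rt_deaf = rng.normal(loc=619.10, scale=97.5, size=12)
rt_hearing = rng.normal(loc=561.23, scale=94.28, size=8)

# Independent-samples t test of the between-group difference (reported as p > 0.1).
t_val, p_val = stats.ttest_ind(rt_deaf, rt_hearing)
print(f"t = {t_val:.2f}, p = {p_val:.3f}")
```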

Anatomically constrained MEG: early time window (80–120 ms)

During early sensory processing (80–120 ms), we examined the grand average of activity for all signed words in deaf participants and all spoken words in hearing participants. Responses to signs were significant in posterior occipital regions, including the occipital pole (Fig. 2A). Responses to spoken words were strongest in bilateral superior temporal cortex, including primary auditory areas on the superior temporal plane (Fig. 2B). An auditory peak in superior temporal channels that did not differentiate between congruent and incongruent conditions was visible in individual hearing subjects, but was not present in deaf subjects (Fig. 3). Thus, at early latencies, neural responses are confined to modality-specific sensory regions and do not differentiate between semantically congruent and incongruent trials. Crucially, signs do not evoke activity in auditory cortex at ∼100 ms in deaf native signers.

Figure 2.

Superior temporal areas surrounding auditory cortex are active for both sign and speech during lexicosemantic processing, but only for speech during sensory processing. A, Grand average activity to signs at ∼100 ms in deaf subjects is localized to occipital cortex in calcarine and superior occipital sulci. B, Grand average activity to speech at ∼100 ms in hearing subjects is localized to posterior temporal cortex. C, Center, Grand average activity to incongruent–congruent signs at 300–350 ms (black arrow) in deaf subjects. Surrounding graphs, Regional time courses for congruent and incongruent conditions in five bilateral regions of interest from −100 to 600 ms (light blue arrow at 100 ms). D, Same as C for speech in hearing subjects. IPS, Intraparietal sulcus; PT, planum temporale; AI, anterior insula; STS, superior temporal sulcus; TP, temporal pole; V1, primary visual cortex. All mapped activity is shown as cluster-thresholded dynamic statistical parametric maps, significantly greater than prestimulus baseline at p < 0.05, corrected.

Figure 3.

Individual MEG sensors demonstrate the dissociation between early and late activity in auditory regions. A1, Head plot shows the location of a left superior temporal MEG channel showing significant incongruent > congruent activity in a deaf native signer. A2, The left superior temporal MEG channel shows the congruent versus incongruent difference for signs. B1, Head plot from a hearing participant. B2, The same channel shows a similar difference for speech in a single representative hearing participant. Both subjects begin to show a significant difference between conditions at ∼240 ms. C, The same channel shows a sensory peak at ∼100 ms for hearing (purple), but not deaf (green), subjects. Gray regions indicate significance at p < 0.01.

To determine whether auditory cortex activity differs between deaf and hearing individuals in response to visual stimuli, we compared the deaf group's response to the pictures with that of a separate group of hearing subjects who saw the same line drawings. While both groups showed significant cluster-thresholded activity in posterior occipital cortex at ∼100 ms (minor localization differences between groups may be due to differences in task design between the deaf group and this particular hearing group), neither group showed activity in auditory areas (Fig. 4).

Figure 4.

Direct comparison of response to pictures between deaf (A) and hearing (B) subjects. Both groups show significant activity at ∼100 ms in occipital visual areas, and neither shows activity in auditory cortex.

Anatomically constrained MEG: late time window (300–350 ms)

In contrast to early latencies, activity during lexicosemantic processing overlapped extensively between the deaf and hearing groups. In both groups, the subtraction of congruent from incongruent trials revealed semantically modulated activity in the classical left hemisphere frontotemporal network within the a priori 300–350 ms time window. Although words in both sign (Fig. 2C) and speech (Fig. 2D) activated some modality-specific areas [e.g., left intraparietal sulcus (IPS) for sign], most activity occurred within a shared network including the left planum temporale, superior temporal sulcus, temporal pole, and, to a lesser extent, the homologous areas in the right hemisphere. Representative single-subject waveforms from individual sensors showed that the congruent versus incongruent difference emerged with similar timing and in similar left superior temporal areas surrounding auditory cortex in both groups (Fig. 3), as determined by a random effects resampling statistic (Maris and Oostenveld, 2007).
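The sensor-level congruent versus incongruent comparison can be approximated with a nonparametric cluster-based permutation test in the spirit of Maris and Oostenveld (2007). The following sketch assumes the MNE-Python Epochs object from the earlier sketch and a placeholder channel label; it illustrates the general approach rather than reproducing the authors' exact resampling statistic.

```python
from mne.stats import permutation_cluster_test

# Single-trial time courses (trials x times) for one left superior temporal
# gradiometer; "MEG 0243" is a hypothetical channel name.
x_incong = epochs["incongruent"].get_data(picks="MEG 0243")[:, 0, :]
x_cong = epochs["congruent"].get_data(picks="MEG 0243")[:, 0, :]

# Cluster-based permutation test across time points: adjacent significant
# samples are clustered, and cluster mass is compared against a permutation
# distribution to correct for multiple comparisons.
f_obs, clusters, cluster_pvals, _ = permutation_cluster_test(
    [x_incong, x_cong], n_permutations=1000, out_type="indices")

# Report clusters that survive at p < 0.01 (the threshold shaded in Fig. 3).
for cl, p in zip(clusters, cluster_pvals):
    if p < 0.01:
        times = epochs.times[cl[0]]
        print(f"Conditions differ from {times.min():.3f} to {times.max():.3f} s (p = {p:.3f})")
```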

Discussion

Sign languages possess the sublexical, word-level, syntactic, and semantic characteristics typical of spoken language (Emmorey, 2002; Sandler and Lillo-Martin, 2006). When a deaf child is reared by signing parents, the developmental trajectory of linguistic knowledge (including specific syntactic structures) follows that of spoken language in hearing children (Anderson and Reilly, 2002; Mayberry and Squires, 2006).

We examined two stages of signed and spoken word processing in deaf and hearing participants. While the early sensory processing stage (∼100 ms) is confined to modality-specific visual cortex for signs and auditory cortex for speech, both kinds of language activate an overlapping network of left hemisphere frontotemporal regions (including areas surrounding auditory cortex) during lexicosemantic processing (∼300 ms). The similarity between sign and speech during the later time window confirms the hypothesis that areas including anteroventral temporal, superior temporal, superior planar, and inferior prefrontal cortex are specialized for processing word meaning, regardless of modality. In contrast, the early differences between modalities provide evidence that, in deafness, visual afferents are not directed to auditory cortex for initial sensory processing to any greater extent than in hearing individuals. Rather, early sensory processing of signed words takes place in visual cortex.

The current study is among the first investigations of the spatiotemporal dynamics of sign processing. The timing of the activity in the present study reveals that speech in hearing participants and sign in deaf participants activate the classical left frontotemporal language network between ∼200 and 400 ms, well beyond short-latency sensory processes. These areas have been shown to be involved in processing high-level semantic information for both auditory and written words in normal individuals with fMRI (Patterson et al., 2007; Binney et al., 2010; Price, 2010; Binder et al., 2011), with MEG (Marinkovic et al., 2003; Leonard et al., 2011), and with direct intracranial recordings in patients with medically intractable epilepsy (Chan et al., 2011), although there is evidence for functional and modality-specific specialization within anterior temporal subregions (Visser and Lambon Ralph, 2011). These same areas are deficient or damaged in patients with semantic dementia (Binney et al., 2010; Lambon Ralph et al., 2010; Mion et al., 2010). Lexicosemantic activity in anteroventral temporal and superior temporal areas is observed in both languages for bilinguals (Leonard et al., 2010, 2011) and in 12- to 18-month-old infants (Travis et al., 2011), further demonstrating their fundamental role in processing meaning. We found only relatively minor differences in active loci, including greater activity in IPS in deaf signers, possibly related to an inherently greater praxic and biological motion component to sign (Emmorey et al., 2002; Pobric et al., 2010). Activity in this network in congenitally deaf native signers processing a visuogestural language provides additional support for the hypothesis that this processing reflects abstract, supramodal representations of word meaning regardless of the input modality.

Capitalizing on the high spatiotemporal resolution of MEG constrained by individual cortical anatomy obtained by MRI, we also examined whether activity observed in auditory regions in congenitally deaf individuals (Finney et al., 2001, 2003; Lambertz et al., 2005) is caused by rewiring of visual sensory information to cortex that has been underutilized due to sensory deprivation for the individual's entire life. While previous MEG results indicated that hemodynamic activation in these regions, particularly in the right hemisphere, reflected early processing, the time window that was examined extended to 400 ms after stimulus onset, well beyond initial sensory processing for both visual and auditory stimuli (Finney et al., 2003). Furthermore, other investigations with single deaf subjects have failed to find evidence for the hypothesized cross-modal plasticity in auditory areas (Hickok et al., 1997; Nishimura et al., 1999). The present study investigated a sensory-specific, short-latency time window and found that during the first pass of sensory processing, auditory cortex is not active in deaf participants, whether they are viewing signs or static pictures. Rather, these areas show semantically modulated activity only well after first-pass sensory processing is thought to be completed. Lexicosemantic activity in the left anteroventral temporal lobe between ∼200–400 ms has been shown with laminar multi-microelectrode recordings from different cortical layers to reflect recurrent associative or second-pass processing (Halgren et al., 2006). The latency of the responses in superior temporal cortex in deaf signers indicates that they receive the output of a long chain of visual processing, instead of participating in the early encoding of sensory information (which is performed in primary and secondary visual areas).

Cortical plasticity is a hallmark of early development (Bates and Roe, 2001) and continues well into adulthood in the form of learning-induced cortical and synaptic changes (Buonomano and Merzenich, 1998). Experimental results with animals showing cross-modal plasticity in the context of sensory deprivation are intriguing and of great importance for understanding fundamental principles of neural organization (Sur et al., 1988; Roe et al., 1990; Sur, 2004; Barnes and Finnerty, 2010). While there is extensive and convincing evidence that auditory stimuli activate visual areas in blind individuals (Sadato et al., 1996; Cohen et al., 1997; Barnes and Finnerty, 2010), such clear evidence for a reorganization of auditory cortex in deafness is lacking in both human (Bavelier and Neville, 2002; Kral, 2007) and animal (Kral et al., 2003; Kral, 2007) studies. Factors such as the extent of hearing loss and age of onset of deafness may impact cortical reorganization and rewiring (Bavelier and Neville, 2002; Lambertz et al., 2005), and there may be functional distinctions between A1 and surrounding areas that do show plasticity, such as the anterior auditory field in cats (Lomber et al., 2010; Meredith and Lomber, 2011; Meredith et al., 2011). Additionally, some neurons in auditory regions may be involved in processing nonauditory information (particularly in multimodal contexts); however, the present results suggest that in humans who are born profoundly deaf and are native signers, unimodal responses in primary sensory and semantic systems remain intact.

Thus, in deaf signers who acquired sign language from birth from their deaf parents, signs are processed in a brain network that is strikingly similar to that for spoken words in hearing individuals. The timing of activity in the language network (including superior temporal regions surrounding auditory cortex) reveals that this is due to semantic encoding, rather than to a rerouting of visual-sensory input. This provides evidence that left frontotemporal regions, including the superior temporal plane surrounding the auditory cortex, are specialized for encoding word meaning regardless of input modality.

Footnotes

    • Received February 29, 2012.
    • Revision received May 21, 2012.
    • Accepted May 25, 2012.
  • This work was supported by NSF Grant BCS-0924539, NIH Grant T-32 DC00041, an innovative research award from the Kavli Institute for Brain and Mind, and a UCSD Chancellor's Collaboratories grant. We thank D. Hagler, A. Lieberman, P. Lott, A. Dale, and T. Brown for assistance.

  • Correspondence should be addressed to Naja Ferjan Ramirez, UCSD Department of Linguistics, 9500 Gilman Drive, La Jolla, CA 92093-0108. naja@ling.ucsd.edu
  • Copyright © 2012 the authors 0270-6474/12/329700-06$15.00/0

References

Anderson D, Reilly J (2002) The MacArthur communicative development inventory: normative data for American Sign Language. J Deaf Stud Deaf Educ 7:83–106.
Barnes SJ, Finnerty GT (2010) Sensory experience and cortical rewiring. Neuroscientist 16:186–198.
Bates E, Roe K (2001) Language development in children with unilateral brain injury. In: Handbook of developmental cognitive neuroscience (Nelson CA, Luciana M, eds). Cambridge, MA: MIT.
Bates E, D'Amico S, Jacobsen T, Székely A, Andonova E, Devescovi A, Herron D, Lu CC, Pechmann T, Pléh C, Wicha N, Federmeier K, Gerdjikova I, Gutierrez G, Hung D, Hsu J, Iyer G, Kohnert K, Mehotcheva T, Orozco-Figueroa A, et al. (2003) Timed picture naming in seven languages. Psychon Bull Rev 10:344–380.
Bavelier D, Neville HJ (2002) Cross-modal plasticity: where and how? Nat Rev Neurosci 3:443–452.
Binder JR, Gross WL, Allendorfer JB, Bonilha L, Chapin J, Edwards JC, Grabowski TJ, Langfitt JT, Loring DW, Lowe MJ, Koenig K, Morgan PS, Ojemann JG, Rorden C, Szaflarski JP, Tivarus ME, Weaver KE (2011) Mapping anterior temporal lobe language areas with fMRI: a multicenter normative study. Neuroimage 54:1465–1475.
Binney RJ, Embleton KV, Jefferies E, Parker GJ, Ralph MA (2010) The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: evidence from a novel direct comparison of distortion-corrected fMRI, rTMS, and semantic dementia. Cereb Cortex 20:2728–2738.
Buonomano DV, Merzenich MM (1998) Cortical plasticity: from synapses to maps. Annu Rev Neurosci 21:149–186.
Capek CM, Grossi G, Newman AJ, McBurney SL, Corina D, Roeder B, Neville HJ (2009) Brain systems mediating semantic and syntactic processing in deaf native signers: biological invariance and modality specificity. Proc Natl Acad Sci U S A 106:8784–8789.
Chan AM, Baker JM, Eskandar E, Schomer D, Ulbert I, Marinkovic K, Cash SS, Halgren E (2011) First-pass selectivity for semantic categories in human anteroventral temporal lobe. J Neurosci 31:18119–18129.
Cohen LG, Celnik P, Pascual-Leone A, Corwell B, Falz L, Dambrosia J, Honda M, Sadato N, Gerloff C, Catalá MD, Hallett M (1997) Functional relevance of cross-modal plasticity in blind humans. Nature 389:180–183.
Corina DP, McBurney SL, Dodrill C, Hinshaw K, Brinkley J, Ojemann G (1999) Functional roles of Broca's area and SMG: evidence from cortical stimulation mapping in a deaf signer. Neuroimage 10:570–581.
Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E (2000) Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron 26:55–67.
Emmorey K (2002) Language, cognition and the brain: insights from sign language research. Mahwah, NJ: Lawrence Erlbaum.
Emmorey K, Damasio H, McCullough S, Grabowski T, Ponto LL, Hichwa RD, Bellugi U (2002) Neural systems underlying spatial language in American Sign Language. Neuroimage 17:812–824.
Ferjan Ramirez N, Lieberman AM, Mayberry RI (2012) The initial stages of first-language acquisition begun in adolescence: when late looks early. J Child Lang 20:1–24.
Finney EM, Fine I, Dobkins KR (2001) Visual stimuli activate auditory cortex in the deaf. Nat Neurosci 4:1171–1173.
Finney EM, Clementz BA, Hickok G, Dobkins KR (2003) Visual stimuli activate auditory cortex in deaf subjects: evidence from MEG. Neuroreport 14:1425–1427.
Hagler DJ Jr, Saygin AP, Sereno MI (2006) Smoothing and cluster thresholding for cortical surface-based group analysis of fMRI data. Neuroimage 33:1093–1103.
Halgren E, Wang C, Schomer DL, Knake S, Marinkovic K, Wu J, Ulbert I (2006) Processing stages underlying word recognition in the anteroventral temporal lobe. Neuroimage 30:1401–1413.
Hickok G, Bellugi U, Klima ES (1996) The neurobiology of signed language and its implications for the neural organization of language. Nature 381:699–702.
Hickok G, Poeppel D, Clark K, Buxton RB, Rowley HA, Roberts TP (1997) Sensory mapping in a congenitally deaf subject: MEG and fMRI studies of cross-modal non-plasticity. Hum Brain Mapp 5:437–444.
Klima ES, Bellugi U (1979) The signs of language. Cambridge, MA: Harvard UP.
Kral A (2007) Unimodal and cross-modal plasticity in the 'deaf' auditory cortex. Int J Audiol 46:479–493.
Kral A, Schröder JH, Klinke R, Engel AK (2003) Absence of cross-modal reorganization in the primary auditory cortex of congenitally deaf cats. Exp Brain Res 153:605–613.
Kutas M, Federmeier KD (2011) Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu Rev Psychol 62:621–647.
Kutas M, Neville H, Holcomb P (1987) A preliminary comparison of the N400 response to semantic anomalies during reading, listening and signing. Electroencephalogr Clin Neurophysiol Suppl 39:325–330.
Lambertz N, Gizewski ER, de Greiff A, Forsting M (2005) Cross-modal plasticity in deaf subjects dependent on the extent of hearing loss. Brain Res Cogn Brain Res 25:884–890.
Lambon Ralph MA, Sage K, Jones RW, Mayberry EJ (2010) Coherent concepts are computed in the anterior temporal lobes. Proc Natl Acad Sci U S A 107:2717–2722.
Leonard MK, Brown TT, Travis KE, Gharapetian L, Hagler DJ Jr, Dale AM, Elman JL, Halgren E (2010) Spatiotemporal dynamics of bilingual word processing. Neuroimage 49:3286–3294.
Leonard MK, Torres C, Travis KE, Brown TT, Hagler DJ Jr, Dale AM, Elman JL, Halgren E (2011) Language proficiency modulates the recruitment of non-classical language areas in bilinguals. PLoS One 6:e18240.
Lomber SG, Meredith MA, Kral A (2010) Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci 13:1421–1427.
MacSweeney M, Capek CM, Campbell R, Woll B (2008) The signing brain: the neurobiology of sign language. Trends Cogn Sci 12:432–440.
Marinkovic K, Dhond RP, Dale AM, Glessner M, Carr V, Halgren E (2003) Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron 38:487–497.
Maris E, Oostenveld R (2007) Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods 164:177–190.
Mayberry RI, Squires B (2006) Sign language: acquisition. In: Encyclopedia of language and linguistics, Ed 2 (Brown K, ed), pp 739–743. Oxford: Elsevier.
Mayberry RI, Chen JK, Witcher P, Klein D (2011) Age of acquisition effects on the functional organization of language in the adult brain. Brain Lang 119:16–29.
McDonald CR, Thesen T, Carlson C, Blumberg M, Girard HM, Trongnetrpunya A, Sherfey JS, Devinsky O, Kuzniecky R, Doyle WK, Cash SS, Leonard MK, Hagler DJ Jr, Dale AM, Halgren E (2010) Multimodal imaging of repetition priming: using fMRI, MEG, and intracranial EEG to reveal spatiotemporal profiles of word processing. Neuroimage 53:707–717.
Meredith MA, Lomber SG (2011) Somatosensory and visual crossmodal plasticity in the anterior auditory field of early-deaf cats. Hear Res 280:38–47.
Meredith MA, Kryklywy J, McMillan AJ, Malhotra S, Lum-Tai R, Lomber SG (2011) Crossmodal reorganization in the early deaf switches sensory, but not behavioral roles of auditory cortex. Proc Natl Acad Sci U S A 108:8856–8861.
Mion M, Patterson K, Acosta-Cabronero J, Pengas G, Izquierdo-Garcia D, Hong YT, Fryer TD, Williams GB, Hodges JR, Nestor PJ (2010) What the left and right anterior fusiform gyri tell us about semantic memory. Brain 133:3256–3268.
Neville HJ, Coffey SA, Lawson DS, Fischer A, Emmorey K, Bellugi U (1997) Neural systems mediating American Sign Language: effects of sensory experience and age of acquisition. Brain Lang 57:285–308.
Nishimura H, Hashikawa K, Doi K, Iwaki T, Watanabe Y, Kusuoka H, Nishimura T, Kubo T (1999) Sign language 'heard' in the auditory cortex. Nature 397:116.
Ojemann GA (1983) Brain organization for language from the perspective of electrical stimulation mapping. Behav Brain Sci 6:189–230.
Patterson K, Nestor PJ, Rogers TT (2007) Where do you know what you know? The representation of semantic knowledge in the human brain. Nat Rev Neurosci 8:976–987.
Petitto LA, Zatorre RJ, Gauna K, Nikelski EJ, Dostie D, Evans AC (2000) Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proc Natl Acad Sci U S A 97:13961–13966.
Pobric G, Jefferies E, Lambon Ralph MA (2010) Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation. Curr Biol 20:964–968.
Poizner H, Klima ES, Bellugi U (1987) What the hands reveal about the brain. Cambridge, MA: MIT.
Price CJ (2010) The anatomy of language: a review of 100 fMRI studies published in 2009. Ann N Y Acad Sci 1191:62–88.
Roe AW, Pallas SL, Hahm JO, Sur M (1990) A map of visual space induced in primary auditory cortex. Science 250:818–820.
Sadato N, Pascual-Leone A, Grafman J, Ibañez V, Deiber MP, Dold G, Hallett M (1996) Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380:526–528.
Sandler W, Lillo-Martin D (2006) Sign language and linguistic universals. Cambridge: Cambridge UP.
Schick B (1997) The American Sign Language vocabulary test. Boulder, CO: University of Colorado at Boulder.
Sur M (2004) Rewiring cortex: cross-modal plasticity and its implications for cortical development and function. In: The handbook of multisensory processes (Calvert GA, Spence C, Stein BE, eds). Cambridge, MA: MIT.
Sur M, Garraghty PE, Roe AW (1988) Experimentally induced visual projections into auditory thalamus and cortex. Science 242:1437–1441.
Travis KE, Leonard MK, Brown TT, Hagler DJ Jr, Curran M, Dale AM, Elman JL, Halgren E (2011) Spatiotemporal neural dynamics of word understanding in 12- to 18-month-old infants. Cereb Cortex 21:1832–1839.
Visser M, Lambon Ralph MA (2011) Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. J Cogn Neurosci 23:3121–3131.