Abstract
Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.
Introduction
Neuropsychological and neuroimaging studies generally show that, when acquired as a native language from birth in congenitally deaf individuals, sign language is processed in a primarily left frontotemporal brain network, remarkably similar to the network used by hearing subjects to understand spoken words (Petitto et al., 2000; MacSweeney et al., 2008; Mayberry et al., 2011). Similarly, the N400, an event-related component correlated with lexicosemantic processing (Kutas and Federmeier, 2011), is similar when evoked by signs in deaf individuals and by spoken or written words in hearing individuals (Kutas et al., 1987; Neville et al., 1997; Capek et al., 2009). Language deficits in deaf signers are more pronounced after left hemisphere lesions than after right hemisphere lesions (Klima and Bellugi, 1979; Poizner et al., 1987; Hickok et al., 1996). Finally, direct cortical stimulation of left inferior frontal and posterior superior temporal regions in a deaf signer disrupted sign language production in a manner similar to the speech disruptions observed in hearing individuals (Ojemann, 1983; Corina et al., 1999).
Left frontotemporal language areas include the cortex surrounding primary auditory cortex (Price, 2010), which is functionally deafferented in congenitally deaf individuals. In animal models, retinal afferents can be induced to project to the medial geniculate nucleus of the thalamus (Sur et al., 1988), resulting in maps of visual space within primary auditory cortex (Roe et al., 1990; Barnes and Finnerty, 2010). Likewise, in congenitally deaf humans, auditory regions have been shown to exhibit hemodynamic and neurophysiological activation to low-level moving visual stimuli, particularly in the right hemisphere (Finney et al., 2001, 2003), and to respond more strongly than in hearing controls to sign language narratives (Lambertz et al., 2005). However, other studies have not found such responses (Hickok et al., 1997) or have found extensive interindividual variability (Bavelier and Neville, 2002).
If auditory cortex is actually rewired in deaf individuals to receive visual input directly, then the similar activation patterns evoked by signed words in deaf signers and spoken words in hearing individuals would be a natural consequence of neural plasticity: in both groups, low-level sensory processing in auditory cortex should be projected to adjacent superior temporal areas, and thence to the broader left frontotemporal language network for lexicosemantic processing. Alternatively, activity in the region surrounding auditory cortex to signed words in deaf individuals and to spoken words in hearing individuals may reflect higher-level semantic encoding rather than sensory analysis. In this scenario, common activations in superior temporal cortex occur only after distinct modality-specific sensory processing for sign or speech. These alternatives can be dissociated based on the timing of the activity in superior temporal regions, information that is not available from hemodynamic measures, but can be obtained using MEG. Here we show that this activity is semantic, not sensory. Only speech in hearing individuals activates auditory areas during early sensory processing. However, both speech in hearing individuals and sign in deaf native signers activate similar temporal and frontal regions in the classical language network during later semantic processing stages.
Materials and Methods
Participants.
Twelve healthy right-handed congenitally deaf native signers (6 female; age range, 17–36 years) with no history of neurological or psychological impairment were recruited for participation (Table 1). All had profound hearing loss from birth and acquired American Sign Language (ASL) as their native language from their deaf parents. In addition, eight hearing controls from an analogous task with spoken English were included for comparison (5 female; age range, 21–29 years).
Procedures.
Each deaf participant viewed single signs that were either congruously or incongruously paired with a preceding picture (Fig. 1). Stimuli were high-frequency concrete nouns in ASL presented as short video clips (range, 340–700 ms; mean, 515.3 ms). Since no frequency norms exist for ASL, the stimuli were selected from ASL developmental inventories (Schick, 1997; Anderson and Reilly, 2002) and picture naming data (Bates et al., 2003; Ferjan Ramirez et al., 2012). The signs were all concrete nouns representing highly imageable objects, and were reviewed by a panel of six deaf and hearing fluent signers to ensure they were accurately produced and highly familiar from an early age. Words that are typically fingerspelled or are compound signs were excluded. Each sign video began when all phonological parameters (handshape, location, movement, and orientation) were in place, and ended when the movement was completed. Each sign appeared in both the congruent and incongruent conditions, and if a trial from one condition was rejected due to artifacts in the MEG signal, the corresponding trial from the other condition was also rejected to ensure that sensory processing across congruent and incongruent trials was identical. Subjects were instructed to press a button when the sign matched the preceding picture in meaning; the response hand was counterbalanced across six blocks of 102 trials each. The hearing participants performed the same task, except that instead of viewing pictures and signs, subjects saw photos and then heard single auditory English words through earphones and pressed a button when they matched. The picture remained on the screen throughout the duration of the auditory word. Word duration ranged from 304 to 637 ms, with a mean of 445 ms. To analyze the response to pictures, we compared the deaf group to a different group of hearing participants who saw the same white-on-black line drawings in a separate but similar task.
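To make the paired rejection rule concrete, the following minimal sketch (in Python, with hypothetical variable names that are not part of the original analysis code) drops a trial from both conditions whenever its counterpart is rejected for MEG artifacts.

```python
import numpy as np

def reject_paired_trials(congruent, incongruent, bad_congruent, bad_incongruent):
    """Drop a trial from BOTH conditions whenever its counterpart is rejected,
    so the retained congruent and incongruent trials contain identical sign
    videos. Illustrative sketch only; names are hypothetical.

    congruent, incongruent         : arrays of shape (n_pairs, n_channels, n_times)
    bad_congruent, bad_incongruent : boolean arrays of shape (n_pairs,) marking
                                     trials rejected for MEG artifacts
    """
    keep = ~(np.asarray(bad_congruent) | np.asarray(bad_incongruent))
    return congruent[keep], incongruent[keep]
```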
Neuroimaging.
While subjects performed the task, we recorded MEG from 204 planar gradiometer channels distributed over the scalp, sampled at 1000 Hz with minimal filtering (0.1–200 Hz). Following the MEG session, each subject's structural MRI was acquired as a T1-weighted image. Sources were estimated by coregistering the MEG and MRI data and applying a linear minimum-norm approach, noise normalized to a prestimulus period, according to previously published procedures (Dale et al., 2000; Leonard et al., 2010; McDonald et al., 2010). Random-effects statistical analysis on the dynamic statistical parametric maps was performed using a cluster thresholding approach (Hagler et al., 2006; McDonald et al., 2010). Table 2 shows surface Talairach coordinates for peak vertices in the clusters. Two time windows were selected for analysis based on a grand average of the activity to signs and speech across both groups of participants. For the early time window (80–120 ms), a grand average of all signed or spoken words was displayed on an average brain; for the later time window (300–350 ms), a subtraction of congruent–incongruent words was displayed on the average brain. Regions with significant clusters (cluster threshold for signs: 80–120 ms = 208.58 mm², 300–350 ms = 212.32 mm²; cluster threshold for speech: 80–120 ms = 238.60 mm², 300–350 ms = 206.63 mm²) were selected for time course extraction (Fig. 2C,D, graphs).
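The noise-normalized minimum-norm procedure can be sketched in MNE-Python as shown below. This is an illustration under stated assumptions, not the authors' original pipeline (which follows Dale et al., 2000); the epochs and forward solution are assumed inputs.

```python
import mne

def dspm_window_estimates(epochs, fwd):
    """Noise-normalized minimum-norm (dSPM) source estimates, averaged within
    the two a priori time windows. Sketch only, not the authors' pipeline.

    epochs : mne.Epochs including a prestimulus baseline
    fwd    : forward solution computed from the subject's T1-based cortical surfaces
    """
    noise_cov = mne.compute_covariance(epochs, tmax=0.0)      # prestimulus noise estimate
    inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
    stc = mne.minimum_norm.apply_inverse(epochs.average(), inv,
                                         lambda2=1.0 / 9.0, method="dSPM")
    early = stc.copy().crop(0.080, 0.120).mean()   # sensory window
    late = stc.copy().crop(0.300, 0.350).mean()    # lexicosemantic window
    return early, late
```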
Results
Behavioral responses
Both groups of participants performed the task with high accuracy and fast reaction times (Table 1). Deaf participants were accurate on 94.3% of trials (SD = 3.93%), with a mean reaction time of 619.10 ms (SD = 97.5 ms). Hearing participants were accurate on 98.25% of trials (SD = 3.01%), with a mean reaction time of 561.23 ms (SD = 94.28 ms). The between-group reaction time difference was not significant (t test; p > 0.1).
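The between-group comparison amounts to an independent-samples t test on per-subject mean reaction times; a minimal sketch follows. The pooled-variance form is an assumption, as the text reports only that p > 0.1.

```python
from scipy import stats

def compare_reaction_times(rt_deaf, rt_hearing):
    """Independent-samples t test on per-subject mean reaction times (ms).

    rt_deaf    : per-subject means for the 12 deaf signers
    rt_hearing : per-subject means for the 8 hearing controls
    Pooled-variance (Student's) form is assumed; the paper reports only p > 0.1.
    """
    return stats.ttest_ind(rt_deaf, rt_hearing)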
Anatomically constrained MEG: early time window (80–120 ms)
During early sensory processing (80–120 ms), we examined the grand average of activity for all signed words in deaf participants and all spoken words in hearing participants. Responses to signs were significant in posterior occipital regions, including the occipital pole (Fig. 2A). Responses to spoken words were strongest in bilateral superior temporal cortex, including primary auditory areas on the superior temporal plane (Fig. 2B). An auditory peak in superior temporal channels that did not differentiate between congruent and incongruent conditions was visible in individual hearing subjects, but was not present in deaf subjects (Fig. 3). Thus, at early latencies, neural responses are confined to modality-specific sensory regions and do not differentiate between semantically congruent and incongruent trials. Crucially, signs do not evoke activity in auditory cortex at ∼100 ms in deaf native signers.
To determine whether auditory cortex activity differs between deaf and hearing individuals in response to visual stimuli, we compared the response to the pictures with that from a separate group of hearing subjects who saw the same line drawings. While both groups showed significant cluster-thresholded activity in posterior occipital cortex at ∼100 ms (minor localization differences between groups may be due to differences in the task design between the deaf group and this particular hearing group), neither group showed activity in auditory areas (Fig. 4).
Anatomically constrained MEG: late time window (300–350 ms)
In contrast to early latencies, very high overlap was observed between the deaf and hearing groups during lexicosemantic processing. In both groups, the subtraction of congruent from incongruent trials revealed semantically modulated activity in the classical left hemisphere frontotemporal network within the a priori 300–350 ms time window. Although words in both sign (Fig. 2C) and speech (Fig. 2D) activated some modality-specific areas [e.g., left intraparietal sulcus (IPS) for sign], most activity occurred within a shared network including the left planum temporale, superior temporal sulcus, temporal pole, and, to a lesser extent, the homologous areas in the right hemisphere. Representative single-subject waveforms from individual sensors revealed a similar onset time and location of the congruent versus incongruent difference in left superior temporal areas surrounding auditory cortex (Fig. 3), as determined by a random-effects resampling statistic (Maris and Oostenveld, 2007).
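A sketch of this style of resampling statistic, using the cluster-based permutation test of Maris and Oostenveld (2007) as implemented in MNE-Python, is shown below. It is an illustration only; the exact procedure applied to the sensor waveforms here may differ.

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

def congruity_cluster_test(congruent, incongruent, n_permutations=1000):
    """Cluster-based permutation test on the incongruent - congruent difference
    at a single sensor (Maris & Oostenveld, 2007). Sketch only.

    congruent, incongruent : arrays of shape (n_subjects, n_times) holding each
                             subject's average waveform for one sensor
    """
    diff = np.asarray(incongruent) - np.asarray(congruent)
    t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
        diff, n_permutations=n_permutations, tail=0)
    return t_obs, clusters, cluster_pv
```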
Discussion
Sign languages possess the sublexical, word-level, syntactic, and semantic characteristics typical of spoken language (Emmorey, 2002; Sandler and Lillo-Martin, 2006). When a deaf child is reared by signing parents, the developmental trajectory of linguistic knowledge (including specific syntactic structures) follows that of spoken language in hearing children (Anderson and Reilly, 2002; Mayberry and Squires, 2006).
We examined two stages of signed and spoken word processing in deaf and hearing participants. While the early sensory processing stage (∼100 ms) is confined to modality-specific visual cortex for signs and auditory cortex for speech, both kinds of language activate an overlapping network of left hemisphere frontotemporal regions (including areas surrounding auditory cortex) during lexicosemantic processing (∼300 ms). The similarity between sign and speech during the later time window supports the hypothesis that areas including anteroventral and superior temporal cortex, the superior temporal plane, and inferior prefrontal cortex are specialized for processing word meaning, regardless of modality. In contrast, the early differences between modalities provide evidence that, in deafness, visual afferents are not directed to auditory cortex for initial sensory processing to any greater extent than in hearing individuals. Rather, early sensory processing of signed words takes place in visual cortex.
The current study is among the first investigations of the spatiotemporal dynamics of sign processing. The timing of the activity in the present study reveals that speech in hearing participants and sign in deaf participants activate the classical left frontotemporal language network between ∼200 and 400 ms, well beyond short-latency sensory processes. These areas have been shown to be involved in processing high-level semantic information for both auditory and written words in normal individuals with fMRI (Patterson et al., 2007; Binney et al., 2010; Price, 2010; Binder et al., 2011), with MEG (Marinkovic et al., 2003; Leonard et al., 2011), and with direct intracranial recordings in patients with medically intractable epilepsy (Chan et al., 2011), although there is evidence for functional and modality-specific specialization within anterior temporal subregions (Visser and Lambon Ralph, 2011). These same areas are deficient or damaged in patients with semantic dementia (Binney et al., 2010; Lambon Ralph et al., 2010; Mion et al., 2010). Lexicosemantic activity in anteroventral temporal and superior temporal areas is observed in both languages of bilinguals (Leonard et al., 2010, 2011) and in 12- to 18-month-old infants (Travis et al., 2011), further demonstrating the fundamental role of these regions in processing meaning. We found only relatively minor differences in active loci, including greater activity in IPS in deaf signers, possibly related to an inherently greater praxic and biological motion component of sign (Emmorey et al., 2002; Pobric et al., 2010). Activity in this network in congenitally deaf native signers processing a visuogestural language provides additional support for the hypothesis that this processing reflects abstract, supramodal representations of word meaning regardless of the input modality.
Capitalizing on the high spatiotemporal resolution of MEG constrained by individual cortical anatomy obtained with MRI, we also examined whether the activity observed in auditory regions in congenitally deaf individuals (Finney et al., 2001, 2003; Lambertz et al., 2005) is caused by rewiring of visual sensory information to cortex that has been underutilized due to lifelong sensory deprivation. While previous MEG results suggested that the hemodynamic activation in these regions, particularly in the right hemisphere, reflects early processing, the time window examined in that study extended to 400 ms after stimulus onset, well beyond initial sensory processing for both visual and auditory stimuli (Finney et al., 2003). Furthermore, other investigations with single deaf subjects have failed to find evidence for the hypothesized cross-modal plasticity in auditory areas (Hickok et al., 1997; Nishimura et al., 1999). The present study investigated a sensory-specific, short-latency time window and found that during the first pass of sensory processing, auditory cortex is not active in deaf participants, whether they are viewing signs or static pictures. Rather, these areas show semantically modulated activity only well after first-pass sensory processing is thought to be completed. Lexicosemantic activity in the left anteroventral temporal lobe between ∼200 and 400 ms has been shown with laminar multi-microelectrode recordings from different cortical layers to reflect recurrent associative or second-pass processing (Halgren et al., 2006). The latency of the responses in superior temporal cortex in deaf signers indicates that these regions receive the output of a long chain of visual processing, rather than participating in the early encoding of sensory information (which is performed in primary and secondary visual areas).
Cortical plasticity is a hallmark of early development (Bates and Roe, 2001) and continues well into adulthood in the form of learning-induced cortical and synaptic changes (Buonomano and Merzenich, 1998). Experimental results with animals showing cross-modal plasticity in the context of sensory deprivation are intriguing and of great importance for understanding fundamental principles of neural organization (Sur et al., 1988; Roe et al., 1990; Sur, 2004; Barnes and Finnerty, 2010). While there is extensive and convincing evidence that auditory stimuli activate visual areas in blind individuals (Sadato et al., 1996; Cohen et al., 1997; Barnes and Finnerty, 2010), such clear evidence for a reorganization of auditory cortex in deafness is lacking in both human (Bavelier and Neville, 2002; Kral, 2007) and animal (Kral et al., 2003; Kral, 2007) studies. Factors such as the extent of hearing loss and age of onset of deafness may impact cortical reorganization and rewiring (Bavelier and Neville, 2002; Lambertz et al., 2005), and there may be functional distinctions between A1 and surrounding areas that do show plasticity, such as the anterior auditory field in cats (Lomber et al., 2010; Meredith and Lomber, 2011; Meredith et al., 2011). Additionally, some neurons in auditory regions may be involved in processing nonauditory information (particularly in multimodal contexts); however, the present results suggest that in humans who are born profoundly deaf and are native signers, unimodal responses in primary sensory and semantic systems remain intact.
Thus, in deaf signers who acquired sign language from birth from their deaf parents, signs are processed in a brain network that is strikingly similar to that for spoken words in hearing individuals. The timing of activity in the language network (including superior temporal regions surrounding auditory cortex) reveals that this is due to semantic encoding, rather than to a rerouting of visual-sensory input. This provides evidence that left frontotemporal regions, including the superior temporal plane surrounding the auditory cortex, are specialized for encoding word meaning regardless of input modality.
Footnotes
This work was supported by NSF Grant BCS-0924539, NIH Grant T-32 DC00041, an innovative research award from the Kavli Institute for Brain and Mind, and a UCSD Chancellor's Collaboratories grant. We thank D. Hagler, A. Lieberman, P. Lott, A. Dale, and T. Brown for assistance.
Correspondence should be addressed to Naja Ferjan Ramirez, UCSD Department of Linguistics, 9500 Gilman Drive, La Jolla, CA 92093-0108. E-mail: naja@ling.ucsd.edu