Journal of Neuroscience
Articles, Behavioral/Cognitive

“Visual” Cortex of Congenitally Blind Adults Responds to Syntactic Movement

Connor Lane, Shipra Kanjlia, Akira Omaki and Marina Bedny
Journal of Neuroscience 16 September 2015, 35 (37) 12859-12868; DOI: https://doi.org/10.1523/JNEUROSCI.1256-15.2015
Departments of 1Psychological and Brain Sciences and 2Cognitive Science, Johns Hopkins University, Baltimore, Maryland 21218

Abstract

Human cortex comprises specialized networks that support functions such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, “visual” cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language: syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, “visual” cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development.

SIGNIFICANCE STATEMENT Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness, a subset of “visual” areas becomes specialized for language processing. Crucially, we find that the same “visual” areas respond to a highly specialized and uniquely human aspect of language: syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization.

  • blindness
  • language
  • plasticity
  • syntax

Introduction

The human brain consists of distinct functional networks that support language processing, face perception, and motor control. How do genes and experience produce this functional specialization? Studies of experience-based plasticity provide unique insights into this question. In blindness, the visual system responds to auditory and tactile stimuli (e.g., Hyvärinen et al., 1981). Visual cortices are active when blind adults localize sounds, hear auditory motion, and discriminate tactile patterns (Weeks et al., 2000; Merabet et al., 2004; Saenz et al., 2008). Analogously, the auditory cortex of deaf individuals responds to visual and somatosensory stimuli (Finney et al., 2001, 2003; Karns et al., 2012).

If intrinsic physiology narrowly constrains cortical function, we would expect a close correspondence between the cortical area's new cross-modal function and its typical function. For example, in deaf cats, visual localization of objects in space is in part supported by auditory areas that typically perform sound localization (Lomber et al., 2010). In blind humans, the middle temporal visual motion complex responds to moving sounds (Saenz et al., 2008; Wolbers et al., 2011). Similarly, the visual word form area is recruited during Braille reading (Büchel et al., 1998b; Reich et al., 2011). Such findings are consistent with a limited role for experience in shaping cortical function.

One case of cross-modal plasticity seems to break from this pattern. In blind humans, the visual cortex is recruited during language processing. Occipital areas are active when blind people read Braille, generate verbs to heard nouns, and listen to spoken sentences (Sadato et al., 1996, 1998; Büchel et al., 1998a, 1998b; Röder et al., 2000, 2002; Burton et al., 2002a, b; Amedi et al., 2003; Reich et al., 2011; Watkins et al., 2012). Occipital cortex responds more to lists of words than meaningless sounds, and more to sentences than unconnected lists of words (Bedny et al., 2011). Responses to language are observed both in secondary visual areas and in primary visual cortex (Amedi et al., 2003; Burton, 2003; Bedny et al., 2012). Occipital plasticity contributes to behavior. Transcranial magnetic stimulation to the occipital pole impairs blind individuals' ability to read Braille and produce semantically appropriate verbs to aurally presented nouns (Cohen et al., 1997; Amedi et al., 2004).

Visual cortex plasticity for language is striking in light of the cognitive and evolutionary differences between vision and language. A key open question that we address in this study is whether visual cortex supports language-specific operations or domain-general operations that contribute to language (Makuuchi et al., 2009; Fedorenko et al., 2011, 2012; Monti et al., 2012).

We also test the hypothesis that visual cortex processes aspects of language that are uniquely human and highly specialized. Language contains many levels of representation, including phonology, morphology, semantics, and syntax. One possibility is that intrinsic physiology restricts the kinds of information occipital cortex can process within language. In particular, syntactic structure building is thought to require specialized cortical circuitry (Pinker and Bloom, 1990; Hauser et al., 2002; Fitch and Hauser, 2004). Does occipital cortex participate in syntactic structure building?

Evidence for this possibility comes from a study by Röder et al. (2002) who found larger occipital responses to German sentences with noncanonical word orders (with scrambling), as well as larger occipital responses to sentences than matched jabberwocky speech. A key question left open by this study is whether the responses to syntactic complexity are specific to language. Previous studies have shown that a subset of areas within prefrontal and lateral temporal cortex are sensitive to linguistic content, but not to difficulty of working memory tasks (Fedorenko et al., 2011). Does this form of selectivity exist within visual cortex?

To address these questions, we conducted two experiments with congenitally blind participants. We compared occipital activity during sentence comprehension with activity during verbal sequence memory and symbolic math. Like sentences, the control tasks involve familiar symbols, tracking order information, and hierarchical structures. We predicted that occipital areas would respond more during sentence comprehension than the control tasks. Crucially, we manipulated the syntactic complexity of the sentences: half of the sentences contained syntactic movement. We also manipulated the difficulty of math equations. We predicted that regions of occipital cortex would respond more to syntactically complex sentences but would be insensitive to math difficulty.

Materials and Methods

Participants.

Nineteen congenitally blind individuals (13 females; 3 left-handed, 2 ambidextrous) and 18 age- and education-matched controls (8 females; 2 left-handed, 2 ambidextrous) contributed data to Experiments 1 and 2. All blind participants had at most minimal light perception since birth. Blindness was due to an abnormality anterior to the optic chiasm and not due to brain damage (Table 1).

Table 1.

Participant demographic information

Participants were between 21 and 75 years of age (Table 1). Three additional sighted participants were scanned but excluded due to lack of brain volume coverage during the scan. We also excluded participants who failed to perform above chance on sentence comprehension in either of the two experiments. For both experiments, we set the performance criterion to be the 75th percentile of the binomial “chance performance” distribution (Experiment 1, 53.7% correct; Experiment 2, 54.2% correct). This resulted in three blind participants and zero sighted participants being excluded from further analyses. None of the participants suffered from any known cognitive or neurological disabilities. All participants gave written informed consent and were compensated $30 per hour.
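The chance-performance criterion can be made concrete with a short sketch. The exact binomial computation below is our reconstruction of how such a percentile cutoff is typically derived, not the authors' script; the trial counts of 108 (Experiment 1 sentence trials) and 48 (Experiment 2 sentence trials) are assumptions based on the designs described under Stimuli and procedure.

```python
from math import comb

def chance_criterion(n_trials, percentile=0.75):
    """Accuracy cutoff: the smallest proportion correct k/n whose
    cumulative probability under Binomial(n, 0.5) guessing reaches
    the given percentile."""
    unit = 0.5 ** n_trials          # probability of any single outcome pattern
    cdf = 0.0
    for k in range(n_trials + 1):
        cdf += comb(n_trials, k) * unit
        if cdf >= percentile:
            return k / n_trials
    return 1.0

# Assumed trial counts: 108 sentence trials in Experiment 1, 48 in Experiment 2.
print(round(chance_criterion(108), 3))  # 0.537 -> the 53.7% criterion
print(round(chance_criterion(48), 3))   # 0.542 -> the 54.2% criterion
```

Because the binomial is discrete, the criterion is the smallest whole number of correct trials whose cumulative probability under guessing reaches 75%, expressed as a proportion.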

Stimuli and procedure.

In both experiments, participants listened to stimuli presented over Sensimetrics MRI-compatible earphones (http://www.sens.com/products/model-s14/). The stimuli were presented at the maximum comfortable volume for each participant (average sound pressure level 76–84 dB). All participants were blindfolded for the duration of the study.

In Experiment 1, participants heard sentences and sequences of nonwords. On sentence trials, participants heard a sentence, followed by a yes/no question. Comprehension questions required participants to attend to thematic relations of words in the sentence (i.e., who did what to whom), and could not be answered based on recognition of individual words. Participants indicated their responses by pressing buttons on a button pad. We measured response time to the question and comprehension accuracy.

We manipulated the syntactic complexity of the sentences in Experiment 1. The manipulation focused on an aspect of syntactic representation that has received special attention in linguistic theory: syntactic movement (Chomsky, 1957, 1995). Sentences with syntactic movement dependencies require distant words or phrases to be related during comprehension. For example, in the sentence “The farmer that the teacher knew __ bought a car,” the verb “knew” and its object “the farmer” are separated by “that the teacher.” Sentences with movement are more difficult to process as measured by comprehension accuracy, reading times, and eye movements during reading (King and Just, 1991; Gibson, 1998; Gordon et al., 2001; Chen et al., 2005; Staub, 2010).

The sentence in each trial appeared in one of two conditions. In the +MOVE condition, sentences contained a syntactic movement dependency in the form of an object-extracted relative clause (e.g., “The actress [that the creator of the gritty HBO crime series admires __] often improvises her lines.”) In the −MOVE condition, sentences contained identical content words, and had similar meanings, but did not contain any movement dependencies. The −MOVE condition contained an embedded sentential complement clause, so that the number of clauses was identical in +MOVE and −MOVE conditions (e.g., “The creator of the gritty HBO crime series admires [that the actress often improvises her lines].”) The sentences were counterbalanced across two lists, such that each participant heard only one version of each sentence. Some of the sentences were adapted from a published set of stimuli (Gordon et al., 2001).

On nonword sequence memory trials, participants heard a sequence of nonwords (the target) followed by a shorter sequence (the probe), made up of some of the nonwords from the original set. Participants judged whether the nonwords in the probe were in the same order as in the target. On “match” trials, the probe consisted of consecutive nonwords from the target. On “nonmatch” trials, the nonwords in the probe were chosen from random positions in the target and presented in a shuffled order. On nonmatch trials, no two nonwords that occurred consecutively in the target sequence appeared in the same order in the probe sequence.
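The nonmatch-probe constraint above, that no target bigram may survive in the probe, can be sketched as rejection sampling. This is a hypothetical illustration, not the authors' stimulus-generation code; the function name and the example nonwords are our own.

```python
import random

def make_nonmatch_probe(target, probe_len, rng=random):
    """Sample probe_len nonwords from random positions in the target and
    re-draw until no two consecutively ordered target nonwords appear
    consecutively, in the same order, in the probe."""
    target_bigrams = set(zip(target, target[1:]))
    while True:
        probe = rng.sample(target, probe_len)  # random positions, shuffled order
        probe_bigrams = set(zip(probe, probe[1:]))
        if not (probe_bigrams & target_bigrams):
            return probe

# Hypothetical nonword sequence for illustration.
target = ["ta", "ke", "mo", "su", "ri", "no"]
probe = make_nonmatch_probe(target, 3, random.Random(0))
```

Rejection sampling is viable here because, for short probes drawn from longer targets, most shuffled orders already satisfy the constraint.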

There were a total of 54 trials of each type: +MOVE, −MOVE, and nonword sequence, divided evenly into 6 runs. The items of each trial type were presented in a random order for each participant. Condition order was counterbalanced within each run. The sentence and nonword trials were both 16 s long. Each trial began with a tone, followed by a 6.7 s sentence or nonword sequence and a 2.9 s probe/question. Participants had until the end of the 16 s period to respond. The sentences and target nonword sequences were matched in number of words (sentence = 17.9, nonword = 17.8; p > 0.3), number of syllables per word (sentence = 1.61, nonword = 1.59; p > 0.3), and mean bigram frequency per word (sentence = 2342, nonword = 2348; p > 0.3) (Duyck et al., 2004).

Experiment 2 contained two primary conditions: sentences and math equations (Monti et al., 2012). There were a total of 48 sentence trials and 96 math trials in the experiment. On sentence trials, participants heard pairs of sentences. The task was to decide whether the two sentences had the same meaning. One of the sentences in each pair was in active voice (e.g., “The receptionist that married the driver brought the coffee.”), whereas the other was in passive voice (“The coffee was brought by the receptionist that married the driver.”) On “same” trials, the roles and relations were maintained across both sentences (as in the previous example). On “different” trials, the roles of the people in the sentences were reversed in the second sentence (e.g., “The coffee was brought by the driver that married the receptionist.”)

On math trials, participants heard pairs of spoken subtraction equations involving two numbers and a variable X. The task was to decide whether the value of X was the same in both equations. Across trials, X could occur either as an operand (e.g., X − 5 = 3) or as the answer (e.g., 8 − 5 = X). Equations occurred in one of two conditions: difficult or easy. Difficult equations involved double-digit operands and answers (e.g., 28 − 14 = X), whereas easy equations involved single-digit numbers (e.g., 8 − 4 = X). Both conditions appeared equally often.

The sentence and math trials were both 13.5 s long. The pairs of stimuli were each 3.5 s long and were separated by a 2.5 s interstimulus interval. Participants had 4 s after the offset of the second stimulus to enter their response.

MRI data acquisition and cortical surface analysis.

MRI structural and functional data of the whole brain were collected on a 3 Tesla Philips scanner. T1-weighted structural images were collected in 150 axial slices with 1 mm isotropic voxels. Functional BOLD images were collected in 36 axial slices with 2.4 × 2.4 × 3 mm voxels and TR = 2 s. Data analyses were performed using FSL, Freesurfer, the HCP workbench, and custom software (Dale et al., 1999; Smith et al., 2004; Glasser et al., 2013).

All analyses were surface-based. Cortical surface models were created for each subject using the standard Freesurfer pipeline. During preprocessing, functional data were motion corrected, high pass filtered with a 128 s cutoff, and resampled to the cortical surface. Once on the surface, the data were smoothed with a 10 mm FWHM Gaussian kernel. The data from the cerebellum and subcortical structures were not analyzed.

A GLM was used to analyze BOLD activity as a function of condition for each subject. Fixed-effects analyses were used to combine runs within subject. Data were prewhitened to remove temporal autocorrelation. Covariates of interest were convolved with a standard hemodynamic response function. Covariates of no interest included a single regressor to model trials on which the participant did not respond and individual regressors to model time points with excessive motion (blind: 1.7 drops per run, SD = 2.7; sighted: 1.3 drops per run, SD = 2.8). Temporal derivatives for all but the motion covariates were also included in the model.

Because of lack of coverage, one blind and three sighted subjects were missing data in a small subset of occipital vertices. For vertices missing data in some but not all runs, a 4-D “voxelwise” regressor was used to model out the missing runs during fixed-effects analysis. The remaining missing vertices were filled in from neighboring vertices within a 10 mm radius (area filled in for each subject: 5, 39, 11, and 82 mm2).

Group-level random-effects analyses were corrected for multiple comparisons using a combination of vertex-wise and cluster-based thresholding. p value maps were first thresholded at the vertex level p < 0.05 false discovery rate (FDR) (Genovese et al., 2002). Nonparametric permutation testing was then used to cluster-correct at p < 0.05 family-wise error rate (FWE).

Visual inspection of the data suggested that language-related activity was less likely to be left lateralized among blind participants (for similar observations, see Röder et al., 2000, 2002). To account for this difference between groups, cortical surface and ROI analyses were conducted in each subject's language dominant hemisphere. For Experiment 1 analyses, Experiment 2 (sentence > math) data were used to determine language laterality. For Experiment 2 analyses, Experiment 1 (sentence > nonword) was used. Laterality indices were calculated using the formula (L − R)/(L + R), where L and R denote the sum of positive z-statistics >2.3 (p < 0.01 uncorrected) in the left and right hemisphere, respectively. For the purposes of analyses, participants with laterality indices >0 were operationalized as left hemisphere language dominant, and right hemisphere dominant otherwise. On this measure, 12 of 19 blind and 15 of 18 sighted participants were left hemisphere dominant for the sentence > nonword contrast. For the sentence > math contrast, 11 of 19 blind and 16 of 18 sighted participants were left hemisphere dominant. To align data across subjects, cortical surface analyses were conducted in the left hemisphere with data for right-lateralized subjects reflected to the left hemisphere. For ROI analyses, we extracted percent signal change (PSC) from either the left or right hemisphere for each subject, depending on which hemisphere was dominant for language in that participant. The laterality analysis procedure was orthogonal to condition and group and thus could not bias the results. To ensure that results do not depend on the laterality procedure, all analyses were also conducted in the left and right hemisphere separately.
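The laterality index follows directly from the formula above. This minimal sketch assumes the per-hemisphere z-statistics are available as plain lists of vertex values; variable names are ours.

```python
def laterality_index(z_left, z_right, z_thresh=2.3):
    """(L - R) / (L + R), where L and R are the sums of positive
    z-statistics above threshold (z > 2.3, p < 0.01 uncorrected)
    in the left and right hemisphere, respectively."""
    L = sum(z for z in z_left if z > z_thresh)
    R = sum(z for z in z_right if z > z_thresh)
    return (L - R) / (L + R)

# A participant with an index > 0 is operationalized as
# left-hemisphere dominant for language; otherwise right dominant.
```

The index ranges from −1 (all supra-threshold activity on the right) to +1 (all on the left), with 0 indicating perfectly bilateral activity.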

ROI analyses.

We used orthogonal group functional ROIs to test for effects of syntactic movement in the visual cortex in the blind and sighted groups. Group ROIs were used because individual subject ROIs could not be defined in occipital cortex of sighted subjects.

Group ROIs were based on the Experiment 2 sentence > math contrast. We defined four ROIs, one for each activation peak in the blind average map: lateral occipital, fusiform, cuneus, and lingual. We first created a probabilistic overlap map across blind participants, where the value at each vertex is the fraction of blind participants who show activity at that vertex (p < 0.01, uncorrected) (Fedorenko et al., 2010). The overlap map was then smoothed at 5 mm FWHM. We divided the overlap map into four search spaces, one surrounding each activation peak, by manually tracing the natural boundaries between peaks. The top 20% of vertices with highest overlap were selected as the ROI for each search space.
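The overlap-map ROI definition can be sketched as follows. This omits the 5 mm surface smoothing and the manual tracing of search-space boundaries, both of which require surface geometry; function names, the p-value representation, and the toy inputs are our assumptions.

```python
def overlap_map(subject_p_maps, p_thresh=0.01):
    """Fraction of subjects active (p < .01, uncorrected) at each vertex.
    subject_p_maps: one list of vertex p values per subject."""
    n = len(subject_p_maps)
    return [sum(p < p_thresh for p in vertex_ps) / n
            for vertex_ps in zip(*subject_p_maps)]

def top_20pct_roi(overlap, search_space):
    """Vertices in the search space with the top 20% of overlap values."""
    ranked = sorted(search_space, key=lambda v: overlap[v], reverse=True)
    return set(ranked[:max(1, len(ranked) // 5)])
```

Selecting a fixed proportion of vertices per search space, rather than a fixed overlap cutoff, keeps the four occipital ROIs comparable in relative size despite differing peak overlap.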

We additionally defined a separate group functional ROI within V1. The anatomical boundaries of V1 were based on previously published retinotopic data that were aligned with the brains of the current participants using cortical folding patterns (Hadjikhani et al., 1998; Van Essen, 2005). This alignment procedure has previously been shown to accurately identify V1 in sighted adults (Hinds et al., 2008). As with the other ROIs, the top 20% of vertices within V1 showing greatest intersubject overlap for the sentence > math contrast were selected as the group ROI.

Orthogonal individual subject functional ROIs were defined in left dorsolateral prefrontal cortex (LPFC) to test for effects of math difficulty. The LPFC ROIs were defined as the 20% most active vertices for the math > sentences contrast of Experiment 2. ROIs were defined within a prefrontal region that was activated for math > sentences across blind and sighted participants (p < 0.01 FDR corrected).

In perisylvian cortex, we defined two sets of individual subject ROIs, located in the inferior frontal gyrus and middle-posterior lateral temporal cortex. Following Fedorenko et al. (2010), we defined the ROIs using a combination of group-level search spaces and individual subject functional data. For search spaces, we selected two parcels from the published set of functional areas (Fedorenko et al., 2010), previously shown to respond to linguistic content (inferior frontal gyrus, middle-posterior lateral temporal cortex). Within each search space, ROIs were defined based on each subject's Experiment 2 sentence > math activation map, in their language dominant hemisphere. The individual ROIs were defined as the 20% most active vertices in each search space.

PSC in each ROI was calculated relative to rest after averaging together each vertex's time-series across the ROI. Only trials on which the participant made a response contributed to the PSC calculation. Statistical comparisons were performed on the average PSC responses during the predicted peak window for each experiment (6–12 s for Experiment 1, 8–14 s for Experiment 2).
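The PSC computation can be sketched as below, with the peak window expressed in timepoint indices (an assumption on our part; the text gives the windows in seconds, with TR = 2 s).

```python
def percent_signal_change(roi_timeseries, rest_baseline, peak_window):
    """Average PSC relative to rest over the predicted peak window.
    roi_timeseries: BOLD values per timepoint, already averaged across
    the ROI's vertices; peak_window: (start, stop) timepoint indices."""
    psc = [(v - rest_baseline) / rest_baseline * 100 for v in roi_timeseries]
    lo, hi = peak_window
    return sum(psc[lo:hi]) / (hi - lo)
```

Averaging the vertex time-series before converting to PSC, as the text specifies, weights every vertex equally and avoids dividing by noisy per-vertex baselines.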

Results

Behavioral performance

In Experiment 1, both blind and sighted participants were more accurate on sentence trials than on nonword sequence memory trials (group × condition ANOVA, main effect of condition, F(1,35) = 93.89, p < 0.001). Accuracy on sentence trials was higher in the blind group, but the group × condition interaction was not significant (main effect of group, F(1,35) = 1.77, p = 0.19, group × condition interaction, F(1,35) = 2.4, p = 0.13) (Fig. 1).

Figure 1.

Behavioral performance. Percent correct and response times for blind (B) and sighted (S) participants in Experiments 1 and 2. Error bars indicate the within-subjects SEM (Morey, 2008).

Blind participants were faster at responding on sentence trials than on nonword sequence trials (t(18) = −2.89, p = 0.01). There was no difference in response time (RT) between conditions in the sighted group (t(17) = −0.88, p = 0.39). In a group × condition ANOVA, there was a main effect of condition (F(1,35) = 7.69, p = 0.009), no effect of group (F(1,35) = 0.98, p = 0.33), and a marginal group × condition interaction (F(1,35) = 2.75, p = 0.11).

Within the sentence condition of Experiment 1, blind and sighted participants made more errors and were slower on +MOVE than −MOVE sentences (group × condition ANOVA, accuracy: main effect of condition, F(1,35) = 94.15, p < 0.001, group × condition interaction, F(1,35) = 0.62, p = 0.44; RT: main effect of condition, F(1,35) = 61.16, p < 0.001, group × condition interaction: F(1,35) = 0.14, p > 0.5). Blind participants were slightly more accurate at answering comprehension questions (main effects of group, accuracy on sentence trials: F(1,35) = 3.96, p = 0.05; RT on sentence trials: F(1,35) = 0.1, p > 0.5).

In Experiment 2, blind participants performed numerically better on sentence than math trials, whereas sighted participants performed better on math than sentence trials (blind: t(18) = 1.22, p = 0.24; sighted: t(17) = −1.81, p = 0.09; group × condition interaction, F(1,35) = 4.7, p = 0.04). Response times were not different between the sentence and math conditions in either group (group × condition ANOVA, main effect of condition, F(1,35) = 0.53, p = 0.47, main effect of group F(1,35) = 0.12, p > 0.5, group × condition interaction, F(1,35) = 0.97, p = 0.33).

Within the math condition of Experiment 2, participants made more errors and were slower to respond on the hard (double-digit) than easy (single-digit) trials (group × condition ANOVA, main effect of condition for accuracy: F(1,35) = 11.11, p = 0.002, main effect of condition for RT: F(1,35) = 7.83, p = 0.008). There were no effects of group or group × condition interactions in either accuracy or RT (accuracy: main effect of group, F(1,35) = 1.72, p = 0.2, group × condition interaction, F(1,35) = 2.33, p = 0.14; RT: main effect of group, F(1,35) = 0.01, p > 0.5, group × condition interaction, F(1,35) = 0.81, p = 0.38).

“Visual” cortex of blind adults responds more to sentences than nonword sequences or math equations

In the blind group, we observed greater responses to sentences than nonword sequences in lateral occipital cortex, the calcarine sulcus, the cuneus, the lingual gyrus, and the fusiform (p < 0.05 FWE corrected). We observed responses in the superior occipital sulcus at a more lenient threshold (p < 0.05 FDR corrected) (Fig. 2; Table 2). Relative to published visuotopic maps (Hadjikhani et al., 1998; Tootell and Hadjikhani, 2001; Van Essen, 2005), responses to sentences were centered on ventral V2, extending into V1 and V3, on the medial surface, V4 and V8 on the ventral surface, and the middle temporal visual motion complex in lateral occipital cortex (Fig. 3).

Figure 2.

Language responses in sighted and blind individuals in each subject's language dominant hemisphere. Left, Responses to sentences > nonword sequences. Right, Responses to sentences > math equations. p < 0.05 (FDR corrected; 90 mm2 cluster threshold).

Table 2.

Language responsive brain regions in blind and sighted individuals

Figure 3.

Activation for sentences > nonword sequences in the blind group, relative to visuotopic boundaries (p < 0.05 FDR corrected; 90 mm2 cluster threshold) (Hadjikhani et al., 1998; Tootell and Hadjikhani, 2001; Van Essen, 2005).

Occipital responses to sentences were specific to the blind group. A group × condition interaction analysis (sentences > nonwords, blind > sighted) identified activity in the fusiform gyrus, the lingual gyrus, the superior occipital sulcus, and lateral occipital cortex (p < 0.05 FWE corrected).

A similar but spatially less extensive pattern of activation was observed for sentences relative to math equations in Experiment 2 (Fig. 2; Table 2). Occipital areas that were more active in blind than sighted participants included the cuneus, anterior lingual gyrus, and lateral occipital cortex (p < 0.05 FWE corrected).

To verify that results were not dependent on the laterality procedure, we conducted the same analyses in the left and right hemisphere separately. The pattern of results was similar to the language dominant hemisphere analyses. We again observed occipital responses to sentences in both experiments in the blind but not sighted subjects. Effects were somewhat weaker, suggesting that the laterality procedure reduced variability across participants. The left/right hemisphere analysis also revealed that, on average, the blind group responses to sentences > nonwords were slightly larger in the left hemisphere, whereas responses to sentences > math were larger in the right hemisphere.

Visual cortex of blind adults responds more to sentences with syntactic movement

Above, we identified occipital areas that respond more to sentences than memory for nonword sequences and symbolic math, including lateral occipital cortex, cuneus, lingual gyrus, and fusiform. The same occipital regions showed a reliable effect of syntactic movement in the blind group, but not in the sighted group (ROI × movement ANOVA in blind group, main effect of syntactic movement, F(1,18) = 10.46, p = 0.005, ROI × movement interaction, F(3,54) = 0.96, p = 0.42, main effect of ROI, F(3,54) = 21.03, p < 0.001; ROI × movement ANOVA in sighted group, main effect of syntactic movement, F(1,17) = 0.01, p > 0.5, ROI × movement interaction, F(3,51) = 0.97, p = 0.41, main effect of ROI, F(3,51) = 10.89, p < 0.001; group × ROI × movement ANOVA, group × movement interaction, F(1,35) = 5.22, p = 0.03) (Fig. 4).

Figure 4.

Responses to syntactic movement in occipital group ROIs in blind (B) and sighted (S) participants. For each subject, PSC was extracted from the left or right hemisphere, depending on which was dominant for language. Error bars indicate the within-subjects SEM (Morey, 2008).

In addition, we conducted the same analysis in the left and right hemisphere separately. Similar to the language dominant hemisphere analysis, we found a significant effect of movement and a movement × group interaction in the left hemisphere, and a weaker but similar trend in the right hemisphere (ROI × movement ANOVA in blind group, main effect of syntactic movement, LH: F(1,18) = 9.19, p = 0.007, RH: F(1,18) = 3.13, p = 0.09; group × ROI × movement ANOVA, group × movement interaction, LH: F(1,35) = 4.94, p = 0.03, RH: F(1,35) = 2.4, p = 0.13).

The language-responsive visual cortex ROIs did not respond more to difficult than simple math equations (blind group paired t tests, p > 0.16). The lack of response to math difficulty in visual cortex was not due to the failure of the math difficulty manipulation to drive brain activity. Left dorsolateral prefrontal cortex responded more to double-digit (difficult) than single-digit (easy) math equations across sighted and blind groups (group × difficulty ANOVA, main effect of difficulty, F(1,35) = 9.38, p = 0.004, main effect of group, F(1,35) = 0.93, p = 0.34, difficulty × group interaction, F(1,35) = 0.27, p > 0.5) (Fig. 5).

Figure 5.

PSC in individual subject frontal and temporal ROIs in blind (B) and sighted (S) groups. Left, Responses during Experiment 1 in inferior frontal gyrus (IFG) and middle-posterior lateral temporal (MPT) language-responsive ROIs. For each subject, PSC was extracted from the left or right hemisphere, depending on which was dominant for language. Right, Responses to math difficulty in a left hemisphere math-responsive dorsolateral prefrontal ROI. Pictured above the graphs are the search spaces used to define the ROIs. Error bars indicate the within-subjects SEM (Morey, 2008).

Visual cortex activity predicts comprehension performance in blind adults

Blind participants who showed greater sensitivity to the movement manipulation in occipital cortex (+MOVE > −MOVE; percent signal change extracted from correct trials only) also performed better at comprehending +MOVE sentences (Fig. 6). We observed the strongest relationship between brain activation and behavior in the fusiform and lateral occipital ROIs (fusiform: r = 0.74, p = 0.001; lateral occipital: r = 0.55, p = 0.03, FDR corrected for the number of ROIs). The correlations with behavior were in the same direction but not reliable for regions on the medial surface (cuneus: r = 0.35, p = 0.19; lingual: r = 0.23, p = 0.33, FDR corrected). The relationship between movement-related occipital activity and comprehension accuracy was specific to the blind group (sighted, r² < 0.06, p > 0.35). The relationship was also specific to visual cortex and was not found in inferior frontal and lateral temporal ROIs (blind and sighted, r² < 0.07, p > 0.3). Finally, the correlation was specific to the +MOVE > −MOVE neural difference: the amount of language activity in visual cortex (sentences > nonwords) was not a good predictor of comprehension accuracy (blind, r² < 0.09, p > 0.22).
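The FDR correction applied to these ROI-wise correlations is presumably the Benjamini-Hochberg step-up procedure standard in neuroimaging (Genovese et al., 2002). A minimal sketch of that procedure, run on made-up p values rather than the study's data:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up FDR control.

    Returns a boolean array marking which hypotheses are rejected while
    keeping the expected false discovery rate at alpha.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Compare the k-th smallest p value to alpha * k / m.
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()  # largest rank passing its threshold
        reject[order[: k + 1]] = True    # also reject all smaller p values
    return reject
```

Because the threshold scales with rank, the correction is less punishing than Bonferroni when several ROIs show genuine effects, which fits a setting where correlated regions are tested together.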

Figure 6.

PSC in occipital group ROIs correlated with sentence comprehension accuracy on the complex +MOVE sentences. Top, Correlations in the blind group. Bottom, Correlations in the sighted group. PSC shown is the difference between +MOVE and −MOVE conditions. PSC was extracted from correct trials only, and in each subject's language dominant hemisphere. Data points indicate individual subjects.

Responses to language in V1

We next asked whether voxels within the boundaries of V1 responded to syntactic movement. We identified a V1 ROI that responded more to sentences than math in the blind group (Fig. 4). In this region, we saw an effect of syntactic movement in the blind but not sighted group (paired t test in the blind group, t(18) = 2.85, p = 0.01; sighted group, t(17) = 0.86, p = 0.4; group × movement ANOVA, group × movement interaction, F(1,35) = 2.59, p = 0.12; main effect of movement, F(1,35) = 7.37, p = 0.01). There was no effect of math difficulty in this V1 ROI (blind group paired t test, t(18) = 0.29, p > 0.5). This analysis suggests that in the blind group a region within V1 is sensitive to syntactic movement but not math difficulty. Notably, however, the relationship between neural activity and performance observed in secondary visual areas was not present in V1 (r = 0.18, p = 0.45).

Responses to language and syntactic movement in perisylvian cortex are preserved in blindness

In sighted individuals, listening to sentences relative to sequences of nonwords activated a network of areas in prefrontal, lateral temporal, and temporoparietal cortex (Fig. 2; Table 2). We observed sentence-related activity in the inferior and superior frontal gyri, and along the extent of the middle and superior temporal gyri (p < 0.05, FWE corrected). The posterior aspect of the middle frontal gyrus was active at a more lenient threshold (p < 0.05, FDR corrected). A similar pattern of activation was observed for sentences relative to math equations, except with a reduced response in the middle frontal gyrus and more extensive temporal activation (p < 0.05, FWE corrected).

Blind individuals activated the same areas of prefrontal and lateral temporal cortex during sentence comprehension, relative to both nonword sequence memory and math calculation (p < 0.05, FWE corrected). A group × condition interaction analysis did not reveal any brain areas that were more active in sighted than blind participants, for either the sentences > nonwords or the sentences > math contrast (p < 0.05, FDR corrected). However, in blind participants, responses were less likely to be left-lateralized and more likely to be bilateral or right-lateralized.

Across blind and sighted groups, inferior frontal and mid-posterior lateral temporal ROIs had higher activity for +MOVE than −MOVE sentences (group × movement × ROI ANOVA, main effect of movement, F(1,35) = 35.39, p < 0.001; movement × group interaction, F(1,35) = 0.14, p > 0.5; movement × ROI interaction F(1,35) = 0.15, p > 0.5) (Fig. 5).

Discussion

We find that, within the visual cortices of congenitally blind adults, a subset of regions is selective for linguistic content and sensitive to grammatical structure. Our findings replicate and extend the results of a prior study (Röder et al., 2002) showing responses to syntactic complexity in visual cortex of blind adults. Here we show that such responses are selective to language in that (1) they are observed in cortical areas that respond to language more than sequence memory and symbolic math tasks and (2) these same regions are not sensitive to the complexity of math equations. We further find that responses to syntactic complexity in visual cortex predict sentence comprehension performance across blind participants.

Occipital language areas are sensitive to syntactic structure

Sensitivity to syntactic movement in visual cortex is surprising, in light of evolutionary theories positing specific neural machinery for syntax (Pinker and Bloom, 1990; Hauser et al., 2002; Fitch and Hauser, 2004; Makuuchi et al., 2009). It has been suggested that Broca's area is specialized for processing syntactic movement dependencies (Grodzinsky, 2000; Grodzinsky and Santi, 2008). Understanding sentences with syntactic movement requires maintaining syntactic information in memory in the presence of distractors. It is unlikely that occipital cortex is evolutionarily adapted either for maintaining information over delays or for representing syntactic structure. Our findings suggest that language-specific adaptations are not required for a brain area to participate in syntactic processing.

How are we to reconcile the idea that nonspecialized brain areas participate in syntax with the fact that language is uniquely human (Terrace et al., 1979)? On the one hand, it could be argued that language is a cultural, rather than biological, adaptation, and there are no brain networks that are innately specialized for language processing (Christiansen and Chater, 2008; Tomasello, 2009). On the other hand, there is evidence that evolution enabled the human brain for language (Goldin-Meadow and Feldman, 1977; Enard et al., 2002; Vargha-Khadem et al., 2005). Consistent with this idea, the functional profile of perisylvian cortex is preserved across deafness and blindness (e.g., Neville et al., 1998). One possibility is that biological adaptations are only needed to initiate language acquisition. A subset of perisylvian areas may contain the “seeds” of language-processing capacity. During the course of development, the capacities of these specialized regions may spread to nonspecialized cortical areas. In blindness, this process of colonization expands into occipital territory. On this view, small evolutionary adaptations have cascading effects when combined with uniquely human experience.

An important open question concerns the relative behavioral contributions of occipital and perisylvian cortex to language. Having more cortical tissue devoted to sentence processing could improve sentence comprehension. Consistent with this idea, blind participants were slightly better at the sentence comprehension tasks in the current experiments. We also found that blind participants with greater sensitivity to movement in occipital cortex were better at comprehending sentences. This suggests that occipital plasticity for language could be behaviorally relevant. However, multiple alternative possibilities remain open. Sentence processing behavior might depend entirely on perisylvian areas. Although prior work has shown that occipital cortex is behaviorally relevant for Braille reading and verb generation, the relevance of occipital activity for sentence processing is not established. Occipital regions could also be performing redundant computations that have no impact on linguistic behavior. Finally, occipital cortex might actually hinder performance, perhaps because occipital cytoarchitecture and connectivity are suboptimal for language. Future studies could adjudicate among these possibilities using techniques such as transcranial magnetic stimulation. If occipital cortex contributes to sentence comprehension, then transient disruption with transcranial magnetic stimulation should impair performance.

Developmental origins of language responses in occipital cortex

Our findings raise questions about the developmental mechanisms of language-related plasticity in blindness. With regard to timing, there is some evidence that occipital plasticity for language has a critical period. One study found that only individuals who lost their vision before age 9 show occipital responses to language (Bedny et al., 2012). This time course differs from other kinds of occipital plasticity, which occur in late blindness and even within several days of blindfolding (Merabet et al., 2008).

One possibility is that blindness prevents the pruning of exuberant projections from language areas to “visual” cortex. Language information could reach occipital cortex from language regions in prefrontal cortex, lateral temporal cortex, or both. In support of the prefrontal source hypothesis, blind individuals have increased resting state correlations between prefrontal cortex and occipital cortex (Liu et al., 2007; Bedny et al., 2010, 2011; Watkins et al., 2012). Prefrontal cortex is connected with occipital cortex by the fronto-occipital fasciculus (Martino et al., 2010) and to posterior and inferior temporal regions by the arcuate fasciculus (Rilling et al., 2008). In blindness, these projections may extend into visuotopic regions, such as middle temporal visual motion complex. Alternatively, temporal-lobe language areas may themselves expand posteriorly into visuotopic cortex. Language information could then reach primary visual areas (V1, V2) through feedback projections.

Domain-specific responses to language in occipital cortex

We find that areas within occipital cortex respond to sentence processing demands more than to memory for nonword sequences or symbolic math. Like sentence processing, the control tasks require maintaining previously heard items and their order in memory, and building hierarchical structures from symbols. Furthermore, both the nonword and math tasks were harder than the sentence comprehension tasks. Despite this, regions within visual cortex responded more during the sentence comprehension tasks than the control tasks. This result is consistent with our prior finding that language-responsive regions of visual cortex are sensitive to linguistic content but not task difficulty per se (Bedny et al., 2011). Our findings suggest that occipital cortex develops domain-specific responses to language, mirroring specialization in frontal and temporal cortex (Makuuchi et al., 2009; Fedorenko et al., 2011, 2012; Monti et al., 2012).

Occipital plasticity for language has implications for theories of how domain specificity emerges in the human brain. Domain specificity is often thought to result from an intrinsic match between cortical microcircuitry and cognitive computations. By contrast, our findings suggest that input during development can cause specialization for a cognitive domain in the absence of preexisting adaptations. A similar conclusion is supported by recent studies of development in object-selective occipitotemporal cortex. The ventral visual stream develops responses to object categories that do not have an evolutionary basis. For example, the visual word form area responds selectively to written words and letters (Cohen et al., 2000; McCandliss et al., 2003). Like other ventral stream areas, such as the fusiform face area and the parahippocampal place area, the visual word form area falls in a systematic location across individuals. In macaques, long-term training with novel visual objects (cartoon faces, Helvetica font, and Tetris shapes) leads to the development of cortical patches that are selective for these objects (Srihasam et al., 2014).

The present data go beyond these findings in one important respect. Specialization for particular visual objects is thought to build on existing innate mechanisms for object perception in the ventral visual stream. Even the location of the new object-selective areas within the ventral stream is thought to depend on intrinsic predispositions for processing specific shapes (jagged or curved lines) or particular parts of space (foveal as opposed to peripheral parts of the visual field) (Srihasam et al., 2014). The present findings demonstrate that domain specificity emerges even without such predispositions.

Why would selectivity emerge in the absence of innate predisposition? One possibility is that there are computational advantages to segregating different domains of information into different cortical areas (Cosmides and Tooby, 1994). Some inputs may be particularly effective colonizers of cortex, forcing out other processing. Other cognitive domains might be capable of sharing cortical circuits. Domain specificity could also be a natural outcome of the developmental process, without conferring any computational advantage.

The current results demonstrate that regions within the occipital cortex of blind adults show domain-specific responses to language. However, they leave open the possibility that other regions within visual cortex serve domain general functions. Evidence for this idea comes from a recent study that found increased functional connectivity between visual cortices and domain general working memory areas in blind adults (Deen et al., 2015). Such domain general responses may coexist with domain-specific responses to language.

It is also important to point out that visual cortices of blind individuals are likely to have other functions aside from language. There is ample evidence that the visual cortex is active during nonlinguistic tasks, including spatial localization of sounds, tactile mental rotation, and somatosensory tactile and auditory discrimination tasks (e.g., Rösler et al., 1993; Röder et al., 1996; Collignon et al., 2011). We hypothesize that some of the heterogeneity of responses observed across studies is attributable to different functional profiles across regions within the visual cortices. In sighted individuals, the visual cortices are subdivided into a variety of functional areas. Such subspecialization is also likely present in blindness.

Notes

Supplemental material for this article is available at http://pbs.jhu.edu/research/bedny/publications/Lane_2015_Supplement.pdf. Supplemental material includes a figure showing language responses for sighted and blind individuals in each subject's language dominant and non–language-dominant hemisphere. This material has not been peer reviewed.

Footnotes

  • This work was supported in part by the Science of Learning Institute at Johns Hopkins University. We thank the Baltimore blind community for making this research possible; and the F.M. Kirby Research Center for Functional Brain Imaging at the Kennedy Krieger Institute for their assistance in data collection.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Marina Bedny, 3400 N Charles Street, Ames Hall 232, Baltimore, MD 21218. marina.bedny@jhu.edu

References

  1. Amedi A, Raz N, Pianka P, Malach R, Zohary E (2003) Early “visual” cortex activation correlates with superior verbal memory performance in the blind. Nat Neurosci 6:758–766, doi:10.1038/nn1072.
  2. Amedi A, Floel A, Knecht S, Zohary E, Cohen LG (2004) Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects. Nat Neurosci 7:1266–1270, doi:10.1038/nn1328.
  3. Bedny M, Konkle T, Pelphrey K, Saxe R, Pascual-Leone A (2010) Sensitive period for a multimodal response in human visual motion area MT/MST. Curr Biol 20:1900–1906, doi:10.1016/j.cub.2010.09.044.
  4. Bedny M, Pascual-Leone A, Dodell-Feder D, Fedorenko E, Saxe R (2011) Language processing in the occipital cortex of congenitally blind adults. Proc Natl Acad Sci U S A 108:4429–4434, doi:10.1073/pnas.1014818108.
  5. Bedny M, Pascual-Leone A, Dravida S, Saxe R (2012) A sensitive period for language in the visual cortex: distinct patterns of plasticity in congenitally versus late blind adults. Brain Lang 122:162–170, doi:10.1016/j.bandl.2011.10.005.
  6. Büchel C, Price C, Frackowiak RS, Friston K (1998a) Different activation patterns in the visual cortex of late and congenitally blind subjects. Brain 121:409–419, doi:10.1093/brain/121.3.409.
  7. Büchel C, Price C, Friston K (1998b) A multimodal language region in the ventral visual pathway. Nature 394:274–277, doi:10.1038/28389.
  8. Burton H (2003) Visual cortex activity in early and late blind people. J Neurosci 23:4005–4011.
  9. Burton H, Snyder AZ, Conturo TE, Akbudak E, Ollinger JM, Raichle ME (2002a) Adaptive changes in early and late blind: a fMRI study of Braille reading. J Neurophysiol 87:589–607.
  10. Burton H, Snyder AZ, Diamond JB, Raichle ME (2002b) Adaptive changes in early and late blind: a FMRI study of verb generation to heard nouns. J Neurophysiol 88:3359–3371, doi:10.1152/jn.00129.2002.
  11. Chen E, Gibson E, Wolf F (2005) Online syntactic storage costs in sentence comprehension. J Mem Lang 52:144–169, doi:10.1016/j.jml.2004.10.001.
  12. Chomsky N (1957) Syntactic structures. The Hague, The Netherlands: Mouton.
  13. Chomsky N (1995) The minimalist program. Cambridge, MA: Massachusetts Institute of Technology.
  14. Christiansen MH, Chater N (2008) Language as shaped by the brain. Behav Brain Sci 31:489–508, doi:10.1017/S0140525X08004998.
  15. Cohen LG, Celnik P, Pascual-Leone A, Corwell B, Falz L, Dambrosia J, Honda M, Sadato N, Gerloff C, Catalá MD, Hallett M (1997) Functional relevance of cross-modal plasticity in blind humans. Nature 389:180–183, doi:10.1038/38278.
  16. Cohen L, Dehaene S, Naccache L, Lehéricy S, Dehaene-Lambertz G, Hénaff MA, Michel F (2000) The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain 123:291–307, doi:10.1093/brain/123.2.291.
  17. Collignon O, Vandewalle G, Voss P, Albouy G, Charbonneau G, Lassonde M, Lepore F (2011) Functional specialization for auditory-spatial processing in the occipital cortex of congenitally blind humans. Proc Natl Acad Sci U S A 108:4435–4440, doi:10.1073/pnas.1013928108.
  18. Cosmides L, Tooby J (1994) Origins of domain specificity: the evolution of functional organization. In: Mapping the mind: domain specificity in cognition and culture, pp 84–116. Cambridge, UK: Cambridge UP.
  19. Dale AM, Fischl B, Sereno MI (1999) Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage 9:179–194, doi:10.1006/nimg.1998.0395.
  20. Deen B, Saxe R, Bedny M (2015) Occipital cortex of blind individuals is functionally coupled with executive control areas of frontal cortex. J Cogn Neurosci 27:1633–1647, doi:10.1162/jocn_a_00807.
  21. Duyck W, Desmet T, Verbeke LP, Brysbaert M (2004) WordGen: a tool for word selection and nonword generation in Dutch, English, German, and French. Behav Res Methods Instrum Comput 36:488–499, doi:10.3758/BF03195595.
  22. Enard W, Przeworski M, Fisher SE, Lai CS, Wiebe V, Kitano T, Monaco AP, Pääbo S (2002) Molecular evolution of FOXP2, a gene involved in speech and language. Nature 418:869–872, doi:10.1038/nature01025.
  23. Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N (2010) New method for fMRI investigations of language: defining ROIs functionally in individual subjects. J Neurophysiol 104:1177–1194, doi:10.1152/jn.00032.2010.
  24. Fedorenko E, Behr MK, Kanwisher N (2011) Functional specificity for high-level linguistic processing in the human brain. Proc Natl Acad Sci U S A 108:16428–16433, doi:10.1073/pnas.1112937108.
  25. Fedorenko E, Duncan J, Kanwisher N (2012) Language-selective and domain-general regions lie side by side within Broca's area. Curr Biol 22:2059–2062, doi:10.1016/j.cub.2012.09.011.
  26. Finney EM, Fine I, Dobkins KR (2001) Visual stimuli activate auditory cortex in the deaf. Nat Neurosci 4:1171–1173, doi:10.1038/nn763.
  27. Finney EM, Clementz BA, Hickok G, Dobkins KR (2003) Visual stimuli activate auditory cortex in deaf subjects: evidence from MEG. Neuroreport 14:1425–1427, doi:10.1097/00001756-200308060-00004.
  28. Fitch WT, Hauser MD (2004) Computational constraints on syntactic processing in a nonhuman primate. Science 303:377–380, doi:10.1126/science.1089401.
  29. Genovese CR, Lazar NA, Nichols T (2002) Thresholding of statistical maps in functional neuroimaging using the false discovery rate. Neuroimage 15:870–878, doi:10.1006/nimg.2001.1037.
  30. Gibson E (1998) Linguistic complexity: locality of syntactic dependencies. Cognition 68:1–76, doi:10.1016/S0010-0277(98)00034-1.
  31. Glasser MF, Sotiropoulos SN, Wilson JA, Coalson TS, Fischl B, Andersson JL, Xu J, Jbabdi S, Webster M, Polimeni JR, Van Essen DC, Jenkinson M (2013) The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage 80:105–124, doi:10.1016/j.neuroimage.2013.04.127.
  32. Goldin-Meadow S, Feldman H (1977) The development of language-like communication without a language model. Science 197:401–403, doi:10.1126/science.877567.
  33. Gordon PC, Hendrick R, Johnson M (2001) Memory interference during language processing. J Exp Psychol Learn Mem Cogn 27:1411–1423, doi:10.1037/0278-7393.27.6.1411.
  34. Grodzinsky Y, Santi A (2008) The battle for Broca's region. Trends Cogn Sci 12:474–480, doi:10.1016/j.tics.2008.09.001.
  35. Grodzinsky Y (2000) The neurology of syntax: language use without Broca's area. Behav Brain Sci 23:1–21.
  36. Hadjikhani N, Liu AK, Dale AM, Cavanagh P, Tootell RB (1998) Retinotopy and color sensitivity in human visual cortical area V8. Nat Neurosci 1:235–241, doi:10.1038/681.
  37. Hauser MD, Chomsky N, Fitch WT (2002) The faculty of language: what is it, who has it, and how did it evolve? Science 298:1569–1579, doi:10.1126/science.298.5598.1569.
  38. Hinds OP, Rajendran N, Polimeni JR, Augustinack JC, Wiggins G, Wald LL, Diana Rosas H, Potthast A, Schwartz EL, Fischl B (2008) Accurate prediction of V1 location from cortical folds in a surface coordinate system. Neuroimage 39:1585–1599, doi:10.1016/j.neuroimage.2007.10.033.
  39. Hyvärinen J, Carlson S, Hyvärinen L (1981) Early visual deprivation alters modality of neuronal responses in area 19 of monkey cortex. Neurosci Lett 26:239–243, doi:10.1016/0304-3940(81)90139-7.
  40. Karns CM, Dow MW, Neville HJ (2012) Altered cross-modal processing in the primary auditory cortex of congenitally deaf adults: a visual-somatosensory fMRI study with a double-flash illusion. J Neurosci 32:9626–9638, doi:10.1523/JNEUROSCI.6488-11.2012.
  41. King J, Just MA (1991) Individual differences in syntactic processing: the role of working memory. J Mem Lang 30:580–602, doi:10.1016/0749-596X(91)90027-H.
  42. Liu Y, Yu C, Liang M, Li J, Tian L, Zhou Y, Qin W, Li K, Jiang T (2007) Whole brain functional connectivity in the early blind. Brain 130:2085–2096, doi:10.1093/brain/awm121.
  43. Lomber SG, Meredith MA, Kral A (2010) Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci 13:1421–1427, doi:10.1038/nn.2653.
  44. Makuuchi M, Bahlmann J, Anwander A, Friederici AD (2009) Segregating the core computational faculty of human language from working memory. Proc Natl Acad Sci U S A 106:8362–8367, doi:10.1073/pnas.0810928106.
  45. Martino J, Brogna C, Robles SG, Vergani F, Duffau H (2010) Anatomic dissection of the inferior fronto-occipital fasciculus revisited in the lights of brain stimulation data. Cortex 46:691–699, doi:10.1016/j.cortex.2009.07.015.
  46. McCandliss BD, Cohen L, Dehaene S (2003) The visual word form area: expertise for reading in the fusiform gyrus. Trends Cogn Sci 7:293–299, doi:10.1016/S1364-6613(03)00134-7.
  47. Merabet L, Thut G, Murray B, Andrews J, Hsiao S, Pascual-Leone A (2004) Feeling by sight or seeing by touch? Neuron 42:173–179, doi:10.1016/S0896-6273(04)00147-3.
  48. Merabet L, Hamilton R, Schlaug G, Swisher J, Kiriakopoulos E, Pitskel N, Kauffman T, Pascual-Leone A (2008) Rapid and reversible recruitment of early visual cortex for touch. PLoS One 3:e3046, doi:10.1371/journal.pone.0003046.
  49. Monti MM, Parsons LM, Osherson DN (2012) Thought beyond language: neural dissociation of algebra and natural language. Psychol Sci 23:914–922, doi:10.1177/0956797612437427.
  50. Morey RD (2008) Confidence intervals from normalized data: a correction to Cousineau (2005). Tutor Quant Methods Psychol 4:61–64.
  51. Neville HJ, Bavelier D, Corina D, Rauschecker J, Karni A, Lalwani A, Braun A, Clark V, Jezzard P, Turner R (1998) Cerebral organization for language in deaf and hearing subjects: biological constraints and effects of experience. Proc Natl Acad Sci U S A 95:922–929, doi:10.1073/pnas.95.3.922.
  52. Pinker S, Bloom P (1990) Natural language and natural selection. Behav Brain Sci 13:707–727, doi:10.1017/S0140525X00081061.
  53. Reich L, Szwed M, Cohen L, Amedi A (2011) A ventral visual stream reading center independent of visual experience. Curr Biol 22:363–368, doi:10.1016/j.cub.2011.01.040.
  54. Rilling JK, Glasser MF, Preuss TM, Ma X, Zhao T, Hu X, Behrens TE (2008) The evolution of the arcuate fasciculus revealed with comparative DTI. Nat Neurosci 11:426–428, doi:10.1038/nn2072.
  55. Röder B, Rösler F, Hennighausen E, Näcker F (1996) Event-related potentials during auditory and somatosensory discrimination in sighted and blind human subjects. Cogn Brain Res 4:77–93, doi:10.1016/0926-6410(96)00024-9.
  56. Röder B, Rösler F, Neville HJ (2000) Event-related potentials during auditory language processing in congenitally blind and sighted people. Neuropsychologia 38:1482–1502, doi:10.1016/S0028-3932(00)00057-9.
  57. Röder B, Stock O, Bien S, Neville H, Rösler F (2002) Speech processing activates visual cortex in congenitally blind humans. Eur J Neurosci 16:930–936, doi:10.1046/j.1460-9568.2002.02147.x.
  58. Rösler F, Röder B, Heil M, Hennighausen E (1993) Topographic differences of slow event-related brain potentials in blind and sighted adult human subjects during haptic mental rotation. Brain Res Cogn Brain Res 1:145–159, doi:10.1016/0926-6410(93)90022-W.
  59. Sadato N, Pascual-Leone A, Grafman J, Ibañez V, Deiber MP, Dold G, Hallett M (1996) Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380:526–528, doi:10.1038/380526a0.
  60. Sadato N, Pascual-Leone A, Grafman J, Deiber MP, Ibañez V, Hallett M (1998) Neural networks for Braille reading by the blind. Brain 121:1213–1229, doi:10.1093/brain/121.7.1213.
  61. Saenz M, Lewis LB, Huth AG, Fine I, Koch C (2008) Visual motion area MT+/V5 responds to auditory motion in human sight-recovery subjects. J Neurosci 28:5141–5148, doi:10.1523/JNEUROSCI.0803-08.2008.
  62. Smith SM, Jenkinson M, Woolrich MW, Beckmann CF, Behrens TEJ, Johansen-Berg H, Bannister PR, De Luca M, Drobnjak I, Flitney DE, Niazy RK, Saunders J, Vickers J, Zhang Y, De Stefano N, Brady JM, Matthews PM (2004) Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Suppl 1):S208–S219.
  63. Srihasam K, Vincent JL, Livingstone MS (2014) Novel domain formation reveals proto-architecture in inferotemporal cortex. Nat Neurosci 17:1776–1783, doi:10.1038/nn.3855.
  64. Staub A (2010) Eye movements and processing difficulty in object relative clauses. Cognition 116:71–86, doi:10.1016/j.cognition.2010.04.002.
  65. Terrace HS, Petitto LA, Sanders RJ, Bever TG (1979) Can an ape create a sentence? Science 206:891–902, doi:10.1126/science.504995.
  66. Tomasello M (2009) The cultural origins of human cognition. Cambridge, MA: Harvard UP.
  67. Tootell RB, Hadjikhani N (2001) Where is “dorsal V4” in human visual cortex? Retinotopic, topographic and functional evidence. Cereb Cortex 11:298–311, doi:10.1093/cercor/11.4.298.
  68. Van Essen DC (2005) A Population-Average, Landmark- and Surface-based (PALS) atlas of human cerebral cortex. Neuroimage 28:635–662, doi:10.1016/j.neuroimage.2005.06.058.
  69. Vargha-Khadem F, Gadian DG, Copp A, Mishkin M (2005) FOXP2 and the neuroanatomy of speech and language. Nat Rev Neurosci 6:131–138, doi:10.1038/nrn1605.
  70. ↵
    1. Watkins KE,
    2. Cowey A,
    3. Alexander I,
    4. Filippini N,
    5. Kennedy JM,
    6. Smith SM,
    7. Ragge N,
    8. Bridge H
    (2012) Language networks in anophthalmia: maintained hierarchy of processing in “visual” cortex. Brain 135:1566–1577, doi:10.1093/brain/aws067, pmid:22427328.
    OpenUrlAbstract/FREE Full Text
  71. ↵
    1. Weeks R,
    2. Horwitz B,
    3. Aziz-Sultan A,
    4. Tian B,
    5. Wessinger CM,
    6. Cohen LG,
    7. Hallett M,
    8. Rauschecker JP
    (2000) A positron emission tomographic study of auditory localization in the congenitally blind. J Neurosci 20:2664–2672, pmid:10729347.
    OpenUrlAbstract/FREE Full Text
  72. ↵
    1. Wolbers T,
    2. Zahorik P,
    3. Giudice NA
    (2011) Decoding the direction of auditory motion in blind humans. Neuroimage 56:681–687, doi:10.1016/j.neuroimage.2010.04.266, pmid:20451630.
    OpenUrlCrossRefPubMed
“Visual” Cortex of Congenitally Blind Adults Responds to Syntactic Movement
Connor Lane, Shipra Kanjlia, Akira Omaki, Marina Bedny
Journal of Neuroscience 16 September 2015, 35 (37) 12859-12868; DOI: 10.1523/JNEUROSCI.1256-15.2015

Keywords

  • blindness
  • language
  • plasticity
  • syntax
