Research Articles, Behavioral/Cognitive

Domain Selectivity in the Parahippocampal Gyrus Is Predicted by the Same Structural Connectivity Patterns in Blind and Sighted Individuals

Xiaoying Wang, Chenxi He, Marius V. Peelen, Suyu Zhong, Gaolang Gong, Alfonso Caramazza and Yanchao Bi
Journal of Neuroscience 3 May 2017, 37 (18) 4705-4716; DOI: https://doi.org/10.1523/JNEUROSCI.3622-16.2017
Author affiliations: Xiaoying Wang (1,2), Chenxi He (1), Marius V. Peelen (3), Suyu Zhong (1,2), Gaolang Gong (1,2), Alfonso Caramazza (3,4), and Yanchao Bi (1,2).

1. State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China
2. Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, People's Republic of China
3. Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
4. Department of Psychology, Harvard University, Cambridge, Massachusetts 02138

Abstract

Human ventral occipital temporal cortex contains clusters of neurons that show domain-preferring responses during visual perception. Recent studies have reported that some of these clusters show surprisingly similar domain selectivity in congenitally blind participants performing nonvisual tasks. An important open question is whether these functional similarities are driven by similar innate connections in blind and sighted groups. Here we addressed this question focusing on the parahippocampal gyrus (PHG), a region that is selective for large objects and scenes. Based on the assumption that patterns of long-range connectivity shape local computation, we examined whether domain selectivity in PHG is driven by similar structural connectivity patterns in the two populations. Multiple regression models were built to predict the selectivity of PHG voxels for large human-made objects from white matter (WM) connectivity patterns in both groups. These models were then tested using independent data from participants with similar visual experience (two sighted groups) and using data from participants with different visual experience (blind and sighted groups). Strikingly, the WM-based predictions between blind and sighted groups were as successful as predictions between two independent sighted groups. That is, the functional selectivity for large objects of a PHG voxel in a blind participant could be accurately predicted by its WM pattern using the connection-to-function model built from the sighted group data, and vice versa. Regions that significantly predicted PHG selectivity were located in temporal and frontal cortices in both sighted and blind populations. These results show that the large-scale network driving domain selectivity in PHG is independent of vision.

SIGNIFICANCE STATEMENT Recent studies have reported intriguingly similar domain selectivity in sighted and congenitally blind individuals in regions within the ventral visual cortex. To examine whether these similarities originate from similar innate connectional roots, we investigated whether the domain selectivity in one population could be predicted by the structural connectivity pattern of the other. We found that the selectivity for large objects of a parahippocampal gyrus (PHG) voxel in a blind participant could be predicted by its structural connectivity pattern using the connection-to-function model built from the sighted group data, and vice versa. These results reveal that the structural connectivity underlying domain selectivity in the PHG is independent of visual experience, providing evidence for nonvisual representations in this region.

  • domain selectivity
  • PPA
  • ventral visual cortex
  • visual experience
  • white matter

Introduction

The ventral occipital temporal cortex (VOTC), the higher-order “what” visual pathway, contains clusters showing differential sensitivity to visual stimuli of various domains of objects, such as scenes, animals, or tools, with a general broad animate–inanimate distinction (Kriegeskorte et al., 2008; Kanwisher, 2010; Konkle and Caramazza, 2013; Grill-Spector and Weiner, 2014). A recent wave of studies reported similar domain selectivity across input modalities in sighted and congenitally blind individuals, including selectivity for scenes, bodies, words, tools, and large human-made objects (Mahon et al., 2009; Wolbers et al., 2011; Striem-Amit et al., 2012; He et al., 2013; Peelen et al., 2013; Kitada et al., 2014; Striem-Amit and Amedi, 2014; for review, see Ricciardi et al., 2014; Bi et al., 2016). These results indicate that visual experience is not necessary for producing selectivity to object domains in the VOTC, which is in contrast to the common view that such selectivity is driven by visual experience (Ricciardi et al., 2014).

What drives the similar domain selectivity in blind and sighted individuals? One way to address the nature of the similar domain preference across blind and sighted groups is to examine whether it is shaped by similar innate structural connectional constraints. It has been hypothesized that the connectivity pattern of a region determines what is represented/computed there (Passingham et al., 2002; Mahon and Caramazza, 2011). Recently, it has been found that the extent of the domain selectivity of a voxel can be predicted from its white matter (WM) connectivity pattern, especially in regions where the domain selectivity is strong (Saygin et al., 2011, 2016; Osher et al., 2016). Taking advantage of this approach, we tested whether the domain selectivity in blind and sighted individuals in one particular VOTC region—the parahippocampal gyrus (PHG)—is shaped by similar connectional constraints. Among the various subregions of the VOTC, the PHG exhibits particularly robust multimodal selectivity for scenes, buildings, and large objects (for review, see Ricciardi et al., 2014; Bi et al., 2016). He et al. (2013) reported that when blind and sighted subjects listened to the names of large objects such as "couch" or "refrigerator", in contrast to small tools and animals, the PHG was significantly more strongly activated, and this activation overlapped well with the parahippocampal place area (Epstein and Kanwisher, 1998; Epstein, 2008). These findings were replicated in the study by Wang et al. (2015) and are consistent with the findings in the study by Wolbers et al. (2011), who reported that in both blind and sighted subjects, haptic exploration of Lego scenes elicited stronger activation in the PHG than did the exploration of Lego objects. We thus used the selectivity for large objects in the PHG as a test case for the effect of visual experience on the mapping between white matter connectivity and regional functional preferences.

We collected task-based functional responses and diffusion-weighted imaging (DWI) data from three groups of participants: a congenitally blind group and two independent sighted groups. The key question is whether the accuracy of the sighted–blind prediction (i.e., building the connection-to-function prediction model in one group and testing it in the other) is comparable to that of the predictions between the two sighted groups, and whether these predictions rely on similar connections.

Materials and Methods

Participants

A group of congenitally blind individuals and two independent groups of sighted individuals underwent both the functional MRI and the DWI sessions.

For the congenitally blind group, 16 individuals participated in the experiments, but 2 were excluded from the data analysis because of unexpected pre-existing brain abnormalities discovered by MRI. The remaining 14 blind participants were between the ages of 26 and 60 years (mean age, 44 ± 10 years; 7 females); all were right handed and had completed 9–15 years of education (mean, 11 ± 2 years of education). All participants were from the subject cohort in our previous studies (He et al., 2013; Peelen et al., 2013). All blind participants reported that they had been blind since birth. Because medical records concerning the onset of blindness were not available for most participants, it cannot be ruled out that some of the participants may have had vision very early in life; however, none of them remembered ever having been able to visually recognize patterns. Ten blind subjects were examined by an ophthalmologist to confirm their blindness and to establish the causes, where possible. Five blind participants reported having had faint light perception in the past, and eight reported having faint light perception at the time of testing.

For the sighted groups, the S1 group consisted of 15 college students between the ages of 18 and 25 years (mean age, 22 ± 2 years; 10 females), and the S2 group included 7 sighted adults (2 females) whose age and years of education were matched with those of the blind participants (mean age, 42 ± 10 years; age range, 26–54 years; compared with the blind group: t(19) = −0.64, p > 0.47; mean length of education, 12 ± 2 years; education range, 9–15 years; compared with the blind group: t(19) = 0.25, p > 0.80). All sighted participants were right handed.

All participants were native Mandarin Chinese speakers. None had experienced psychiatric or neurological disorders, had ever sustained a head injury, or were receiving treatment with any psychoactive medication. All participants completed a written informed consent form that was approved by the institutional review board of Beijing Normal University Imaging Center for Brain Research and received monetary compensation.

Data acquisition parameters

All data were collected with a 3 T Siemens Trio Tim scanner at the BNU MRI center, using identical parameters across all participants.

A high-resolution 3D structural dataset was acquired by using a 3D-MPRAGE sequence in the sagittal plane (TR, 2530 ms; TE, 3.39 ms; flip angle, 7°; matrix size, 256 × 256; 144 slices; voxel size, 1.33 × 1 × 1.33 mm; acquisition time, 8.07 min).

BOLD signals were measured with an EPI sequence (TR, 2000 ms; TE, 30 ms; flip angle, 90°; matrix size, 64 × 64; voxel size, 3.125 × 3.125 × 4 mm; interslice distance, 4.6 mm; 33 slices; axial slice orientation).

DWI data were collected through two separate scans with identical parameters (75 slices; TR, 10,000 ms; TE, 91 ms; FOV, 256 × 256 mm; flip angle, 90°; voxel size, 2 × 2 × 2 mm; matrix size, 128 × 128; diffusion weighting isotropically distributed along 30 directions; the first volume was designated the b0 volume, and the b value of other volumes was 1000 s/mm2). The two separate scans were averaged in the subsequent analysis.

DWI tractography

We first parcellated each participant's 3D structural T1 image into 90 cerebral regions according to the Automatic Anatomical Labeling (AAL) atlas (Tzourio-Mazoyer et al., 2002) using the FMRIB Software Library (FSL; RRID:SCR_002823; http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/; Smith et al., 2004; Jenkinson et al., 2012). After parcellation, each participant's T1 image was registered to his or her DWI image.

Previous studies have reported that the bilateral PHG exhibited “multimodal” functional selectivity for large objects and scenes across sighted and congenitally blind populations (Wolbers et al., 2011; He et al., 2013). To directly test whether a shared algorithm underlies the functional selectivity profile of the PHG between populations with and without visual experience, the left and right PHG parcels were defined as seed regions of interest (ROIs). Probabilistic tractography was conducted from each voxel in either of the two seed ROIs (i.e., the left or the right PHG), targeting the remaining 89 AAL parcels (including the contralateral PHG parcel). For the blind group and the age-matched S2 group, the probabilistic diffusion tractography was performed with 25,000 streamline samples per seed voxel using FSL-probtrackx (Behrens et al., 2003). For the S1 group, the number of streamline samples was set to 5000 (Osher et al., 2016). For each voxel within the seed ROI, the probability of its connection to each of the 89 target parcels was calculated. Thus, a vector of 89 connection probabilities was obtained for each seed voxel to reflect its structural connectivity pattern.
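
To make this construction concrete, a minimal sketch in Python is shown below; it is not the original pipeline, and the file name, sample count, and array shape are illustrative assumptions. It converts probtrackx-style seed-to-target streamline counts into the 89-dimensional connection-probability vector of each seed voxel.

```python
# Minimal sketch (not the original pipeline): convert probtrackx-style
# seed-to-target streamline counts into per-voxel connection probabilities.
# The file name and array shape are illustrative assumptions.
import numpy as np

N_SAMPLES = 25000   # streamline samples per seed voxel (5000 for the S1 group)
N_TARGETS = 89      # remaining AAL parcels

# counts[v, t] = streamlines from PHG seed voxel v that reached target parcel t
counts = np.load("phg_seed_to_target_counts.npy")   # shape (n_voxels, 89)
assert counts.shape[1] == N_TARGETS

# Connection probability = fraction of streamline samples reaching each target,
# yielding the 89-dimensional connectivity vector for every seed voxel.
conn_prob = counts / N_SAMPLES
```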

Procedures and analyses for fMRI experiments

The functional imaging data from the congenitally blind group and the age-matched S2 group were collected in a previous study (He et al., 2013) and were reanalyzed here. Data from the S1 group were collected specifically for the present study using the same procedures and scanning parameters. In the scanner, all subjects participated in an auditory object size judgment experiment. The sighted groups additionally participated in a passive picture-viewing experiment.

The same objects were used for the auditory and visual experiments. Stimuli were from three object categories—30 large human-made objects, 30 tools, and 30 animals. Large objects included furniture, appliances, cars, buildings, and other common large, nonmanipulable objects; tools included kitchen utensils, farm implements, and common household tools; animals included mammals, birds, insects, and reptiles (for detailed information, see He et al., 2013).

In the auditory experiment, stimuli were presented in blocks of five words, all from the same domain. Participants were instructed to think about the size of the first object and to compare the subsequent objects to the first one. If all five objects were comparable in size, participants responded by pressing a button with the left index finger; if at least one of the last four objects was different in size from the first one, participants pressed a button with the right index finger. Participants were asked to respond immediately after a response cue (auditory tone; duration, 200 ms) presented after the offset of the last item of a block. For each block, each item lasted 2 s, and the last item was followed by a 4 s silent period for response. Thus, each block lasted 14 s. Each block was followed by a 14 s period of silence. Participants took four runs of the size judgment task. Each run lasted 4 min and 40 s and contained 10 blocks. The first block of each run was for practice and thus was excluded from data analysis, leaving a total of 36 blocks (12 repetitions per domain). The order of blocks was pseudorandomized with the restriction that no two consecutive blocks were from the same domain.

In the picture-viewing experiment, sighted participants passively viewed grayscale photographs of objects corresponding to the items used in the auditory experiment. The pictures were presented sequentially (667 ms; inter-stimulus interval = 0) in blocks of 30 items from the same domain (i.e., domain block). Each domain block lasted 20 s, followed by a 20 s fixation period. Each domain block was repeated four times in pseudorandomized order, with the restriction that no two consecutive blocks were from the same domain. The single-run passive picture-viewing task began with a 10 s fixation and lasted 8 min and 10 s in total.

fMRI data were analyzed using BrainVoyager QX version 2.3 (RRID:SCR_013057). The first 28 s in each run of the auditory size judgment task (the practice block) and 10 s in the run of the passive picture-viewing task (fixation) were discarded. Preprocessing of the functional data included 3D motion correction with respect to the first (remaining) volume of the run scanned closest to the 3D structural data for each experiment, spatial smoothing (Gaussian filter, 6 mm full-width at half-maximum), and temporal filtering [high-pass (general linear model [GLM]-Fourier) filter: 0.006 Hz for the one-back picture-viewing experiment and 0.003 Hz for other experiments]. For each participant, functional data were then registered to her/his own anatomical data. For the connectivity-based prediction of functional response (see below), the functional and anatomical volumes were kept in each individual's native space for further analyses. Meanwhile, for predictions based on group-average responses, the functional and anatomical volumes were transformed into the standard Talairach space (Talairach and Tournoux, 1988) to generate group-average activation maps. We used a GLM to fit the preprocessed functional time series. Three predictors of interest corresponding to the three object domain conditions and six predictors of no interest corresponding to six head motion parameters were included. For each participant in either fMRI experiment, t statistic images were generated for the contrast of responses to large objects versus the average of the responses to tools and animals. The t value in each voxel indicated the degree of preference of the voxel for large objects relative to the other two corresponding object categories.

The t statistic images were resampled into voxels of 2 × 2 × 2 mm in size and registered to the corresponding individual's DWI images using FSL (Smith et al., 2004). As in the studies by Saygin et al. (2011) and Osher et al. (2016), we were interested in predicting relative activation values that were independent of task-specific parameters (e.g., degrees of freedom), so we standardized the t values across voxels within each ROI into z-scores for each participant. Specifically, the mean value of the t statistics across all voxels in the corresponding ROI was subtracted from the t value of each voxel, and the difference was then divided by the SD.
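
As a minimal illustration (variable and function names are ours), this within-ROI standardization can be written as:

```python
import numpy as np

def standardize_within_roi(t_values):
    """Z-score one participant's voxelwise t statistics within an ROI:
    subtract the ROI mean, then divide by the ROI standard deviation."""
    t = np.asarray(t_values, dtype=float)
    return (t - t.mean()) / t.std()

# Example: three PHG voxels' t values for large objects > (animals + tools)
print(standardize_within_roi([2.1, 0.4, -0.8]))
```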

Connectivity-based prediction using linear regression

Largely following Saygin et al. (2011) and Osher et al. (2016), the relationship between the functional selectivity of each voxel and its structural connectivity pattern was pursued through linear regression. The left and right PHGs were analyzed separately.

Within-group modeling.

Within-group modeling was first conducted to test whether each PHG voxel's selectivity for large objects could be predicted from its structural connectivity pattern in the experiments with the blind group and the two sighted groups (Fig. 1). Analyzing each participant group in either experiment independently, we adopted a leave-one-subject-out cross-validation (LOOCV) routine to train the linear regression model to learn the association between the functional selectivity of each voxel (i.e., the standardized t value for the contrast of large objects vs the average of tools and animals) and its structural connectivity pattern (i.e., the vector of connection probabilities for the remaining 89 target parcels). Specifically, we concatenated data from all voxels of N-1 participants to train the model. A stepwise regression algorithm, which begins with an initial model and then adds or removes predictors according to whether their inclusion or exclusion significantly changes the explanatory power of the model, was used to determine the model coefficients. We then applied the regression coefficients obtained from training to predict the remaining participant's functional selectivity for each PHG voxel, given its connectivity pattern (Fig. 1A,B). Only the regression coefficients of the predictors identified as significant (p < 0.05) by the stepwise regression algorithm were applied during prediction; the coefficients of unselected predictors were set to zero. Note that before modeling, each feature (the connection probability between the seed voxel and a target parcel) was standardized across voxels within each subject, matching the standardization of the functional selectivity t values and placing the model coefficients of different predictors on the same scale. This routine was repeated for each participant in each group from each experiment.
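
The sketch below outlines this LOOCV routine in Python. It is an approximation rather than the authors' code: scikit-learn's forward SequentialFeatureSelector, which selects a fixed number of predictors by cross-validated score, stands in for the p-value-based stepwise procedure described above, and all names and shapes are illustrative. Prediction error is scored with the mean absolute error (MAE) used throughout the paper.

```python
# Schematic LOOCV routine (an approximation, not the authors' code).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

def loocv_predict(conn, sel, subject_ids, n_features=10):
    """conn: (n_voxels_total, 89) standardized connection probabilities;
    sel: (n_voxels_total,) standardized selectivity (z-scored t values);
    subject_ids: (n_voxels_total,) subject label of each voxel.
    Returns each held-out subject's mean absolute error (MAE)."""
    mae = {}
    for test_subj in np.unique(subject_ids):
        test = subject_ids == test_subj
        # Forward selection of a fixed number of predictors stands in for
        # the p-value-based stepwise criterion described in the text.
        selector = SequentialFeatureSelector(
            LinearRegression(), n_features_to_select=n_features,
            direction="forward")
        selector.fit(conn[~test], sel[~test])
        model = LinearRegression().fit(
            selector.transform(conn[~test]), sel[~test])
        pred = model.predict(selector.transform(conn[test]))
        mae[test_subj] = np.mean(np.abs(pred - sel[test]))
    return mae
```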

Between-group modeling.

The critical analyses are the between-group predictions (Fig. 1C), which were performed to investigate whether the connectivity constraints on the local function of the PHG follow a similar underlying algorithm in sighted and congenitally blind participants. We first performed predictions between the sighted and congenitally blind populations by building the model on data from all participants of one population in a particular experiment (e.g., sighted students or sighted controls in the auditory or the visual experiment) and applying the obtained model to the other population (e.g., the congenitally blind group in the auditory experiment), and vice versa. Predictions between the two sighted groups were conducted in the same manner (Fig. 1C). The same stepwise linear regression algorithm was used for modeling as in the within-group modeling.

To test whether visual experience affects the structural connectional constraints on the functional selectivity of the PHG, we compared the performances of connectivity-based predictions between the congenitally blind and the two sighted groups (i.e., between-blind-sighted) with those between the two sighted groups (i.e., between-S1-S2) using bootstrap resampling. The bootstrap resampling method (Efron and Tibshirani, 1993; Kriegeskorte et al., 2008) was chosen because it does not require the data being tested to meet specific distributional assumptions. Specifically, we bootstrap resampled the between-blind-sighted and between-S1-S2 prediction performances, recomputing the mean difference between them. The bootstrap resampling was repeated 5000 times using SPSS Statistics Software version 19.0 (IBM; RRID:SCR_002865).
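
A straightforward way to implement such a bootstrap test is sketched below (the authors used SPSS; the function and variable names are ours). Per-participant MAEs from the two prediction types are resampled with replacement 5000 times, and the mean difference is recomputed on each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_diff(maes_a, maes_b, n_boot=5000):
    """Bootstrap the difference in mean MAE between two independent sets
    of per-participant prediction errors."""
    a0, b0 = np.asarray(maes_a, float), np.asarray(maes_b, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        a = rng.choice(a0, size=a0.size, replace=True)
        b = rng.choice(b0, size=b0.size, replace=True)
        diffs[i] = a.mean() - b.mean()
    # Two-tailed p value approximated by how often the resampled
    # difference falls on either side of zero.
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return diffs.mean(), p
```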

For all predictions, the performance was measured using the mean absolute difference [mean absolute error (MAE)] between the predicted t values and the actual t values across voxels within the ROI parcel separately for each participant.

Characterizing important structural connections shared across models.

We investigated the specific structural connections that are important in predicting the selectivity for large objects in the PHG across all models (i.e., blind auditory, sighted auditory, sighted visual). Instead of using LOOCV, which was used in the within-group predictions, here we built each model using data from all participants in the corresponding experiment. To identify the critical predictors for each model, the correlation matrix of the structural connections was first computed, and the model coefficients were left multiplied by this correlation matrix. This was done to avoid omitting important predictors in cases of high covariance between different connections (Haufe et al., 2014). The newly generated model coefficients were then ranked separately for positive and negative predictors, and the 10 with the highest coefficients of each sign were considered important predictors. The predictors shared across all models were then identified. The sighted models were built from the S1 group data (N = 15) to match the sample size of the blind group (N = 14).
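
In matrix form, this correction amounts to left-multiplying the fitted coefficient vector by the correlation matrix of the connection features; a minimal sketch, with illustrative names:

```python
# Minimal sketch of the coefficient correction (cf. Haufe et al., 2014):
# propagate fitted weights through the feature correlation matrix so that
# predictors correlated with selected connections are not overlooked.
import numpy as np

def importance_pattern(X, w):
    """X: (n_voxels, 89) standardized connection features used for fitting;
    w: (89,) regression coefficients (zero for unselected predictors)."""
    R = np.corrcoef(X, rowvar=False)   # (89, 89) feature correlation matrix
    return R @ w                       # corrected coefficients for ranking
```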

Comparison with group-average models

We tested the validity of the connectivity model by comparing its performance to a corresponding group-average benchmark model (Saygin et al., 2011; Osher et al., 2016). That is, we evaluated whether the prediction accuracy for the functional selectivity of a voxel for large objects according to the structural connectivity pattern is greater than that from the group-average model results.

For within-group predictions, the group-average model was established via the same LOOCV routine as in the connectivity-based modeling. Specifically, each participant's functional data were spatially normalized into Talairach space. A random-effect test was performed using BrainVoyager on contrast images for the large objects > average of animals and tools contrast from N-1 participants, and the resulting t map was then registered to the remaining participant's DWI space, generating a prediction map for that participant. MAE was then computed between the group-average model predicted t values and the actual t values (both standardized) within the PHG parcel separately for each participant. We compared the MAEs of the connectivity model with those of the group-average model across participants using the bootstrap resampling method, which was repeated 5000 times on the paired MAEs of the connectivity model and those of the group-average benchmark model using SPSS Statistics Software version 19.0 (IBM).

For between-group predictions, the group average-based predictions within the sighted visual experiments (collapsing the two sighted groups) were taken as the benchmark model for all types of between-group connectivity-based predictions, given that the within-visual group-average predictions exhibited the best performances (i.e., lowest MAEs) across all benchmark models and thus were considered to be the most stringent baseline. Specifically, the MAEs of connectivity-based predictions between the auditory experiment with the blind subjects and the auditory experiments with the two sighted groups, the MAEs between the blind auditory experiment and visual experiments of the two sighted groups, and the MAEs between the two sighted visual experiments (collapsing all prediction combinations and directions) were compared separately with the MAEs of group average-based within-group predictions using bootstrap resampling for independent samples.

Connectivity-based predictions using nonlinear support vector regression

Considering that the relationship between the function and the connectivity pattern may be nonlinear, we explored the potential nonlinear relationships by applying a nonlinear support vector regression (SVR) algorithm to the connectivity-based prediction of the functional selectivity of each voxel. Specifically, we examined whether the prediction performances could be improved in general and, more critically, whether differences between within-population (i.e., between the two sighted groups) and between-population (i.e., between blind and sighted populations) decoding emerged in the bilateral PHG. These predictions followed procedures similar to those for linear regression predictions, except that we trained the connectivity model using LIBSVM (Chang and Lin, 2011) with the nonlinear radial basis function kernel (http://www.csie.ntu.edu.tw/∼cjlin/papers/guide/guide.pdf).
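
For illustration, an equivalent model can be set up with scikit-learn, whose SVR is also backed by LIBSVM; the hyperparameter values and the synthetic placeholder data in this sketch are ours, not the paper's.

```python
# Illustrative nonlinear model: epsilon-SVR with an RBF kernel.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
conn_train = rng.standard_normal((500, 89))   # standardized connectivity
sel_train = rng.standard_normal(500)          # standardized selectivity
conn_test = rng.standard_normal((100, 89))

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma="scale")
svr.fit(conn_train, sel_train)
pred = svr.predict(conn_test)                 # predicted selectivity values
```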

Validation analyses

We evaluated our results using additional measures of model performance and further examined whether our main results could be explained by anatomical geometry by making predictions based on distance models. The procedures are described below.

Model performances measured by correlations.

In addition to MAEs, we also used Pearson correlations between predicted and actual t values across all voxels in the PHG parcels to measure the prediction performances. We computed the Pearson r values for the predictions using the linear connectivity models. Similar to the main analyses, the performances of connectivity-based predictions between the congenitally blind and the two sighted groups were compared with those between the two sighted visual experiments to test whether visual experience affects the pattern of structural connectional constraints on the functional selectivity of the PHG. Similar bootstrap resampling methods were used for statistical comparisons. The r values were Fisher transformed before comparison.
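
A minimal sketch of this correlation-based measure, with the Fisher transform applied before comparison (names are ours):

```python
import numpy as np

def fisher_z_performance(pred, actual):
    """Pearson correlation between predicted and actual selectivity across
    ROI voxels, Fisher r-to-z transformed before group comparisons."""
    r = np.corrcoef(pred, actual)[0, 1]   # Pearson r across ROI voxels
    return np.arctanh(r)                  # Fisher transform
```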

Comparisons to distance models.

Considering that the connection probabilities might be biased by the distance between the seed voxel and target regions (Hilgetag and Kaiser, 2004; Sporns and Zwi, 2004), we further tested whether the results in the connection model could be explained by covariations between local function and anatomical geometry. We computed prediction accuracies based on distance models and compared the performances of the connectivity models using bootstrap resampling. The distance models were identical to the connectivity models except that the predictors were the Euclidean distance between the PHG voxels and the center of mass of the target brain parcel. Similar to the connectivity models, features were first standardized across voxels within each participant before modeling.
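
The sketch below illustrates how such distance predictors could be constructed, assuming an AAL label volume resampled to the participant's DWI space; all names are illustrative.

```python
# Sketch of the distance-model predictors: Euclidean distance from each
# PHG seed voxel to the center of mass of each target parcel.
import numpy as np
from scipy import ndimage

def distance_features(seed_voxels, parcel_vol, target_labels):
    """seed_voxels: (n_voxels, 3) voxel coordinates of the PHG seeds;
    parcel_vol: 3-D integer AAL label volume; target_labels: 89 labels.
    Returns (n_voxels, 89) distances, standardized across voxels."""
    centers = np.array(ndimage.center_of_mass(
        np.ones_like(parcel_vol), labels=parcel_vol, index=target_labels))
    # Distances are in voxel units (proportional to mm for isotropic voxels);
    # the scale is removed by the per-participant standardization below.
    d = np.linalg.norm(seed_voxels[:, None, :] - centers[None, :, :], axis=2)
    return (d - d.mean(axis=0)) / d.std(axis=0)
```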

Results

The bilateral PHG in the AAL template (Tzourio-Mazoyer et al., 2002), containing clusters showing selectivity for large objects, were chosen as ROIs for analyses designed to examine the effect of visual experience on WM connectivity and its mapping with functional selectivity. We largely followed the procedures used in the studies by Saygin et al. (2011) and Osher et al. (2016) to predict functional selectivity from WM connectivity with linear regression models. We also used nonlinear SVR models to capture potential effects in the structure-to-function mapping that are nonlinear, and obtained results that are largely similar to those found using linear regression models. Furthermore, we validated our main results by comparing them to distance models and using additional model performance measurement. The critical focus here is on the degree to which visual experience influences the pattern of WM connectional constraints on functional selectivity [i.e., the comparison of prediction accuracies between groups that shared (two independent sighted groups) or differed in (sighted vs blind groups) visual experience (Fig. 1, analytical scheme)].

Figure 1.

Analytical scheme. A, B, The mapping algorithm f(x) between functional selectivity and WM connectivity pattern was established through A and tested through B. For within-group predictions, an LOOCV procedure was followed, with the model built based on data from N-1 subjects and tested on the data from the remaining subject from the same group. For between-group predictions, the model was built based on data from all subjects from one group and tested on each subject from a different group. C, Critical comparisons of between-group connectivity-based predictions were performed to test the effect of visual experience and visual input on the connectivity constraints of local functional selectivity.

WM connectivity of PHG voxels can predict functional selectivity within both sighted and blind groups

The structural connectivity pattern of each PHG voxel was defined as a vector of its connection probabilities with the 89 target AAL parcels, as obtained from probabilistic tractography. Adopting the LOOCV routine, we trained the linear regression model to predict the functional selectivity of each seed voxel for large objects (i.e., the response difference between large objects and the average of animals and tools measured as a t statistic) based on its whole-brain WM connectivity pattern. The absolute error between the t value predicted from the WM connectivity pattern and the actual t statistic was calculated per voxel, and the MAE was obtained by averaging all voxels in each PHG ROI for each participant to measure the performance of the model. For a benchmark model, predictions of the selectivity of PHG voxels for large objects were made based on group-average activations using the same LOOCV procedure following the studies by Saygin et al. (2011) and Osher et al. (2016) (for detailed information, see Materials and Methods). Bootstrap resampling was conducted between the MAEs of connectivity-based predictions and those of group average-based predictions across participants within each experiment (for sighted participants, the two independent groups were collapsed).

As presented in Table 1, for the left PHG, the WM connectivity-based models significantly outperformed the group-average benchmark models in predicting the large-object selectivity in all experiments with the blind and the sighted groups; for the right PHG, a similar pattern was observed. As an example, Figure 2 shows the voxelwise map of functional selectivity for large objects predicted by the WM connectivity, along with the actual functional selectivity maps in one blind participant (Fig. 2A) and one sighted participant (Fig. 2B). The values in these maps are normalized to the relative extent of functional selectivity across the voxels in each PHG ROI, such that the negative values (Fig. 2, cold-colored voxels) do not necessarily indicate a functional preference for animals and/or tools compared with large objects. These results suggest a significant effect of WM connection properties in determining the functional selectivity to large objects in bilateral PHG regions.

Table 1.

Comparison of within-group prediction performances between the linear regression connectivity model and the group-average model

Figure 2.

A, B, Actual and predicted fMRI activation to large objects > (animal + tools) in the bilateral PHG of one congenitally blind participant (A) and one sighted participant (B) from the S1 group. For each participant, actual and predicted activation images (t statistic values for large objects > animals + tools) were registered from the DWI structural image to the same participant's T1 structural scan and then registered and projected to the brain surface in the standard MNI space. Each heading indicates the experiment from which the prediction model was built. Activation projection was achieved using BrainNet Viewer (RRID:SCR_009446; Xia et al., 2013). Color scales are in standardized units.

To rule out the possibility of false-positive results due to the potential overfitting problem of the LOOCV method (Schreiber and Krekelberg, 2013; Osher et al., 2016), we further validated the within-group prediction results by training and testing models on independent groups. Specifically, for sighted participants, we built the model from connectional and functional data of the S1 group and tested it on each participant from the S2 group, and vice versa (see Between-group modeling in Materials and Methods), while for the blind group we performed a split-half analysis, building the model from half of the participants and testing it on the other half. Similar to the results obtained using the LOOCV method, bootstrap resampling showed significantly better performance, or trends toward better performance, for the connectivity-based predictions than for the corresponding group-average benchmark predictions in the bilateral PHG regions for both the visual (left PHG: mean difference = −0.049 ± 0.029, p < 0.059; right PHG: mean difference = −0.076 ± 0.023, p < 0.007) and auditory experiments (left PHG: mean difference = −0.097 ± 0.020, p < 0.001; right PHG: mean difference = −0.172 ± 0.030, p < 0.001) of sighted participants, and also in the auditory experiment of the blind group (left PHG: mean difference = −0.077 ± 0.032, p < 0.017; right PHG: mean difference = −0.164 ± 0.021, p < 0.001).

The performances of predictions between blind and sighted groups were as good as those of the predictions between the two sighted groups

To examine whether visual input influences the pattern of WM connectional constraints on functional selectivity, we compared the performances of between-blind-sighted connectivity-based predictions with those of the predictions between visual experiments of the two sighted groups using bootstrap resampling. We found that the MAEs of connectivity-based predictions between blind and sighted auditory experiments were not statistically different from those of predictions between visual experiments in the two sighted groups for both PHG ROIs (left PHG: mean difference = −0.008 ± 0.027, p = 0.776; right PHG: mean difference = 0.024 ± 0.03, p = 0.404; Fig. 3). A similar pattern was observed when connectivity-based predictions between blind auditory and sighted visual experiments were compared with those between the two sighted visual experiments (left PHG: mean difference = −0.019 ± 0.027, p = 0.484; right PHG: mean difference = 0.008 ± 0.029, p = 0.770; Fig. 3). The voxel-based functional selectivity maps for large objects in the bilateral PHG that were generated through blind-sighted predictions and S1-S2 predictions were visualized in Figure 2 for one blind participant (Fig. 2A) and one sighted student participant (Fig. 2B).

Figure 3.

Bar plots of between-group and between-modality connectivity-based prediction performances using linear regression models of the bilateral PHG. The y-axis indicates the MAE of the predictions. Error bars indicate the SE of the MAE. Asterisks and pound signs indicate the significance levels of differences between the connectivity-based predictions (gray) and the group-average benchmark-based predictions (dashed). *p < 0.05; #0.05 < p < 0.1.

Note that the sample sizes were not fully matched in the between-blind-sighted and between-S1-S2 comparison. We thus matched the sample sizes in these two types of predictions, performing between-blind-sighted predictions with the 14 blind participants and only the 7 participants from the S2 group. The results were the same (blind auditory-S2 auditory compared with S1-S2 visual: left PHG: 0.76 ± 0.013 vs 0.79 ± 0.023, mean difference = −0.026 ± 0.027, p = 0.352; right PHG: 0.77 ± 0.014 vs 0.79 ± 0.026, mean difference = −0.020 ± 0.029, p = 0.489; blind auditory-S2 visual compared with S1-S2 visual: left PHG: 0.75 ± 0.016 vs 0.79 ± 0.023, mean difference = −0.042 ± 0.029, p = 0.158; right PHG: 0.78 ± 0.017 vs 0.79 ± 0.026, mean difference = 0.018 ± 0.030, p = 0.535).

We also compared the performances of between-modality (sighted visual-sighted auditory) WM connectivity-based predictions with those of the connectivity-based predictions between the two sighted visual experiments. Comparable prediction performances were observed in the bilateral PHG for the between-modality predictions and predictions between the two sighted visual experiments (Fig. 3; between-modality vs between-visual: left PHG: mean difference = 0.015 ± 0.030, p = 0.646; right PHG: mean difference = 0.029 ± 0.033, p = 0.407).

In addition, we also validated the between-group connectivity models by comparing the performance of the between-group connectivity-based predictions with that of the group-average benchmark model. The group average-based predictions within the sighted visual experiments were taken as the benchmark model for all types of the between-group connectivity-based predictions. As shown in Figure 3, the MAEs of connectivity-based predictions between the visual experiments of the two sighted groups were significantly lower than those of the within-visual benchmark model in bilateral PHG ROIs (connectivity vs benchmark: left PHG: mean difference = −0.066 ± 0.033, p < 0.04; right PHG: mean difference = −0.098 ± 0.030, p < 0.003; one-tailed test). Performances significantly better than the benchmark were also observed for the connectivity-based predictions between the blind auditory and the two sighted visual experiments (connectivity vs benchmark: left PHG: mean difference = −0.085 ± 0.050, p < 0.05; right PHG: mean difference = −0.089 ± 0.044, p < 0.03; one-tailed test). Trends of better performances of connectivity model compared with the group-average benchmark model were observed for the predictions between the blind auditory and the two sighted auditory experiments (connectivity vs benchmark: left PHG: mean difference = −0.074 ± 0.049, p < 0.08; right PHG: mean difference = −0.073 ± 0.048, p < 0.07; one-tailed test) as well as for the between-modality predictions in the two sighted groups (connectivity vs benchmark: left PHG: mean difference = −0.052 ± 0.051, p < 0.16; right PHG: mean difference = −0.069 ± 0.052, p < 0.10; one-tailed test).

We also performed the above analyses in more restricted parahippocampal functional ROIs, which showed selectivity to large nonmanipulable objects (uncorrected p < 0.05) across the blind auditory, sighted auditory, and sighted visual experiments. These ROIs were considered as they exhibited multimodal domain selectivity and thus were more directly related to our hypothesis. The same pattern of results was observed when using these multimodal large-object-selective parahippocampal ROIs as seeds.

Characterizing critical WM connectional constraints of the selectivity of the PHG for large objects

We further characterized the WM connectional constraints on the selectivity of the PHG for large objects that are shared by the blind and sighted groups. Taking the potential collinearity of different predictors into consideration (see Materials and Methods), we identified the positive and negative predictors with the 10 highest-ranked model coefficients, separately for the models built from the auditory experiment in the blind group and the visual and auditory experiments in the S1 group (Fig. 4, Table 2). The connections with top-ranked coefficients tended to be consistent across the three models, as follows: for the left PHG, the top positive predictors in all models were its connections with the lingual, hippocampus, and inferior temporal cortices in the left hemisphere; the top negative predictors were the connections with the left superior and middle part of the temporal pole, the orbital part of the left inferior frontal cortex, the left amygdala, and the bilateral olfactory cortices; and for the right PHG, the top positive predictors across all models included its connections with the fusiform gyrus, lingual gyrus, hippocampus, and the precuneus in the right hemisphere.

Figure 4.

Important connection predictors for the blind auditory, sighted auditory, and sighted visual experiments. The connectivity model coefficients of the top 10 important predictors were projected onto the brain for the bilateral PHG. Warm colors indicate positive model weights, and cold colors indicate negative weights.

Table 2.

List of important predictors

Connectivity-based predictions using nonlinear SVR

Thus far, we have used linear models throughout, based on the procedures in the studies by Saygin et al. (2011) and Osher et al. (2016). The advantage of a linear model is that the predictor coefficients are mathematically interpretable, such that we can identify the structural connections that contribute most to predicting local function. Nonetheless, the relationship between function and connectivity pattern may well be nonlinear. We thus also explored potential nonlinear relationships by applying a nonlinear SVR model in the connectivity-based predictions of the functional selectivity of each ROI.

First, we examined whether the nonlinear SVR model made significantly better predictions than the group-average benchmark model. As presented in Table 3, all of the within-experiment predictions based on the nonlinear SVR connectivity model significantly outperformed those based on the corresponding group-average model in the bilateral PHG ROIs. Validation analyses that trained and tested models using independent groups confirmed the similar pattern of results for both ROIs (connectivity vs group-average benchmark: p values <0.034).

Table 3.

Comparison of within-group prediction performances between nonlinear SVR connectivity model and group-average benchmark model

Between-group (as well as between-modality) connectivity-based predictions using nonlinear SVR were then conducted (Table 4). Similar to the results obtained using linear regression, the prediction performances between blind and sighted auditory experiments were comparable to those of the predictions between visual experiments of the two sighted groups for bilateral PHG ROIs (blind auditory-sighted auditory compared with S1-S2 visual: left PHG: mean difference = −0.001 ± 0.027, p = 0.961; right PHG: mean difference = 0.036 ± 0.033, p = 0.267). A similar pattern was observed when predictions between blind auditory and sighted visual experiments were compared with the predictions between the two sighted visual experiments (blind auditory-sighted visual compared with S1-S2 visual: left PHG: mean difference = −0.011 ± 0.027, p = 0.675; right PHG: mean difference = 0.016 ± 0.031, p = 0.617). The comparison of the MAEs of predictions between the visual and auditory experiments of the two sighted groups and those of the predictions between the two sighted visual experiment indicated a similar pattern in the bilateral PHG (between-modality compared with S1-S2 visual: left PHG: mean difference = 0.017 ± 0.030, p = 0.563; right PHG: mean difference = 0.034 ± 0.036, p = 0.349).

Table 4.

Performances of between-group and between-modality connectivity-based predictions using nonlinear SVR

We further tested whether the nonlinear SVR algorithm outperformed the linear regression models that we used in the main analyses to explore any potential existence of a nonlinear relationship between the connectivity pattern and the local functional profile. Bootstrap resampling was performed on the MAEs of predictions using the nonlinear SVR algorithms and those of the predictions using linear regression models for each prediction combination. As shown in Table 5, although generally lower MAEs were observed for connectivity-based predictions using nonlinear SVR than with the linear regression algorithm across all prediction types, only the following comparisons showed statistically significant differences: (1) the predictions of the functional selectivity of the bilateral PHG between the visual experiments with the two sighted groups and between the auditory and visual experiments with the two sighted groups; (2) the prediction of the functional selectivity of the right PHG within the two sighted visual experiments; and (3) the prediction of the functional selectivity of the right PHG between the blind auditory and sighted visual experiments.

Table 5.

Comparison of the performances (MAE) of connectivity-based predictions using nonlinear SVR and linear regression algorithms

Together, these results point to a complex, partly nonlinear relationship between structural connectivity and local function, as we observed some (albeit inconsistent) improvement of the predictions using the nonlinear SVR algorithm relative to linear regression. Importantly, the performances of the blind-sighted and between-modality predictions in the bilateral PHG were still comparably successful even with such a model, suggesting similar underlying algorithms for the function of the PHG in blind and sighted individuals.

Validation results

Testing the model using Pearson r values as the measure of performance

Using the correlation to measure model performance yielded results similar to those reported in the main analyses using MAE: all of the between-blind-sighted (as well as between-modality) connectivity predictions were statistically comparable to the predictions between visual experiments in the two sighted groups (Table 6). We also performed between-blind-sighted predictions with the 14 blind participants and only the 7 participants from the S2 group to match sample sizes with the between-S1-S2 predictions. The results remained the same.

Table 6.

Performances measured as Pearson r values of between-group and between-modality connectivity-based predictions using linear regression

Comparisons with distance model

To ensure that our results were not simply explained by anatomical distance, we compared the connectivity model with a distance model in which the Euclidean distances between PHG voxels and the center of mass of other target brain parcels rather than the connectivity probabilities were taken as predictors. The distance models performed poorly in both within-group and between-group predictions of the functional selectivity of the bilateral PHG [left PHG: MAEs > 2.74 (SEM = 0.29); right PHG: MAEs > 3.39 (SEM = 0.46)]. The connectivity model significantly outperformed the distance model in all predictions in both PHG ROIs (p values <0.034) except for the within-group predictions of the functional selectivity of the right PHG in the visual experiments of the two sighted groups, which approached significance (p < 0.06).

Discussion

By examining the mapping between the white matter connectivity pattern and domain selectivity in congenitally blind and sighted populations, we showed that the connectivity-to-function predictions between blind and sighted groups were as successful as predictions between two sighted groups, suggesting that the way in which the WM pattern predicts large-object selectivity in the PHG is not affected by visual experience. Specifically, the functional selectivity of a voxel in blind subjects can be predicted by its WM pattern using the WM-to-function prediction model built from the sighted group data, and vice versa. The linear regression prediction model further revealed that the critical WM connection predictors for the selectivity of the PHG for large human-made objects in both the sighted and blind groups were its connections with adjacent regions in the medial temporal and occipital cortices.

Our results provide an important step forward from the recent line of studies reporting similar domain-specific effects in congenitally blind and sighted participants (Mahon et al., 2009; Wolbers et al., 2011; He et al., 2013; Peelen et al., 2013; Kitada et al., 2014; Striem-Amit and Amedi, 2014; for review, see Ricciardi et al., 2014). While it has been argued that these results generally indicate that the relevant regions are supramodal or multimodal rather than visual (Ricciardi et al., 2014) and that they may be constrained by innate connections (Mahon and Caramazza, 2011; see also Reich et al., 2011), an important counterargument for the implications of studies in special populations is that the results cannot be generalized to the typical population (Gainotti, 2015). Although both the blind and sighted groups show similar domain specificity, different types of representations might be involved (but see Peelen et al., 2014, which shows similar shape representation for small artifacts in the VOTC for blind and sighted groups). For instance, it is possible that for sighted people, clusters in the PHG process certain visual features that are most strongly associated with large objects and scenes, whereas the PHG in the blind group undergoes plastic changes and processes certain nonvisual aspects that are most strongly associated with large objects and scenes. Our results challenge this hypothesis by showing that the functional characteristics of PHG have similar roots in WM connectivity in the two groups; that is, a similar set of structural connections are relevant for the computations in PHG independent of visual input and experience.

Based on the assumption that structural connections constrain local function (Passingham et al., 2002; Mahon and Caramazza, 2011; Saygin et al., 2011; Osher et al., 2016), one way to test whether visual experience modulates the local functional representation is to directly compare the white matter connectional profiles in sighted and blind participants. Several previous studies that examined the effect of visual deprivation on white matter structures did not observe a significant alteration of WM connectivity in PHG. When the fractional anisotropy of the white matter was compared between sighted and blind adults across the whole brain, a significant decrease in fractional anisotropy associated with blindness was only observed for the geniculocalcarine tract (Shu et al., 2009a; Wang et al., 2013). Shu et al. (2009b) compared multiple white matter network properties between early blind and sighted control subjects and observed plastic changes only in the primary systems, with reduced connectivity strength and communication efficiency in the primary visual cortex and increased connectivity strength and efficiency in the primary motor and somatosensory areas. Additional decreases in connectivity strength and communication efficiency were later observed in frontal regions (Li et al., 2013).

However, similar WM connectional patterns of PHG in sighted and blind groups do not necessarily mean that the WM connectivity pattern constrains its function in the same manner. For example, the same subjects may produce different functional response profiles in a particular brain region when performing different tasks. In the current context, Wang et al. (2015) showed that sighted subjects had different category response profiles in posterior lateral fusiform and inferior occipital gyrus when viewing pictures and when listening to names, indicating that different tasks can recruit different sets of connections within an identical white matter background.

Our results, showing that the predictions of the selectivity of the PHG for large objects based on its WM pattern are as successful across sighted and blind populations as the predictions between the two sighted groups, provide direct evidence that the functional selectivity of the PHG shares highly similar structural connectivity origins with and without visual input. That is, the functional responses to large objects recruit similar sets of connections with similar weight settings. Across experiments, the structural connections of the PHG with strong predictive power for its functional selectivity are those with adjacent temporal and occipital regions (positive predictors) and those with the superior and middle parts of the temporal pole, inferior frontal cortex, and amygdala (negative predictors). The observed role of connections with the lingual gyrus and precuneus, which is near the retrosplenial cortex, is consistent with the hypothesis that the preference of the PHG for large objects is related to navigation, as the retrosplenial cortex is also consistently found to be associated with navigation (Epstein, 2008). However, the effects of connections with the inferior frontal cortex, temporal pole, and amygdala remain to be further understood. Note that, although it was not significant, a consistent trend of better prediction performance (smaller MAEs) was observed for the left relative to the right PHG (Fig. 3). In addition, more overlapping important connections across groups were identified for the left than the right PHG. These results suggest potential differences between the left and right PHG that need further investigation.

Finally, a few methodological issues remain to be addressed. First, we tested two potential ways in which the structural connectivity pattern may relate to local function: linear and nonlinear. Consistent with previous findings (Saygin et al., 2011; Osher et al., 2016), the linear model was successful for all within-group and between-group predictions, suggesting that at least some proportion of the correspondence between structural connectivity and local function follows a linear relationship. The models built with nonlinear kernel SVR showed a trend toward improved performance over the linear models, although this improvement was not stable across analyses. This suggests a potentially nonlinear component in the relationship between structural connectivity and local function that merits consideration in future studies. Second, following studies with sighted individuals (Saygin et al., 2011; Osher et al., 2016), we characterized the structural connectivity pattern as voxel-to-region mappings based on an anatomical parcellation (AAL), neglecting potential differences within each region. For instance, the peripheral-to-foveal organization within the early visual system cannot be resolved at this scale. Recent analyses of resting-state functional connectivity revealed that the peripheral/foveal retinotopic correspondence from V1 to V2/V3, and even to higher-level cortices such as the fusiform face area and the parahippocampal place area, is preserved in the blind (Bock et al., 2015; Striem-Amit et al., 2015). Thus, voxel-to-voxel construction of structural connectivity merits consideration in future work. Third, our results are based on DWI data, which are prone to tracking errors where fibers cross. Convergent evidence from approaches that circumvent these issues, particularly tract tracing in nonhuman primates, is desirable.
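As a rough illustration of this linear-versus-nonlinear comparison, the sketch below fits a linear model and an RBF-kernel SVR to the same simulated fingerprints; scikit-learn's SVR wraps LIBSVM (Chang and Lin, 2011), the library cited in the References, but the data and hyperparameters here are illustrative assumptions, not the study's settings.

```python
# Minimal sketch contrasting a linear model with a nonlinear RBF-kernel
# SVR on simulated connectivity fingerprints, scored by cross-validated MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((300, 90))                      # hypothetical voxel fingerprints
z = (X @ rng.normal(size=90)) / np.sqrt(90)    # latent structure-function axis
y = np.tanh(z) + 0.05 * rng.normal(size=300)   # mildly nonlinear mapping

for name, model in [("linear", LinearRegression()),
                    ("RBF SVR", SVR(kernel="rbf", C=1.0, epsilon=0.05))]:
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.3f}")
# As in the text, the nonlinear model's advantage (if any) can be small
# and unstable; comparing MAEs across folds makes that explicit.
```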

In conclusion, we observed that the large-object selectivity of voxels in the bilateral PHG of blind subjects can be predicted from their WM patterns using the connectivity-to-function prediction model built from the sighted group data, and vice versa. These between-group predictions were as successful as predictions between two sighted groups, indicating that the way in which the WM pattern predicts large-object selectivity in the PHG is not affected by visual experience. Our results provide further evidence for the vision independence of the selectivity of the bilateral PHG for large human-made objects by showing that the functional profiles of the PHG have similar roots in white matter connectivity in blind and sighted individuals.

Footnotes

  • This work was supported by the National Key Basic Research Program of China (Grants 2013CB837300 and 2014CB846100 to Y.B.), the National Natural Science Foundation of China (Grant 31500882 to X.W.; Grants 4131671128 and 31521063 to Y.B.; and Grant 81671772 to G.G.), the National Program for Special Support of Top-Notch Young Professionals (to Y.B.), and the Beijing Brain Project (Grants Z16110100020000 and Z161100000216125 to Y.B.). A.C. was supported by the Fondazione Cassa di Risparmio di Trento e Rovereto. We thank the Beijing Normal University-CNLab members for assistance in data collection; Professor Xiaohui Zhang and Dr. Tao Wei for helpful comments on earlier versions of the manuscript; and Yangwen Xu for helpful discussion during the revision of the manuscript.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Yanchao Bi, State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China. ybi@bnu.edu.cn

References

Behrens TE, Woolrich MW, Jenkinson M, Johansen-Berg H, Nunes RG, Clare S, Matthews PM, Brady JM, Smith SM (2003) Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magn Reson Med 50:1077–1088. doi:10.1002/mrm.10609

Bi Y, Wang X, Caramazza A (2016) Object domain and modality in the ventral visual pathway. Trends Cogn Sci 20:282–290. doi:10.1016/j.tics.2016.02.002

Bock AS, Binda P, Benson NC, Bridge H, Watkins KE, Fine I (2015) Resting-state retinotopic organization in the absence of retinal input and visual experience. J Neurosci 35:12366–12382. doi:10.1523/JNEUROSCI.4715-14.2015

Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:1–27. doi:10.1145/1961189.1961199

Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Boca Raton, FL: CRC.

Epstein RA (2008) Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn Sci 12:388–396. doi:10.1016/j.tics.2008.07.004

Epstein R, Kanwisher N (1998) A cortical representation of the local visual environment. Nature 392:598–601. doi:10.1038/33402

Gainotti G (2015) Inborn and experience-dependent models of categorical brain organization. A position paper. Front Hum Neurosci 9:2. doi:10.3389/fnhum.2015.00002

Grill-Spector K, Weiner KS (2014) The functional architecture of the ventral temporal cortex and its role in categorization. Nat Rev Neurosci 15:536–548. doi:10.1038/nrn3747

Haufe S, Dähne S, Nikulin VV (2014) Dimensionality reduction for the analysis of brain oscillations. Neuroimage 101:583–597. doi:10.1016/j.neuroimage.2014.06.073

He C, Peelen MV, Han Z, Lin N, Caramazza A, Bi Y (2013) Selectivity for large nonmanipulable objects in scene-selective visual cortex does not require visual experience. Neuroimage 79:1–9. doi:10.1016/j.neuroimage.2013.04.051

Hilgetag CC, Kaiser M (2004) Clustered organization of cortical connectivity. Neuroinformatics 2:353–360. doi:10.1385/NI:2:3:353

Jenkinson M, Beckmann CF, Behrens TE, Woolrich MW, Smith SM (2012) FSL. Neuroimage 62:782–790. doi:10.1016/j.neuroimage.2011.09.015

Kanwisher N (2010) Functional specificity in the human brain: a window into the functional architecture of the mind. Proc Natl Acad Sci U S A 107:11163–11170. doi:10.1073/pnas.1005062107

Kitada R, Yoshihara K, Sasaki AT, Hashiguchi M, Kochiyama T, Sadato N (2014) The brain network underlying the recognition of hand gestures in the blind: the supramodal role of the extrastriate body area. J Neurosci 34:10096–10108. doi:10.1523/JNEUROSCI.0500-14.2014

Konkle T, Caramazza A (2013) Tripartite organization of the ventral stream by animacy and object size. J Neurosci 33:10235–10242. doi:10.1523/JNEUROSCI.0983-13.2013

Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, Tanaka K, Bandettini PA (2008) Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron 60:1126–1141. doi:10.1016/j.neuron.2008.10.043

Li J, Liu Y, Qin W, Jiang J, Qiu Z, Xu J, Yu C, Jiang T (2013) Age of onset of blindness affects brain anatomical networks constructed using diffusion tensor tractography. Cereb Cortex 23:542–551. doi:10.1093/cercor/bhs034

Mahon BZ, Caramazza A (2011) What drives the organization of object knowledge in the brain? Trends Cogn Sci 15:97–103. doi:10.1016/j.tics.2011.01.004

Mahon BZ, Anzellotti S, Schwarzbach J, Zampini M, Caramazza A (2009) Category-specific organization in the human brain does not require visual experience. Neuron 63:397–405. doi:10.1016/j.neuron.2009.07.012

Osher DE, Saxe RR, Koldewyn K, Gabrieli JD, Kanwisher N, Saygin ZM (2016) Structural connectivity fingerprints predict cortical selectivity for multiple visual categories across cortex. Cereb Cortex 26:1668–1683. doi:10.1093/cercor/bhu303

Passingham RE, Stephan KE, Kötter R (2002) The anatomical basis of functional localization in the cortex. Nat Rev Neurosci 3:606–616. doi:10.1038/nrn893

Peelen MV, Bracci S, Lu X, He C, Caramazza A, Bi Y (2013) Tool selectivity in left occipitotemporal cortex develops without vision. J Cogn Neurosci 25:1225–1234. doi:10.1162/jocn_a_00411

Peelen MV, He C, Han Z, Caramazza A, Bi Y (2014) Nonvisual and visual object shape representations in occipitotemporal cortex: evidence from congenitally blind and sighted adults. J Neurosci 34:163–170. doi:10.1523/JNEUROSCI.1114-13.2014

Reich L, Szwed M, Cohen L, Amedi A (2011) A ventral visual stream reading center independent of visual experience. Curr Biol 21:363–368. doi:10.1016/j.cub.2011.01.040

Ricciardi E, Bonino D, Pellegrini S, Pietrini P (2014) Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neurosci Biobehav Rev 41:64–77. doi:10.1016/j.neubiorev.2013.10.006

Saygin ZM, Osher DE, Koldewyn K, Reynolds G, Gabrieli JD, Saxe RR (2011) Anatomical connectivity patterns predict face selectivity in the fusiform gyrus. Nat Neurosci 15:321–327. doi:10.1038/nn.3001

Saygin ZM, Osher DE, Norton ES, Youssoufian DA, Beach SD, Feather J, Gaab N, Gabrieli JD, Kanwisher N (2016) Connectivity precedes function in the development of the visual word form area. Nat Neurosci 19:1250–1255. doi:10.1038/nn.4354

Schreiber K, Krekelberg B (2013) The statistical analysis of multi-voxel patterns in functional imaging. PLoS One 8:e69328. doi:10.1371/journal.pone.0069328

Shu N, Li J, Li K, Yu C, Jiang T (2009a) Abnormal diffusion of cerebral white matter in early blindness. Hum Brain Mapp 30:220–227. doi:10.1002/hbm.20507

Shu N, Liu Y, Li J, Li Y, Yu C, Jiang T (2009b) Altered anatomical network in early blindness revealed by diffusion tensor tractography. PLoS One 4:e7228. doi:10.1371/journal.pone.0007228

Smith SM, Jenkinson M, Woolrich MW, Beckmann CF, Behrens TE, Johansen-Berg H, Bannister PR, De Luca M, Drobnjak I, Flitney DE, Niazy RK, Saunders J, Vickers J, Zhang Y, De Stefano N, Brady JM, Matthews PM (2004) Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23:S208–S219. doi:10.1016/j.neuroimage.2004.07.051

Sporns O, Zwi JD (2004) The small world of the cerebral cortex. Neuroinformatics 2:145–162. doi:10.1385/NI:2:2:145

Striem-Amit E, Amedi A (2014) Visual cortex extrastriate body-selective area activation in congenitally blind people "seeing" by using sounds. Curr Biol 24:687–692. doi:10.1016/j.cub.2014.02.010

Striem-Amit E, Cohen L, Dehaene S, Amedi A (2012) Reading with sounds: sensory substitution selectively activates the visual word form area in the blind. Neuron 76:640–652. doi:10.1016/j.neuron.2012.08.026

Striem-Amit E, Ovadia-Caro S, Caramazza A, Margulies DS, Villringer A, Amedi A (2015) Functional connectivity of visual cortex in the blind follows retinotopic organization principles. Brain 138:1679–1695. doi:10.1093/brain/awv083

Talairach J, Tournoux P (1988) Co-planar stereotaxic atlas of the human brain. Stuttgart, Germany: Thieme.

Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002) Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15:273–289. doi:10.1006/nimg.2001.0978

Wang D, Qin W, Liu Y, Zhang Y, Jiang T, Yu C (2013) Altered white matter integrity in the congenital and late blind people. Neural Plast 2013:128236. doi:10.1155/2013/128236

Wang X, Peelen MV, Han Z, He C, Caramazza A, Bi Y (2015) How visual is the visual cortex? Comparing connectional and functional fingerprints between congenitally blind and sighted individuals. J Neurosci 35:12545–12559. doi:10.1523/JNEUROSCI.3914-14.2015

Wolbers T, Klatzky RL, Loomis JM, Wutte MG, Giudice NA (2011) Modality-independent coding of spatial layout in the human brain. Curr Biol 21:984–989. doi:10.1016/j.cub.2011.04.038

Xia M, Wang J, He Y (2013) BrainNet Viewer: a network visualization tool for human brain connectomics. PLoS One 8:e68910. doi:10.1371/journal.pone.0068910
Keywords

  • domain selectivity
  • PPA
  • ventral visual cortex
  • visual experience
  • white matter
