Research Articles, Behavioral/Cognitive

Similar Computational Hierarchies for Reading and Speech in the Occipital Cortex of Sighted and Blind: Converging Evidence from fMRI and Chronometric TMS

Jacek Matuszewski, Łukasz Bola, Olivier Collignon and Artur Marchewka
Journal of Neuroscience 14 May 2025, 45 (20) e1153242024; https://doi.org/10.1523/JNEUROSCI.1153-24.2024
Jacek Matuszewski
1Crossmodal Perception and Plasticity Lab, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve 1348, Belgium
2Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw 02-093, Poland
Łukasz Bola
3Institute of Psychology, Polish Academy of Sciences, Warsaw 00-378, Poland
Olivier Collignon
1Crossmodal Perception and Plasticity Lab, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve 1348, Belgium
4School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne 1011, Switzerland
Artur Marchewka
2Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw 02-093, Poland

Abstract

High-level perception results from interactions between hierarchical brain systems responsive to gradually increasing feature complexities. During reading, the initial evaluation of simple visual features in the early visual cortex (EVC) is followed by orthographic and lexical computations in the ventral occipitotemporal cortex (vOTC). While similar visual regions are engaged in tactile Braille reading in congenitally blind people, it is unclear whether the visual network maintains or reorganizes its hierarchy for reading in this population. Combining fMRI and chronometric transcranial magnetic stimulation (TMS), our study revealed a clear correspondence between sighted and blind individuals (both male and female) in how their occipital cortices functionally support reading and speech processing. Using fMRI, we first observed that the vOTC, but not the EVC, showed an enhanced response to lexical vs nonlexical information in both groups and sensory modalities. Using TMS, we further found that, in both groups, the processing of written words and pseudowords was disrupted by EVC stimulation at both early and late time windows. In contrast, vOTC stimulation disrupted the processing of these written stimuli only when applied at late time windows, again in both groups. In the speech domain, we observed TMS effects only for meaningful words and only in the blind participants. Overall, our results suggest that, while the deprived visual areas might extend their functional responses to other sensory modalities, the computational gradients between early and higher-order occipital regions are retained, at least for reading.

  • blindness
  • language
  • plasticity
  • reading
  • sensory deprivation
  • VWFA

Significance Statement

The sighted visual cortex hierarchically interprets visual signals, from simple visual features in the early visual cortex to complex features in higher-order visual areas. The blind visual cortex is known to respond to tactile and auditory information, but is a similar computational hierarchy used to process these signals? Here we showed that the blind visual cortex processes tactile reading in a spatiotemporal hierarchy strikingly similar to the hierarchy used by the sighted visual cortex to process visual reading. Intriguingly, the blind visual cortex seems additionally involved in the processing of spoken words. Our results suggest that the computational gradients between sensory-deprived early and higher-order areas are largely independent of visual experiences, despite their enhanced responses to cross-modal input.

Introduction

Congenital blindness enhances responses to other sensory modalities (tactile, auditory) in occipital regions mainly processing vision in sighted people (Bavelier and Neville, 2002; Frasnelli et al., 2011). However, the principles governing this neuroplasticity, and the role of sensory experience in shaping functional organization of the cortex, are still debated.

On the one hand, researchers suggest that the functional organization of the blind visual cortex is largely retained (Cecchetti et al., 2016; Ricciardi et al., 2020, 2023) as tactile and auditory processing from many domains activates the visual cortex similarly to visual processing. This includes simple tactile texture processing in V1 (Stilla et al., 2008), motion-selective responses in hMT+/V5 (Dormal et al., 2016; Battal et al., 2022), and similarly distributed category-selective responses in the higher-order associative ventral occipitotemporal cortex (vOTC) of sighted and blind (Mattioni et al., 2020). These results suggest the visual cortex can be organized independently of visual inputs. Alternatively, some studies suggest that in the absence of typical inputs, the visual cortex of the blind reverses its hierarchy (Amedi et al., 2003), with the early visual cortex (EVC), defined predominantly as V1 and possibly its close neighbors, becoming a higher-order computational unit engaged in complex processing, such as mathematical reasoning (Kanjlia et al., 2016) or spoken sentence processing (Burton et al., 2003; Van Ackeren et al., 2018).

Visual perception is typically conceived as a hierarchical system in which "low-level" information is extracted in the primary visual cortex and cascaded down the ventral and dorsal streams for more complex analyses (Heeger, 2017). In addition to such a feedforward processing scheme, recurrent processing includes feedback loops from downstream regions into V1 (de Lange et al., 2018). Thus, after initial evaluation and transfer to associative cortices, feedback signals are sent back to the EVC, highlighting its engagement in multiple parallel processes (de Graaf et al., 2014). Here we ask whether these computational gradients depend on visual experience.

One prototypical model of hierarchical processing between low-level and higher-order cortical regions is reading. It requires parallel operations, from symbol recognition to the mapping of letter strings onto the corresponding units of speech and meaning (Dehaene, 2009; Pegado et al., 2014). These processes typically follow a spatiotemporal division of labor in which simple features (lines and line junctions of letters) are processed in the EVC, whereas the orthographic and lexical statuses are determined in the vOTC (Nobre et al., 1994), later named the "visual word form area" (VWFA; Dehaene and Cohen, 2011).

While the EVC and the vOTC support Braille reading in congenitally blind people (Sadato et al., 1996; Cohen et al., 1997; Büchel et al., 1998), we do not know how nonvisual information flows between these regions. Transcranial magnetic stimulation (TMS) applied at multiple time windows (a chronometric design) established that the sighted EVC is causally involved in recognizing tactile Braille letters before the vOTC (Bola et al., 2019). In congenitally blind people, the computational hierarchy between low-level and higher-order occipital areas remains unexplored.

Additionally, previous studies showed that the blind visual cortex may extend its role beyond reading, to speech processing (Kim et al., 2017; Dzięgiel-Fivet et al., 2021). However, speech also activates the sighted EVC (Seydell-Greenwald et al., 2023) and VWFA (Planton et al., 2019). This calls into question the nature of blind visual cortex reorganization toward new functions, as the observed activity might result from unmasking or upscaling of a functional architecture also present in the sighted (Makin and Krakauer, 2023). In this view, the visual cortex's responses to speech in the sighted might be similar to, but weaker than, those in the blind (Dormal and Collignon, 2011).

Here we address these issues by investigating neuronal responses to reading and speech in the occipital cortex of congenitally blind and sighted subjects. We first used fMRI to characterize the EVC and the VWFA responses to lexical (words, nonwords) and nonlexical information, presented in the written and the spoken modality. Then, we used chronometric TMS to investigate whether, when, and how those regions causally support reading and speech.

Materials and Methods

fMRI experiment

Subjects

Twenty right-handed congenitally blind adults (14 females; age M = 36.6 years, SD = 7.6; education M = 15.7 years, SD = 1.9) participated in the MRI study. Additionally, 20 demographically matched sighted adults served as a control group (14 females; age M = 36.5 years, SD = 7.7; education M = 16.2 years, SD = 1.3). Subsequently, 13 blind (11 females; age M = 37.7 years, SD = 8.6; education M = 15.8 years, SD = 1.9) and 13 sighted (8 females; age M = 34.6 years, SD = 8.7; education M = 16.1 years, SD = 1.4) participants from the MRI sample also took part in the TMS experiment.

In both experiments, the groups did not statistically differ in age (two-sample t tests, both p values > 0.358), gender (chi-square tests, both p values > 0.377), or level of education (two-sample t tests, both p values > 0.305). Detailed subjects’ characteristics are presented in Table 1. None of the participants had any history of neurological illness or brain damage and all participants declared to have normal hearing. All sighted subjects had normal or corrected-to-normal vision. In all blind subjects, blindness was a result of peripheral damage. Informed consent as well as MRI and TMS safety screenings were collected before the study. Forms were sent to the blind subjects before the experiment in a format readable by the screen reading software and were available in Braille on-site. All subjects received monetary compensation for their participation in the study. The study was approved by the local ethics committee.

Table 1.

Characteristics of the subjects

Behavioral tests

Before the fMRI experiment, individual reading speed was examined by timing the participants' reading of words and pseudowords from a standardized dyslexia diagnosis test (The Decoding Test; Szczerbiński and Pelc-Pękała, 2013). Original procedures and materials were used for sighted subjects (for whom the test was designed). The test comprised two sets of 75 words, gradually increasing in length from 2 to 13 letters, and two sets of 69 pseudowords (2–10 letters). In each set, subjects had 30 s to read the stimuli aloud as fast as possible. For blind subjects, procedures were adapted to the slower pace of tactile Braille reading. Following consultation with the test's author, all stimuli were additionally split into equal halves containing either 38/37 words or 34/35 pseudowords per set (counterbalanced). This decision was made to avoid bias toward shorter words at the beginning of the lists, i.e., all lists were shortened while retaining the proportions of stimulus lengths. Correctly read stimuli from both sets were summed and analyzed in a 2 (Group: blind, sighted) by 2 (Lexicality: words, pseudowords) mixed ANOVA with Bonferroni-corrected post hoc tests. The Welch correction to degrees of freedom was applied whenever variances were not equal. All subjects were familiarized with the procedure using a different set of exemplary stimuli. A sketch of this analysis is shown below.
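For illustration, the following is a minimal R sketch of this 2 × 2 mixed ANOVA, using the ez package cited under "fMRI: data analyses" below. The data are simulated and the column names are our own illustrative choices, not the study's actual data structure.

```r
# Hypothetical sketch of the 2 (Group) x 2 (Lexicality) mixed ANOVA;
# scores stand in for the number of correctly read stimuli.
library(ez)

set.seed(1)
reading <- expand.grid(subject    = factor(1:40),
                       lexicality = factor(c("words", "pseudowords")))
reading$group <- factor(ifelse(as.integer(reading$subject) <= 20,
                               "blind", "sighted"))
reading$score <- rpois(nrow(reading), lambda = 60)

# Mixed ANOVA: lexicality within subjects, group between subjects
ezANOVA(data    = reading,
        dv      = score,
        wid     = subject,
        within  = lexicality,
        between = group,
        type    = 3)
```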

MRI data acquisition

MRI data were acquired with a Siemens Trio 3T scanner (Siemens) with a 12-channel head coil. A structural T1-weighted image was acquired with 176 slices; echo time, 3.32 ms; repetition time, 2,530 ms; voxel size, 1 × 1 × 1 mm; flip angle, 7°; and field of view, 256 × 256 mm. Functional data were acquired with an echo-planar imaging (EPI) pulse sequence with 35 slices; echo time, 3 ms; repetition time, 2,000 ms; voxel size, 3 × 3 × 3.85 mm; flip angle, 80°; and field of view, 64 × 64 mm. Functional data were corrected for B0 field inhomogeneities with a double-echo gradient echo sequence (field map) with 186 slices; echo time1, 4.5 ms; echo time2, 6.96 ms; repetition time, 800 ms; flip angle, 60°; and field of view, 72 × 72 mm. The field map sequence was collected in the middle of the experiment (after the 3rd functional run out of 6).

fMRI: task and stimuli

During the scanning, subjects were asked to carefully process words, pseudowords, and control stimuli. All stimuli were presented in blocks of four in one of the two modalities: reading (tactile Braille for blind/visual alphabet for sighted subjects) or speech (Fig. 1A). Tactile Braille stimuli were presented via NeuroDevice tactile Braille display (Debowska et al., 2013), visual stimuli were presented on an LCD monitor, and auditory stimuli were presented via MRI-safe headphones.

Figure 1.

Overview of experimental design and behavioral results in the fMRI study. A, Tactile Braille (for congenitally blind), visual (for sighted), and auditory (both groups) words, pseudowords, and control stimuli were presented in blocks of four in a pseudorandomized, alternating order. Before each switch in sensory modality, subjects were presented with an auditory cue. Sighted subjects were instructed to close their eyes during speech processing. Subjects performed a 1-back task, in which a button had to be pressed when identical stimuli appeared consecutively (10% of trials; see red frames for examples). B, Behavioral results: performance in the fMRI task summarized with d' statistics (right). Error bars represent the standard error of the mean. All sighted subjects scored 100% correctly on spoken words and pseudowords (hence no error bars). Asterisks above conditions indicate group differences and asterisks above the line indicate within-group differences between conditions. ISI, interstimulus interval; *p < 0.05; **p < 0.01, Bonferroni corrected (see Materials and Methods for details).

All words and pseudowords were 3–4 letters (1–2 syllables) long and were balanced across conditions for frequency and the number of adjectives, verbs, and nouns. Pseudowords were created by letter transposition or substitution, resulting in meaningless but pronounceable stimuli. Control stimuli consisted of chains of repeated nonlinguistic characters. During reading, these were strings of 3, 4, or 5 hash (#) characters for sighted subjects, or full six-dot Braille characters, which carry no linguistic meaning, for blind subjects. For auditory control stimuli, we used vocoded speech with varying numbers (2, 3, or 4) of repeated noise bursts prepared with Praat software (www.praat.org). Vocoding divides the speech signal into three frequency bands, applies the dynamic amplitude contour of the original to a noise source, and then recombines these into a unitary signal (Dzięgiel-Fivet et al., 2021). Thus, a vocoded stimulus retains the speech-related dynamic frequency and amplitude pattern but lacks phonetic information. Here, the vocoded pattern obtained from a 1-syllable word was repeated to obtain different numbers of repetitions matched to the visual and tactile control conditions. Subjects were briefly trained before the experiment to familiarize themselves with all kinds of stimuli.

To ensure that subjects paid attention to all stimuli, we introduced a simple 1-back task throughout the experiment. Subjects reacted with a button press when two consecutive stimuli within blocks were identical, i.e., “boy-boy” for words or “###-###” for control (Fig. 1A). There were seven pseudorandomized targets in each condition (10% of trials). The targets and the order of trials within blocks were counterbalanced across subjects with two pseudorandomized sets, A and B, which had identical stimuli apart from different targets (repetitions).

The experiment was divided into six runs. Each run was further divided into three "order" blocks, each containing one block of four stimuli from each condition (reading: words, pseudowords, control; speech: words, pseudowords, control). To avoid constant switching between sensory modalities, the order of conditions was randomized within each modality, but the modalities always alternated (e.g., read: control-pseudowords-words → speech: pseudowords-control-words → read: words-control-pseudowords → speech…). Because sensory modalities alternated within each run, we opted not to use a physical blindfold, as putting it on and taking it off could increase motion in the scanner. Therefore, before each modality switch, subjects heard a distinct sound cue informing them about the upcoming modality, which the sighted group associated with "eyes open" (reading) or "eyes closed" (speech). The starting modality was counterbalanced across subjects and stimuli sets and alternated between runs. In other words, in each group subjects were evenly divided into four scenarios: (1) Set A starting with reading in odd and speech in even runs, (2) Set A starting with speech in odd and reading in even runs, (3) Set B starting with reading in odd and speech in even runs, and (4) Set B starting with speech in odd and reading in even runs. Visual stimuli were presented for 1 s, tactile Braille stimuli for 3 s, and auditory stimuli for roughly 1 s. All trials within blocks were separated by a 1 s interstimulus interval, and blocks were separated by a randomized 3–5 s rest period. When the modality switched, there was an additional 2.5 s rest period between the cue and the start of the first condition (Fig. 1A). In total, 18 blocks and 72 stimuli were presented in each condition (6 runs × 3 repetitions × 4 stimuli).

Behavioral performance in the 1-back task was measured with the d' statistic (Green and Swets, 1966), which accounts for individual differences in response style. First, responses from all subjects, conditions, and modalities were categorized into four categories: (1) hits, correct button presses for targets; (2) misses, lack of responses for targets; (3) false alarms, button presses for nontarget stimuli; and (4) correct rejections, lack of responses for nontargets. Next, the hit rate was calculated as the number of hits/(number of hits + misses) and the false alarm rate as the number of false alarms/(number of false alarms + correct rejections). Finally, d' was calculated as Z(hit rate) − Z(false alarm rate), where Z(x) is the inverse of the cumulative Gaussian distribution. To ensure that the d' values were finite, for the subject–condition combinations in which the hit rate or false alarm rate was equal to 0 or 1, we used 0.01 or 0.99 instead (Finc et al., 2020). The calculated d' statistics for all experimental conditions were analyzed in a 2 (Group: blind, sighted)-by-2 (Modality: reading, speech)-by-3 (Condition: words, pseudowords, control) mixed ANOVA. Greenhouse–Geisser p value correction for nonsphericity was applied where needed. Post hoc tests were calculated as t tests with Bonferroni-corrected p values for the number of planned comparisons (viz., comparisons between conditions/modalities within each group and comparisons between groups within each condition/modality). The Welch correction to degrees of freedom was applied whenever variances were not equal. Comparisons within each group were Bonferroni corrected (12 comparisons). Comparisons between groups were not corrected to avoid type II errors. A sketch of the d' computation is shown below.
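As an illustration, here is a minimal R sketch of the d' computation described above; the function name is hypothetical, but the formula and the 0.01/0.99 clamping mirror the text.

```r
# Hypothetical helper implementing d' = Z(hit rate) - Z(false alarm rate),
# where Z is the inverse cumulative Gaussian (qnorm in R).
compute_dprime <- function(hits, misses, false_alarms, correct_rejections) {
  hit_rate <- hits / (hits + misses)
  fa_rate  <- false_alarms / (false_alarms + correct_rejections)
  # Clamp rates of 0 or 1 to 0.01/0.99 so d' stays finite (Finc et al., 2020)
  clamp <- function(x) pmin(pmax(x, 0.01), 0.99)
  qnorm(clamp(hit_rate)) - qnorm(clamp(fa_rate))
}

# Example: 6 hits, 1 miss, 0 false alarms, 65 correct rejections
compute_dprime(6, 1, 0, 65)
```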

fMRI: data analyses

The fMRI data from all runs were unwarped via voxel displacement maps calculated from the field maps, realigned for motion correction using 4th-degree B-spline interpolation, slice-time corrected, coregistered to the subject's T1-weighted image, normalized to Montreal Neurological Institute (MNI) space with a voxel size of 2 mm isotropic through the estimated deformation field, and smoothed with a Gaussian kernel of 5 mm full width at half maximum. One blind subject requested a break between the 5th and 6th runs, resulting in the acquisition of an additional field map. For this subject, data from the 6th run were unwarped with a second voxel displacement map. All other preprocessing steps were identical.

Next, preprocessed data from all runs were entered into subject-specific general linear models (GLMs). Timings of all experimental condition blocks across both sensory modalities were entered as regressors of interest. Additionally, the auditory cues informing about the upcoming sensory modality, the 1-back targets, and 6 head movement parameters were included as regressors of no interest. The specified timings were convolved with the canonical hemodynamic response function, and the data were high-pass filtered with a 128 s cutoff. Serial correlations in the data were accounted for using an autoregressive (AR1) model during classical restricted maximum likelihood parameter estimation.

To characterize the profile of neuronal responses in the EVC and the VWFA, contrast estimates from all experimental conditions were extracted from two regions of interest (ROIs). The EVC was defined as a combination of the retinotopically mapped ventral and dorsal V1 maps from the visfatlas (Rosenke et al., 2021). The visual word form area was defined as a 5 mm radius sphere at the canonical coordinates reported by Cohen et al. (2002), transformed from Talairach (X = −39; Y = −58; Z = −9) to MNI space (X = −41; Y = −57; Z = −16) via the mni2tal web app. A minimal sketch of such a spherical ROI definition follows.
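To make the sphere definition concrete, here is a hypothetical R snippet testing whether a voxel's MNI coordinates fall within the 5 mm VWFA sphere; the helper function is illustrative and not part of the study's pipeline.

```r
# Hypothetical check: does a voxel (MNI, mm) fall inside the 5 mm VWFA sphere?
vwfa_center <- c(-41, -57, -16)  # Cohen et al. (2002), converted to MNI

in_sphere <- function(xyz, center, radius = 5) {
  sqrt(sum((xyz - center)^2)) <= radius
}

in_sphere(c(-40, -58, -15), vwfa_center)  # TRUE: within the ROI
in_sphere(c(-30, -58, -15), vwfa_center)  # FALSE: outside the ROI
```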

Extracted data were entered into ROI-specific 2 (Group: blind, sighted)-by-2 (Modality: reading, speech)-by-3 (Condition: words, pseudowords, control) mixed ANOVAs. Greenhouse–Geisser p value correction for nonsphericity was applied where needed. The significance of neuronal response was tested with one-sample T-tests against zero, comparisons between conditions were calculated with paired T-tests and comparisons between groups were calculated with two-sample T-tests. Welch correction to degrees of freedom was applied whenever variances were not equal. All p values were Bonferroni corrected for the number of performed comparisons (24 for one-sample T-tests and 18 for comparison T-tests).

The fMRI data were analyzed using SPM12 (Wellcome Trust Centre for Human Neuroimaging) software running on MATLAB 2016b (MathWorks). Behavioral data analyses, ROI analyses, and plots were performed using R (4.1.3, R Core Team, 2022), with the tidyverse (Wickham et al., 2019), rstatix (Kassambara, 2021), ez (Lawrence, 2016), cowplot (Wilke, 2019), patchwork (Pedersen, 2020), ggpubr (Kassambara, 2020), RColorBrewer (Neuwirth and Neuwirth, 2022), and marginaleffects (Arel-Bundock, 2023) R packages.

TMS experiment

TMS: task and stimuli

In the TMS experiment, subjects performed a lexical decision task in which they had to discriminate between words and pseudowords. This task was chosen because it is an established TMS protocol successfully used in previous studies investigating the involvement of the VWFA in linguistic processing across the visual (Duncan et al., 2010; Pattamadilok et al., 2015), auditory (Pattamadilok et al., 2019), and tactile (Siuda-Krzywicka et al., 2016) sensory modalities. Tasks were performed via reading (tactile Braille alphabet for the blind, visual alphabet for the sighted) or speech processing in separate runs for each TMS site. This resulted in four runs: EVC reading, EVC speech, VWFA reading, and VWFA speech. To avoid sensory stimulation fatigue (especially for tactile reading in the blind), runs were ordered by TMS site with alternating sensory modalities; in other words, subjects never read or listened to stimuli in two consecutive runs. Sighted subjects were blindfolded during speech runs with a physical blindfold mask that rested on the subject's neck during reading runs and was lifted to cover the eyes during speech runs. The quality of the neuronavigation registration was checked before starting each run.

All stimuli came from a larger database of words and pseudowords used in our previous experiments (Banaszkiewicz et al., 2021; Kuper et al., 2021; Matuszewski et al., 2021). Briefly, 4-to-6-letter words were drawn from the SUBTLEX-PL database (Mandera et al., 2014) with the restriction of an occurrence frequency higher than 1 per million. Pseudowords were created by changing one letter within a word, resulting in orthographically and phonologically plausible strings with no meaning. Across all conditions, words were matched for frequency, length, number of syllables, and neighborhood size. An identical procedure was employed for pseudowords, except for frequency. In total, 160 words and 160 pseudowords were divided into 8 experimental conditions (4 TMS time windows × 2 TMS sites × 2 sensory modalities), resulting in 20 trials (10 words + 10 pseudowords) per condition. Visual stimuli were presented on a screen in front of the sighted participants, and tactile Braille was presented with an Active Star display (HelpTech). All auditory stimuli were recorded in a soundproof room, digitized with 16 bit analog-to-digital conversion at a sampling rate of 44 kHz, and presented via pneumatic in-ear ER1 headphones (Etymotic Research), which also served as hearing protection against the TMS coil noise.

Each run started with a 10 s fixation, followed by a trial presentation. Each trial consisted of word or pseudoword presentation, and TMS pulses were administered in one of the four time windows (see below, TMS: protocol, for details). There was a 2 s period for lexical decision and a 2–3 s (randomized) rest period before the next trial (Fig. 2A). Visual stimuli were presented for 560 ms to match the average duration of the auditory stimuli. Due to the complex nature of haptic processing, tactile Braille was presented for 3,000 ms (Veispak et al., 2013; Kim et al., 2017; Dzięgiel-Fivet et al., 2021) and was read with the subjects’ reading hand of choice. The order of stimuli within conditions was randomized for each subject. Stimuli sets were counterbalanced across TMS sites.

Figure 2.

Overview of experimental design and stimulation protocols in the TMS study. A, Congenitally blind and sighted subjects performed lexical decision tasks during reading (tactile Braille/visual alphabets) or speech processing. Subjects discriminated between single words and pseudowords, presented in a random order. EVC, early visual cortex; VWFA, visual word form area. B, Pulses were administered to the early visual cortex or the visual word form area in separate runs for reading and speech processing. The plot visualizes the average normalized coordinates for the blind (blue) and sighted (orange) groups in MNI space. Stimulation during the experiment occurred in native space to account for individual differences in neuroanatomy. Red lines mark the edges of the retinotopically mapped V1 in the early visual cortex (top panel) and the originally reported location of the visual word form area. The V1 mask was created by combining the left and right ventral and dorsal V1 regions from the visfatlas (Rosenke et al., 2021). The visual word form area is represented by a 7 mm radius sphere centered at X = −41; Y = −57; Z = −16 (Cohen et al., 2002). Individual target coordinates are additionally displayed in Figure 3. C, In each trial, two transcranial magnetic stimulation pulses were applied 40 ms apart (25 Hz) in one of four time windows relative to the stimulus onset: −40/0, 60/100, 160/200, or 260/300 ms.

Subjects were familiarized with the procedure before the experiment via training and instructed to respond as quickly as possible. Stimulus presentation and TMS pulse timings were controlled using a program written in the Presentation 21.0 environment (Neurobehavioral Systems). All responses were given via the CEDRUS RB-540 response pad (Cedrus).

TMS: localization of sites

During the TMS experiment, the early visual cortex (EVC) and the visual word form area (VWFA) were targeted through a neuronavigation system. Before the experiment, all TMS target sites were marked on the subjects' anatomical scans in Brainsight software. The EVC site was marked at the posterior termination of the calcarine cortex (Merabet et al., 2008; Chambers et al., 2013; Bola et al., 2019). The coil was positioned perpendicular to the calcarine sulcus, which ensured a pulse trajectory along the EVC. Note that, unlike the fMRI analyses, which used an external anatomical mask of V1 from the visfatlas, claiming that V1 was specifically targeted with TMS is not straightforward: even with stimulation along the calcarine sulcus, the neighboring V2 might also receive partial stimulation (Salminen-Vaparanta et al., 2012). Therefore, we use the term "EVC" here. The VWFA site was localized as an intersection between canonical coordinates from the literature and individual brain activity patterns during the fMRI reading task. First, the reading (words + pseudowords) > control contrasts from the fMRI 1-back task in the tactile and visual modalities were calculated for blind and sighted subjects, respectively (see above, fMRI: task and stimuli, for details). This procedure allowed us to map vOTC responses to orthographic processing. Next, a 10 mm sphere at the canonical VWFA location (MNI X = −41, Y = −57, Z = −16; Cohen et al., 2002) was warped to each subject's native space using the inverse deformation fields generated during the SPM segmentation step. These spheres were then overlapped with the obtained patterns of brain activity, resulting in individually defined and theoretically constrained reading responses in the VWFA. Finally, the peaks of these reading-specific activations within the individually adjusted VWFA sphere were chosen as TMS targets.

Furthermore, to ensure that the chosen sites were not systematically different across groups, single-subject data were normalized to MNI space, and all coordinates were averaged across participants. The mean (± standard error of the mean) MNI coordinates for the EVC (Blind X = 0.9 ± 1.3; Y = −87 ± 0.8; Z = 1.2 ± 2.3; Sighted X = 0.3 ± 1.3; Y = −88.7 ± 1; Z = 5.4 ± 3) did not differ between groups, as indicated by two-sample T-tests (T(24) = 0.31; p = 0.757, T(23) = 1.41; p = 0.170; and T(22) = −1.10; p = 0.28; for X, Y, and Z, respectively). Similarly, no group differences were found in the coordinates of the VWFA (Blind X = −39.9 ± 1; Y = −56.3 ± 1.5; Z = −13.4 ± 1.7; Sighted X = −41 ± 0.7; Y = −56.6 ± 1; Z = −12.4 ± 1.8): T(22) = 0.93; p = 0.360, T(21) = 0.20; p = 0.846; and T(24) = −0.40; p = 0.691; for X, Y, and Z, respectively. The quality of registration was checked before each run to rule out accidental movements of the headband containing the neuronavigation sensors. To additionally verify the experimental target locations, we overlaid the normalized group coordinates with the retinotopically mapped V1 location from a recent visual cortex atlas (Rosenke et al., 2021) and the sphere from the reported VWFA coordinates to ensure that the targets were within the boundaries of the stimulated structures. Normalized target coordinates are presented for visualization purposes in Figure 2B. Additionally, group average and individual normalized coordinates are presented in Figure 3.

Figure 3.

Representation of TMS targets normalized to MNI space: group average (top panel) and individual (bottom panel) coordinates for the early visual cortex and visual word form area in congenitally blind and sighted subjects. Actual stimulation was performed in the native space of each subject to account for intersubject neuroanatomical variability. Normalization to MNI space was performed in Brainsight software. See Materials and Methods for details on TMS target localization.

TMS was expected to affect the subjects' performance at the two sites at different times. Therefore, the sites could serve as inherent controls for one another. With the inclusion of the control time window, a separate control site was not required in this design.

TMS: protocol

Stimulation was performed using a MagPro X100 stimulator (MagVenture) with a 70 mm figure-of-eight coil. The stimulation was guided via the Brainsight 2 neuronavigation system (Rogue Research) with a Polaris Vicra infrared camera (Northern Digital). During each trial, paired-pulse TMS with an interpulse interval of 40 ms (25 Hz) was applied at one of four time windows relative to the trial onset: −40/0, 60/100, 160/200, or 260/300 ms (Fig. 2C). Summation of the TMS pulses in paired-pulse stimulation results in larger effects than single-pulse stimulation (Walsh and Pascual-Leone, 2003). In turn, this increases the chance of an observable TMS effect at the behavioral level, and the approach has been successfully used in language processing studies (see Turker and Hartwigsen, 2021, for review). This design was adopted from a cross-validated study using TMS in the early and higher-order visual cortices during scene perception (Wischnewski and Peelen, 2021a,b), with a few notable differences: (1) given limited access to congenitally blind subjects, we opted for a within-subject design in which each participant was stimulated at all sites, instead of experiments performed on independent samples for each site; (2) to control for potential TMS side effects, we used an additional control time window (viz., −40/0 ms) in which neuronal contributions to sensory processing are biologically improbable. This allowed us to perform comparisons across experimental time windows within each site.

TMS was performed with individually adjusted intensity based on the resting motor threshold. The motor threshold was measured as the lowest stimulator output required to observe a visible hand twitch in at least 5 out of 10 trials during stimulation of the contralateral hand area in the primary motor cortex. The right primary motor cortex (M1) was marked as the target for resting motor threshold measurements. The M1 location was marked via anatomical landmarks as the "omega knob" in the right precentral gyrus—the location of the canonical hand area in M1 (Merabet et al., 2004; Vidoni et al., 2010). Stimulation intensity for the training and experimental procedures was set to 110% of this motor threshold. This method of determining the stimulation intensity has been successfully applied in multiple previous TMS studies that used "virtual lesion" protocols to investigate the functional role of the left vOTC in visual reading, tactile reading, and speech processing tasks (Duncan et al., 2010; Pattamadilok et al., 2015; Siuda-Krzywicka et al., 2016; Bola et al., 2019). The average individual motor threshold, expressed as a percentage of maximum stimulator output, was 39.7% (SD = 8%; range, 28–54%) for the blind group and 36.7% (SD = 4.7%; range, 28–46%) for the sighted group. There was no significant difference in intensity between groups (two-sample T-test t(19.4) = 1.16, p = 0.261). Subjects did not report any side effects during the EVC stimulation. During the VWFA stimulation, minimal head twitches were observed due to the proximity of the neck muscles. Most subjects noticed these effects but did not find them uncomfortable. One blind subject reported that the stimulation was unpleasant; in this case, the TMS intensity was reduced to 100% of the motor threshold. Since the side effects were independent of the TMS timings, these peripheral effects were accounted for by comparisons to the within-site control time window.

The order of TMS sites and sensory modalities was counterbalanced across subjects. Additionally, within each sensory modality/TMS site condition, the order of the TMS time windows was randomized for each subject before the experiment. The randomization followed the rule that in adjacent experimental trials, TMS had to be applied at adjacent time windows (e.g., 60/100 ms → 160/200 ms → 60/100 ms → −40/0 ms, etc.). Such a staircase procedure is widely used in chronometric TMS studies (Sliwinska et al., 2012; Pattamadilok et al., 2015; Bola et al., 2019) to reduce the probability of subjects differentiating the stimulation conditions between time windows—especially between the earliest and the latest TMS onsets. A sketch of such a constrained randomization is shown below.
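For concreteness, here is a hypothetical R sketch of a randomization satisfying the adjacency rule described above. The function and window labels are illustrative; a real implementation would add constraints to balance trial counts per window.

```r
# Hypothetical staircase randomization: each trial's time window is a
# direct neighbor (one step up or down) of the previous trial's window.
set.seed(1)
windows <- c("-40/0", "60/100", "160/200", "260/300")

# Safe single-element sampler (sample() treats a lone scalar as 1:n)
pick <- function(v) v[sample(length(v), 1)]

staircase_sequence <- function(n_trials, n_windows = 4) {
  idx <- integer(n_trials)
  idx[1] <- sample(n_windows, 1)
  for (t in 2:n_trials) {
    neighbors <- c(idx[t - 1] - 1, idx[t - 1] + 1)
    neighbors <- neighbors[neighbors >= 1 & neighbors <= n_windows]
    idx[t] <- pick(neighbors)
  }
  windows[idx]
}

staircase_sequence(12)  # one possible order obeying the adjacency rule
```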

TMS: procedure

First, we asked subjects to provide informed consent and complete a safety screening. Next, we familiarized them with the principles of TMS and the neuronavigation system. Subsequently, we registered subjects to their structural MRI scans containing the marked TMS target sites (EVC and VWFA). Afterward, we measured the resting motor threshold by applying single TMS pulses to the hand area in the right M1. Then, we conducted two short training sessions with paired-pulse TMS for reading and speech processing to familiarize the subjects with the task and stimulation. Finally, we performed the actual TMS experiment, divided into four runs with short breaks between them. Additionally, before each run, we applied TMS to the target site to test for potential side effects of the stimulation and ensure the subjects' comfort. The whole procedure lasted ∼120 min for blind and ∼90 min for sighted subjects.

TMS: data analyses

Before data analyses, all trials were trimmed to exclude atypically fast or slow responses. For each subject, these were defined as button presses faster or slower than 2.5 SD from the mean in each experimental condition (2.6% of trials for the blind and 2.5% of trials for the sighted). All data analyses were performed with R (4.1.3, R Core Team, 2022). A sketch of this trimming step is shown below.
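A minimal tidyverse-style sketch of this 2.5 SD trimming; the data frame `trials` and its columns are simulated stand-ins for the actual trial data.

```r
# Hypothetical 2.5 SD trimming of reaction times, per subject and condition
library(dplyr)

set.seed(1)
trials <- data.frame(
  subject   = rep(paste0("s", 1:3), each = 80),
  condition = rep(c("EVC_read", "VWFA_read"), times = 120),
  rt        = rnorm(240, mean = 900, sd = 150)
)

trimmed <- trials %>%
  group_by(subject, condition) %>%
  filter(abs(rt - mean(rt)) <= 2.5 * sd(rt)) %>%
  ungroup()

nrow(trials) - nrow(trimmed)  # number of excluded trials
```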

Analyses were split into reaction times and accuracy. For each metric, linear mixed-effects models were used to fit the data. Fixed factors predicting accuracy or reaction time were represented by an interaction between the TMS site, lexicality, and TMS time window for each group separately. Random effects included subject and stimulus length [model equation: Accuracy/Reaction Time ∼ Site * Time Window * Lexicality + (1|Subject) + (1|Length)]. Additionally, to directly test TMS site × time window × modality interactions, we computed models including sensory modality as a fixed factor interacting with the rest. Finally, to test for interaction effects between the blind and sighted groups, we computed a linear model including the group factor with the lme function from the nlme R package. To account for differences in reaction times between reading and speech and in reading between groups (tactile/visual), model weights were adjusted with a constant variance function allowing for group × sensory modality variance inequalities. However, due to reaction time differences across reading and speech conditions, further post hoc analyses were performed with estimated marginal means computed within group-and-modality-specific models, to properly estimate standard errors for speech and reading, using the emmeans package. Post hoc pairwise tests were Bonferroni corrected for the number of all performed comparisons. Specifically, we were interested in two main types of comparisons computed within each group, stimulated site, and sensory modality: (1) TMS effects relative to the control time window (3 comparisons) and (2) TMS effects between all "experimental" time windows (3 comparisons, namely, early vs middle, early vs late, and middle vs late). Therefore, with six comparisons per site (2), group (2), and sensory modality (2), all p values were multiplied by 24. Such a strict procedure was chosen to partially offset the more powerful approach of trial-based linear mixed-effects models (see below), which yields more degrees of freedom than traditional group-level statistics. We did not directly compare the TMS effects across stimulated sites and sensory modalities due to confounding factors, such as side effects of the TMS pulses or the overall temporal characteristics of processing in each sensory modality.

Reaction time data were fitted with the lmer command using a Gaussian link function and included trimmed correct responses (1,854 and 1,928 trials for blind reading and speech, and 1,782 and 1,774 for sighted reading and speech, respectively). For accuracy analyses in the TMS experiment, we followed the guidelines of Jaeger (2008), which were also implemented in previous TMS studies that served as an inspiration for this experiment (Bola et al., 2019; Pattamadilok et al., 2019). Specifically, given the binomial nature of responses, accuracy analyses were performed with a generalized linear mixed-effects model with a binomial link function from the lme4 R package (Bates, 2016). The reaction time joint model included 7,338 correct trials. The accuracy joint model included 7,796 trials, comprising correct (7,338), incorrect (412), and missing (46) responses, the latter treated as incorrect. For all models, model fitting was optimized with the "nAGQ = 0" and "control = glmerControl(optimizer = "nloptwrap")" arguments to accommodate the large number of trials (Bates, 2016; Pattamadilok et al., 2019). All models were additionally compared with a null (random intercept-only) model using analyses of deviance performed with a likelihood ratio test to ensure adequate fit. The significance of fixed effects and their interactions was tested with type III analyses of variance, using Satterthwaite's method for degrees-of-freedom estimation for the reaction time models and Wald chi-square tests for the accuracy models. All post hoc pairwise comparisons were computed using estimated marginal means in the emmeans R package. A sketch of these models is shown below.
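The following is a minimal R sketch of the model structure described above (lme4/lmerTest/emmeans); the simulated data frame `dat` and its column names are illustrative stand-ins, not the study's actual variables.

```r
# Hypothetical sketch of the RT and accuracy mixed-effects models
library(lme4)
library(lmerTest)  # Satterthwaite degrees of freedom for the RT model
library(emmeans)

# Simulated stand-in for the trial-level data
set.seed(1)
dat <- expand.grid(subject     = factor(1:13),
                   site        = factor(c("EVC", "VWFA")),
                   time_window = factor(c("ctrl", "early", "middle", "late")),
                   lexicality  = factor(c("word", "pseudoword")),
                   trial       = 1:10)
dat$length  <- factor(sample(4:6, nrow(dat), replace = TRUE))
dat$rt      <- rnorm(nrow(dat), mean = 900, sd = 150)
dat$correct <- rbinom(nrow(dat), 1, 0.9)

# Reaction times (correct trials only): Gaussian linear mixed model
rt_model <- lmer(rt ~ site * time_window * lexicality +
                   (1 | subject) + (1 | length),
                 data = subset(dat, correct == 1))
anova(rt_model, type = "III")  # type III tests, Satterthwaite method

# Accuracy (all trials, misses coded as incorrect): binomial GLMM
acc_model <- glmer(correct ~ site * time_window * lexicality +
                     (1 | subject) + (1 | length),
                   data = dat, family = binomial, nAGQ = 0,
                   control = glmerControl(optimizer = "nloptwrap"))

# Post hoc contrasts of each time window vs the control window, per site;
# ref = 1 points at "ctrl", the first factor level
emm <- emmeans(rt_model, ~ time_window | site)
contrast(emm, method = "trt.vs.ctrl", ref = 1, adjust = "bonferroni")
```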

Finally, since estimated marginal means represent model-based values which can be difficult to interpret (especially for accuracy), we also include observed values for visual inspection and comparison. “Observed reaction time” data represent TMS effects computed for mean reaction times in each experimental condition. “Observed accuracy” figures represent TMS effects computed for accuracy rates, i.e., decrease in correctness (%) in all TMS time windows in relation to the control time window.

Figure 4.

Behavioral results: reading speed for words and pseudowords. Subjects were tested with standardized reading lists and had to read aloud as fast as possible for 1 min. ***p < 0.001, Bonferroni corrected.

Figure 5.

Functional MRI results: responses to sensory control stimuli, pseudowords and words during reading (tactile Braille or visual) or speech processing in the early visual cortex (top panel) and the visual word form area (VWFA, bottom panel) of congenitally blind and sighted subjects. Regions of interest, from which the signal was extracted, are represented in red: V1 data were extracted from the combined primary visual cortex masks in visfatlas (Rosenke et al., 2021); the VWFA data were extracted from a 5 mm radius sphere placed at the original coordinates reported by Cohen et al. (2002). The asterisks above bars represent the significance of one-sample T-tests against 0 and the asterisks between bars represent the significance of post hoc paired T-tests. All p values were corrected for multiple comparisons with Bonferroni’s correction. NS, not significant; *p < 0.05, **p < 0.01, ***p < 0.001.

Figure 6.

TMS effects on reaction times across groups (left: congenitally blind, right: sighted), sensory modalities (top: reading, bottom: speech), TMS sites (the early visual cortex, the visual word form area), and time. Graphs represent TMS effects calculated by contrasts between reaction time estimated marginal means in each time window (TMS pulses delivered 60/100, 160/200, or 260/300 ms after stimulus presentation) and the control time window (pulses delivered −40/0 ms relative to stimulus presentation). Error bars represent the standard error of the contrast estimates. Raw observed mean TMS effects are additionally presented in Figure 12. Symbols above time window labels represent the significance of each window's contrast with the control time window. Ns, not significant; *p < 0.05; **p < 0.01; ***p < 0.001; t p = 0.1, Bonferroni corrected (see Materials and Methods for details).

Figure 7.

TMS effects on accuracies across groups (congenitally blind, sighted), sensory modalities (reading, speech), TMS sites (the early visual cortex, the visual word form area), and time. Graphs represent TMS effects calculated by contrasts between accuracy in each time window and the control time window (pulses delivered −40/0 ms relative to stimulus presentation). All effects are presented on the logit scale used in the linear mixed-effects analyses; they correspond to increases or decreases in accuracy of roughly 5%. Error bars represent the standard error of the contrast estimates. Raw observed TMS effects calculated as changes in accuracy (%) are additionally presented in Figure 14. Values above each time window represent the significance of its contrast with the control time window. Ns, not significant.

Data availability

The code and data used in this study are available as a GitHub repository: https://github.com/JacMatu/fMRI_chronoTMS_Blind_Sighted.

Results

Behavioral results

First, we investigated the reading speed for words and pseudowords in both groups. A 2 (Group: blind, sighted)-by-2 (Lexicality: words, pseudowords) mixed ANOVA indicated a significant main effect of group (F(1,38) = 128.2, p < 0.001), a main effect of lexicality (F(1,38) = 260.4, p < 0.001), and a group × lexicality interaction (F(1,38) = 39, p < 0.001). Bonferroni-corrected post hoc tests showed that, as expected, within 1 min both groups read more words than pseudowords (paired T-tests: blind T(19) = 9.1; pcorr < 0.001; sighted T(19) = 13.3; pcorr < 0.001) and that holistic visual reading in the sighted was faster than sequential Braille reading in the blind regardless of lexicality (two-sample T-tests: pseudowords T(28) = 7.9; pcorr < 0.001; words T(37) = 13; pcorr < 0.001; Fig. 4).

Figure 8.

TMS reaction times: main effects of lexicality across groups and modalities. Estimated marginal mean reaction times for pseudowords and words in blind (left) and sighted (right) groups during reading (top) and speech processing (bottom). For each subplot, data were averaged across TMS sites (the early visual cortex and the visual word form area) and TMS time windows (control, early, middle, and late; see Materials and Methods for details). Data divided into the abovementioned experimental conditions are presented in Figure 10 (estimated marginal means) and Figure 13 (observed reaction times). ***p < 0.001.

Next, performance in the fMRI task showed that, while the groups performed comparably in the linguistic conditions, blind subjects were better than sighted subjects at processing the degraded speech control condition and worse at processing the reading control condition. We analyzed the behavioral performance across all conditions (words, pseudowords, control) in reading and speech processing. We calculated d' statistics in each condition to account for subjects' response strategies. A mixed ANOVA with condition and modality as within-subject factors and group as a between-subject factor revealed a significant main effect of condition (F(2,76) = 15.7; p < 0.001), a group × modality interaction (F(1,38) = 23.7; p < 0.001), a modality × condition interaction (F(2,76) = 3.8; p = 0.041), and a group × modality × condition interaction (F(1,38) = 1.9; p < 0.001). Simple main effect analyses revealed no effect of group (F(1,38) = 1.1; p = 0.295) or modality (F(1,38) = 1.9; p = 0.171). Planned comparisons between conditions and groups in each modality revealed that sighted subjects performed worse in the speech control condition than with words and pseudowords (both T(19) = 4.3; pcorr = 0.005) and worse than blind subjects (T(24) = 3.3; pcorr = 0.019). There were no differences across conditions within each modality for the blind subjects (all pcorr > 0.444). Blind subjects performed worse than sighted subjects in the reading control (T(22) = 3; p = 0.006) and reading pseudowords (T(29) = 2.6; p = 0.016) conditions and marginally worse in the spoken pseudowords condition (T(19) = 1.8; p = 0.083; Fig. 1B).

fMRI results

Sensory deprivation does not cause V1 to prefer linguistic stimuli

First, to characterize the profile of the EVC responses to all types of stimuli, we tested the differences of the BOLD (blood oxygen level-dependent) signal levels across groups and conditions against 0 with Bonferroni-corrected one-sample T-tests (Fig. 5, left). The EVC of the blind subjects responded significantly to all types of stimuli during reading (control T(20) = 3.9, pcorr = 0.032, pseudowords T(20) = 5.3, pcorr = 0.001, words T(20) = 4.1, pcorr = 0.019) and speech processing (control T(20) = 9.7, pcorr < 0.001, pseudowords T(20) = 5.7, pcorr < 0.001, words T(20) = 6.5, pcorr < 0.001). On the other hand, the EVC of the sighted subjects responded significantly to the reading sensory control stimuli (T(20) = 4.6, pcorr = 0.006), but not to pseudowords (T(20) = 2.2, pcorr = 0.99) or words (T(20) = 2.4, pcorr = 0.801). This difference could perhaps be driven by increased visual attention during detailed visual search in the control condition, as opposed to automatized reading in the other conditions. Consistent with the results of studies that used sounds (Laurienti et al., 2002; Vetter et al., 2014), we found negative BOLD responses in the EVC of the blindfolded sighted participants during speech processing in all conditions (control T(20) = −4.2, pcorr = 0.016, pseudowords T(20) = −6.4, pcorr < 0.001, words T(20) = −6, pcorr < 0.001).

Figure 9.

TMS effects on reaction times across lexicalities (words, pseudowords), groups (left: congenitally blind, right: sighted), sensory modalities (top: reading, bottom: speech), TMS sites (the early visual cortex, the visual word form area), and time. Graphs represent TMS effects calculated by contrasts between reaction time estimated marginal means in each time window (TMS pulses delivered 60/100, 160/200, or 260/300 ms after stimuli presentation) related to the control time window (with pulses delivered −40/0 ms in relation to the stimuli presentation). Solid lines represent TMS effects for words and dashed lines represent TMS effects for pseudowords. Raw observed mean TMS effects are additionally presented in Figure 13. Error bars represent the standard error of the contrast estimates. Asterisks above each time window label represent the significance of its contrast to the control time window for words or pseudowords. Asterisks within each time window indicate differences in the magnitude of the TMS effect between words and pseudowords. *p < 0.05; ***p < 0.001; t pcorr = 0.117 (puncorr = 0.004), Bonferroni corrected (see Materials and Methods for details).

Next, to test the selectivity of responses, we computed a mixed ANOVA, which showed significant main effects of group (F(1,38) = 63.4, p < 0.001), sensory modality (F(1,38) = 9.4, p = 0.004), and condition (F(2,76) = 18.6, p < 0.001). Crucially, there was a significant three-way group–modality–condition interaction, highlighting distinct patterns of neuronal responses in the EVC across groups (F(2,76) = 12.9, p < 0.001; see Table 2 for details of post hoc pairwise comparisons). While the abovementioned analyses showed that the blind EVC responded to all types of tactile stimuli during reading, these responses were nonselective, as indicated by nonsignificant differences between sensory control stimuli, words, and pseudowords (Fig. 5, top; Table 2). However, low-level control sounds during speech processing evoked stronger responses than words or pseudowords in the EVC, hinting at a possible preference for low-level stimuli in the auditory modality or a potentially higher attentional demand in this condition (Fig. 5, top; Table 2). A similar pattern was observed in sighted subjects during visual reading, with stronger V1 responses to the low-level reading control than to words or pseudowords and no differences between the latter. Additionally, no differences between conditions were found during speech processing in the sighted (Fig. 5, top; Table 2).

Table 2.

Post hoc pairwise comparisons for BOLD responses in the early visual cortex across groups and conditions in the fMRI task

Finally, while it goes beyond the scope of this manuscript, we also tested the remaining contrasts of this three-way interaction: comparisons between groups (blind, sighted) and sensory modalities (reading, speech) within each condition. Briefly, these results showed that differences in the level of V1 activity between blind and sighted subjects were significant only during speech processing (words, pseudowords, and control) and not during reading. Lastly, congenitally blind subjects activated V1 more for speech than for tactile reading, but only in the control condition, with no differences during word and pseudoword processing. Expectedly, sighted subjects activated V1 more for (visual) reading than for speech processing in all conditions. Details of these tests are provided in Table 3.

Table 3.

Post hoc pairwise comparisons for BOLD responses in the early visual cortex and the visual word form area across groups and conditions in the fMRI localizer task

VWFA prefers lexical material irrespective of sensory deprivation and processing modality

The VWFA of the congenitally blind subjects responded significantly to pseudowords and words during both reading and speech processing (read pseudowords T(19) = 5.2, pcorr = 0.002, read words T(19) = 5.6, pcorr < 0.001; spoken pseudowords T(19) = 7; pcorr < 0.001; spoken words T(19) = 6.7; pcorr < 0.001; Fig. 5, bottom). Additionally, significant responses to the control condition were found in the blind, but only during speech processing (T(19) = 4.3; pcorr = 0.022) and not reading (T(19) = 1.3; pcorr = 0.999). In sighted subjects, both pseudowords (T(19) = 6.1; pcorr < 0.001) and words (T(19) = 5.1; pcorr = 0.004) evoked significant activity, but only during visual reading. We did not observe significant activity in response to speech in sighted subjects in any condition (all ps > 0.517).

Next, we tested the selectivity of responses with a mixed ANOVA, which, similarly to the EVC, showed significant main effects of group (F(1,38) = 7.9, p < 0.001), sensory modality (F(1,38) = 5.9, p = 0.019), and condition (F(2,76) = 37.5, p < 0.001). However, due to higher responses to words and pseudowords than to the sensory control conditions in both groups and both modalities, there was no significant three-way group–modality–condition interaction, hinting at similar patterns of neuronal responses in the VWFA across groups (F(2,76) = 0.7, p = 0.478; see Table 2 for detailed comparisons). Instead, we observed two two-way interactions, group-by-modality (F(1,38) = 38.9; p < 0.001) and group-by-condition (F(2,76) = 4.6; p = 0.013), which were driven by responses to speech being present in the blind group (Fig. 5, bottom). Contrary to the results observed in V1, neuronal responses in the VWFA indicated a preference for lexical material, demonstrated by stronger activation for words and pseudowords in reading and speech in both congenitally blind and sighted subjects (Fig. 5, bottom; Table 2). Although weaker, this pattern was also observed in sighted subjects during speech processing. To directly test the hypotheses about differences in activity across ROIs, modalities, and conditions in both the blind and the sighted, we performed additional interaction analyses within each group. These results showed that, indeed, an ROI-by-modality-by-condition interaction was present both in the blind (F(2,38) = 10.6, p < 0.001) and the sighted (F(2,38) = 5.2, p = 0.01).

Additionally, we also compared the levels of activity in each condition between groups (blind, sighted) and sensory modalities (reading and speech, within each group). These results showed no differences between the blind and sighted during reading and significantly stronger VWFA responses in the blind during speech processing in all conditions (Table 3). Finally, comparisons between sensory modalities within each group showed no differences between conditions during tactile reading and speech processing in the blind and preference toward visual reading over speech for all conditions in the sighted (Table 3).

TMS results

Blind EVC and VWFA support the early stages of reading similarly to sighted

First, we investigated the spatial and temporal patterns of TMS-induced slowdowns during reading and speech processing in the congenitally blind group. We ensured that the overall model fit was improved by the inclusion of all fixed (sensory modality, TMS site, TMS time window, lexicality) and random (subject, stimulus length) effects by comparing it to the null model (i.e., a model with a random subject intercept only; χ2(32) = 3,989.9, p < 0.001). Crucially, the full linear mixed-effects model showed a significant three-way interaction between sensory modality, stimulated site, and time window, implying that TMS affected the blind EVC and the visual word form area differently in reading and speech (F(3,3736) = 2.84, p = 0.036). We also observed main effects of sensory modality (F(1,3736) = 6,619; p < 0.001), site (F(1,3736) = 41.9; p < 0.001), time window (F(3,3736) = 9.3; p < 0.001), and lexicality (F(1,3736) = 262.4; p < 0.001): overall reaction times were slower for reading, slower in all experimental time windows relative to control, slower when TMS was administered to the visual word form area, and slower when subjects were processing pseudowords.
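
This full-versus-null model comparison can be sketched in R with lme4, which the authors cite (Bates, 2016). Everything below, including the data frame, column names, and simulated reaction times, is an illustrative assumption; only the model structure (a full fixed-effects factorial plus random intercepts for subject and stimulus length, tested against an intercept-only null with a likelihood-ratio test) follows the text.

```r
# Hedged sketch of the full-vs-null mixed-model comparison described above.
library(lme4)

set.seed(1)
tms_data <- expand.grid(
  subject    = factor(1:20),
  modality   = c("reading", "speech"),
  site       = c("EVC", "VWFA"),
  window     = factor(c("ctrl", "early", "mid", "late"),
                      levels = c("ctrl", "early", "mid", "late")),
  lexicality = c("word", "pseudoword"),
  trial      = 1:5
)
tms_data$stim_length <- factor(sample(4:8, nrow(tms_data), replace = TRUE))
tms_data$rt <- rnorm(nrow(tms_data), mean = 1000, sd = 150)  # simulated RTs (ms)

# Null model: random subject intercept only
null_model <- lmer(rt ~ 1 + (1 | subject), data = tms_data, REML = FALSE)

# Full model: all fixed effects and interactions, plus both random intercepts
full_model <- lmer(
  rt ~ modality * site * window * lexicality +
    (1 | subject) + (1 | stim_length),
  data = tms_data, REML = FALSE
)

anova(null_model, full_model)  # likelihood-ratio (chi-square) test of fit
```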

Next, to investigate the modality-specific patterns in the EVC and the visual word form area, we performed pairwise comparisons using estimated marginal means from the modality-specific models. TMS should not interfere with stimulus processing in the control time window (−40/0 ms). Therefore, we defined the specificity of the stimulation as the significance of differences between reaction times in each "experimental" time window (namely, 60/100, 160/200, and 260/300 ms) and the control time window, which allowed us to account for all confounding factors related to the stimulation itself. When blind subjects were reading in tactile Braille, TMS administered to the EVC caused slowdowns 60/100 ms (T(1824) = 3.27; pcorr = 0.026), 160/200 ms (T(1824) = 3.22; pcorr = 0.031), and 260/300 ms after stimulus presentation (T(1824) = 3.69; pcorr = 0.005; Fig. 6, top left). This shows that the blind EVC participates in tactile stimulus processing from the earliest moments, although the effect generalized across time, with no temporal specificity. In contrast, in the visual word form area of the blind, the TMS effect during reading was absent in the earliest time window (T(1824) = 0.72; pcorr = 0.999) but significant in both later time windows, with a gradual increase of the slowdown toward the latest time window (T(1824) = 3.88; pcorr = 0.003 and T(1824) = 5.27; pcorr < 0.001 for 160/200 and 260/300 ms, respectively; Fig. 6, top left). This result implies that the visual word form area engages in tactile reading only after the initial sensory evaluation of the stimuli. During speech processing in the blind, TMS did not cause any significant reaction time differences from the baseline control time window (all pcorr values in the EVC and the visual word form area > 0.99; Fig. 6, bottom left).
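
A minimal sketch of these baseline contrasts, building on the illustrative `full_model` above: each experimental window is compared against the control window within each site and modality. The use of the emmeans package is our assumption (the authors cite the related marginaleffects package), as is the Bonferroni adjustment.

```r
# Hedged sketch: contrasts of each experimental TMS window against control.
library(emmeans)

emm <- emmeans(full_model, ~ window | site * modality)

# "trt.vs.ctrl" compares every level against the reference (first) level,
# which is "ctrl" given the factor-level ordering defined above
contrast(emm, method = "trt.vs.ctrl", adjust = "bonferroni")
```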

Figure 10. TMS accuracies: main effects of lexicality across groups and modalities. Estimated marginal mean accuracies for pseudowords and words in blind (left) and sighted (right) groups during reading (top) and speech processing (bottom). For each subplot, data were averaged across TMS sites (the early visual cortex and the visual word form area) and TMS time windows (control, early, middle, and late; see Materials and Methods for details). Data divided into the abovementioned experimental conditions are presented in Figure 14 (estimated marginal means) and Figure 13 (observed error rates). ***p < 0.001.

Next, to further investigate the temporal patterns of the visual cortex's engagement in reading and speech processing, we compared the magnitude of the TMS effects between subsequent experimental time windows. We found no differences between 60/100, 160/200, and 260/300 ms when TMS was administered to the blind EVC, neither during tactile reading nor during speech processing (all pcorr > 0.99). However, we found significant differences between later time windows and the earliest time window in the visual word form area that were specific to tactile reading. Specifically, in comparison with the 60/100 ms time window, TMS induced greater slowdowns when it was delivered 160/200 ms (T(1824) = 3.17; pcorr = 0.036) and 260/300 ms (T(1824) = 5.04; pcorr < 0.001) after stimuli presentation, with no differences between the latter two (T(1824) = 1.37; pcorr > 0.99; Fig. 6, top left).
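
These window-by-window comparisons can be sketched by restricting the marginal means to the three experimental windows and taking all pairwise contrasts, again building on the illustrative `full_model` above, with emmeans as an assumed tool.

```r
# Hedged sketch: pairwise comparisons among the experimental windows only.
library(emmeans)

emm_exp <- emmeans(full_model, ~ window | site * modality,
                   at = list(window = c("early", "mid", "late")))
pairs(emm_exp, adjust = "bonferroni")  # early vs mid, early vs late, mid vs late
```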

We used an identical analytical approach in the sighted controls, including data from visual reading and speech processing. In the sighted, we also observed the main effects of sensory modality (F(1,3511) = 2,103.7; p < 0.001), site (F(1,3511) = 24.9; p < 0.001), time window (F(3,3511) = 5.5; p < 0.001), and lexicality (F(1,3511) = 255.8; p < 0.001), with overall reaction times slower for reading, generally slower in all experimental time windows in relation to control, slower when TMS was administered to the visual word form area, and slower when subjects were processing pseudowords. We observed a two-way interaction between sensory modality and TMS time window, with significant slowdowns in all experimental time windows in relation to the control time window, but only during reading (ps < 0.028). Additionally, we observed a two-way interaction between sensory modality and stimulated site, with slower reaction times during the visual word form area stimulation, again specific to reading (p < 0.001). The three-way interaction between sensory modality, stimulated site, and TMS time window was not significant in the sighted subjects (F(3,3511) = 0.61; p = 0.608).

While the temporal patterns of TMS effects in the sighted were weaker than in blind subjects, they remained remarkably similar across groups. First, we tested for TMS specificity using the control time window as a baseline. This analysis showed that when sighted subjects were reading visually, TMS pulses administered to their EVC slowed down their reaction times in the earliest (60/100 ms; T(1753) = 3.24; pcorr = 0.029) and the latest time window (260/300 ms; T(1753) = 3.13; pcorr = 0.042; Fig. 6, top right), supporting a potential interaction between feedforward and feedback processing in the EVC of the sighted. While these slowdowns were much smaller (∼40 ms) than those in the blind (∼120 ms), in both groups they constituted ∼6–7% increases from the baseline (control) reaction times. We did not find a significant TMS effect in the middle time window, 160/200 ms after stimulus presentation (T(1753) = 2.7; pcorr = 0.168), which may be partially driven by our conservative correction for multiple comparisons (puncorr = 0.007). Similarly to the blind, TMS applied to the sighted visual word form area slowed down reading reaction times in the middle (T(1753) = 4.03; pcorr = 0.001) and late (T(1753) = 4.58; pcorr < 0.001; Fig. 6, top right) time windows relative to the control baseline, supporting the view of higher-order processing occurring in the ventral occipitotemporal cortex. We did not observe any significant changes in reaction times induced by TMS during speech perception in sighted subjects (all pscorr > 0.99, all psuncorr > 0.155).

Next, similarly to the blind, we compared the magnitudes of the TMS effects between subsequent experimental time windows. In the sighted, we did not observe significant differences between any time windows in the EVC during reading or speech (all pscorr > 0.99). In the visual word form area, the magnitude of the slowdowns increased progressively across time in a reading-specific manner: in comparison with the early time window, increases in reaction times were greater in the middle (T(1753) = 2.33; puncorr = 0.019, pcorr = 0.466) and in the late time window (T(1753) = 2.86; puncorr = 0.004, pcorr = 0.010; Fig. 6, top right), although the former difference did not survive correction for multiple comparisons. Therefore, while the slowdowns induced by TMS during visual reading in the sighted were smaller than those during tactile reading in the blind, the relative temporal patterns remained remarkably similar between the groups.

To explicitly test for differences in spatial and temporal TMS patterns between groups, we tested for a four-way interaction between group, TMS site, sensory modality, and time using a joint linear mixed-effects model with an additional term allowing for group-by-modality variance inequality, which accounted for differences in reaction times between groups in reading (tactile Braille vs visual alphabets) but not in speech processing. This interaction was not significant (χ2(3) = 2.67, p = 0.45).
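
One way to express a joint model that both tests the four-way interaction and relaxes the equal-variance assumption across group-by-modality cells is a heteroscedastic mixed model, sketched below with nlme's varIdent variance structure. This is only our reading of the "group-by-modality variance inequality" term; the data frame and all names are illustrative, not the authors' code.

```r
# Hedged sketch: joint heteroscedastic model and a likelihood-ratio test of
# the group x modality x site x window interaction. Illustrative data only.
library(nlme)

set.seed(2)
joint_data <- expand.grid(
  subject  = factor(1:40),
  modality = c("reading", "speech"),
  site     = c("EVC", "VWFA"),
  window   = factor(c("ctrl", "early", "mid", "late"),
                    levels = c("ctrl", "early", "mid", "late")),
  trial    = 1:5
)
joint_data$group <- ifelse(as.integer(joint_data$subject) <= 20,
                           "blind", "sighted")
joint_data$rt <- rnorm(nrow(joint_data), mean = 1000, sd = 150)

# Separate residual variances for each group-by-modality cell
full <- lme(rt ~ group * modality * site * window,
            random = ~ 1 | subject,
            weights = varIdent(form = ~ 1 | group * modality),
            data = joint_data, method = "ML")

reduced <- update(full, . ~ . - group:modality:site:window)
anova(reduced, full)  # likelihood-ratio test of the four-way interaction (3 df)
```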

Next, we investigated TMS-induced effects on accuracy in the lexical decision tasks. A full model showed a trend toward the tested four-way interaction between group, sensory modality, TMS site, and time window (χ2(3) = 7.12, p = 0.068). However, further investigation of the temporal TMS patterns through modality- and group-specific models showed no significant time-by-site interactions (blind reading χ2(3) = 1.57, p = 0.66; blind speech χ2(3) = 4.75, p = 0.19; sighted reading χ2(3) = 4.58, p = 0.21; sighted speech χ2(3) = 3.55, p = 0.31). Exploratory analyses of the TMS effects across groups, time, sites, and sensory modalities (i.e., changes in accuracy induced by TMS stimulation in comparison with the control time window in each experimental condition) showed no effects (Fig. 7). Given that overall accuracy reached 94%, the task was most likely too easy for TMS to induce errors.
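
Because accuracy is binary, such models are naturally fitted on the logit scale with a binomial mixed model, in the spirit of the logit mixed models of Jaeger (2008), which the paper cites. The sketch below reuses the illustrative `joint_data` from the previous sketch; the 94% simulated accuracy mirrors the level reported above.

```r
# Hedged sketch: accuracy analyzed with a binomial (logit) mixed model.
library(lme4)

joint_data$correct <- rbinom(nrow(joint_data), 1, 0.94)  # ~94% accuracy

acc_full <- glmer(
  correct ~ group * modality * site * window + (1 | subject),
  data = joint_data, family = binomial
)

acc_reduced <- update(acc_full, . ~ . - group:modality:site:window)
anova(acc_reduced, acc_full)  # chi-square test of the four-way interaction
```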

Figure 11. TMS effects on accuracies across lexicalities (words, pseudowords), groups (congenitally blind, sighted), sensory modalities (reading, speech), TMS sites (the early visual cortex, the visual word form area), and time. Graphs represent TMS effects calculated as contrasts between accuracy in each time window and the control time window (with pulses delivered −40/0 ms in relation to the stimuli presentation). Solid lines represent TMS effects for words and dashed lines represent TMS effects for pseudowords. All effects are presented on the logit scale used in the linear mixed-effects analyses and correspond to changes of approximately ±5% in accuracy. Raw observed TMS effects calculated as changes in accuracy (%) are additionally presented in Figure 15. Error bars represent the standard error of the contrast estimates. Values above each time window represent the significance of its contrast against the control time window.

Lexicality modulates TMS effects over time, but only in the blind group

Is the temporal hierarchy of the EVC and the visual word form area affected by stimulus lexicality? Words and pseudowords are characterized by similar low-level properties and both follow orthographic rules, while differing in semantic meaning, which is present only in real words. Our initial fMRI results hinted at no differences in the level of EVC and visual word form area activity between the two types of stimuli (Fig. 5). Nevertheless, we took advantage of the fact that our subjects performed a lexical decision task during the TMS study to further inspect potential lexicality modulation of the visual cortex's engagement in blind and sighted subjects by dividing our stimuli into words and pseudowords. Note that these estimated marginal means are computed from only ten trials per cell and are thus more exploratory in nature, especially given the absence of a direct group × TMS site × time window × lexicality four-way interaction in the joint linear model (χ2(3) = 0.43, p = 0.94).

Since pseudowords are more demanding to process than words, we expectedly found main effects of lexicality (with slower reaction times to pseudowords than to words) in reading and speech processing both in the blind (reading F(1,1824) = 273.4, p < 0.001; speech F(1,1898) = 99.2, p < 0.001) and the sighted group (reading F(1,1753) = 173.7, p < 0.001; speech F(1,1745) = 151.2, p < 0.001; Fig. 8). Additionally, interaction analyses showed a significant modality-by-lexicality (F(1,3736) = 81.2, p < 0.0001) and modality-by-time-by-lexicality interaction in the blind group (F(3,3736) = 3, p = 0.029), indicating that stimulus lexicality affected the temporal profiles of responses differently for reading and speech. However, these patterns were similar in the EVC and the VWFA (four-way interaction with the TMS site included, F(3,3736) = 0.04, p = 0.993). No significant lexicality interactions were found in the sighted subjects.

Figure 12. Control visualization of observed TMS effects on reaction times across groups (left: congenitally blind, right: sighted), sensory modalities (top: reading, bottom: speech), TMS sites (the early visual cortex, the visual word form area), and time: observed data. Graphs represent TMS effects calculated as contrasts between the mean reaction time in each time window (TMS pulses delivered 60/100, 160/200, or 260/300 ms after stimuli presentation) and the control time window (with pulses delivered −40/0 ms in relation to the stimuli presentation). Error bars represent the standard error of the contrast estimates.

Next, we explored the detailed time courses of the TMS effects on reaction times for words and pseudowords separately. In the blind group, TMS administered to the EVC resulted in comparable effects for words and pseudowords in the two initial time windows. However, in the latest time window (in relation to the control time window), the induced slowdowns increased selectively for words (T(1824) = 3.51; pcorr = 0.022) and not for pseudowords (T(1824) = 1.71; pcorr = 0.999), suggesting that stimulus meaning may have an impact at the later stages of neuronal computations in the blind EVC (Fig. 9, top left). Conversely, lexicality did not differentially impact reaction times during visual word form area stimulation in the blind, with comparable, progressively increasing slowdowns toward the latest time window observed both for words (T(1824) = 4.22; pcorr = 0.001) and pseudowords (T(1824) = 3.26; pcorr = 0.055, puncorr = 0.001). To directly test the linear increase of the slowdowns, we fitted a linear contrast over time windows separately for words and pseudowords during the stimulation of each site. In the VWFA, we found a significant fit both for words (T(1824) = 4.9, p < 0.0001) and pseudowords (T(1824) = 3.6, p < 0.001). This was not the case for the EVC, in which a linear fit was found only for words (T(1824) = 3.3, p = 0.004) and not for pseudowords (T(1824) = 1.7, p = 0.378).
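
A linear trend of this kind can be tested with polynomial contrasts over the ordered time windows, as sketched below on the illustrative `full_model`; whether the authors included the control window in the trend, and their exact contrast weights, are not specified in the text, so this is an assumption.

```r
# Hedged sketch: polynomial contrasts over time windows, per site and lexicality.
library(emmeans)

emm_lin <- emmeans(full_model, ~ window | site * lexicality)
contrast(emm_lin, method = "poly")  # the "linear" row tests a linear trend
```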

Figure 13. Control visualization of observed reaction times (lexicality effects): TMS effects on reaction times across groups (left: congenitally blind, right: sighted), sensory modalities (top: reading, bottom: speech), TMS sites (the early visual cortex, the visual word form area), and time. Graphs represent TMS effects calculated as contrasts between the observed mean reaction time in each time window (TMS pulses delivered 60/100, 160/200, or 260/300 ms after stimuli presentation) and the control time window (with pulses delivered −40/0 ms in relation to the stimuli presentation). Solid lines represent TMS effects for words and dashed lines represent TMS effects for pseudowords. Error bars represent the standard error of the contrast estimates.

In the sighted group, we did not observe any significant lexicality effects during reading with TMS over the EVC. However, TMS over the visual word form area resulted in slowdowns in the later time windows when reading words (middle time window T(1753) = 4.24; pcorr = 0.001; late time window T(1753) = 4.21; pcorr = 0.001), whereas the effects were much weaker for pseudowords (middle time window T(1753) = 1.51; pcorr = 0.999, puncorr = 0.131; late time window T(1753) = 2.29; pcorr = 0.999, puncorr = 0.022; Fig. 9, top right).

Finally, during speech processing, we observed hints of weaker lexicality effects limited to the blind group in the early time window, represented by word-selective slowdowns both in the EVC (T(1898) = 2.14; puncorr = 0.026) and the visual word form area (T(1898) = 3.02; puncorr = 0.003, pcorr = 0.062; Fig. 9, bottom). Additionally, we tested differences in the magnitudes of the TMS effects by investigating marginal effects. Specifically, we tested whether the contrasts between reaction times in each time window and the control time window differed between words and pseudowords (a contrast of contrasts). These analyses showed that, in the blind group, TMS induced greater slowdowns during the speech processing of words than of pseudowords in the early time window in the EVC (contrast estimate = 106.6; SE = 34.5, z = 3.09; pcorr = 0.048) and the VWFA (contrast estimate = 106.6; SE = 34.5, z = 3.09; puncorr = 0.005, pcorr = 0.11). No effects for speech processing were observed in the sighted group, even at more lenient significance levels.
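
Such a "contrast of contrasts" is an interaction contrast: the window-versus-control effect is itself compared between lexicality levels. A minimal emmeans sketch on the illustrative `full_model` follows; the factor names and the choice of emmeans remain our assumptions.

```r
# Hedged sketch: interaction ("contrast of contrasts") between the
# window-vs-control TMS effect and lexicality, within each site.
library(emmeans)

emm_cc <- emmeans(full_model, ~ window * lexicality | site)

# First element applies to `window` (each level vs "ctrl"), second to
# `lexicality` (word vs pseudoword)
contrast(emm_cc, interaction = c("trt.vs.ctrl", "pairwise"))
```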

As in the main analyses, TMS did not impact blind subjects' accuracy, with no significant lexicality main effects (all ps > 0.344; Fig. 10) or interactions (all ps > 0.285) in reading or speech processing. In the sighted group, we found a significant main effect of lexicality during reading (χ2(1) = 5, p = 0.025), with lower accuracy for pseudowords. No lexicality interactions were significant in the sighted group (Fig. 11). Finally, we inspected the raw observed patterns of TMS effects on reaction times (Fig. 12), reaction times across lexicalities (Fig. 13), accuracies (Fig. 14), and accuracies across lexicalities (Fig. 15), which showed results comparable to the linear mixed models used for the statistical analyses.

Figure 14. Control visualization of observed TMS effects on accuracies across groups (congenitally blind, sighted), sensory modalities (reading, speech), TMS sites (the early visual cortex, the visual word form area), and time: observed data. Graphs represent percentage-based error-rate TMS effects calculated as contrasts between accuracy in each time window and the control time window (with pulses delivered −40/0 ms in relation to the stimuli presentation). Error bars represent the standard error of the contrast estimates.

Figure 15. Control visualization of observed accuracy (lexicality effects): TMS effects on accuracy across groups (left: congenitally blind, right: sighted), sensory modalities (top: reading, bottom: speech), TMS sites (the early visual cortex, the visual word form area), and time. Graphs represent percentage-based error rates calculated as contrasts between observed mean accuracy in each time window (TMS pulses delivered 60/100, 160/200, or 260/300 ms after stimuli presentation) and the control time window (with pulses delivered −40/0 ms in relation to the stimuli presentation). Solid lines represent TMS effects for words and dashed lines represent TMS effects for pseudowords. Error bars represent the standard error of the contrast estimates.

Discussion

We combined fMRI and chronometric TMS to characterize brain responses and the computational hierarchy between the EVC and the VWFA during reading by touch (Braille) or vision and during speech processing in congenitally blind and sighted subjects. Our results support the view that the spatiotemporal hierarchy for linguistic processing in the occipital cortex of sighted and congenitally blind individuals is largely similar.

Using fMRI, we observed that the EVC responded to all conditions (words, pseudowords, and sensory control) regardless of their linguistic status during reading in blind and sighted, suggesting that the EVC supports low-level sensory extraction in both groups. Indeed, blind V1 activates for simple tasks like texture exploration (Stilla et al., 2008), vibrotactile stimulation (Collignon et al., 2011; Müller et al., 2019), or sounds (Norman and Thaler, 2019), without any linguistic context. Consistently, in our data the blind EVC responded to all sounds. Greater responses to speech in blind V1 than in sighted V1 have been reported previously (Burton, 2003; Bedny et al., 2012). Here, V1 activity was significantly higher for the meaningless control condition than for words and pseudowords, similarly to visual processing in the sighted (Fig. 5). This suggests that higher-order linguistic operations are not required to engage the blind EVC, as it may retain its tuning toward low-level processing in atypical modalities (Bednaya et al., 2022). Our TMS results showed that the EVC is crucial for reading both in blind and sighted in the earliest (60/100 ms) time window. This effect extended to spoken word processing only in the blind group. Similar effects occurring 50 ms poststimulation were previously found in the blind population for sound (Collignon et al., 2007, 2009) and touch (Müller et al., 2019). Where do these early cross-modal signals originate? Studies suggest they might be bottom-up inputs from subcortical regions, representing a sensory component of stimulus evaluation (Bavelier and Neville, 2002). While the optic radiations degenerate in the blind (Paré et al., 2023), these pathways could be recruited by other modalities, preserving roles partially similar to those of the sighted EVC. In this view, low-level processing could rely on projections from the lateral geniculate nucleus to layer 4 of V1 (arriving roughly 35 ms poststimulation; Müller et al., 2019), together with corticocortical inputs from other sensory regions.

Interestingly, reading effects in the EVC also extended to later time windows (160/200 and 260/300 ms). What is the nature of these computations? First, in the context of reading, they could be linked to serial updates of the processed input. Especially in Braille reading, this extended effect could be driven by progressing from letter to letter. Interestingly, similar extensions into later time windows were observed in the sighted, in whom visual reading is rather holistic. While this might seem at odds with a simplified cortical hierarchy, effects beyond early processing have been reported in the sighted EVC before. Specifically, intracortical recordings during reading showed that, following the initial increase in activity, the sighted EVC activates again ∼500 ms after stimulus presentation (Woolnough et al., 2021, 2022). This is also consistent with the mixed results reported from visual masking studies, with critical time windows ranging from −50 to 200 ms (de Graaf et al., 2014), suggesting that TMS applied to V1 most likely affects multiple recurrent visual processes (Lee et al., 1998; Chambers et al., 2013). Therefore, given the high degree of corticocortical connectivity and the mixture of feedforward and feedback signals, V1's role in more complex processing can be observed even in sighted subjects.

Beyond the EVC, our results showed that the higher-order word-selective visual cortex (presumably the VWFA) retains its preference for linguistic material, highlighted by increased responses to words and pseudowords over control stimuli in both groups and further causally supported by our TMS results. Consistent with other chronometric TMS studies on reading (Duncan et al., 2010; Pattamadilok et al., 2015), our results highlight vOTC's engagement in reading only after 100/150 ms. Since the VWFA was critical only at this later stage, its role could be linked to retained orthographic analysis in the blind, which supports the hypothesis that the visual cortex's organization is retained. While other fMRI studies showed similarities between visual and tactile reading in blind (Reich et al., 2011) and trained sighted subjects (Siuda-Krzywicka et al., 2016; Bola et al., 2019), our results show that the dynamics between low-level and higher-order visual cortices might also be retained in the blind. Furthermore, both groups responded more to lexical stimuli than to control conditions during reading and speech, highlighting the multimodal nature of linguistic processing in the vOTC, even in sighted subjects. However, the exact nature of these responses is still debated (Pattamadilok et al., 2019). In conjunction with the TMS effects, these results suggest that the order of operations could propagate from the EVC through the vOTC to the language network along a posterior-to-anterior gradient in sighted and blind people (Lerma-Usabiaga et al., 2018).

Our fMRI data show that linguistic preferences occurred only in the vOTC and not in the EVC. Furthermore, our TMS results show that the computations in the EVC can precede those occurring in the vOTC in a similar fashion in sighted and blind subjects, suggesting that basic computational gradients might be largely independent of a lifelong lack of visual experience. Additionally, while evidence for higher-order linguistic processing in the blind EVC was reported previously (Amedi et al., 2003; Bedny et al., 2011; Van Ackeren et al., 2018), traces of nonvisual processing in the EVC were also found in the sighted, including scene representations (Vetter et al., 2014), speech processing (Seydell-Greenwald et al., 2023), and object representations (Borghesani et al., 2016). Therefore, the EVC might be involved in multiple processes driven by bottom-up and top-down signals, supporting low-level processing through feedforward inputs and more complex tasks through feedback signals from higher-order visual areas (Kok et al., 2012; de Graaf et al., 2014; de Lange et al., 2018). These might be additionally modulated by attentional demands (Stevens et al., 2007; Duymuş et al., 2024). Here, higher attentional demands could partially modulate EVC BOLD responses to the control conditions (which included a 1-back task on nonverbalizable material). Overall, our results support the idea that V1 might serve as a "cognitive map" beyond low-level representations.

In contrast to reading, apart from weaker word-specific slowdowns in the earliest time window in the blind group, we did not observe significant TMS effects during speech processing. This may seem puzzling given the abovementioned evidence for the visual cortex's engagement in speech processing, but there are a few possible explanations. First, while we aimed for a balanced design with TMS time windows identical across groups and conditions, auditory, visual, and tactile processing unfold on inherently different timescales. Therefore, it is possible that TMS with timings tailored to speech would reveal a different pattern of results. Second, while speech-related activity in the visual cortex was reported both in blind and sighted, this activity might be, to some degree, epiphenomenal and not causally involved in auditory processing, since it could represent a by-product of connections between early sensory areas or of top-down signals, in line with a theory treating V1 as a "multiscale cognitive blackboard" (Roelfsema and de Lange, 2016). Finally, since TMS effects in the blind were observed selectively for meaningful word processing (Fig. 9), the blind EVC's response to sounds might be modulated by semantic meaning (Bedny et al., 2011; Van Ackeren et al., 2018; Xu et al., 2023). Thus, V1, which receives predominantly feedforward but also feedback signals, could prefer low-level processing of the spatially arranged Braille texture while supporting more complex, feedback-driven (e.g., semantic) processing. A hint of such a mechanism comes from the stronger slowdowns observed for words, but not pseudowords, in the latest time window during blind EVC stimulation (Fig. 9), an effect not present in the sighted. These results raise the intriguing possibility that, for speech, the EVC enhances a tuning to higher-level semantic processing that is already observable in the sighted (Seydell-Greenwald et al., 2023) but causally supports perception and behavior only in the blind. This response in the blind EVC may build on existing connections observable in the sighted, but those connections might still undergo functional reorganization through cross-modal plasticity (Dormal and Collignon, 2011). However, given the exploratory nature of this analysis, these results should be interpreted with caution.

We acknowledge our study's limitations. Additional TMS time windows tailored specifically to each modality would allow us to trace later stages of linguistic processing (e.g., semantics) in reading and speech and to compare groups and modalities more directly. While our main aim was to trace the causal involvement of the visual cortex in linguistic processing with chronometric TMS, a refined design with well-defined stimulus categories would enable inferences about fine-tuned representations through multivoxel pattern analyses.

In conclusion, our findings suggest that the initial stages of the reading hierarchy are similar between blind and sighted, with V1 being critical to Braille and visual reading before the vOTC, supporting partial retention of V1's original functions. While such a hierarchy was not observed for speech, the blind EVC was crucial for meaningful word processing, suggesting possible feedback signals from the language network driving more complex computations. Therefore, while blindness may extend the roles performed by the deprived cortices, their computational hierarchy might not depend on visual experience.

Footnotes

  • This work was supported by the National Science Centre Poland Grants 2017/27/N/HS6/02669 and 2018/30/E/HS6/00206. J.M. was additionally supported by the FSR Incoming Postdoc fellowship from Université Catholique de Louvain. We thank Dawid Droździel and Marta Rodziewicz for technical help and the blind community for their enthusiastic participation in the experimental procedures.

  • ↵*O.C. and A.M. share senior authorship.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Jacek Matuszewski at jacek.matuszewski@uclouvain.be, Olivier Collignon at olivier.collignon@uclouvain.be, or Artur Marchewka at a.marchewka@nencki.edu.pl.

SfN exclusive license.

References

  1. Amedi A, Raz N, Pianka P, Malach R, Zohary E (2003) Early 'visual' cortex activation correlates with superior verbal memory performance in the blind. Nat Neurosci 6:758–766. https://doi.org/10.1038/nn1072
  2. Arel-Bundock V (2023) marginaleffects: predictions, comparisons, slopes, marginal means, and hypothesis tests. R package version 0.9.0. https://CRAN.R-project.org/package=marginaleffects
  3. Banaszkiewicz A, Matuszewski J, Bola Ł, Szczepanik M, Kossowski B, Rutkowski P, Szwed M, Emmorey K, Jednoróg K, Marchewka A (2021) Multimodal imaging of brain reorganization in hearing late learners of sign language. Hum Brain Mapp 42:384–397. https://doi.org/10.1002/hbm.25229
  4. Bates D (2016) lme4: linear mixed-effects models using Eigen and S4. R package version 1.1.
  5. Battal C, et al. (2022) Structural and functional network-level reorganization in the coding of auditory motion directions and sound source locations in the absence of vision. J Neurosci 42:4652–4668. https://doi.org/10.1523/JNEUROSCI.1554-21.2022
  6. Bavelier D, Neville HJ (2002) Cross-modal plasticity: where and how? Nat Rev Neurosci 3:443–452. https://doi.org/10.1038/nrn848
  7. Bednaya E, Mirkovic B, Berto M, Ricciardi E, Martinelli A, Federici A, Debener S, Bottari D (2022) Early visual cortex tracks speech envelope in the absence of visual input. bioRxiv. https://doi.org/10.1101/2022.06.28.497713
  8. Bedny M, Pascual-Leone A, Dodell-Feder D, Fedorenko E, Saxe R (2011) Language processing in the occipital cortex of congenitally blind adults. Proc Natl Acad Sci U S A 108:4429–4434. https://doi.org/10.1073/pnas.1014818108
  9. Bedny M, Pascual-Leone A, Dravida S, Saxe R (2012) A sensitive period for language in the visual cortex: distinct patterns of plasticity in congenitally versus late blind adults. Brain Lang 122:162–170. https://doi.org/10.1016/j.bandl.2011.10.005
  10. Bola Ł, Matuszewski J, Szczepanik M, Droździel D, Sliwinska MW, Paplińska M, Jednoróg K, Szwed M, Marchewka A (2019) Functional hierarchy for tactile processing in the visual cortex of sighted adults. Neuroimage 202:116084. https://doi.org/10.1016/j.neuroimage.2019.116084
  11. Borghesani V, Pedregosa F, Buiatti M, Amadon A, Eger E, Piazza M (2016) Word meaning in the ventral visual path: a perceptual to conceptual gradient of semantic coding. Neuroimage 143:128–140. https://doi.org/10.1016/j.neuroimage.2016.08.068
  12. Büchel C, Price C, Friston K (1998) A multimodal language region in the ventral visual pathway. Nature 394:274–277. https://doi.org/10.1038/28389
  13. Burton H (2003) Visual cortex activity in early and late blind people. J Neurosci 23:4005–4011. https://doi.org/10.1523/jneurosci.23-10-04005.2003
  14. Burton H, Diamond JB, McDermott KB (2003) Dissociating cortical regions activated by semantic and phonological tasks: a fMRI study in blind and sighted people. J Neurophysiol 90:1965–1982. https://doi.org/10.1152/jn.00279.2003
  15. Cecchetti L, Kupers R, Ptito M, Pietrini P, Ricciardi E (2016) Are supramodality and cross-modal plasticity the yin and yang of brain development? From blindness to rehabilitation. Front Syst Neurosci 10:1–8. https://doi.org/10.3389/fnsys.2016.00089
  16. Chambers CD, Allen CPG, Maizey L, Williams MA (2013) Is delayed foveal feedback critical for extra-foveal perception? Cortex 49:327–335. https://doi.org/10.1016/j.cortex.2012.03.007
  17. Cohen LG, et al. (1997) Functional relevance of cross-modal plasticity in blind humans. Nature 389:180–183. https://doi.org/10.1038/38278
  18. Cohen L, Lehéricy S, Chochon F, Lemer C, Rivaud S, Dehaene S (2002) Language-specific tuning of visual cortex? Functional properties of the visual word form area. Brain 125:1054–1069. https://doi.org/10.1093/brain/awf094
  19. Collignon O, Davare M, Olivier E, De Volder AG (2009) Reorganisation of the right occipito-parietal stream for auditory spatial processing in early blind humans. A transcranial magnetic stimulation study. Brain Topogr 21:232–240. https://doi.org/10.1007/s10548-009-0075-8
  20. Collignon O, Lassonde M, Lepore F, Bastien D, Veraart C (2007) Functional cerebral reorganization for auditory spatial processing and auditory substitution of vision in early blind subjects. Cereb Cortex 17:457–465. https://doi.org/10.1093/cercor/bhj162
  21. Collignon O, Vandewalle G, Voss P, Albouy G, Charbonneau G, Lassonde M, Lepore F (2011) Functional specialization for auditory–spatial processing in the occipital cortex of congenitally blind humans. Proc Natl Acad Sci U S A 108:4435–4440. https://doi.org/10.1073/pnas.1013928108
  22. Debowska W, Wolak T, Soluch P, Orzechowski M, Kossut M (2013) Design and evaluation of an innovative MRI-compatible Braille stimulator with high spatial and temporal resolution. J Neurosci Methods 213:32–38. https://doi.org/10.1016/j.jneumeth.2012.12.002
  23. de Graaf TA, Koivisto M, Jacobs C, Sack AT (2014) The chronometry of visual perception: review of occipital TMS masking studies. Neurosci Biobehav Rev 45:295–304. https://doi.org/10.1016/j.neubiorev.2014.06.017
  24. Dehaene S (2009) Reading in the brain: the science and evolution of a human invention. New York: Viking.
  25. Dehaene S, Cohen L (2011) The unique role of the visual word form area in reading. Trends Cogn Sci 15:254–262. https://doi.org/10.1016/j.tics.2011.04.003
  26. de Lange FP, Heilbron M, Kok P (2018) How do expectations shape perception? Trends Cogn Sci 22:764–779. https://doi.org/10.1016/j.tics.2018.06.002
  27. Dormal G, Collignon O (2011) Functional selectivity in sensory-deprived cortices. J Neurophysiol 105:2627–2630. https://doi.org/10.1152/jn.00109.2011
  28. Dormal G, Rezk M, Yakobov E, Lepore F, Collignon O (2016) Auditory motion in the sighted and blind: early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions. Neuroimage 134:630–644. https://doi.org/10.1016/j.neuroimage.2016.04.027
  29. Duncan KJ, Pattamadilok C, Devlin JT (2010) Investigating occipito-temporal contributions to reading with TMS. J Cogn Neurosci 22:739–750. https://doi.org/10.1162/jocn.2009.21207
  30. Duymuş H, et al. (2024) The visual cortex in the blind but not the auditory cortex in the deaf becomes multiple-demand regions. Brain 147:3624–3637. https://doi.org/10.1093/brain/awae187
  31. Dzięgiel-Fivet G, Plewko J, Szczerbiński M, Marchewka A, Szwed M, Jednoróg K (2021) Neural network for Braille reading and the speech-reading convergence in the blind: similarities and differences to visual reading. Neuroimage 231:117851. https://doi.org/10.1016/j.neuroimage.2021.117851
  32. Finc K, Bonna K, He X, Lydon-Staley DM, Kühn S, Duch W, Bassett DS (2020) Dynamic reconfiguration of functional brain networks during working memory training. Nat Commun 11:2435. https://doi.org/10.1038/s41467-020-15631-z
  33. Frasnelli J, Collignon O, Voss P, Lepore F (2011) Crossmodal plasticity in sensory loss. In: Progress in brain research, Vol 191 (Green AM, Chapman CE, Kalaska JF, Lepore F, eds), pp 233–249. Amsterdam: Elsevier.
  34. Green DM, Swets JA (1966) Signal detection theory and psychophysics. New York: John Wiley & Sons.
  35. Heeger DJ (2017) Theory of cortical function. Proc Natl Acad Sci U S A 114:1773–1782. https://doi.org/10.1073/pnas.1619788114
  36. Jaeger TF (2008) Categorical data analysis: away from ANOVAs (transformation or not) and toward logit mixed models. J Mem Lang 59:434–446. https://doi.org/10.1016/j.jml.2007.11.007
  37. Kanjlia S, Lane C, Feigenson L, Bedny M (2016) Absence of visual experience modifies the neural basis of numerical thinking. Proc Natl Acad Sci U S A 113:11172–11177. https://doi.org/10.1073/pnas.1524982113
  38. Kassambara A (2020) ggpubr: 'ggplot2' based publication ready plots. R package version 0.4.0.
  39. Kassambara A (2021) rstatix: pipe-friendly framework for basic statistical tests. R package version 0.7.0.
  40. Kim JS, Kanjlia S, Merabet LB, Bedny M (2017) Development of the visual word form area requires visual experience: evidence from blind Braille readers. J Neurosci 37:11495–11504. https://doi.org/10.1523/JNEUROSCI.0997-17.2017
  41. Kok P, Jehee JFM, de Lange FP (2012) Less is more: expectation sharpens representations in the primary visual cortex. Neuron 75:265–270. https://doi.org/10.1016/j.neuron.2012.04.034
  42. Kuper C, Matuszewski J, Banaszkiewicz A, Bola L, Szczepanik M, Draps M, Kordos P, Szwed M, Jednoróg K, Marchewka A (2021) Functional reorganization of the reading network in the course of foreign language acquisition. Neuroimage 226:117544. https://doi.org/10.1016/j.neuroimage.2020.117544
  43. Laurienti PJ, Burdette JH, Wallace MT, Yen YF, Field AS, Stein BE (2002) Deactivation of sensory-specific cortex by cross-modal stimuli. J Cogn Neurosci 14:420–429. https://doi.org/10.1162/089892902317361930
  44. Lawrence MA (2016) ez: easy analysis and visualization of factorial experiments. R package version 4.4-0.
  45. Lee TS, Mumford D, Romero R, Lamme VA (1998) The role of the primary visual cortex in higher level vision. Vision Res 38:2429–2454. https://doi.org/10.1016/s0042-6989(97)00464-1
  46. Lerma-Usabiaga G, Carreiras M, Paz-Alonso PM (2018) Converging evidence for functional and structural segregation within the left ventral occipitotemporal cortex in reading. Proc Natl Acad Sci U S A 115:E9981–E9990. https://doi.org/10.1073/pnas.1803003115
  47. Makin TR, Krakauer JW (2023) Against cortical reorganisation. Elife 12:e84716. https://doi.org/10.7554/eLife.84716
  48. Mandera P, Keuleers E, Wodniecka Z, Brysbaert M (2014) Subtlex-pl: subtitle-based word frequency estimates for Polish. Behav Res Methods 47:471–483. https://doi.org/10.3758/s13428-014-0489-4
  49. Mattioni S, Rezk M, Battal C, Bottini R, Mendoza KEC, Oosterhof NN, Collignon O (2020) Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind. Elife 9:1–33. https://doi.org/10.7554/eLife.50732
  50. Matuszewski J, Kossowski B, Bola L, Banaszkiewicz A, Paplinska M, Gyger L, Kherif F, Szwed M, Frackowiak RS, Jednoróg K (2021) Brain plasticity dynamics during tactile Braille learning in sighted subjects: multi-contrast MRI approach. Neuroimage 227:117613. https://doi.org/10.1016/j.neuroimage.2020.117613
  51. Merabet LB, Hamilton R, Schlaug G, Swisher JD, Kiriakopoulos ET, Pitskel NB, Kauffman T, Pascual-Leone A (2008) Rapid and reversible recruitment of early visual cortex for touch. PLoS One 3:e3046. https://doi.org/10.1371/journal.pone.0003046
  52. Merabet L, Thut G, Murray B, Andrews J, Hsiao S, Pascual-Leone A (2004) Feeling by sight or seeing by touch? Neuron 42:173–179. https://doi.org/10.1016/S0896-6273(04)00147-3
  53. Müller F, Niso G, Samiee S, Ptito M, Baillet S, Kupers R (2019) A thalamocortical pathway for fast rerouting of tactile information to occipital cortex in congenital blindness. Nat Commun 10:5154. https://doi.org/10.1038/s41467-019-13173-7
  54. Neuwirth E (2022) RColorBrewer: ColorBrewer palettes. R package.
  55. Nobre AC, Allison T, McCarthy G (1994) Word recognition in the human inferior temporal lobe. Nature 372:260–263.
  56. Norman LJ, Thaler L (2019) Retinotopic-like maps of spatial sound in primary 'visual' cortex of blind human echolocators. Proc Biol Sci 286:20191910. https://doi.org/10.1098/rspb.2019.1910
  57. Paré S, Bleau M, Dricot L, Ptito M, Kupers R (2023) Brain structural changes in blindness: a systematic review and an anatomical likelihood estimation (ALE) meta-analysis. Neurosci Biobehav Rev 150:105165. https://doi.org/10.1016/j.neubiorev.2023.105165
  58. Pattamadilok C, Bulnes LC, Devlin JT, Bourguignon M, Morais J, Goldman S, Kolinsky R (2015) How early does the brain distinguish between regular words, irregular words, and pseudowords during the reading process? Evidence from neurochronometric TMS. J Cogn Neurosci 27:1259–1274. https://doi.org/10.1162/jocn_a_00779
  59. Pattamadilok C, Planton S, Bonnard M (2019) Spoken language coding neurons in the visual word form area: evidence from a TMS adaptation paradigm. Neuroimage 186:278–285. https://doi.org/10.1016/j.neuroimage.2018.11.014
  60. Pedersen TL (2020) patchwork: the composer of plots. R package.
  61. Pegado F, Nakamura K, Hannagan T (2014) How does literacy break mirror invariance in the visual system? Front Psychol 5:703. https://doi.org/10.3389/fpsyg.2014.00703
  62. Planton S, Chanoine V, Sein J, Anton JL, Nazarian B, Pallier C, Pattamadilok C (2019) Top-down activation of the visuo-orthographic system during spoken sentence processing. Neuroimage 202:116135. https://doi.org/10.1016/j.neuroimage.2019.116135
  63. Reich L, Szwed M, Cohen L, Amedi A (2011) A ventral visual stream reading center independent of visual experience. Curr Biol 21:363–368. https://doi.org/10.1016/j.cub.2011.01.040
  64. Ricciardi E, Bottari D, Ptito M, Röder B, Pietrini P (2020) The sensory-deprived brain as a unique tool to understand brain development and function. Neurosci Biobehav Rev. https://doi.org/10.1016/j.neubiorev.2019.10.017
  65. Ricciardi E, Pietrini P, Bottari D (2023) Rethinking the sensory deprived brain. Int J Psychophysiol 188:63. https://doi.org/10.1016/j.ijpsycho.2023.05.162
  66. Roelfsema PR, de Lange FP (2016) Early visual cortex as a multiscale cognitive blackboard. Annu Rev Vis Sci 2:131–151. https://doi.org/10.1146/annurev-vision-111815-114443
  67. Rosenke M, Van Hoof R, Van Den Hurk J, Grill-Spector K, Goebel R (2021) A probabilistic functional atlas of human occipito-temporal visual cortex. Cereb Cortex 31:603–619. https://doi.org/10.1093/cercor/bhaa246
  68. Sadato N, Pascual-Leone A, Grafman J, Ibañez V, Deiber MP, Dold G, Hallett M (1996) Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380:526–528. https://doi.org/10.1038/380526a0
  69. Salminen-Vaparanta N, Noreika V, Revonsuo A, Koivisto M, Vanni S (2012) Is selective primary visual cortex stimulation achievable with TMS? Hum Brain Mapp 33:652–665. https://doi.org/10.1002/hbm.21237
  70. Seydell-Greenwald A, Wang X, Newport EL, Bi Y, Striem-Amit E (2023) Spoken language processing activates the primary visual cortex. PLoS One 18:e0289671. https://doi.org/10.1371/journal.pone.0289671
  71. Siuda-Krzywicka K, Bola Ł, Paplińska M, Sumera E, Jednoróg K, Marchewka A, Śliwińska MW, Amedi A, Szwed M (2016) Massive cortical reorganization in sighted Braille readers. Elife 5:1–26. https://doi.org/10.7554/eLife.10762
  72. Sliwinska MW, Khadilkar M, Campbell-Ratcliffe J, Quevenco F, Devlin JT (2012) Early and sustained supramarginal gyrus contributions to phonological processing. Front Psychol 3:1–10. https://doi.org/10.3389/fpsyg.2012.00161
  73. Stevens AA, Snodgrass M, Schwartz D, Weaver K (2007) Preparatory activity in occipital cortex in early blind humans predicts auditory perceptual performance. J Neurosci 27:10734–10741. https://doi.org/10.1523/JNEUROSCI.1669-07.2007
  74. Stilla R, Hanna R, Hu X, Mariola E, Deshpande G, Sathian K (2008) Neural processing underlying tactile microspatial discrimination in the blind: a functional magnetic resonance imaging study. J Vis 8:13. https://doi.org/10.1167/8.10.13
  75. Szczerbiński M, Pelc-Pękała O (2013) Zestaw metod do diagnozy trudności w czytaniu – Test Dekodowania (The Decoding Test – a set of tools for diagnosing reading difficulties). Gdansk, Poland: PTPiP.
  76. Turker S, Hartwigsen G (2021) Exploring the neurobiology of reading through non-invasive brain stimulation: a review. Cortex 141:497–521. https://doi.org/10.1016/j.cortex.2021.05.001
  77. Van Ackeren MJ, Barbero FM, Mattioni S, Bottini R, Collignon O (2018) Neuronal populations in the occipital cortex of the blind synchronize to the temporal dynamics of speech. Elife 7:1–20. https://doi.org/10.7554/eLife.31640
  78. Veispak A, Boets B, Ghesquière P (2013) Differential cognitive and perceptual correlates of print reading versus braille reading. Res Dev Disabil 34:372–385. https://doi.org/10.1016/j.ridd.2012.08.012
  79. Vetter P, Smith FW, Muckli L (2014) Decoding sound and imagery content in early visual cortex. Curr Biol 24:1256–1262. https://doi.org/10.1016/j.cub.2014.04.020
  80. Vidoni ED, Acerra NE, Dao E, Meehan SK, Boyd LA (2010) Role of the primary somatosensory cortex in motor learning: an rTMS study. Neurobiol Learn Mem 93:532–539. https://doi.org/10.1016/j.nlm.2010.01.011
  81. Walsh V, Pascual-Leone A (2003) Transcranial magnetic stimulation: a neurochronometrics of mind. Cambridge, MA: MIT Press.
  82. Wickham H, et al. (2019) Welcome to the Tidyverse. J Open Source Softw 4:1686. https://doi.org/10.21105/joss.01686
  83. Wilke CO (2019) cowplot: streamlined plot theme and plot annotations for 'ggplot2'. R package. https://cran.r-project.org/package=cowplot
  84. Wischnewski M, Peelen MV (2021a) Causal evidence for a double dissociation between object- and scene-selective regions of visual cortex: a preregistered TMS replication study. J Neurosci 41:751–756. https://doi.org/10.1523/JNEUROSCI.2162-20.2020
  85. Wischnewski M, Peelen MV (2021b) Causal neural mechanisms of context-based object recognition. Elife 10:e69736. https://doi.org/10.7554/eLife.69736
  86. Woolnough O, Donos C, Rollo PS, Forseth KJ, Lakretz Y, Crone NE, Fischer-Baum S, Dehaene S, Tandon N (2021) Spatiotemporal dynamics of orthographic and lexical processing in the ventral visual pathway. Nat Hum Behav 5:389–398. https://doi.org/10.1038/s41562-020-00982-w
  87. Woolnough O, Forseth KJ, Rollo PS, Roccaforte ZJ, Tandon N (2022) Event-related phase synchronization propagates rapidly across human ventral visual cortex. Neuroimage 256:119262. https://doi.org/10.1016/j.neuroimage.2022.119262
  88. Xu Y, Vignali L, Sigismondi F, Crepaldi D, Bottini R, Collignon O (2023) Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people. PLoS Biol 21:e3001930. https://doi.org/10.1371/journal.pbio.3001930
Keywords

  • blindness
  • language
  • plasticity
  • reading
  • sensory deprivation
  • VWFA
