Research Articles, Behavioral/Cognitive

How Does Literacy Affect Speech Processing? Not by Enhancing Cortical Responses to Speech, But by Promoting Connectivity of Acoustic-Phonetic and Graphomotor Cortices

Alexis Hervais-Adelman, Uttam Kumar, Ramesh K. Mishra, Vivek A. Tripathi, Anupam Guleria, Jay P. Singh and Falk Huettig
Journal of Neuroscience 23 November 2022, 42 (47) 8826-8841; https://doi.org/10.1523/JNEUROSCI.1125-21.2022
Alexis Hervais-Adelman
1Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
2Neurolinguistics and Department of Psychology, University of Zurich, 8050 Zurich, Switzerland
3Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland
Uttam Kumar
4Centre of Biomedical Research, Lucknow 226014, Uttar Pradesh, India
Ramesh K. Mishra
5University of Hyderabad, Gachibowli 500046, Telangana, India
Vivek A. Tripathi
6Centre for Behavioural and Cognitive Sciences, University of Allahabad, Old Katra 211002, Uttar Pradesh, India
Anupam Guleria
4Centre of Biomedical Research, Lucknow 226014, Uttar Pradesh, India
Jay P. Singh
6Centre for Behavioural and Cognitive Sciences, University of Allahabad, Old Katra 211002, Uttar Pradesh, India
Falk Huettig
1Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
7Centre for Language Studies, Radboud University, 6525 HT Nijmegen, The Netherlands

Abstract

Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing and enhances brain responses to speech, as indexed by the BOLD signal, in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to claim that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from nonalphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent 6 months of reading and writing training. Devanagari literacy increased functional connectivity between acoustic-phonetic and graphomotor brain areas, but we found no evidence that literacy changes brain responses to speech, in either cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated.

SIGNIFICANCE STATEMENT It is widely claimed that a consequence of being able to read is enhanced auditory processing of speech, reflected by increased cortical responses in areas associated with phonological processing. Here we find no relationship between literacy and the magnitude of brain response to speech stimuli in individuals who speak Hindi, which is written using a nonalphabetic script, Devanagari, an abugida. We propose that the exact nature of the script under examination must be considered before making sweeping claims about the consequences of literacy for the brain. Further, we find evidence that literacy enhances functional connectivity between auditory processing areas and graphomotor areas, suggesting a mechanism whereby learning to write might influence speech perception.

  • GMFA
  • reading
  • speech perception
  • VWFA
  • writing

Introduction

Learning to read and write involves acquiring a mapping of spoken language to orthographic symbols, which have both a visual (recognition) and a motor (production) component. These mapping processes have been suggested to have functional consequences for brain areas associated with acoustic-phonetic processing that are manifest during the processing of speech. It has been argued that literacy modifies the underlying phonological code for speech reception, as evidenced in increased brain responses to speech in auditory areas in literate compared with illiterate individuals (Dehaene et al., 2010). Furthermore, research has previously provided some evidence that increasing print-speech convergence (a positive correlation between the magnitude of brain responses [indexed by BOLD fMRI] to speech and text) in numerous brain areas may be a hallmark of the onset of literacy (Monzalvo and Dehaene-Lambertz, 2013; Preston et al., 2016; Chyl et al., 2018). Such findings imply that in brain areas where orthographic and speech information make contact, literacy leads to enhanced responses to both input modalities. A related effect was reported by Shankweiler et al. (2008), who showed that the degree of convergence of responses to anomalous written and spoken sentences in left inferior frontal gyrus correlated with reading skill. Such results are interpreted as evidence that reading piggybacks onto the neural bases of speech.

Although it is tempting to assume that any form of literacy will reconfigure phonological representations and, consequently, brain responses to speech, the worldwide diversity of orthographic systems invites closer examination of these findings. Alphabetic systems encode subsyllabic segments, which are arguably not a "natural" unit in speech production. Learning to map speech to the units mandated by a writing system may well incur different phonological restructuring demands for different types of script, depending on how directly they encode speech units and on their degree of orthographic transparency. In an elegant study addressing the potential impact of orthographic systems on speech-print convergence, Rueckl et al. (2015) performed an fMRI study on readers of different types of script: alphabetic (Spanish, orthographically transparent; English, opaque), abjad (Hebrew), and logographic (Chinese). They found evidence of consistent print-speech convergence in a number of left perisylvian brain areas across all four languages, with relatively subtle differences as a function of orthographic transparency and more marked differences when comparing logographic with nonlogographic writing systems.

The present study aimed to complement this research by examining the impact of literacy on brain responses to speech stimuli in Hindi, which is written in Devanagari, an abugida. Abugidas (Daniels, 2020) are nonalphabetic writing systems, distinct from alphabets, syllabaries (e.g., Japanese Kana), abjads (e.g., Hebrew, Arabic), and logosyllabaries (e.g., Chinese Hanzi). In an abugida, basic characters systematically encode a consonant with an inherent vowel, and optional diacritics modify the vowel; for example, the Devanagari character क encodes /ka/, while adding the diacritic ि yields कि /ki/. Characters therefore simultaneously encode syllabic and subsyllabic information (for a more extensive description, see Vaid and Gupta, 2002).

We used fMRI to record cerebral responses to auditory sentences in a group of 91 Hindi-speaking individuals of varying levels of literacy (ranging from illiterate to fluent readers) from rural communities in Northern India, first in a cross-sectional investigation, and then in a follow-up investigation after 22 illiterate participants had taken a 6 month literacy course. We used a listening task with no meta-linguistic component to ensure that any observed effects were not the result of deliberate, task-driven phonological processing. In order to examine evidence of functional modifications during speech processing, beyond modulation of the magnitude of the brain response, we examined the functional connectivity of two phonological processing regions: left planum temporale (PT, previously reported to show greater responses to speech in literate than in illiterate individuals; Dehaene et al., 2010) and left posterior superior temporal gyrus (pSTG, a brain area implicated in categorical representation of speech sounds; Yi et al., 2019).

Contrary to the prediction that literacy induces increased secondary auditory cortical responses to speech, we find no evidence for changes to the brain's response to speech, either cross-sectionally or longitudinally. However, we find significant evidence of literacy-enhanced functional connectivity between acoustic-phonetic processing areas of posterior superior temporal cortex and graphomotor areas, suggestive of the development of a functional link between a graphemic code and speech processing with literacy.

Materials and Methods

The data presented in this paper include a novel reanalysis of a subset of previously published data (Hervais-Adelman et al., 2019). The study was longitudinal, consisting of two MRI sessions ∼6 months apart.

Participants

Participants were recruited from two villages near the city of Lucknow in the Northern Indian state of Uttar Pradesh, as part of a study that was approved by the ethics committee of the Center of Biomedical Research, Lucknow. After giving informed consent, 91 healthy right-handed human volunteers without a known history of psychiatric or neurologic conditions took part in the study (for more details, see Demographic and behavioral data). All participants were examined by a medical doctor and were interviewed about their educational background. A word-reading test and a letter-identification task were administered.

Participant characteristics

Information on age, income, and number of literate family members was obtained in an interview. Right-handedness was verified in interview by asking participants which hand they used for common activities (e.g., drawing). Raven's Progressive Matrices were administered to test nonverbal cognitive abilities.

Literacy measures

Two measures of literacy were recorded: letter identification (ability to name the 46 primary Devanagari characters, "Aksara") and word reading (ability to read aloud 86 words of varying syllabic complexity). In the aksara-identification task, participants received a recorded spoken instruction telling them that the aksara of the Hindi alphabet would be presented one by one, each shown for 5 s and followed by a question mark. The instructions, which were provided orally by a researcher who was present throughout, specified that, when the question mark was on the screen, participants should name the letter aloud. Responses were recorded, with each recording terminating automatically after 10 s. The 46 aksara were presented in font Mangal (size 96 point). Performance was scored as the total number of aksara correctly named.

The word-reading test consisted of 86 words of differing syllabic complexity (26 monosyllabic, 30 disyllabic, and 30 trisyllabic), presented one at a time in a pseudorandomized order in font Mangal (size 96 point). Each word was displayed on the screen for 10 s, followed by a question mark. A recorded spoken instruction specified that, when the question mark was on the screen, participants should say aloud the word they had seen. Responses were recorded, with each recording terminating automatically after 30 s. Performance was scored as the total number of words correctly read.

Participant characteristics are shown in Table 1, and correlations between reading scores and other factors are shown in Figure 1.

Table 1.

Summary of principal demographic and behavioral information for participants (N = 91)

Figure 1.

Participant characteristics, Time 1. The mean values for the N = 91 participants included at Time 1. Relationships between pairs of variables were calculated using nonparametric correlation (Kendall's τ b), except in the case of sex, for which point-biserial correlations were calculated. *p < 0.05. **p < 0.01. ***p < 0.001. Histograms on diagonal represent the distribution of values of the indicated variable. Trend lines on scatter plots indicate linear fits. Ribbon indicates 95% CI of the fit. IRN, Indian Rupees; Ed, education (yr).

Literacy at the study site

Literacy rates in India stand at ∼75% nationwide. Given the large population of India, this means that there are still almost 300 million illiterate adults in the country. The Indian constitution guarantees free and compulsory education, and the Indian government has introduced schemes that provide free food to improve attendance and retention at government schools. The reasons that literacy levels have not risen further are deeply rooted socioeconomic circumstances, including large economic inequalities and caste and gender discrimination. There are also striking regional differences: in the Southern Indian state of Kerala, almost everybody can read and write, whereas Uttar Pradesh in the North of India, where we conducted our study, is among the states with the lowest literacy rates. The situation becomes particularly dramatic when several of the socioeconomic factors that determine illiteracy combine. The Dalits, sometimes referred to in the West as the "untouchables," belong to the lowest caste in the traditional religious Hindu social structure. The Indian constitution has officially abolished untouchability, but implicit oppression remains a reality of much of Dalit life. Many Dalit girls do not attend any school or drop out very early because of poverty, and to avoid humiliation by teachers and classmates. Almost half of Dalit women in India are illiterate (Raghavendra, 2020).

We recruited participants from the remote countryside because we wanted to engage some participants who were completely illiterate, who could not read and write at all, and others from exactly the same communities, with the same socioeconomic background, who had learned to read. This selection procedure enabled us to ensure that any differences other than literacy between the two groups were minimized. It also enabled us to rule out, with considerable confidence, the possibility that neurologic impairments or intelligence differences determined which villagers had gone to school to learn to read and which had not. In some cases, literate and illiterate participants were from the same family. The reason that some of the children of a family are sent to school while others are not is often simply that large, poor Dalit families cannot afford to send all of their children to school. Lower-caste, poor, and rural households tend to have such limited resources that all family members, including children, have to work to contribute to the survival of the household. Consequently, educational opportunities are limited, and access to schooling and formal instruction in literacy is lost (Raghavendra, 2020).

Sex and literacy

There was an imbalance in the sex of the participants as a function of literacy status, with illiterate participants being more likely to be female. This is driven by the cultural factors discussed above (Raghavendra, 2020). The potential consequences of this for the present experiment are not readily predictable; however, numerous meta-analyses over recent decades have indicated that there are no systematic effects of sex to be found in the literature on functional imaging of language processes (Kansaku and Kitazawa, 2001; Sommer et al., 2004, 2008; Kaiser et al., 2009; Wallentin, 2009; Sato, 2020). We therefore do not believe that this limitation alone could account for potential between-group effects that we observe, especially in those analyses that treat word reading as a continuous predictor. Nevertheless, we have attempted to account for the potential confounding effect of sex, age, and Raven's performance in the between-groups analyses presented in Experiment 2.

MRI procedure

Stimuli were presented blocked by condition. There were 10 blocks for each task, which were arranged in a different pseudo-random order for each participant. Stimulus presentation was controlled using E-Prime (Psychology Software Tools).

Stimuli

Six stimulus categories were presented during fMRI acquisition: visual sentences, auditory sentences, horizontal checkerboards, vertical checkerboards, and two additional conditions that invited participant responses. The latter were visual (written) and spoken commands instructing the participant to press either the left or right button of an MR-compatible button box; behavioral responses to these trials are not analyzed, and the conditions are not described further. Ten blocks of each stimulus type were presented in a randomized order (randomized per participant). In visual sentence blocks, each trial consisted of a simple sentence shown on four successive screens, with 1-3 words on each screen. Each screen was shown for 400 ms, with an interval of 100 ms between screens and a 1500 ms pause between sentences. All words were displayed at the center of the screen in font Mangal (size 86). Participants received a recorded auditory instruction to read the sentences. Blocks lasted 35 s.

For auditory sentences, each block consisted of 10 simple declarative sentences, unrelated to one another. Each sentence was presented over MR-compatible headphones in four audio sequences comprising 1-3 words, mirroring the visual sentence presentation. Participants were instructed to listen to the sentences carefully. Blocks lasted ∼60 s (mean: 59.55 s, range: 57.75-66.19 s; the variability was because of variable latencies in audio delivery). Visual and auditory presentation examples are illustrated in Figure 2.

Figure 2.

Illustrative timeline of sentence presentation in visual and auditory blocks with broad transcription to English provided. Visual sentence: Embedded Image (English translation: “The cats look at birds on the wall”), Auditory sentence Embedded Image (English translation: “The children saw a lion in the circus”).

Vertical and horizontal checkerboard blocks each consisted of 30 flashing checkerboards of the corresponding orientation. The checkerboards changed contrast every 400 ms, and each block lasted 12 s (30 × 400 ms). Analyses of these stimuli are not presented here; they have been discussed previously (Hervais-Adelman et al., 2019).

In order to provide an estimate of individual participants' baseline BOLD activity, every stimulus block was succeeded by a null block, in which no stimuli were presented, lasting 12 s (5 scans).

Sentence materials

Two sentence lists were used, one assigned to auditory and one to visual presentation (consistently across all participants). Each list consisted of 10 unrelated simple Hindi declarative sentences (e.g., "We call the fire brigade when there is a fire," "The children saw a lion in the circus"). The same set of sentences was played or displayed in each block (i.e., each sentence was repeated in each of the 10 visual or auditory blocks).

Instructions

Participants were instructed orally on the procedure before being settled comfortably on the patient table and placed in the scanner bore. They were asked to remain as still as possible and to pay attention to the stimuli appearing on screen or played over the headphones. They were instructed to respond by button press to visual or auditory sentences that commanded them to press a button. Further, participants were informed that they could terminate their participation at any time if they wished, and were instructed in the use of the MR-scanner emergency-call squeeze “button.”

Relationships between reading ability and demographic factors

Pairwise correlation analyses using Kendall's τ b were conducted to test for relationships between literacy and various demographic factors, including age, sex, monthly income, and performance on Raven's Matrices, for all participants at the first time point (N = 91). A nonparametric test was favored because the distribution of the reading data is not normal (Lilliefors test statistic = 0.271, p < 0.001; Fig. 1). The complete set of pairwise correlations is reported in Figure 1. As would be expected from the participant selection procedure, significant correlations of interest were found between the literacy measures (word reading and Aksara recognition) and years of schooling (word reading: τ = 0.605, p < 0.001; Aksara recognition: τ = 0.580, p < 0.001). There was also a significant relationship between literacy and sex (dummy coded as a binary variable), such that female participants were less likely to be literate, because of cultural factors affecting access to schooling at the study site (word reading: τ = 0.420, p < 0.001; Aksara recognition: τ = 0.411, p < 0.001). There was a negative correlation between age and literacy (word reading: τ = −0.214, p = 0.006; Aksara recognition: τ = −0.228, p = 0.003), indicating that the more literate participants tended to be younger than the illiterate participants. A further relationship was found between literacy and Raven's matrices performance (word reading: τ = 0.242, p = 0.002; Aksara recognition: τ = 0.278, p < 0.001).

It is particularly noteworthy that monthly income is not significantly related to literacy (word reading: τ = 0.119, p = 0.147, Aksara recognition: τ = 0.108, p = 0.171), indicating that literacy is unlikely to be a primary determinant of socioeconomic status in the communities from which the study participants were drawn.
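As a minimal illustration of this approach (not the original analysis script; the variable names and stand-in data below are hypothetical), such pairwise nonparametric correlations can be computed in R as follows:

# Illustrative sketch of the pairwise correlation approach; variable names
# and data are stand-ins, not the study data.
library(nortest)   # provides the Lilliefors normality test

set.seed(1)
dat <- data.frame(
  word_reading = c(rep(0, 40), sample(1:86, 51, replace = TRUE)),  # skewed scores
  income       = rexp(91, rate = 1 / 5000),
  age          = sample(18:55, 91, replace = TRUE)
)

lillie.test(dat$word_reading)  # non-normality motivates a nonparametric test

# Kendall's tau handles the heavily tied, skewed reading scores
cor.test(dat$word_reading, dat$income, method = "kendall")
cor.test(dat$word_reading, dat$age, method = "kendall")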

Mediation of literacy by fluid intelligence

In order to rule out a potential confounding effect of underlying differences in fluid intelligence, a mediation analysis was conducted to test for a possible mediating effect of Raven's matrices performance on the relationship between years of schooling and literacy. The analysis was conducted using the MeMoBootR package (Buchanan, 2018) in R version 4.0.5 (R Core Team, 2021). The path plot for the mediation analysis is shown in Figure 3. Evaluating the relationship between years of schooling and literacy, with Raven's performance (assumed to be a proxy for nonverbal IQ) as mediator, shows a direct effect of schooling on word reading (b = 6.76, t(89) = 9.362, p < 0.001), a significant relationship between education and Raven's (b = 0.402, t(89) = 2.74, p = 0.007), and a significant relationship between Raven's and literacy when accounting for education. The direct effect of education on literacy is, however, not significantly mediated by Raven's (direct effect, accounting for Raven's: b = 6.285, t(88) = 8.562, p < 0.001; Aroian Sobel test z = 1.705, p = 0.088; bootstrapped 95% CI for the indirect effect: −0.151, 1.130). Thus, the component of nonverbal intelligence reflected in Raven's performance does not account for the effect of schooling on literacy, and it may even be speculated that the superior Raven's performance of the literate participants is itself a result of schooling.

Figure 3.

Path plot for mediation analysis of effect of Raven's performance on the direct relationship between years of schooling and literacy. Values on links indicate estimated β parameter for the regression of the variables on one another. Value in brackets is the direct effect of education on literacy, accounting for Raven's performance. Mediation analysis suggests that the influence of years of schooling on literacy is not mediated by Raven's performance (Sobel test z = 1.705, p = 0.088, bootstrap [N = 1000 permutations] 95% CI: −0.151, 1.130).
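The three-path logic of this analysis can be sketched with ordinary regressions and an Aroian Sobel test; the R code below is a minimal illustration with hypothetical column names (education, ravens, word_reading), not the MeMoBootR pipeline, which additionally bootstraps the indirect effect.

# Minimal mediation sketch: education -> Raven's -> word reading.
# Assumes a data.frame 'dat' with columns education, ravens, word_reading.
m_total  <- lm(word_reading ~ education, data = dat)           # total effect (c)
m_a      <- lm(ravens ~ education, data = dat)                 # path a
m_direct <- lm(word_reading ~ education + ravens, data = dat)  # paths c' and b

a  <- coef(m_a)["education"]
sa <- coef(summary(m_a))["education", "Std. Error"]
b  <- coef(m_direct)["ravens"]
sb <- coef(summary(m_direct))["ravens", "Std. Error"]

# Aroian variant of the Sobel test for the indirect effect a*b
z <- (a * b) / sqrt(b^2 * sa^2 + a^2 * sb^2 + sa^2 * sb^2)
p <- 2 * pnorm(-abs(z))  # two-tailed p value
c(z = unname(z), p = unname(p))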

MRI data acquisition and preprocessing

Anatomical and functional data were collected before and after the literacy program using a 3.0 Tesla Siemens MAGNETOM Skyra whole-body magnetic resonance scanner with a 64-channel radio frequency head coil. T1-weighted 3D MPRAGE images were obtained using a pulse sequence with TR = 1690 ms, TE = 2.60 ms, TI = 1100 ms, FOV = 256 × 256, matrix size = 256 × 256 × 192, and voxel size = 1.0 × 1.0 × 1.0 mm³. Functional images for the visual and localizer runs were acquired as continuous EPI (TR = 2400 ms, TE = 30 ms, 38 slices, voxel size: 3.5 × 3.5 × 3 mm, no interslice gap, interleaved slice order). Preprocessing was conducted using the default pipeline implemented in the Conn toolbox (Whitfield-Gabrieli and Nieto-Castanon, 2012), version CONN20.b, SPM12 build 7219, running in MATLAB R2016a (version 9.0.0.341360). This consists of functional realignment and unwarping and slice-timing correction (for Siemens interleaved acquisitions), both using SPM12 default settings, followed by outlier identification based on the observed BOLD signal and subject motion parameters. Acquisitions with framewise displacement >0.9 mm or global BOLD signal changes >5 SD were flagged as potential outliers (the default thresholds implemented by the toolbox); identified outliers were later included in the first-level, within-subject statistical design. A new reference image, based on all scans except marked outliers, was produced. The structural and functional data were then realigned and normalized to MNI space, and segmented into gray matter, white matter, and CSF, using SPM12 unified segmentation and normalization (Ashburner and Friston, 2005) with default Conn parameters. Functional data were then smoothed using a Gaussian kernel of 8 mm FWHM.

Condition versus baseline activation maps

The functional imaging session was modeled at the single-subject level using a GLM in SPM12. The design consisted of one regressor per condition (sentence reading, visual commands, sentence listening, auditory commands, horizontal checkerboards, vertical checkerboards). For each participant, additional regressors were included to flag any outlier scans identified during realignment (one regressor per scan). Six regressors of no interest coding for scan-to-scan movement (x, y, and z translations and rotations), a seventh term coding for scan-to-scan global BOLD change, and a constant term were added. Stimulus blocks were modeled as epochs convolved with the canonical HRF in SPM12, and rest trials (baseline) were left unmodeled. To rule out the possibility of systematic effects of participant movement on any literacy-related results, the number of identified outliers was tested for a relationship with word reading scores; no significant relationships were found (Time 1: Kendall's τ = 0.086, p = 0.263; Time 2: τ = –0.094, p = 0.323). This also suggests that there was no systematic effect of literacy on compliance with the instruction to remain as still as possible in the scanner.

For each participant, parameter estimates for the auditory sentence and visual sentence conditions were contrasted with the baseline, and the resulting contrast images were used for second-level, random-effects analyses, in which the individual subject data were tested for reliable group-level effects using a one-sample t test.
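To make the block-design modeling concrete, the sketch below builds one such regressor: a boxcar marking stimulus blocks convolved with a canonical double-gamma HRF. The parameters and timings are illustrative stand-ins, not the exact SPM12 implementation.

# Sketch of a block regressor: boxcar convolved with a canonical
# double-gamma HRF (SPM-like parameters; illustrative, not SPM12 itself).
tr  <- 2.4                                                # repetition time (s), as acquired
t   <- seq(0, 32, by = tr)                                # HRF support
hrf <- dgamma(t, shape = 6) - dgamma(t, shape = 16) / 6   # peak ~5 s, undershoot ~15 s

boxcar <- rep(0, 200)         # one element per scan
boxcar[c(10:24, 60:74)] <- 1  # two stimulus blocks of ~35 s each

reg <- convolve(boxcar, rev(hrf), type = "open")[seq_along(boxcar)]
plot(reg, type = "l", xlab = "scan", ylab = "expected BOLD")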

Relationship between BOLD response and literacy

Because of the non-normal distribution of the reading scores across the group, analyses probing a literacy–BOLD link were executed using nonparametric statistics. This was achieved using the randomise tool from the FSL version 6.0 package (Winkler et al., 2014), with 5000 permutations used to determine the null distribution of the statistic for the contrast of interest. Results reported are significant at a cluster-mass threshold of p < 0.05, FDR-corrected for multiple comparisons, with a cluster-forming threshold equivalent to uncorrected voxelwise p < 0.001 (for N = 91, t(90) = 3.092).
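The underlying permutation logic, reduced to a single ROI or voxel for clarity, is sketched below; randomise applies the same principle voxelwise and additionally records the maximal cluster mass of each permutation to control the familywise error. The data here are stand-ins.

# Permutation test for a literacy-BOLD association at a single ROI/voxel
# (illustrative; whole-brain cluster-mass inference follows the same logic).
perm_p <- function(bold, reading, n_perm = 5000) {
  obs  <- cor(bold, reading, method = "kendall")
  null <- replicate(n_perm, cor(bold, sample(reading), method = "kendall"))
  mean(abs(null) >= abs(obs))   # two-tailed permutation p value
}

set.seed(42)
bold    <- rnorm(91)                                        # stand-in ROI responses
reading <- c(rep(0, 40), sample(1:86, 51, replace = TRUE))  # stand-in scores
perm_p(bold, reading)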

ROIs

ROI definitions

A goal of the investigation was to probe the claims of Dehaene et al. (2010) regarding the change in responsiveness to speech in PT as a function of literacy. We therefore defined a target ROI based on the coordinates of the peak effect of literacy on response to spoken language processing in their investigation: a sphere of radius 8 mm centered at MNI (x, y, z mm): −38, −28, 18.

However, since Dehaene et al. (2010) interpret their finding at the level of phonological processing of speech, it seemed important to test the possibility that other phonetic processing regions may be affected by literacy. Given the accumulation of functional imaging and intracranial reports of the role of pSTG in the processing of speech, particularly at the phonological level, and the evidence of print-speech convergence in posterior superior temporal cortex, a second ROI was created in the left pSTG. This ROI was based on the coordinates published by Chevillet et al. (2013) as a focus of acoustic-phonetic processing in an fMRI investigation: a sphere of radius 8 mm centered at MNI (x, y, z mm): −56, −50, 8. This ROI was investigated to probe whether sites other than PT relevant to phoneme processing display any modulation of activation during speech processing as a function of literacy.

A visual wordform area (VWFA) ROI was created as a sphere of 8 mm radius centered on the coordinates reported in Hervais-Adelman et al. (2019), MNI (x,y,z mm): −45, −55, −10. This ROI constitutes a control region, where the effect of literacy on response to orthographic stimulation is already known, and serves to validate the analyses executed on the two other ROIs. Loci of the ROIs are shown in Figure 4.

Figure 4.

Loci of the selected ROIs projected on axial sections of the MNI single-subject brain. Selection of ROIs is described in text.
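Sphere ROIs of this kind can be constructed by thresholding the Euclidean distance (in mm) of each voxel center from the target coordinate. The sketch below does so on a notional 2 mm MNI grid, which is an assumption for illustration rather than the study's actual mask-generation procedure.

# Build 8 mm spherical ROI masks around MNI coordinates (illustrative;
# assumes voxel centers expressed in MNI mm on a 2 mm grid).
sphere_voxels <- function(center, radius = 8, step = 2) {
  grid <- expand.grid(x = seq(-90, 90, step),
                      y = seq(-126, 90, step),
                      z = seq(-72, 108, step))
  d <- sqrt((grid$x - center[1])^2 +
            (grid$y - center[2])^2 +
            (grid$z - center[3])^2)
  grid[d <= radius, ]   # voxel centers inside the sphere
}

pt_roi   <- sphere_voxels(c(-38, -28, 18))    # PT (Dehaene et al., 2010)
pstg_roi <- sphere_voxels(c(-56, -50, 8))     # pSTG (Chevillet et al., 2013)
vwfa_roi <- sphere_voxels(c(-45, -55, -10))   # VWFA (Hervais-Adelman et al., 2019)
nrow(pt_roi)   # number of voxels in the PT sphere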

ROI data extraction

For each participant, the t statistics for the contrast against baseline were extracted for all voxels in the ROI (excluding any missing values, i.e., voxels not containing brain tissue), and the ROI mean was tested for a relationship with literacy using Kendall's τ. Analyses were conducted in MNI space, using normalized single-subject images.
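A minimal sketch of this extraction-and-test step, using simulated subject × voxel t statistics in place of the real contrast images:

# Illustrative ROI extraction and literacy correlation (simulated data).
set.seed(7)
n_subj <- 91
n_vox  <- 250                                    # roughly an 8 mm sphere on a 2 mm grid
tmaps  <- matrix(rnorm(n_subj * n_vox), n_subj)  # subject x voxel t statistics
tmaps[, 1:5] <- NA                               # e.g., voxels outside the brain

roi_means    <- rowMeans(tmaps, na.rm = TRUE)    # mean t per subject, NAs excluded
word_reading <- c(rep(0, 40), sample(1:86, 51, replace = TRUE))

cor.test(roi_means, word_reading, method = "kendall")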

Functional connectivity

Functional connectivity analyses were conducted using the CONN toolbox version 20.b (Whitfield-Gabrieli and Nieto-Castanon, 2012). First-level design matrices for each participant from the fixed-effects analysis described above were entered into the toolbox. The default denoising pipeline was also run, with temporal filtering adjusted to use a high-pass filter with a cutoff of 0.008 Hz; the low-pass (high-frequency) cutoff was omitted since the experimental conditions were presented in a block design with relatively long block durations. The denoising procedure is fully described by Whitfield-Gabrieli and Nieto-Castanon (2012).

First-level functional connectivity was estimated for the three ROIs (PT, VWFA, and pSTG) with the rest of the brain (seed-to-voxel connectivity). This involves the calculation of a Fisher-transformed bivariate correlation coefficient between the BOLD time-series of the ROI and the BOLD time-series of each of the non-ROI voxels. In order to probe the relationship between connectivity and literacy, the individual condition-wise connectivity maps were then tested, voxelwise, for a correlation with literacy, as indexed by word-reading scores. In order to avoid errors because of the non-normal distribution of the reading scores, permutation testing (5000 permutations), as implemented in CONN for cluster-mass thresholding, was used to determine significance of any effects at a whole-brain FDR-corrected level of p < 0.05, with a cluster-forming threshold of uncorrected p < 0.001. Since the goal of this analysis was to explore the presence of increased functional connectivity as a function of literacy, evaluation of functional connectivity was restricted to positive relationships. Consequently, one-tailed tests of positive effects were conducted at the cluster-forming stage.
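The core computation, stripped of CONN's denoising and bookkeeping, is a correlation between the seed time series and every other voxel, Fisher-transformed to z. A stand-in sketch:

# Seed-to-voxel connectivity sketch (stand-in data; CONN performs this after
# denoising, per condition, for each participant).
set.seed(3)
n_scans <- 300
seed_ts <- rnorm(n_scans)                          # e.g., mean pSTG time series
vox_ts  <- matrix(rnorm(n_scans * 1000), n_scans)  # time series of 1000 voxels

r_map <- cor(seed_ts, vox_ts)[1, ]   # bivariate correlation per voxel
z_map <- atanh(r_map)                # Fisher r-to-z transform

# Each subject's z_map is then regressed, voxelwise, on word-reading scores,
# with permutation-based cluster-mass thresholds, as described above.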

Additional Bayesian analyses of BOLD-literacy correlations in ROIs

Given the partially confirmatory nature of this investigation (attempting to reproduce effects of literacy on the response of auditory phoneme-processing cortical regions during speech processing), where possible, Bayesian analyses were executed to estimate the strength of any failures to replicate the prior effect. Bayesian analyses were conducted using JASP 0.16.3 (JASP Team), using uninformative priors (specified below). Frequentist statistics are reported for the whole-brain fMRI analyses, as Bayesian analysis tools are not yet readily available for these. Frequentist p values are provided wherever Bayes factors are not available. Unless otherwise indicated, Bayes factors are presented as BF10, indicating the ratio of evidence in favor of the alternative hypothesis. Values >1 indicate evidence in favor of the alternative hypothesis; values <1 indicate evidence in favor of the null hypothesis.
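The Bayesian Kendall's τ b test with a stretched β prior was run in JASP. As an approximate R analog (an assumption for illustration, since the BayesFactor package implements the Pearson rather than the Kendall version), one could do the following:

# Approximate R analog of the JASP analysis: Bayesian correlation with a
# stretched beta prior of width 1 (Pearson version; JASP used Kendall's tau-b).
library(BayesFactor)

set.seed(11)
x <- rnorm(91)
y <- 0.2 * x + rnorm(91)

bf <- correlationBF(x, y, rscale = 1)  # rscale = 1: stretched beta prior of width 1
extractBF(bf)$bf                       # BF10: evidence for a correlation
1 / extractBF(bf)$bf                   # BF01: evidence for the null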

Results

BOLD response to speech and orthographic stimuli

Initial control analyses comparing the sentence listening and sentence reading conditions to baseline served to verify that expected patterns of, respectively, auditory (Table 2) and visual (Table 3) activation were apparent (Fig. 5). Listening to sentences produced significant increases in substantial expanses of bilateral superior temporal areas spanning anterior to posterior regions, consistent with speech processing (Hickok, 2012; Price, 2012), as well as increases in left inferior frontal gyrus broadly consistent with linguistic processes implicated in sentence processing (Friederici, 2012; Price, 2012; Hagoort, 2017). Visual presentation of sentences (which, it must be noted, were not interpretable for a large proportion of the participants) led on average to widespread significant bilateral visual cortical activation compared with baseline.

Table 2.

Sentence listening versus baseline

Table 3.

Sentence reading versus baseline, group mean

Figure 5.

Group mean activation map for auditory sentences versus baseline and visual sentences versus baseline, thresholded at voxelwise p(unc.) < 0.001, with a cluster-mass significance of p(FDR) < 0.05, projected on MNI single-subject template brain. z coordinates (MNI mm) are supplied for each slice and marked on the render in bottom row. L, Left; R, right.

In order to determine whether literacy in Devanagari (functionally approximated by word reading score) has an impact on the BOLD response to speech, as previously reported for literacy in the Latin alphabet (Dehaene et al., 2010), we conducted a regression analysis to test for a correlation between word reading score and brain response during auditory sentence presentation and during visual sentence presentation (Table 4).

Table 4.

Modulation of responses to written sentences by literacy

Cerebral responses to orthographic stimuli are modulated by literacy

As previously reported on these data using a different analytical approach (Hervais-Adelman et al., 2019), BOLD responses to visual sentence presentation were significantly modulated by literacy in a number of areas consistent with the reading network, implicating visual, occipito-fusiform, midline motor, and left inferior frontal gyral regions (Fig. 6).

Figure 6.

Correlation between BOLD response to visual sentences and literacy (indexed by word reading score) at Time 1, thresholded at cluster-mass p(FDR) < 0.05 with a cluster-forming threshold of p < 0.001. Results projected onto MNI single-subject brain. z coordinates (MNI mm) are supplied for each slice and marked on the render in bottom row. L, Left; R, right.

To ensure that the expected effect of literacy on brain response could be discerned for the BOLD response to orthographic stimuli, an analysis of the modulation of response in VWFA by literacy was conducted. A Bayesian correlation (Kendall's τ b, with a stretched β prior of width = 1) shows decisive evidence in favor of the existence of a correlation between literacy and BOLD response while reading sentences (Aksara recognition: Kendall's τ = 0.354, BF10 = 27,550.521; word reading: Kendall's τ = 0.388, BF10 = 320,744.068).

No evidence that cerebral responses to speech are affected by literacy

Testing for the effect of literacy on the patterns of BOLD activation revealed no loci at which there were significant relationships between word reading scores and brain response during sentence listening, at the relatively liberal threshold of uncorrected voxelwise p < 0.001.

Despite the absence of significant effects of literacy on the BOLD response to auditory sentences at whole-brain levels, it is conceivable that a more subtle relationship was missed. Given the existence of an a priori rationale for testing specific ROIs, we further probed the possibility that PT response to spoken sentences is modulated by literacy. We conducted an ROI analysis using Bayesian correlation to determine what conclusion may be drawn based on the evidence at hand.

Bayesian correlation (Kendall's τ b, with a stretched β prior of width = 1) analysis was conducted to test the relationship between participant-wise BOLD response to auditory sentence presentation extracted from each of the two acoustic-phonetic ROIs (PT and pSTG), and literacy, quantified by Aksara recognition and word reading scores. These analyses revealed that there is no evidence in favor of relationships between literacy and brain response (PT, relationship with Aksara recognition: τ = 0.095, BF10 = 0.332, relationship with word reading: τ = 0.007, BF10 = 0.137; pSTG, relationship with Aksara recognition: τ = –0.090, BF10 = 0.303, relationship with word reading: τ = –0.030, BF10 = 0.149). The associated Bayes factors indicate anecdotal to substantial levels of evidence in favor of the null hypothesis of no relationship between BOLD response and literacy. The ROIs and the relationships between BOLD response and word reading scores at the first and second time points of the study are shown in Figure 7.

Figure 7.

Relationship between word-reading scores and BOLD response to auditory sentences or visual sentences, plotted for each of the three ROIs examined in this study (PT, pSTG, and VWFA). Scatter plots represent individual subject mean t statistics in the sampled region at each time point of the study. Trend line indicates fit of robust linear regression. Ribbon indicates 95% CI of fit. Because of the non-normal distribution of the data for word reading, statistical analyses were conducted using nonparametric methods (Kendall's τ b); trendlines are illustrative. τ values indicated on each scatter plot indicate Kendall's τ and statistical significance uncorrected for multiple testing. BF10, Bayes factor in favor of the hypothesis of a relationship between literacy and BOLD response.


Functional connectivity of PT, pSTG and VWFA

It is self-evident that, in hearing individuals, the acquisition of literacy at some stage corresponds to acquiring a mapping of spoken language to orthographic symbols. This presumably involves auditory processes and may therefore have functional consequences for regions associated with acoustic-phonetic processing. We therefore sought evidence for an impact of literacy on the functional connectivity of these regions, during auditory sentence presentation.

Functional connectivity of PT during spoken sentence presentation is not modulated by literacy

Functional coupling to a seed region (the PT or pSTG ROI) during auditory sentence presentation was estimated and tested for modulation by literacy. We found no indication that there was a significant impact of literacy (even at a liberal voxelwise threshold of uncorrected p < 0.001) on the functional connectivity of the PT ROI.

Functional connectivity between pSTG and graphemic/motor frontal area during auditory sentence presentation increases as a function of literacy

A cluster of 602 voxels in dorsal frontal cortex (spanning Brodmann areas 6d, 4a, and 4p) showed functional coupling to pSTG during sentence listening that was significantly positively associated with literacy (Fig. 8; Table 5). We note that, even under a nondirectional hypothesis using a two-tailed test, we observe a cluster comprising a subset of N = 342 voxels of this cluster, with a cluster-mass FDR-corrected p = 0.049. This cluster of dorsal sensorimotor voxels intersects with an area commonly said to have been identified by Exner (Exner, 1881; Roux et al., 2010) as crucial for handwriting and, more recently, repeatedly associated with handwriting in a number of functional imaging (Longcamp et al., 2003, 2008, 2014; Planton et al., 2017) and cortical stimulation studies (Roux et al., 2009), known as the “graphemic/motor frontal area” (GMFA). This result suggests that, while learning to read may not necessarily alter the responsiveness of PT to auditory sentences, a functional relationship between the pSTG acoustic-phonetic processing area and handwriting representations develops. The important implication is that learning to associate Devanagari characters with their acoustic-phonetic form appears to lead to the development of an auditory to graphomotor mapping.

Table 5.

Significant modulation of pSTG connectivity by literacy during auditory sentence presentation

Figure 8.

Significant correlation between functional connectivity of pSTG and literacy during auditory sentence processing. Results thresholded at cluster-mass p(FDR) < 0.05, using a cluster-forming threshold of voxelwise p < 0.001 (single-tailed distribution, considering positive relationships only). Results projected onto MNI single-subject brain. z coordinates (MNI mm) are supplied for each slice and marked on the render in bottom row. L, Left; R, right.

Functional connectivity of VWFA to auditory and prefrontal cortices during visual sentence presentation is modulated by literacy

As an additional control, the connectivity of VWFA was examined during sentence reading, and the correlation between connectivity and literacy was evaluated (Table 6). This revealed a broad pattern of connectivity between VWFA and the wider reading network (Fig. 9), including the left inferior frontal gyrus, left dorsal premotor cortices, and supplementary motor areas, alongside bilateral visual areas and a region of posterior superior temporal sulcus associated with processing of speech. This latter cluster is somewhat distant, both neuroanatomically and in terms of auditory-processing function (Rutten et al., 2019), from the PT ROI examined in this study, but abuts the pSTG ROI defined a priori.

Table 6.

Significant modulation of VWFA connectivity by literacy during visual sentence presentation

Figure 9.

Brain areas showing significant modulation of VWFA connectivity by literacy during visual sentence processing. Results thresholded at cluster-mass p(FDR) < 0.05, using a cluster-forming threshold of voxelwise p < 0.001 (single-tailed distribution, considering positive relationships only). Results projected onto MNI single-subject brain. z coordinates (MNI mm) are supplied for each slice and marked on the render in bottom row. L, Left; R, right.

Experiment 2: literacy training intervention and MRI follow-up

Methods

A number of the illiterate participants (N = 22) completed a 6 month literacy training program, after which they participated in an MRI session of the same design and with the same stimuli as described above. All other participants were invited to return for a follow-up scan; however, only 12 illiterate participants returned without training, alongside 26 literate participants. The illiterate trained and untrained groups were matched for age, gender, handedness, income, number of literate family members, reading scores, and nonverbal intelligence (Table 7). Complete details of the training procedure and its efficacy were described previously (Hervais-Adelman et al., 2019) and are summarized below. The longitudinal analyses were confined to evaluating the response of the ROIs described above to auditory and visual sentences.

Table 7.

Characteristics of participants at Time 2

Participants

Of the 91 participants initially tested, 60 returned for a follow-up scan, of whom complete data are available for 59 (because of an error in collecting behavioral data for 1 participant during the follow-up session). These comprised three groups: literate participants (N = 25, mean word reading score: 65.48), illiterate participants who completed the 6 month literacy training (N = 22, mean word reading score: 0.55), and untrained illiterate controls (N = 12, mean word reading score: 3.25).

Although at recruitment the three groups of participants were matched on relevant demographic indicators, except for schooling and literacy, the reduced subset who returned at follow-up were not. The differences between groups at Time 2 (the participants were not treated as groups in the cross-sectional analyses presented above) can be seen in Table 7, and groupwise comparisons are presented in Figure 10. The potentially confounding factors age, sex, and Raven's performance were therefore included in analyses of the data in this experiment.

MRI acquisition and preprocessing

MRI was acquired exactly as for Session 1 described above. Participants were presented with the same stimuli. Data were preprocessed using the same pipeline and were submitted to the same first-level analyses.

Literacy training procedure

Participants received classroom instruction in literacy 3 times per week for ∼2 h per session over a 6 month period. The instructor was a professional teacher who followed the local method of reading and writing instruction. During the first month, the Devanagari characters were taught (vowels, followed by consonants). Reading and writing of the script were taught simultaneously. Next, two-letter words were taught, ∼200 in the first month. Over the second month, participants were taught to read and write simple sentences containing mostly two-letter words. During the third month, teaching focused on three-letter words, alongside the reading and writing of simple sentences. For the remaining 3 months of the program, more complex words and foundational grammar rules (e.g., differences between nouns, verbs, pronouns, and adjectives, and basic rules of tense and gender) were taught. The instructor monitored learning and progress continuously throughout the program.

Effectiveness of training

Because of the non-normal distribution of the reading data (both Aksara and word-reading scores are heavily rightward skewed), the effectiveness of training was established using aligned rank-transformed (ART) ANOVA, implemented in R version 4.1.1 (R Core Team, 2021) using ARTool version 0.11.1 (Wobbrock et al., 2011). ART is an established method for dealing with non-normally distributed data in designs including both within- and between-participants terms, and is suitable for repeated-measures data. Using ARTool, the data were transformed and the transformed data were submitted to repeated-measures ANOVA. The model tested was as follows: Reading Performance ∼ Time × Group × Sex + Error(Participant)

where reading performance was either word reading score or Aksara recognition. This model tests for the effects of time, group, and their interaction, accounts for repeated measures over participants, and includes an interaction with sex. Sex was included because the composition of the groups was not balanced for sex (see Fig. 10); it entered with an interaction because the constraints of the ART are such that the model must include all combinations of interactions of the fixed effects. Further, because of constraints of the ART, it was not possible to include continuous covariates (age and Raven's performance). A subsidiary analysis was therefore conducted to adjust for potential impacts of these continuous predictors, as follows: reading performance was rank-transformed using the rank function in R (with the option ties.method = "random"), and a linear regression of age and Raven's performance on rank-transformed reading performance was conducted for each time point, with the residuals stored. This procedure was executed over 10,000 replicates to ensure comprehensive sampling of the randomized tied ranks, and the mean residuals of the rank regressions (averaged over all replicates) were then input to the ART model above. Post hoc pairwise comparisons were executed on the ART model estimates using the emmeans package, version 1.6.2-1 (Lenth, 2021), averaging across levels of sex.
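A minimal ARTool sketch of the model above, assuming a long-format data.frame d with illustrative column names (reading, Time, Group, Sex, Participant):

# ART ANOVA sketch of the model in the text (illustrative column names).
library(ARTool)

d$Time        <- factor(d$Time)     # Time 1 vs Time 2
d$Group       <- factor(d$Group)    # literate / trainee / illiterate control
d$Sex         <- factor(d$Sex)
d$Participant <- factor(d$Participant)

m <- art(reading ~ Time * Group * Sex + Error(Participant), data = d)
anova(m)   # F tests on the aligned rank-transformed data

# Post hoc pairwise contrasts for the Group x Time interaction,
# Holm-corrected, averaging over Sex:
art.con(m, "Group:Time", adjust = "holm")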

Figure 10.

Comparison of the groups included in the follow-up on principal demographic and literacy indicators, and reading indicators at both time points. Violin plots represent distributions of individual participants, divided by group, with significant pairwise differences indicated (Holm-corrected for multiple comparisons); nonsignificant comparisons are omitted for clarity. Individual data points for reading performance at both time points are also plotted. Trendlines are anchored at the group median for each time point. *p < 0.05. **p < 0.01. ***p < 0.001.

Longitudinal analysis of training-induced changes in brain response

The response to both auditory and visual sentences was examined in the three ROIs defined above in order to evaluate whether there was a significant effect of training (and therefore presumably of literacy). Repeated-measures ANOVA was conducted to probe: (1) whether the BOLD response to auditory or visual sentences increased in the trained versus untrained participants; and (2) whether the connectivity of the pSTG to GMFA increased with increasing literacy.

The repeated-measures ANOVA was implemented in R version 4.1.1 (R Core Team, 2021) using the lmer function from the lmerTest package, version 3.1-3 (Kuznetsova et al., 2017) testing the following model: MRIdata ∼ Group×Time + Sex + Age + Raven's + (1|Participant)

where MRIdata denotes either the BOLD response for a given ROI or the estimated connectivity; the model includes covariates of sex, age, and Raven's performance, and a random intercept by participant.

A model including an interaction term for sex, "MRIdata ∼ Group × Time × Sex + Age + Raven's + (1|Participant)", was also fitted and subjected to model comparison, to determine whether the interaction term should be included. Model comparison was executed with models fit using an ML variance estimator as opposed to REML; the results reported below are for models fit with REML. Model comparison based on the Bayesian information criterion universally revealed that the model including an interaction term for sex was a poorer fit to the data. Post hoc pairwise comparisons were executed using the emmeans package, version 1.6.2-1 (Lenth, 2021). Since the linear mixed-effects modeling framework allows the inclusion of participants with incomplete data (in this instance, participants who dropped out of the study before the second scan), below we also report results of the analyses of the fMRI data including all participants, using the models described above. Degrees of freedom were estimated using the Kenward–Roger approximation.
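A sketch of this model in lmerTest, again with illustrative column names (MRIdata, Group, Time, Sex, Age, Ravens, Participant) standing in for the actual data objects:

# Linear mixed model sketch for the longitudinal ROI analyses
# (illustrative column names, not the study's actual data objects).
library(lmerTest)
library(emmeans)

m <- lmer(MRIdata ~ Group * Time + Sex + Age + Ravens + (1 | Participant),
          data = d, REML = TRUE)
anova(m, ddf = "Kenward-Roger")   # F tests with Kenward-Roger df

# BIC-based comparison against a model adding a Group x Time x Sex
# interaction, with both models fit by ML rather than REML:
m_ml  <- update(m, REML = FALSE)
m2_ml <- update(m_ml, . ~ Group * Time * Sex + Age + Ravens + (1 | Participant))
BIC(m_ml, m2_ml)                  # lower BIC indicates the preferred model

emmeans(m, pairwise ~ Group | Time)   # post hoc pairwise comparisons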

Additional Bayesian post hoc pairwise comparisons were executed in JASP, version 0.16.3 (JASP Team, 2022), using the default Cauchy prior with scale = 0.707.

Six months of literacy training: a modestly successful intervention

Repeated-measures ANOVA on aligned rank-transformed data, including a between-subjects factor of group (literate, illiterate trainee, and illiterate control) and a within-participants factor of session (Time 1, Time 2), was used to determine whether training had a significant impact on literacy. As reported previously (Hervais-Adelman et al., 2019), the literacy training program was a mixed success; descriptive plots are shown in Figure 10. A significant group × time point interaction indicated that trained participants improved in their Aksara recognition performance (F(2,53) = 54.431, p < 0.001, partial η2 = 0.673). Post hoc pairwise comparisons of within-group pretraining versus post-training performance (Holm-corrected for multiple comparisons and averaged over levels of sex) indicated that the interaction was driven by an improvement in performance in the trainees (t = 3.484, pholm = 0.010), which was absent in the literate controls (t = −2.67, pholm = 0.089) and untrained illiterate controls (t = −2.608, pholm = 0.079). There was also a significant group × time point interaction for word reading (F(2,53) = 5.18, p = 0.009, partial η2 = 0.163). However, post hoc pairwise comparisons of within-group pretraining versus post-training performance (Holm-corrected and averaged over levels of sex) revealed no significant within-group improvement in word reading in any group (trainees: t = 1.18, pholm > 0.999; literate controls: t = −0.846, pholm > 0.999; untrained illiterate controls: t = 0.568, pholm > 0.999). These results confirm that training improved literacy as assessed by Aksara reading, and the significant group × time interaction on word reading scores is suggestive of a modest impact of training on word reading.

However, it is clear from the level of performance reached by the trained participants (mean word reading performance = 7.591, compared with the literate participants' mean of 68.44 at Time 2) that the training did not suffice to render the formerly illiterate individuals fluent readers of words in Devanagari script. We note also that the untrained illiterate control group was small, so conclusions involving comparisons with this group must be considered with caution.

When attempting to control for the influences of age and sex by fitting the model to reading performance residualized for these factors, the pattern of results was similar: Aksara recognition performance showed a significant group × time interaction (F(2,53) = 15.364, p < 0.001, partial η2 = 0.326). Word reading performance also showed a significant group × time point interaction (F(2,53) = 12.825, p < 0.001, partial η2 = 0.326).
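The residualization step amounts to regressing the covariates out of the scores before the ART ANOVA; a sketch, with the same hypothetical names as above:

# Residualize reading scores for age and sex, then rerun the ART ANOVA.
scores$score_resid <- resid(lm(score ~ age + sex, data = scores))
m_art_resid <- art(score_resid ~ group * time + (1 | participant),
                   data = scores)
anova(m_art_resid)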

Impact of acquiring literacy on responses to auditory and visual sentences in PT, pSTG and VWFA

An initial repeated-measures ANOVA, with a within-participant factor of session (Time 1, Time 2) and a between-participants factor of group (literate control, nontrained control, trainee), with covariates of sex, age, and Raven's performance, was conducted to test for a significant group × time point interaction, which would be indicative of an effect of training. For completeness, this analysis was conducted on responses to both auditory and visual sentence presentation in all three ROIs (summary data are plotted in Fig. 11).

Figure 11. Mean response (expressed as t statistic for the contrast with the null baseline) to auditory and visual sentences in the three ROIs, by group and session. Error bars indicate ±1 SEM.

In PT, no significant group × time point interaction was found for either listening to sentences (F(2,56) = 0.716, p = 0.493, partial η2 = 0.025) or visually presented sentences (F(2,56) = 0.165, p = 0.849, partial η2 = 0.003), suggesting that there was no effect of training on BOLD response to auditory or orthographic sentences in this ROI. Similarly, there was no significant interaction between time and group for the pSTG ROI (auditory sentences: F(2,56) = 0.543, p = 0.584, partial η2 = 0.019; visual sentences: F(2,56) = 0.871, p = 0.421, partial η2 = 0.012). In VWFA, however, while there was no significant group × time point interaction for auditory sentences (F(2,56) = 1.023, p = 0.366, partial η2 = 0.035), a marginally significant group × time point interaction was found for visually presented sentences (F(2,56) = 3.144, p = 0.051, partial η2 = 0.101), suggesting that literacy training tended to increase response to orthographic stimuli in the VWFA.

Analyses incorporating the full dataset (i.e., 91 participants at Time 1 and 59 at Time 2) yielded a similar pattern of results for the group × time interaction. In PT, there was no significant interaction (auditory: F(2,70.035) = 0.324, p = 0.725, partial η2 = 0.009; visual: F(2,142) = 0.804, p = 0.450, partial η2 = 0.011), nor was there any significant interaction in pSTG (auditory: F(2,74.187) = 0.488, p = 0.616, partial η2 = 0.013; visual: F(2,142) = 0.871, p = 0.421, partial η2 = 0.012). In VWFA, no significant group × time interaction was found for auditory sentences (F(2,71.587) = 1.651, p = 0.199, partial η2 = 0.044), but there was a significant interaction for visual sentences (F(2,75.873) = 3.235, p = 0.045, partial η2 = 0.079).

A follow-up analysis (Bayesian paired t test) was conducted on the null findings to determine the relative certainty that there was no categorical difference in brain response to the auditory or visual sentence materials as a result of training. This analysis was conducted for the PT and pSTG ROIs, comparing pretraining with post-training BOLD response to each condition and testing the directional hypothesis of a post-training BOLD increase. These analyses revealed varying degrees of evidence in favor of the null hypothesis that there is no increase in response after training in either PT (auditory: BF−0 = 0.065; visual: BF−0 = 0.351) or pSTG (auditory: BF−0 = 0.079; visual: BF−0 = 0.672). The same analyses in VWFA showed strong evidence in favor of the alternative hypothesis that the BOLD response to visual sentences was greater after training (BF−0 = 7.986), while there was substantial evidence in favor of the null hypothesis of no difference for the response to auditory sentences (BF−0 = 0.346).

In sum, the analysis of within-subject change after training provides no evidence for an increase in the recruitment of acoustic-phonetic brain areas, while increasing literacy does lead to an increase in VWFA responsiveness to visually presented sentences.

Literacy training increases functional connectivity of pSTG and GMFA during auditory sentence presentation

If learning to read is responsible for the observed correlation between literacy and functional connectivity of pSTG with GMFA during auditory sentence presentation, then the literacy training intervention should drive an increase in this connectivity. Connectivity of the pSTG ROI was estimated for all participants who returned for a second scan. Individual connectivity parameters at the center of mass of the cluster identified as GMFA above (using a sphere of radius 8 mm centered at MNI: −14, −24, 58) were extracted and tested for an effect of training using a repeated-measures ANOVA (within-participants factor of session: Time 1, Time 2; between-participants factor of group: literate control, untrained control, trainee). This analysis revealed a significant group × time interaction (F(2,56) = 4.955, p = 0.010, partial η2 = 0.150), with post hoc pairwise comparisons showing that the effect was driven by an increase in connectivity in the trainees (t = 3.340, pholm = 0.022) that was not found in the other groups (literate: t = −0.570; untrained: t = −0.610, both pholm > 0.999).
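As an illustration of the ROI extraction step, selecting voxels within an 8 mm sphere of the GMFA peak reduces to a simple distance computation; the sketch below assumes a hypothetical n_voxels × 3 matrix coords of voxel MNI coordinates and a participants × voxels matrix conn of connectivity estimates:

# Sketch: mean connectivity within an 8 mm sphere around MNI (-14, -24, 58).
center    <- c(-14, -24, 58)
dists     <- sqrt(rowSums(sweep(coords, 2, center)^2))  # voxel-to-center distance
in_sphere <- dists <= 8                                  # logical mask, 8 mm radius
roi_conn  <- rowMeans(conn[, in_sphere, drop = FALSE])   # one value per participant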

Discussion

Surprisingly, given the existing evidence that literacy induces changes in brain responses to speech, we find no evidence of such changes, either cross-sectionally or longitudinally; Bayesian analyses suggest substantial evidence in favor of there being no effect. However, there is compelling evidence that a functional connection between pSTG and graphomotor areas arises during speech processing as a result of literacy, and that VWFA exhibits literacy-modulated connectivity with posterior superior temporal cortical areas during reading.

These findings contrast starkly with those previously presented for literate and ex-illiterate readers of alphabetic script (Dehaene et al., 2010), who showed modulation of PT response during sentence listening as a function of literacy status, and with several reports of print-speech convergence in posterior superior temporal cortex. It should be noted, however, that investigations of print-speech convergence have examined differential responses to intelligible and unintelligible stimuli (e.g., Chyl et al., 2018), native language versus foreign language stimuli (Monzalvo and Dehaene-Lambertz, 2013), anomalous versus normal sentences (Shankweiler et al., 2008), or during picture-word matching, including pseudowords (Preston et al., 2016). The present study deliberately avoided imposing such tasks or contrasts to probe the basic question: "does literacy in Devanagari increase BOLD response to speech in phonological processing areas?"

There is an appealing argument that print-speech convergence arises as a result of enhanced phonological processing that develops with literacy, and that phonological representations relevant to reading become engaged in an obligatory manner during speech processing (even in the absence of an explicit task). However, the absence of such an effect in the current study may call into question the generalizability of the hypothesis that the acquisition of literacy dramatically alters the functional role of auditory processing areas. Not only is there no whole-brain-level or ROI-level effect of literacy on the brain response to auditory sentence presentation, there is also substantial evidence (at the ROI level) in favor of the hypothesis that literacy has no such impact.

We do not seek to contradict the substantial body of evidence that points toward behavioral consequences of literacy for speech processing tasks, nor, indeed, could we, in the absence of an appropriate behavioral task in the present experiment. However, the limited behavioral evidence that orthographic knowledge can affect the processing of spoken words comes from meta-linguistic tasks that are completely or mostly offline, such as rhyme judgments (Seidenberg and Tanenhaus, 1979), phoneme monitoring (Dijkstra et al., 1995; Halle et al., 2000), and word blending (Ventura et al., 2010), which require participants explicitly to break down (individual) spoken words into smaller units. Some other behavioral studies finding evidence for orthography-on-speech effects have used tasks that do not require meta-phonological judgments but which are clearly meta-linguistic in nature, such as auditory lexical decision and shadowing (Ventura et al., 2004; Chereau et al., 2007), and are far removed from how people ordinarily listen to spoken language. The few behavioral studies that have used online speech tasks suggest that orthographic knowledge may not modulate online speech processing (Mitterer and Reinisch, 2015).

There is substantial debate about the nature of the representation of the acoustic units of human speech. Traditionally, it has been assumed that the phoneme is the basic unit of speech, a position that seems "logical" when letters map onto phonemes. However, although it is reasonable to assume some form of phoneme-letter mapping for highly researched alphabetic writing systems, such as English, it is not an account that is likely to be true across the writing systems of the world. Logosyllabic scripts (e.g., Chinese; see Daniels, 2020) have a many-to-one mapping between symbols and sounds and are fundamentally opaque (the sound of a word cannot straightforwardly be derived from its symbol); in syllabic scripts (e.g., the Kana of Japanese), every symbol encodes a syllable (although there is no transparent relationship between the symbol and its consonant and vowel components); and in abugidas (e.g., Indic and Ethiopic scripts; see Daniels, 2020), every symbol systematically encodes a consonant with a modifier encoding a vowel (or a consonant with an inherent vowel, when unmarked). These scripts thus reflect different spoken language units than the phoneme-letter mappings of alphabets (logosyllabaries: morphemes and words; syllabaries: syllables; abugidas: mappings at multiple levels of granularity). Indeed, previous research strongly suggests that phoneme units are not "needed" until people learn to read an alphabetic script. Many studies have shown that awareness of subsyllabic speech units, such as phonemes, does not arise in speakers spontaneously; rather, it must be taught during learning to read (Morais et al., 1979; Morais, 2021). Moreover, even in languages with an alphabetic writing system, a phoneme is typically not produced in isolation: speech planning and production take place at either the syllable or segment level (Laganaro, 2019), which likely map more directly onto syllable-sized orthographic symbols. It is conceivable, therefore, that learning to map subsyllabic segments to a visual code in alphabetic writing systems might require or induce modifications to auditory processing and representations of speech, to support the phoneme-level manipulations and representations that are relevant for writing in that specific orthographic system but are not natural to speaking the language encoded by it.

This raises the intriguing possibility that neural orthography-on-speech processing effects are script-specific rather than universal. Although a previous investigation by Rueckl et al. (2015) suggested that consistent brain areas show similar patterns of speech-print convergence across transparent and opaque alphabets, an abjad (Hebrew) and a logosyllabic script (Chinese), investigations of abugidas have not, until now, been reported. It is conceivable that the Devanagari abugida imposes different visual-to-orthographic mapping requirements compared both with an alphabetic script, as its characters transparently encode consonant-vowel pairs (syllables) rather than subsyllabic segments (phonemes), and with logosyllabaries, in which the phonological value of a character cannot be decoded, only learned. Devanagari and Latin scripts intersect at the conceptual level of sound-symbol mapping and the necessity of assembling sequential symbols to compose words, and Devanagari sits between alphabetic and logographic scripts in terms of the visual complexity of individual graphemes.

The evidence presented above for Devanagari suggests that having learned the mappings between orthographic symbols and their phonological renderings does not, per se, necessarily induce significant changes to the processing of continuous speech in the auditory system as a whole, nor in areas specifically investigated because of a priori evidence of their role in acoustic-phonetic processing of speech. Further investigation is essential to elucidate whether reading abugidas, which are neither alphabetic nor syllabic, is somehow categorically different from reading other scripts. In addition, we speculate that the differences we report above are in part attributable to the script, but are also a function of the choice to eschew any meta-linguistic or phonological task in the investigation. This was a deliberate decision, taken to ensure that we could test claims regarding the obligatory nature of the impact of literacy on the BOLD response to speech.

We propose that the nature of the speech unit encoded in the orthographic system used by literate individuals must be considered when generating hypotheses about the impact of literacy on speech sound representation and processing. Ultimately, we would argue that literacy in all orthographies is not equivalent, and that drawing conclusions of a universal nature from investigations of alphabetic literacy alone is problematic. As we have previously discussed (Hervais-Adelman et al., 2019), the impact of literacy on visual processing reported by Dehaene et al. (2010) for alphabetic literates was not replicated in this group of Hindi-speaking individuals, underscoring the need for further investigations to provide concordant, or discordant, evidence for influential proposals.

An especially intriguing finding of the present study is that the pSTG ROI showed greater functional connectivity with GMFA during spoken sentence processing both cross-sectionally as a function of literacy and longitudinally within-participant as a result of literacy training. The functional connectivity between this region of posterior superior temporal cortex that is associated with acoustic-phonetic processing of phonemes and the handwriting-related areas of the dorsal motor and premotor cortices is of outstanding interest. Literacy is almost never acquired as a purely receptive skill but also involves an important production component when learning to write by hand (but also in typing). It has previously been demonstrated that recognizing (alphabetic) letters activates premotor cortical areas consistent with the representation of the hand habitually used to write (Longcamp et al., 2008). This is compelling evidence for a functional role of graphomotor processes in reading. However, the role of learning to write in developing acoustic-phonetic representations at the level encoded by the script is barely discussed, although there would be every reason to posit that creating motor-auditory mappings for encoding script must be as important in becoming literate (in that it forms the functional basis of written expression) as learning the visual-auditory bases for decoding script.

While future studies will be necessary to examine the implications of this functional relationship more closely, the data at hand indicate that, in literate individuals, there is significantly greater coupling between hand-motor regions and auditory processing areas during online sentence processing, in the absence of any orthographic or manual task. Although it is consistent with classical Hebbian processes (Hebb, 1949) that repeated pairings of orthographic tokens with their spoken representations during learning, through the exercise of orthographic output, can lead to such functional coupling, the relevance of this coupling to spoken language processing is unclear. Importantly, it suggests that we must consider the potential role of auditory-manual mapping in theories of the role of literacy in the development of phonological representation and processing.

A regrettable limitation of the present study is the relatively small number of participants in the no-training control group (N = 12). Comparisons involving this group must be interpreted with due caution because of low statistical power (for a comprehensive discussion of the impact of N on group-level statistics in fMRI, see Geuter et al., 2018). However, the inclusion of two control groups for which we expect no change over time should mitigate concerns about potential false positives. We cannot, however, rule out that some changes of modest effect size may have been missed, although the Bayesian statistics, for the most part, indicate convincing evidence one way or the other.

Future studies, ideally preregistered and longitudinal, will be required to systematically examine the potential script specificity (alphabetic vs nonalphabetic) of literacy-induced modulations of responses to speech, in the presence and absence of metalinguistic tasks, and to better understand the role of graphomotor learning in influencing auditory processing of speech.

Footnotes

  • This work was supported by a Max Planck Society Strategic Innovation Grant to F.H. A.H.-A. was supported by Swiss National Science Foundation Grant PP00P1_163726. Raw data can be made available upon reasonable request to F.H. Custom analysis code can be made available upon request to A.H.-A.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Alexis Hervais-Adelman at alexis.hervais-adelman{at}uzh.ch

SfN exclusive license.

References

Ashburner J, Friston KJ (2005) Unified segmentation. Neuroimage 26:839–851. doi:10.1016/j.neuroimage.2005.02.018

Buchanan EM (2018) MeMoBootR. Available at https://github.com/doomlab/MeMoBootR.

Chereau C, Gaskell MG, Dumay N (2007) Reading spoken words: orthographic effects in auditory priming. Cognition 102:341–360. doi:10.1016/j.cognition.2006.01.001

Chevillet MA, Jiang X, Rauschecker JP, Riesenhuber M (2013) Automatic phoneme category selectivity in the dorsal auditory stream. J Neurosci 33:5208–5215. doi:10.1523/JNEUROSCI.1870-12.2013

Chyl K, Kossowski B, Dębska A, Łuniewska M, Banaszkiewicz A, Żelechowska A, Frost SJ, Mencl WE, Wypych M, Marchewka A, Pugh KR, Jednoróg K (2018) Prereader to beginning reader: changes induced by reading acquisition in print and speech brain networks. J Child Psychol Psychiatry 59:76–87. doi:10.1111/jcpp.12774

Daniels PT (2020) Foundations of graphonomy. J Cult Cogn Sci 1–11.

Dehaene S, Pegado F, Braga LW, Ventura P, Nunes Filho G, Jobert A, Dehaene-Lambertz G, Kolinsky R, Morais J, Cohen L (2010) How learning to read changes the cortical networks for vision and language. Science 330:1359–1364. doi:10.1126/science.1194140

Dijkstra T, Roelofs A, Fieuws S (1995) Orthographic effects on phoneme monitoring. Can J Exp Psychol 49:264–271. doi:10.1037/1196-1961.49.2.264

Exner S (1881) Untersuchungen über die Localisation der Functionen in der Grosshirnrinde des Menschen. Vienna: Wilhelm Braumüller.

Friederici AD (2012) The cortical language circuit: from auditory perception to sentence comprehension. Trends Cogn Sci 16:262–268. doi:10.1016/j.tics.2012.04.001

Geuter S, Qi G, Welsh R, Wager T, Lindquist M (2018) Effect size and power in fMRI group analysis. bioRxiv 295048. doi:10.1101/295048

Hagoort P (2017) The core and beyond in the language-ready brain. Neurosci Biobehav Rev 81:194–204. doi:10.1016/j.neubiorev.2017.01.048

Halle PA, Chereau C, Segui J (2000) Where is the /b/ in "absurde" [apsyrd]? It is in French listeners' minds. J Mem Lang 43:618–639. doi:10.1006/jmla.2000.2718

Hebb DO (1949) The organization of behavior. New York: Wiley.

Hervais-Adelman A, Kumar U, Mishra RK, Tripathi VN, Guleria A, Singh JP, Eisner F, Huettig F (2019) Learning to read recycles visual cortical networks without destruction. Sci Adv 5:eaax0262. doi:10.1126/sciadv.aax0262

Hickok G (2012) The cortical organization of speech processing: feedback control and predictive coding the context of a dual-stream model. J Commun Disord 45:393–402. doi:10.1016/j.jcomdis.2012.06.004

JASP Team (2022) JASP (version 0.16.3).

Kaiser A, Haller S, Schmitz S, Nitsch C (2009) On sex/gender related similarities and differences in fMRI language research. Brain Res Rev 61:49–59. pmid:19406148

Kansaku K, Kitazawa S (2001) Imaging studies on sex differences in the lateralization of language. Neurosci Res 41:333–337. doi:10.1016/s0168-0102(01)00292-9

Kuznetsova A, Brockhoff PB, Christensen RH (2017) lmerTest package: tests in linear mixed effects models. J Stat Softw 82:1–26. doi:10.18637/jss.v082.i13

Laganaro M (2019) Phonetic encoding in utterance production: a review of open issues from 1989 to 2018. Lang Cogn Neurosci 34:1193–1201.

Lenth R (2021) emmeans: estimated marginal means, aka least-squares means. R package version 1.6.2-1. Available at https://CRAN.R-project.org/package=emmeans.

Longcamp M, Anton JL, Roth M, Velay JL (2003) Visual presentation of single letters activates a premotor area involved in writing. Neuroimage 19:1492–1500. doi:10.1016/S1053-8119(03)00088-0

Longcamp M, Boucard C, Gilhodes JC, Anton JL, Roth M, Nazarian B, Velay JL (2008) Learning through hand- or typewriting influences visual recognition of new graphic shapes: behavioral and functional imaging evidence. J Cogn Neurosci 20:802–815. doi:10.1162/jocn.2008.20504

Longcamp M, Lagarrigue A, Nazarian B, Roth M, Anton JL, Alario FX, Velay JL (2014) Functional specificity in the motor system: evidence from coupled fMRI and kinematic recordings during letter and digit writing. Hum Brain Mapp 35:6077–6087. doi:10.1002/hbm.22606

Mitterer H, Reinisch E (2015) Letters don't matter: no effect of orthography on the perception of conversational speech. J Mem Lang 85:116–134. doi:10.1016/j.jml.2015.08.005

Monzalvo K, Dehaene-Lambertz G (2013) How reading acquisition changes children's spoken language network. Brain Lang 127:356–365. doi:10.1016/j.bandl.2013.10.009

Morais J (2021) The phoneme: a conceptual heritage from alphabetic literacy. Cognition 213:104740. doi:10.1016/j.cognition.2021.104740

Morais J, Cary L, Alegria J, Bertelson P (1979) Does awareness of speech as a sequence of phones arise spontaneously? Cognition 7:323–331. doi:10.1016/0010-0277(79)90020-9

Planton S, Longcamp M, Péran P, Démonet JF, Jucla M (2017) How specialized are writing-specific brain regions? An fMRI study of writing, drawing and oral spelling. Cortex 88:66–80. pmid:28081451

Preston JL, Molfese PJ, Frost SJ, Mencl WE, Fulbright RK, Hoeft F, Landi N, Shankweiler D, Pugh KR (2016) Print-speech convergence predicts future reading outcomes in early readers. Psychol Sci 27:75–84. doi:10.1177/0956797615611921

Price CJ (2012) A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 62:816–847. doi:10.1016/j.neuroimage.2012.04.062

R Core Team (2021) R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.

Raghavendra RH (2020) Literacy and health status of scheduled castes in India. Contemp Voice Dalit 12:97–110.

Roux FE, Dufor O, Giussani C, Wamain Y, Draper L, Longcamp M, Demonet JF (2009) The graphemic/motor frontal area: Exner's area revisited. Ann Neurol 66:537–545. doi:10.1002/ana.21804

Roux FE, Draper L, Köpke B, Démonet JF (2010) Who actually read Exner? Returning to the source of the frontal 'writing centre' hypothesis. Cortex 46:1204–1210. doi:10.1016/j.cortex.2010.03.001

Rueckl JG, Paz-Alonso PM, Molfese PJ, Kuo WJ, Bick A, Frost SJ, Hancock R, Wu DH, Mencl WE, Duñabeitia JA, Lee JR, Oliver M, Zevin JD, Hoeft F, Carreiras M, Tzeng OJ, Pugh KR, Frost R (2015) Universal brain signature of proficient reading: evidence from four contrasting languages. Proc Natl Acad Sci USA 112:15510–15515. doi:10.1073/pnas.1509321112

Rutten S, Santoro R, Hervais-Adelman A, Formisano E, Golestani N (2019) Cortical encoding of speech enhances task-relevant acoustic information. Nat Hum Behav 3:974–987. doi:10.1038/s41562-019-0648-9

Sato M (2020) The neurobiology of sex differences during language processing in healthy adults: a systematic review and a meta-analysis. Neuropsychologia 140:107404. doi:10.1016/j.neuropsychologia.2020.107404

Seidenberg MS, Tanenhaus MK (1979) Orthographic effects on rhyme monitoring. J Exp Psychol Hum Learn Mem 5:546–554. doi:10.1037/0278-7393.5.6.546

Shankweiler D, Mencl WE, Braze D, Tabor W, Pugh KR, Fulbright RK (2008) Reading differences and brain: cortical integration of speech and print in sentence processing varies with reader skill. Dev Neuropsychol 33:745–775. doi:10.1080/87565640802418688

Sommer IE, Aleman A, Bouma A, Kahn RS (2004) Do women really have more bilateral language representation than men? A meta-analysis of functional imaging studies. Brain 127:1845–1852. doi:10.1093/brain/awh207

Sommer IE, Aleman A, Somers M, Boks MP, Kahn RS (2008) Sex differences in handedness, asymmetry of the planum temporale and functional language lateralization. Brain Res 1206:76–88. doi:10.1016/j.brainres.2008.01.003

Vaid J, Gupta A (2002) Exploring word recognition in a semi-alphabetic script: the case of Devanagari. Brain Lang 81:679–690. doi:10.1006/brln.2001.2556

Ventura P, Morais J, Pattamadilok C, Kolinsky R (2004) The locus of the orthographic consistency effect in auditory word recognition. Lang Cogn Process 19:57–95. doi:10.1080/01690960344000134

Ventura P, Kolinsky R, Brito-Mendes C, Morais J (2010) Mental representations of the syllable internal structure are influenced by orthography. Lang Cogn Process 16:393–418. doi:10.1080/01690960042000184

Wallentin M (2009) Putative sex differences in verbal abilities and language cortex: a critical review. Brain Lang 108:175–183. doi:10.1016/j.bandl.2008.07.001

Whitfield-Gabrieli S, Nieto-Castanon A (2012) Conn: a functional connectivity toolbox for correlated and anticorrelated brain networks. Brain Connect 2:125–141. doi:10.1089/brain.2012.0073

Winkler AM, Ridgway GR, Webster MA, Smith SM, Nichols TE (2014) Permutation inference for the general linear model. Neuroimage 92:381–397. doi:10.1016/j.neuroimage.2014.01.060

Wobbrock JO, Findlater L, Gergle D, Higgins JJ (2011) The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. doi:10.1145/1978942.1978963

Yi HG, Leonard MK, Chang EF (2019) The encoding of speech sounds in the superior temporal gyrus. Neuron 102:1096–1110.

Keywords

  • GMFA
  • reading
  • speech perception
  • VWFA
  • writing
