Articles, Behavioral/Systems/Cognitive

Inferior Frontal Gyrus Activation Predicts Individual Differences in Perceptual Learning of Cochlear-Implant Simulations

Frank Eisner, Carolyn McGettigan, Andrew Faulkner, Stuart Rosen and Sophie K. Scott
Journal of Neuroscience 26 May 2010, 30 (21) 7179-7186; DOI: https://doi.org/10.1523/JNEUROSCI.4040-09.2010

Abstract

This study investigated the neural plasticity associated with perceptual learning of a cochlear implant (CI) simulation. Normal-hearing listeners were trained with vocoded and spectrally shifted speech simulating a CI while cortical responses were measured with functional magnetic resonance imaging (fMRI). A condition in which the vocoded speech was spectrally inverted provided a control for learnability and adaptation. Behavioral measures showed considerable individual variability both in the ability to learn to understand the degraded speech, and in phonological working memory capacity. Neurally, left-lateralized regions in superior temporal sulcus and inferior frontal gyrus (IFG) were sensitive to the learnability of the simulations, but only the activity in prefrontal cortex correlated with interindividual variation in intelligibility scores and phonological working memory. A region in left angular gyrus (AG) showed an activation pattern that reflected learning over the course of the experiment, and covariation of activity in AG and IFG was modulated by the learnability of the stimuli. These results suggest that variation in listeners' ability to adjust to vocoded and spectrally shifted speech is partly reflected in differences in the recruitment of higher-level language processes in prefrontal cortex, and that this variability may further depend on functional links between the left inferior frontal gyrus and angular gyrus. Differences in the engagement of left inferior prefrontal cortex, and its covariation with posterior parietal areas, may thus underlie some of the variation in speech perception skills that have been observed in clinical populations of CI users.

Introduction

Cochlear implants (CIs) can restore hearing after sensorineural hearing loss, or provide auditory input to children born deaf. These prostheses deliver tonotopically distributed electrical stimulation to the auditory nerve via an electrode array that is inserted into the cochlea. CIs provide a limited degree of spectral resolution, which is sufficient for good speech intelligibility in quiet (Moore and Shannon, 2009), but much of the detail of the original signal is lost. Acoustic cues that are important for decoding verbal and nonverbal information may thus be weakened or lost. Postlingually deafened, adult CI users commonly report that speech and the sensation of pitch sound very different from their previous memories, and it can take some time for users to adapt to the new input (Tyler et al., 1997; Reiss et al., 2007; Moore and Shannon, 2009). In addition to the limited availability of acoustic cues, the placement of the electrode array can further affect the intelligibility of the speech signal (Skinner et al., 2002; Finley et al., 2008). If the array has a relatively shallow insertion into the cochlea, the spectral features of speech may be signaled at more basal places than in normal hearing, so that the speech signal is, in effect, shifted up in frequency (Dorman et al., 1997; Shannon et al., 1998; Rosen et al., 1999).

Some CI users learn to use their device exceedingly well, and are even able to use the telephone with ease. There is, however, considerable interindividual variability in outcome, for which the main predicting factors include age of implantation, duration of deafness, and residual speech perception levels before implantation (UKCISG, 2004). Cognitive factors, such as verbal learning and phonological working memory, have also been implicated in implantation outcomes, both in adults (Heydebrand et al., 2007) and children (Fagan et al., 2007). However, no currently known set of factors can account for all of the interindividual variability that is observed clinically.

The adaptation to the novel stimulation from a CI is mediated by plasticity in the ascending auditory pathway (Fallon et al., 2008) and the cortex. Functional imaging of CI users using positron emission tomography (PET) has identified neural correlates of postimplant sound and speech perception in primary and secondary auditory cortex, prefrontal and parietal cortex, and visual cortex (Wong et al., 1999; Giraud et al., 2000, 2001; Giraud and Truy, 2002; Green et al., 2005). Variation in speech perception is associated with activity in temporal and prefrontal cortex (Mortensen et al., 2006; Lee et al., 2007). Technical limitations of functional magnetic resonance imaging (fMRI) with implant devices, and radiation exposure limits with PET, have prevented the imaging of neural changes associated with initial perceptual and linguistic processing of the novel-sounding CI input. The current study used fMRI and a simulation of the spectral resolution and spectral shifting of a cochlear implant. Its aim was to identify cortical changes in naive, hearing listeners as they learnt to understand this novel input over the course of a training session, and to relate these changes to their capacity in phonological working memory.

Materials and Methods

Noise-vocoding was used to simulate the spectral resolution and spectral shifting of a cochlear implant (Fig. 1). The number of spectral channels and the degree of shift of the CI simulation were set to a level that made the stimuli sufficiently difficult to understand initially, yet allowed learning to occur on a time scale that can be tracked in a functional imaging study (i.e., minutes, rather than seconds or hours). A condition in which the spectral information in the speech signal was inverted, such that low frequencies in the speech input became high and vice versa, served as a control for learnability as well as for the overall acoustic properties of the stimuli. To examine the relationship between intelligibility-related neural activity and memory systems, a battery of phonological working memory and vocabulary tests was administered before scanning (see supplemental materials, available at www.jneurosci.org).

Figure 1.

Spectrograms of the sentence “The sweet shop was empty” in the learnable (A) and spectrally inverted (B) conditions. In both conditions the original speech signal was noise-vocoded and spectrally shifted upward. Stimuli in the inverted condition are not intelligible, and listeners cannot adapt to this distortion.

Participants.

Twenty right-handed native speakers of English (10 male, mean age 25 years, range 19–31 years) participated in the experiment, and a further five (3 male, mean age 24, range 22–25) took part in a pretest. None reported having a history of hearing disorder or neurological illness, taking medication, or having prior experience with CI simulations. All volunteers gave informed written consent and were paid for their participation. The study was approved by the University College London Department of Psychology Ethics Committee.

Materials.

Stimulus materials were created from recordings of sentence lists (Bench et al., 1979) which comprise 336 syntactically and semantically simple sentences. Recordings were made in a sound-damped room by a male native English speaker, recorded to MiniDV tape (Brüel & Kjaer 4165 microphone, digitized at a 48 kHz sampling rate with 16 bit quantization) and edited using Final Cut Pro (Apple Inc.) and Matlab (MathWorks) software.

The sentences for the “learnable” condition were individually manipulated using noise-vocoding (Shannon et al., 1995) and spectral shifting (Dorman et al., 1997; Shannon et al., 1998; Rosen et al., 1999). These techniques simulate two critical aspects of how the stimulation produced by a cochlear implant may differ from normal hearing: a coarse spectral resolution, resulting from the limited number of effective frequency channels, and a misalignment between the frequencies delivered to the implant's electrode array and the tonotopy of the basilar membrane. Noise-vocoding involves dividing the frequency spectrum into analysis bands, extracting the amplitude envelope from each band, and multiplying the envelope with a noise-excited carrier band whose center frequency and cutoffs are matched to its respective analysis band. The amplitude-modulated carrier bands are then added together. Spectral shifting additionally alters the cutoff frequencies of the carrier bands by a factor that reflects a given misalignment of cochlear place according to Greenwood's frequency–position function (Greenwood, 1990). Our manipulations simulated eight effective frequency channels and a basalward shift of 4.8 mm from the apex of the basilar membrane. The filter cutoff frequencies used for the analysis and noise bands are shown in Table 1; otherwise, the signal processing procedure followed Rosen et al. (1999). Stimuli for the “inverted” control condition were processed identically, except that the mapping from analysis bands to carrier bands was inverted in the frequency domain; that is, the amplitude envelope from the lowest analysis band was mapped to the highest carrier band, the envelope from the second lowest analysis band to the second highest carrier band, and so on. This produced stimuli whose acoustic characteristics closely matched the learnable condition, but which were unintelligible and could not be understood even with training. Examples of the learnable and inverted stimuli are in the supplemental materials (available at www.jneurosci.org).
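
To make the signal chain concrete, the following is a minimal sketch of the vocoding and shifting steps in Python (the authors used Matlab). The Butterworth filters, the 160 Hz envelope smoothing, and the 35 mm Greenwood constants are illustrative assumptions; the actual band cutoffs are those listed in Table 1, and the procedure followed Rosen et al. (1999).

```python
# Minimal noise-vocoder with spectral shift (illustrative sketch, not the
# authors' code). Assumes `speech` is a float array sampled at `fs` Hz.
import numpy as np
from scipy.signal import butter, sosfilt

def greenwood(x_mm, length_mm=35.0, A=165.4, a=2.1, k=0.88):
    """Greenwood (1990) frequency-position function; x_mm is the distance
    from the apex of the basilar membrane."""
    return A * (10 ** (a * x_mm / length_mm) - k)

def place(freq_hz, length_mm=35.0, A=165.4, a=2.1, k=0.88):
    """Inverse Greenwood function: cochlear place in mm from the apex."""
    return length_mm / a * np.log10(freq_hz / A + k)

def bandpass(x, lo, hi, fs, order=3):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def envelope(x, fs, cutoff=160.0):
    sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, np.maximum(x, 0.0))  # half-wave rectify, then smooth

def vocode(speech, fs, analysis_edges, shift_mm=4.8, invert=False):
    """Modulate band-limited noise with the envelopes of the analysis bands;
    carrier bands are shifted basalward by `shift_mm`. With invert=True the
    lowest analysis band drives the highest carrier band and so on, as in
    the control condition."""
    carrier_edges = greenwood(place(np.asarray(analysis_edges, float)) + shift_mm)
    n = len(analysis_edges) - 1
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for i in range(n):
        j = n - 1 - i if invert else i
        env = envelope(bandpass(speech, analysis_edges[i], analysis_edges[i + 1], fs), fs)
        out += env * bandpass(noise, carrier_edges[j], carrier_edges[j + 1], fs)
    return out / np.max(np.abs(out))
```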

Table 1.

Parameters of the filter bank used in vocoding for the analysis bands of the original speech signal and the shifted noise carrier bands (in Hz)

Pretest.

A behavioral pilot study was conducted to ensure that the spectrally inverted stimuli were indeed unintelligible and could not be understood after training. The design and procedure of the pretest were identical to those of the fMRI experiment, except that instead of giving verbal responses during test phases, listeners were asked to type on a computer keyboard what they heard after the first presentation of each stimulus; after this, they saw the sentence presented on the screen. The responses to “vocoded” and “vocoded-inverted” stimuli were binned in four blocks of 25 and scored as the percentage of correctly reported keywords. In the vocoded condition, the average scores from blocks one to four were 39.3%, 47.0%, 62.8%, and 65.2%, an average improvement of 26 percentage points. The corresponding scores in the vocoded-inverted condition were 1.0%, 0.3%, 1.3%, and 1.0%, indicating that, as expected, listeners were unable to adjust to the spectral inversion manipulation within the context and time frame of this experiment.

Training design and procedure.

The training materials comprised 100 vocoded-shifted and 100 vocoded-inverted sentences. On each trial, subjects first heard a sentence while the instruction “Listen” was displayed on the screen. This was followed by a second presentation, during which a written version of the sentence was shown simultaneously on-screen. The mean interstimulus interval between these “listening” and “listen + feedback” trials was 5 s (jittered up to ±1 s). The order of learnable and inverted trials was pseudo-randomized such that no more than three trials of one type could occur in a row. The training session was broken up into four blocks of 50 sentences (25 vocoded and 25 vocoded-inverted in each; Fig. 2). In between the training blocks, and at the beginning and end of the experiment, there were five test phases consisting of 10 sentences each. During a test phase, subjects were asked to repeat aloud what they could understand after having heard a vocoded sentence once, while the instruction “Repeat” was displayed on-screen. These verbal responses were recorded and scored off-line as the percentage of keywords repeated correctly in each test phase. For each individual subject, the items that made up the “learnable,” “inverted,” and “test” conditions were drawn at random from the corpus of 336 sentences, such that a particular sentence would not occur more than once in the experiment. A repeated-measures ANOVA on the percentage of correct keywords, with the factor “test phase” and a planned t-contrast comparing test phases 1 and 5, was used to test for a change in intelligibility over the course of the experiment.
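
The trial-order constraint (no more than three consecutive trials of one type) can be implemented in several ways; the rejection-sampling sketch below is one simple possibility, not the paper's actual algorithm, which is not described.

```python
# Reshuffle a block of 25 learnable and 25 inverted trials until no more
# than three trials of one type occur in a row (rejection sampling).
import random

def max_run(seq):
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def make_block(n_per_type=25, max_consecutive=3, seed=0):
    rng = random.Random(seed)
    trials = ["learnable"] * n_per_type + ["inverted"] * n_per_type
    while True:
        rng.shuffle(trials)
        if max_run(trials) <= max_consecutive:
            return trials

assert max_run(make_block()) <= 3
```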

Figure 2.

The fMRI session consisted of five test phases and four training phases (A). During each training phase (B), listeners were presented with 50 pairs of listening and feedback trials, half of which were potentially learnable. Listeners always heard the spoken version of a sentence first, followed by a second auditory presentation with simultaneous written feedback. During the test phases, participants heard 10 sentences without feedback, and repeated back what they understood after each sentence (C).

fMRI data acquisition and analysis.

Stimuli were delivered using Matlab with the Psychophysics Toolbox extension (Brainard, 1997) and responses recorded in Audacity (audacity.sourceforge.net). Subjects wore a headset with electrodynamic headphones and an optical microphone (MR Confon GmbH). Whole-brain functional and structural magnetic resonance imaging (MRI) data were acquired on a Siemens Avanto 1.5 tesla scanner (Siemens AG) with a 12-channel birdcage head coil. A gradient-echo echo-planar imaging sequence was used for the functional scans (repetition time = 3 s, echo time = 50 ms, flip angle = 90°, isotropic voxel size = 3 mm³, 35 axial slices, 805 volumes; total duration = 40.3 min). A T1-weighted anatomical scan was acquired after the functional run (high-resolution magnetization-prepared rapid-acquisition gradient echo, voxel size = 1 mm³, 160 sagittal slices).

MRI data were analyzed using SPM5 (Wellcome Department of Imaging Neuroscience, London, UK) with the MarsBaR extension for region-of-interest analyses (Brett et al., 2002). Functional MRI volumes that had been acquired during the test phases were discarded. The remaining 668 fMRI volumes were realigned, slice-timing corrected, coregistered with the structural scan, segmented, normalized to a standard stereotactic space (Montreal Neurological Institute) on the basis of the segmentation parameters, and smoothed with an isotropic Gaussian kernel of 6 mm full-width at half-maximum. Statistical whole-brain analyses were conducted in the context of the general linear model and included four effects of interest at the single-subject level (learnable and inverted stimuli, each under listening and listening + feedback conditions). Event-related hemodynamic responses for each event type were modeled using a canonical (gamma-type) hemodynamic response function.
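
As a rough reconstruction of how such a regressor is built, the sketch below convolves a vector of event onsets with a double-gamma HRF of the kind SPM uses as its canonical response; the gamma parameters and the onset times are assumptions for illustration, not values taken from the study, and SPM5 configured the actual basis internally.

```python
# Sketch of an event-related GLM regressor: onsets convolved with a
# double-gamma HRF (response peak near 6 s, undershoot near 16 s,
# undershoot ratio 1/6).
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # response minus undershoot
    return hrf / hrf.sum()

tr, n_scans = 3.0, 668            # repetition time and retained volumes
onsets = np.zeros(n_scans)
onsets[[4, 10, 17, 25]] = 1.0     # hypothetical event onsets, in scans
regressor = np.convolve(onsets, canonical_hrf(tr))[:n_scans]
```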

Results

Effect of learnability

We first compared the hemodynamic response elicited by listening to learnable cochlear-implant simulations to the response elicited by listening to spectrally inverted control stimuli. A random-effects group analysis was used to compare the learnable and inverted conditions on listening trials. This included a covariate for the mean repetition test scores from each subject, averaged across the session. The statistical threshold was set at p < 0.001 at the voxel level (uncorrected for multiple comparisons) with a cluster extent threshold k = 30 voxels. This analysis revealed greater activation for learnable stimuli in the left superior temporal sulcus (STS) and the left inferior frontal gyrus (IFG; Table 2). The average percentage of signal change across the session was obtained for each condition and for each subject in the two clusters of significantly activated voxels. These regionally averaged percentages of signal change confirmed that the overall effect of learnability was present at both sites, both in the trials where participants initially listened to a stimulus and in the trials where they received simultaneous written feedback during stimulus presentation (Fig. 3).
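
The regionally averaged percentage of signal change can be illustrated with a short sketch; the toy 4D array and cluster mask below stand in for the SPM5/MarsBaR machinery that was actually used.

```python
# Sketch of a regionally averaged percent-signal-change measure.
# `data` is a toy (x, y, z, time) BOLD array; `mask` a boolean cluster.
import numpy as np

def roi_percent_signal_change(data, mask):
    roi_ts = data[mask].mean(axis=0)        # mean time course over ROI voxels
    baseline = roi_ts.mean()
    return 100.0 * (roi_ts - baseline) / baseline

rng = np.random.default_rng(1)
data = 100.0 + rng.standard_normal((4, 4, 4, 668))  # toy BOLD data
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True                           # toy 8-voxel "cluster"
psc = roi_percent_signal_change(data, mask)          # one value per volume
```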

Table 2.

Summary of activation clusters in the random-effects group analyses

Figure 3.

Overall effect of learnability. Two regions, in the left IFG and left STS, were sensitive to the learnability of the stimuli (A). Results are shown for a t-contrast of the learnable and inverted conditions in a random-effects group analysis (height threshold p < 0.001; cluster extent threshold k > 30 voxels). The bar graphs show the pattern of mean signal change in the two regions for the learnable and inverted conditions, separately for trials in which listeners initially heard a sentence and for trials in which they received feedback (B). Error bars represent SEM. Stereotactic coordinates of the peak activations are given in the Montreal Neurological Institute system. L, Left hemisphere.

Individual differences in learning

Behavioral data collected over the five test phases indicated that participants improved significantly in the identification of words within sentences in the learnable condition, albeit with considerable variability of scores over time (Fig. 4). Nearly all participants repeated fewer than 10% of keywords correctly in the first test phase, but while some improved considerably (by >60 percentage points) over the course of the experiment, others barely improved at all (by <5 percentage points). The neural correlates of this variability were investigated as follows. Voxels showing interindividual variation in the effect of learnability were defined by inclusively masking (at a height threshold of p < 0.001) the effect of overall learnability with the effect of a covariate coding each subject's improvement in intelligibility over the course of the experiment (calculated as the difference between the first and last test phases). This revealed an area in the inferior frontal gyrus in which the neural learnability effect, that is, the difference between the responses to the learnable and inverted conditions, correlated significantly with individual participants' behavioral learning scores. From these voxels, regionally averaged percentages of signal change were obtained as an index of effect size in each condition for each subject (Fig. 5).
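
Inclusive masking here simply intersects two thresholded statistical maps: a voxel survives only if it passes the height threshold in both analyses. A minimal sketch, with random placeholder p-value maps standing in for the SPM5 contrast images:

```python
# Keep voxels passing p < 0.001 in both the overall learnability contrast
# and the individual-differences covariate (toy p-maps, illustrative only).
import numpy as np

rng = np.random.default_rng(2)
p_learnability = rng.uniform(size=(53, 63, 46))   # toy whole-brain p-maps
p_covariate = rng.uniform(size=(53, 63, 46))
mask = (p_learnability < 0.001) & (p_covariate < 0.001)
print(mask.sum(), "voxels survive both thresholds")
```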

Figure 4.

Box plot showing the behavioral learning effect. Over the course of the training, listeners reported more keywords correctly during the test phases (main effect of test phase, F(4,19) = 26.17, p < 0.001; planned comparison of test phases 1 and 5, t(19) = 7.65, p < 0.001). Red lines indicate the median, blue edges represent the 25th and 75th percentiles, dashed lines cover the range of data points excluding outliers, and red crosses mark individual outliers.

Figure 5.

Effects of individual variation. The learnability effect was modulated by interindividual differences in the amount of learning (left) and in working memory capacity (right). These effects were observed in the left inferior frontal gyrus, but not in superior temporal cortex, in a whole-brain analysis. Stereotactic coordinates of the peak activations are given in the Montreal Neurological Institute system. L, Left hemisphere.

The analysis was repeated with a predictor that, instead of intelligibility, coded individual composite working memory scores, measured using a battery of phonological working memory tests (see supplemental materials, available at www.jneurosci.org). This, again, revealed an area of the inferior frontal gyrus in which the overall neural effect of learnability correlated positively with individual working memory scores (Fig. 5). This area partially overlapped with the one identified on the basis of intelligibility. In contrast, neither the intelligibility scores nor the phonological working memory scores correlated significantly with activation in the superior temporal lobe.

Neural basis of intelligibility changes over time

To identify brain areas whose activity changed over time as a function of learning, a covariate of interest representing the individual learning curve was included at the single-subject level, in addition to the four experimental conditions. The learning curve was derived by cubic interpolation of the five behavioral test scores over the course of the experiment. A random-effects group analysis (p < 0.001, uncorrected; k = 30) of this covariate revealed significant activations in two regions of the inferior parietal lobe, the left supramarginal gyrus (SMG) and the left angular gyrus (AG), in which fMRI signal intensity correlated with participants' intelligibility scores over the course of the experiment (Fig. 6). For these two regions we again obtained the percentages of signal change in each condition for each subject (Fig. 6). A repeated-measures ANOVA with the factors area, trial type, and learnability showed no significant difference in these regions between the learnable and inverted stimuli, during either the listening or the listening + feedback trials, although there was a trend for the learnable stimuli to activate the angular gyrus more, over time, during the listening + feedback trials (p = 0.07).
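
A per-subject learning-curve covariate of this kind can be sketched as follows; the test scores, the scan indices of the test phases, and the use of scipy's CubicSpline are illustrative assumptions rather than the authors' exact procedure.

```python
# Cubic interpolation of five behavioral test scores across the 668
# retained volumes, yielding one covariate value per scan.
import numpy as np
from scipy.interpolate import CubicSpline

test_scores = np.array([5.0, 25.0, 40.0, 55.0, 60.0])  # % keywords at 5 tests
test_scans = np.array([0, 167, 334, 501, 667])          # toy scan positions
spline = CubicSpline(test_scans, test_scores)
learning_covariate = spline(np.arange(668))             # one value per volume
```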

Figure 6.

Areas of the brain that change over time with learning. Two regions in inferior parietal cortex (in blue) show an increase in activation over time which reflects learning curves modeled at a single-subject level (A). The bar graphs show the patterns of mean signal change in these two regions for the learnable and inverted conditions. Error bars represent SEM. These areas exhibit differential correlation patterns with activity in the superior temporal sulcus and inferior frontal gyrus, which were sensitive to overall intelligibility (B). Solid lines indicate significant correlations (p < 0.004) in signal change in the learnable and inverted listening conditions; dashed lines indicate nonsignificant correlations. Stereotactic coordinates of the peak activations in A are given in the Montreal Neurological Institute system. L, Left hemisphere.

Patterns of functional connectivity, as indexed by correlations of the average signal strength over time in a given region, were investigated between the two parietal regions and the regions in left IFG and STS that had been identified on the basis of the overall learnability effect. This was done by calculating the correlations in regional signal change between each pair of the four regions, separately for the learnable and inverted conditions (two-tailed Pearson's product-moment correlations; α level set to 0.004 to correct for multiple comparisons). This analysis showed significant correlations of responses between STS and IFG, and between AG and SMG, for both the learnable and inverted conditions. There was also a significant correlation between responses in IFG and AG, in the learnable condition only. No other connections between the four regions were significantly correlated (Fig. 6B).
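
This analysis reduces to pairwise Pearson correlations among four regional time courses, thresholded at the corrected α of 0.004. A minimal sketch with placeholder data:

```python
# Pairwise "connectivity" among four ROI time courses; the time courses
# here are random placeholders for the extracted regional signals.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rois = ["STS", "IFG", "AG", "SMG"]
rng = np.random.default_rng(3)
timecourses = {roi: rng.standard_normal(668) for roi in rois}

for a, b in combinations(rois, 2):
    r, p = pearsonr(timecourses[a], timecourses[b])
    verdict = "significant" if p < 0.004 else "n.s."
    print(f"{a}-{b}: r = {r:+.2f}, p = {p:.3f} ({verdict})")
```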

Discussion

A left-lateralized system in IFG and STS was sensitive to the learnability of the cochlear implant simulations, showing greater activity when listening to learnable sentences than when listening to spectrally inverted sentences. The STS has consistently been implicated in the processing and representation of intelligible speech, including in studies that used noise-vocoding as a way of manipulating the intelligibility of speech (Scott et al., 2000, 2006; Davis and Johnsrude, 2003; Obleser et al., 2007a, 2008), consequent to acoustic-phonetic processing in the superior temporal gyrus (STG) (Jacquemot et al., 2003; Obleser et al., 2007b; Obleser and Eisner, 2009). Many of these studies have revealed a sensitivity in the left and right STS to the number of channels in the vocoded speech signal (Davis and Johnsrude, 2003; Scott et al., 2006; Obleser et al., 2008). Consistent with this work, we propose that in the present study, the left STS is responding to the speech characteristics that are preserved in the cochlear implant simulation.

In contrast to the STS, the left IFG has typically been implicated in higher-order language processes, and activity in this area has been described in a number of previous functional imaging studies of passive speech perception (Davis and Johnsrude, 2003; Friederici et al., 2003; Rodd et al., 2005; Obleser et al., 2007a). For example, activation has been observed in the left IFG when participants heard sentences containing ambiguous words (Hoenig and Scheef, 2009), during aspects of syntax processing (Tettamanti et al., 2009), and when the semantic predictability of a sentence supported comprehension (Obleser et al., 2007a). The left IFG has been proposed to act as a unification space, integrating linguistic information across the phonological, semantic, and syntactic levels (Hagoort, 2005). Prefrontal regions receive projections from both the anterior and posterior auditory processing streams (Romanski et al., 1999; Scott and Johnsrude, 2003; Rauschecker and Scott, 2009). In the present study, the function of the left IFG could thus include the specific use of the simultaneous written feedback to enhance comprehension of the speech or, more generally, the use of linguistic knowledge (lexical, syntactic, or contextual) to support comprehension.

The training paradigm of this study aimed to elicit relatively rapid perceptual learning of the type that may be targeted in computer-based rehabilitation programs (Fu and Galvin, 2007; Stacey and Summerfield, 2007). The profile of behaviorally measured intelligibility effects (Fig. 4) shows that the biggest differences in intelligibility occurred over the first trials, and performance continued to improve more slowly across the rest of the session. This pattern is consistent with other auditory learning paradigms, which have found that the biggest improvements in performance often occur at the start of training (Wright and Fitzgerald, 2001; Hawkey et al., 2004). The improvement in the current study was far from reaching ceiling, and previous behavioral training studies suggest that learning would likely continue with further training sessions (Rosen et al., 1999; Fu and Galvin, 2007; Stacey and Summerfield, 2007). Fitting the individual learning profiles to the neural activity revealed two regions in left SMG and AG in which intelligibility-related change was correlated with change in activation over the course of the training session. This suggests that, alongside the activation in left STS and IFG—which was associated with increases in the intelligibility of spectrally degraded speech—neural activity in the left inferior parietal lobe underlies the behavioral adaptation to the stimuli. In AG, the time-dependent effect was broadly modulated by intelligibility during the perception trials in which written feedback was provided. In contrast, activity in the SMG did not differ between the learnable and inverted trials, nor between trials in which the subjects listened to the stimuli and those in which written feedback was presented simultaneously with the CI simulation. This may imply that the AG is implicated in the specific use of other linguistic information to support learning, while the SMG may be more generally sensitive to time-dependent exposure to the stimuli, rather than to the linguistic content.

The increase in speech intelligibility over time suggests that the effects of learnability and adaptation should be related. There was indeed a significant correlation in activation between the AG and the IFG for only the learnable trials, which may be subserved by a bidirectional anatomical connection between these regions via the superior longitudinal fasciculus (Frey et al., 2008). Increased functional connectivity between the left IFG and angular gyrus has been demonstrated when context helps the comprehension of noise vocoded speech (Obleser et al., 2007a), and when subjects are making overt semantic decisions about noise vocoded speech (Sharp et al., 2009). Both regions have been implicated in other learning processes in speech perception, such as learning to perceive a non-native phonemic contrast (Golestani and Zatorre, 2004), which supports the possibility that they are part of a more generalized learning mechanism which not only applies to spectrally degraded speech, but also to other listening situations where perceptual processing is effortful. We suggest that in the present study, the functional connectivity between the left IFG and AG may have a specific role in the task of mapping between the written sentence information and the heard CI simulation when the simulation is learnable.

An extensive network of brain regions thus underlies adaptation to a CI simulation. Among recipients of CIs there is considerable variation in outcome, and this variation can only in part be explained by known predictive factors such as the preimplantation level of residual hearing. CI users' neural capacity for adaptation may be facilitated by cognitive functions which are not primarily part of the central auditory system (Moore and Shannon, 2009). In our study, we observed a wide range of scores in participants' comprehension of CI simulations, which were correlated with activity in the left IFG. In contrast to the IFG, the response in STS did not vary with individual differences in intelligibility. This pattern of results is consistent with claims that the basic speech perception system, as represented by the activity in the STS, works to its fullest extent with what it can process from the incoming signal, and is not modulated by feedback projections from high-order language areas (Norris et al., 2000). The lack of association between activity in STS and individual differences in learning, in contrast with the strong association between comprehension and left IFG, suggests that variation in successful processing of CI simulations can depend on high-level, linguistic and cognitive factors that go beyond relatively early, acoustic-phonetic processes.

One candidate higher-order cognitive factor that has been implicated in the successful use of CIs is phonological working memory (pWM). Several behavioral studies have found that phonological working memory scores are positively correlated with successful perception of speech following cochlear implantation in children (Pisoni and Geers, 2000; Cleary et al., 2001; Pisoni and Cleary, 2003; Dillon et al., 2004). More generally, developmental language disorders such as specific language impairment or dyslexia often involve a deficit in working memory (Bishop, 2006; Ramus and Szenkovits, 2008). For the participants in the current study, a composite measure of phonological working memory correlated with activity in the left IFG. Functional imaging studies have outlined a network of brain areas underlying phonological working memory, including the inferior parietal lobe and the inferior frontal gyrus (Buchsbaum and D'Esposito, 2008), and one study specifically linked IFG activation in a pWM task with the encoding, maintenance, and response elements of pWM (Strand et al., 2008). The current study shows a neural link between variation in pWM capacity and variation in speech intelligibility, which thus represents a potential functional anatomical basis for the variation that is observed in the responsiveness to postimplant rehabilitation.

It is possible that the use of sentences as stimuli has emphasized individual differences in higher-order language processing (Hervais-Adelman et al., 2008), and this may interact with pWM processes. Neuropsychological studies have reported patients who had deficits in pWM and who made errors processing phonemes in sentences, but not in isolated words (Jacquemot et al., 2006). This possibility should be tested with further studies using adaptation to single-word or sublexical stimuli.

Our results suggest that individual variation in the comprehension of a cochlear implant simulation is at least in part determined by differences in the recruitment of higher-order language processes to help decode the speech, and by functional connectivity between the frontal and parietal lobes, rather than by differences in the quality of acoustic-phonetic processing or representations in the dorsolateral temporal lobes. Furthermore, the results show that variation in phonological working memory scores shares an anatomical locus in left IFG with individual variability in the learning of the CI simulations. Problems with speech and language processing commonly co-occur with problems in phonological working memory tasks. We suggest that one linguistic function of the left IFG is to act as an interface between speech perception and phonological working memory when processing spoken sentences, and that activation differences in this area across individuals are associated with differences in successful adaptation to a CI simulation.

Footnotes

  • F.E., C.M., and S.K.S. were funded by Wellcome Trust Grant WT074414MA. We thank Joe Devlin, Rob Leech, Fred Dick, and Marty Sereno at the Birkbeck-UCL Centre for Neuroimaging for technical advice, and we gratefully acknowledge helpful comments on the manuscript by D. Sauter and two anonymous reviewers.

  • Correspondence should be addressed to Frank Eisner, University College London, Institute of Cognitive Neuroscience, 17 Queen Square, London WC1N 3AR, UK. f.eisner@mpi.nl

References

  1. Bench J, Kowal A, Bamford J (1979) The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol 13:108–112.
  2. Bishop DVM (2006) Developmental cognitive genetics: how psychology can inform genetics and vice versa. Q J Exp Psychol 59:1153–1168.
  3. Brainard DH (1997) The Psychophysics Toolbox. Spat Vis 10:433–436.
  4. Brett M, Anton J, Valabregue R, Poline J (2002) Region of interest analysis using an SPM toolbox. Proceedings of the 8th International Conference on Functional Mapping of the Human Brain, June 2–6, 2002, Sendai, Japan. Available on CD-ROM in Neuroimage, Vol 16, No 2.
  5. Buchsbaum BR, D'Esposito M (2008) The search for the phonological store: from loop to convolution. J Cogn Neurosci 20:762–778.
  6. Cleary M, Pisoni DB, Geers AE (2001) Some measures of verbal and spatial working memory in eight- and nine-year-old hearing-impaired children with cochlear implants. Ear Hear 22:395–411.
  7. Davis MH, Johnsrude IS (2003) Hierarchical processing in spoken language comprehension. J Neurosci 23:3423–3431.
  8. Dillon CM, Burkholder RA, Cleary M, Pisoni DB (2004) Nonword repetition by children with cochlear implants: accuracy ratings from normal-hearing listeners. J Speech Lang Hear Res 47:1103–1116.
  9. Dorman MF, Loizou PC, Rainey D (1997) Simulating the effect of cochlear-implant electrode insertion depth on speech understanding. J Acoust Soc Am 102:2993–2996.
  10. Fagan MK, Pisoni DB, Horn DL, Dillon CM (2007) Neuropsychological correlates of vocabulary, reading, and working memory in deaf children with cochlear implants. J Deaf Stud Deaf Educ 12:461–471.
  11. Fallon JB, Irvine DR, Shepherd RK (2008) Cochlear implants and brain plasticity. Hear Res 238:110–117.
  12. Finley CC, Holden TA, Holden LK, Whiting BR, Chole RA, Neely GJ, Hullar TE, Skinner MW (2008) Role of electrode placement as a contributor to variability in cochlear implant outcomes. Otol Neurotol 29:920–928.
  13. Frey S, Campbell JS, Pike GB, Petrides M (2008) Dissociating the human language pathways with high angular resolution diffusion fiber tractography. J Neurosci 28:11435–11444.
  14. Friederici AD, Rüschemeyer SA, Hahne A, Fiebach CJ (2003) The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cereb Cortex 13:170–177.
  15. Fu QJ, Galvin JJ 3rd (2007) Perceptual learning and auditory training in cochlear implant recipients. Trends Amplif 11:193–205.
  16. Giraud AL, Truy E (2002) The contribution of visual areas to speech comprehension: a PET study in cochlear implant patients and normal-hearing subjects. Neuropsychologia 40:1562–1569.
  17. Giraud AL, Truy E, Frackowiak RSJ, Grégoire MC, Pujol JF, Collet L (2000) Differential recruitment of the speech processing system in healthy subjects and rehabilitated cochlear implant patients. Brain 123:1391–1402.
  18. Giraud AL, Price CJ, Graham JM, Truy E, Frackowiak RS (2001) Cross-modal plasticity underpins language recovery after cochlear implantation. Neuron 30:657–663.
  19. Golestani N, Zatorre RJ (2004) Learning new sounds of speech: reallocation of neural substrates. Neuroimage 21:494–506.
  20. Green KM, Julyan PJ, Hastings DL, Ramsden RT (2005) Auditory cortical activation and speech perception in cochlear implant users: effects of implant experience and duration of deafness. Hear Res 205:184–192.
  21. Greenwood DD (1990) A cochlear frequency-position function for several species—29 years later. J Acoust Soc Am 87:2592–2605.
  22. Hagoort P (2005) On Broca, brain, and binding: a new framework. Trends Cogn Sci 9:416–423.
  23. Hawkey DJ, Amitay S, Moore DR (2004) Early and rapid perceptual learning. Nat Neurosci 7:1055–1056.
  24. Hervais-Adelman A, Davis MH, Johnsrude IS, Carlyon RP (2008) Perceptual learning of noise vocoded words: effects of feedback and lexicality. J Exp Psychol Hum Percept Perform 34:460–474.
  25. Heydebrand G, Hale S, Potts L, Gotter B, Skinner M (2007) Cognitive predictors of improvements in adults' spoken word recognition six months after cochlear implant activation. Audiol Neurootol 12:254–264.
  26. Hoenig K, Scheef L (2009) Neural correlates of semantic ambiguity processing during context verification. Neuroimage 45:1009–1019.
  27. Jacquemot C, Pallier C, LeBihan D, Dehaene S, Dupoux E (2003) Phonological grammar shapes the auditory cortex: a functional magnetic resonance imaging study. J Neurosci 23:9541–9546.
  28. Jacquemot C, Dupoux E, Decouche O, Bachoud-Levi AC (2006) Misperception in sentences but not in words: speech perception and the phonological buffer. Cogn Neuropsychol 23:949–971.
  29. Lee HJ, Giraud AL, Kang E, Oh SH, Kang H, Kim CS, Lee DS (2007) Cortical activity at rest predicts cochlear implantation outcome. Cereb Cortex 17:909–917.
  30. Moore DR, Shannon RV (2009) Beyond cochlear implants: awakening the deafened brain. Nat Neurosci 12:686–691.
  31. Mortensen MV, Mirz F, Gjedde A (2006) Restored speech comprehension linked to activity in left inferior prefrontal and right temporal cortices in post-lingual deafness. Neuroimage 31:842–852.
  32. Norris D, McQueen JM, Cutler A (2000) Merging information in speech recognition: feedback is never necessary. Behav Brain Sci 23:299–370.
  33. Obleser J, Eisner F (2009) Pre-lexical abstraction of speech in the auditory cortex. Trends Cogn Sci 13:14–19.
  34. Obleser J, Wise RJ, Alex Dresner M, Scott SK (2007a) Functional integration across brain regions improves speech perception under adverse listening conditions. J Neurosci 27:2283–2289.
  35. Obleser J, Zimmermann J, Van Meter J, Rauschecker JP (2007b) Multiple stages of auditory speech perception reflected in event-related fMRI. Cereb Cortex 17:2251–2257.
  36. Obleser J, Eisner F, Kotz SA (2008) Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. J Neurosci 28:8116–8123.
  37. Pisoni DB, Cleary M (2003) Measures of working memory span and verbal rehearsal speed in deaf children after cochlear implantation. Ear Hear 24:106S–120S.
  38. Pisoni DB, Geers AE (2000) Working memory in deaf children with cochlear implants: correlations between digit span and measures of spoken language processing. Ann Otol Rhinol Laryngol Suppl 185:92–93.
  39. Ramus F, Szenkovits G (2008) What phonological deficit? Q J Exp Psychol 61:129–141.
  40. Rauschecker JP, Scott SK (2009) Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci 12:718–724.
  41. Reiss LA, Turner CW, Erenberg SR, Gantz BJ (2007) Changes in pitch with a cochlear implant over time. J Assoc Res Otolaryngol 8:241–257.
  42. Rodd JM, Davis MH, Johnsrude IS (2005) The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity. Cereb Cortex 15:1261–1269.
  43. Romanski LM, Tian B, Fritz J, Mishkin M, Goldman-Rakic PS, Rauschecker JP (1999) Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex. Nat Neurosci 2:1131–1136.
  44. Rosen S, Faulkner A, Wilkinson L (1999) Adaptation by normal listeners to upward spectral shifts of speech: implications for cochlear implants. J Acoust Soc Am 106:3629–3636.
  45. Scott SK, Johnsrude IS (2003) The neuroanatomical and functional organization of speech perception. Trends Neurosci 26:100–107.
  46. Scott SK, Blank CC, Rosen S, Wise RJ (2000) Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123:2400–2406.
  47. Scott SK, Rosen S, Lang H, Wise RJ (2006) Neural correlates of intelligibility in speech investigated with noise vocoded speech—a positron emission tomography study. J Acoust Soc Am 120:1075–1083.
  48. Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M (1995) Speech recognition with primarily temporal cues. Science 270:303–304.
  49. Shannon RV, Zeng FG, Wygonski J (1998) Speech recognition with altered spectral distribution of envelope cues. J Acoust Soc Am 104:2467–2476.
  50. Sharp DJ, Turkheimer F, Bose S, Scott SK, Wise RJS (2009) Increased fronto-parietal integration after stroke and cognitive recovery. Ann Neurol, advance online publication. doi:10.1002/ana.21866.
  51. Skinner MW, Ketten DR, Holden LK, Harding GW, Smith PG, Gates GA, Neely JG, Kletzker GR, Brunsden B, Blocker B (2002) CT-derived estimation of cochlear morphology and electrode array position in relation to word recognition in Nucleus-22 recipients. J Assoc Res Otolaryngol 3:332–350.
  52. Stacey PC, Summerfield AQ (2007) Effectiveness of computer-based auditory training in improving the perception of noise-vocoded speech. J Acoust Soc Am 121:2923–2935.
  53. Strand F, Forssberg H, Klingberg T, Norrelgen F (2008) Phonological working memory with auditory presentation of pseudo-words—an event related fMRI study. Brain Res 1212:48–54.
  54. Tettamanti M, Rotondi I, Perani D, Scotti G, Fazio F, Cappa SF, Moro A (2009) Syntax without language: neurobiological evidence for cross-domain syntactic computations. Cortex 45:825–838.
  55. Tyler RS, Parkinson AJ, Woodworth GG, Lowder MW, Gantz BJ (1997) Performance over time of adult patients using the Ineraid or Nucleus cochlear implant. J Acoust Soc Am 102:508–522.
  56. UKCISG (UK Cochlear Implant Study Group) (2004) Criteria of candidacy for unilateral cochlear implantation in postlingually deafened adults II: cost-effectiveness analysis. Ear Hear 25:336–360.
  57. Wong D, Miyamoto RT, Pisoni DB, Sehgal M, Hutchins GD (1999) PET imaging of cochlear-implant and normal-hearing subjects listening to speech and nonspeech. Hear Res 132:34–42.
  58. Wright BA, Fitzgerald MB (2001) Different patterns of human discrimination learning for two interaural cues to sound-source location. Proc Natl Acad Sci U S A 98:12307–12312.