Brief Communications

Hearing Loss in Older Adults Affects Neural Systems Supporting Speech Comprehension

Jonathan E. Peelle, Vanessa Troiani, Murray Grossman and Arthur Wingfield
Journal of Neuroscience 31 August 2011, 31 (35) 12638-12643; https://doi.org/10.1523/JNEUROSCI.2559-11.2011

Abstract

Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.

Introduction

Speech comprehension relies on a distributed network of brain regions that minimally includes bilateral superior and middle temporal gyri, left prefrontal and premotor cortex, and left inferior temporal cortex. However, at its heart, successfully understanding speech depends on a listener's ability to process a complex acoustic signal. In normal aging, auditory processing is often affected by hearing loss, arising from mechanical and neural disruptions along the pathway between the outer ear and auditory cortex (Morrell et al., 1996; Schneider, 1997). Not surprisingly, these challenges to acoustic analysis impede older adults' understanding of spoken language. In addition to sensory declines, however, adult aging is associated with changes in cognitive function in domains critical for speech comprehension, including working memory and information processing speed (Wingfield and Stine-Morrow, 2000). These findings suggest that age-related changes in speech comprehension are due to a complex combination of perceptual and cognitive factors (Wingfield et al., 2005). To date, however, there is no account linking sensory change to the cortical speech processing network that might explain this interaction.

The neural response to degraded speech suggests that effects of both age and hearing should be evident in older listeners. When young adults with no report of hearing difficulty are presented with acoustically degraded speech, they show changes in the degree and localization of neural recruitment throughout the speech processing network (Davis and Johnsrude, 2003; Obleser et al., 2007; Harris et al., 2009). We hypothesized that challenges to peripheral hearing would produce qualitatively similar effects, resulting in an impoverished acoustic signal reaching auditory cortex. The reduced clarity of this signal should alter the degree of neural activation in response to speech, and over time may also lead to structural transformation in auditory cortex. Although we would predict individual differences in hearing ability to exert this influence regardless of age, the loss of auditory sensitivity and increased variability in hearing associated with normal aging (Humes et al., 2010) suggest that such neural effects would be more apparent in older listeners. Behavioral data support this expectation: older adults with poorer hearing perform worse on speech comprehension tasks not only compared to older adults with better hearing (Stewart and Wingfield, 2009; Adank and Janse, 2010; Tun et al., 2010), but also compared to young adults with equivalently poor hearing (Wingfield et al., 2006).

Here, we investigate whether normal variations in hearing ability affect the structure or function of neural systems supporting speech comprehension in older adults. We report the results of two studies: a functional magnetic resonance imaging (fMRI) study examining effects of hearing ability on neural activity, and a voxel-based morphometry (VBM) study examining the relationship between hearing ability and cortical brain volume. We hypothesize that reduced hearing ability will be associated with differences in language-related (and not merely acoustic) processing, reflecting an interaction of perceptual and cognitive factors at the neural level. Although critically important for speech perception, these results also speak to the broader issue of how variability in sensory processing is expressed at multiple levels of cognitive function.

Materials and Methods

Participants.

Participants in the fMRI study were 16 healthy right-handed adults aged 60–77 years [mean (M) = 64.9, SD = 4.9; 9 female, 7 male], all with normal self-reported hearing. Audiograms based on pure-tone audiometry for these participants are shown in Figure 1A. We summarized hearing ability using a pure tone average (PTA): the average threshold in each participant's better ear at 1, 2, and 4 kHz. PTAs in participants' better ears ranged from 10 to 33.3 dB hearing level (HL) (M = 18.4, SD = 6.9). As PTAs ≤25 dB HL are considered clinically normal for speech (Hall and Mueller, 1997), all but three listeners had clinically normal hearing, and these three had what would typically be considered a mild hearing loss. Hearing did not differ between ears (left PTA mean = 21.3 dB HL, SD = 6.98; right PTA mean = 20.1 dB HL, SD = 8.60), t(15) = 0.87, p = 0.40, nor did better-ear PTA correlate with age (Pearson r = 0.10, p = 0.70).
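
For readers who wish to reproduce this summary measure, the sketch below (in Python, with hypothetical audiogram values) illustrates how a better-ear PTA and the paired comparison across ears could be computed; it is not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code): computing a better-ear pure tone
# average (PTA; mean threshold at 1, 2, and 4 kHz) and comparing ears.
# The audiogram values below are hypothetical placeholders.
import numpy as np
from scipy import stats

# thresholds in dB HL at 1, 2, and 4 kHz for each ear (rows = participants)
left_thresholds = np.array([[15, 20, 30], [10, 10, 20], [20, 25, 35]])
right_thresholds = np.array([[10, 15, 25], [15, 15, 25], [20, 20, 30]])

left_pta = left_thresholds.mean(axis=1)    # per-participant PTA, left ear
right_pta = right_thresholds.mean(axis=1)  # per-participant PTA, right ear
better_ear_pta = np.minimum(left_pta, right_pta)  # better (lower-threshold) ear

# paired t-test across ears, analogous to the comparison reported in the text
t_stat, p_val = stats.ttest_rel(left_pta, right_pta)
print(better_ear_pta, t_stat, p_val)
```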

Figure 1.

Hearing levels for both groups of participants as assessed using pure-tone audiometry. A, Hearing profile for 16 fMRI participants. B, Hearing profile for 25 VBM participants. For both groups, hearing profiles for individual listeners are drawn in light gray, with the average values drawn in black. The cutoff for clinically normal hearing for speech is ≤25 dB HL (shaded) for frequencies up to 4000 Hz.

Participants in the VBM study were 25 adults (16 of whom also participated in the fMRI study) aged 60–77 years (M = 66.3 years, SD = 5.5 years; 12 female, 13 male). All were right-handed and reported good health and hearing. Audiograms for the VBM participants are shown in Figure 1B. PTAs in participants' better ears ranged from 10 to 38.3 dB HL (M = 21.3, SD = 7.8); again, this placed most listeners in the clinically normal hearing range, with a few having mild to moderate hearing loss. Hearing in this group was largely symmetrical and did not differ between ears (left PTA mean = 24.3 dB HL, SD = 8.47; right PTA mean = 24.1 dB HL, SD = 10.91), t(24) = 0.12, p = 0.91, nor did better-ear PTA correlate with age (Pearson r = 0.28, p = 0.17).

Written informed consent was obtained from all participants according to a protocol approved by the University of Pennsylvania Institutional Review Board.

Materials.

Stimuli were 240 six-word sentences that contained either a subject-relative (e.g., “Boys that help girls are nice”) or syntactically more complex object-relative (e.g., “Boys that girls help are nice”) center-embedded clause (Peelle et al., 2010b). Because this difference hinged solely on word order, sentences were equated for lexical factors across syntactic condition. Sentences were presented at three speech rates (mean duration = 1.1 s, SD = 0.22); given the lack of a rate effect in a previous analysis (Peelle et al., 2010b), in the current analysis we have collapsed across speech rate.

Procedure.

Each participant in the fMRI study heard all 240 sentences, half with subject-relative and half with object-relative structure. Following each sentence, listeners pressed a button to indicate whether the person performing the action in the sentence was a male or a female. Hearing ability was assessed using pure-tone thresholds, measured on a Maico MA39 audiometer (Maico Diagnostics) using standard audiometric practices.

MRI data acquisition and processing.

MRI data were acquired on a Siemens Trio scanner (Siemens Medical Systems) at 3 T, beginning with acquisition of a T1-weighted structural volume using an MPRAGE sequence [repetition time (TR) = 1620 ms, echo time (TE) = 3 ms, flip angle = 15°, 1 mm slice thickness, 192 × 256 matrix, voxel size = 0.98 × 0.98 × 1 mm]. Blood oxygenation level-dependent functional MRI images were acquired with 3 mm isotropic voxels, flip angle = 15°, TR = 8 s, acquisition time (TA) = 3 s, TEeff = 30 ms, and a 64 × 64 matrix. We used a sparse imaging design in which the TR was longer than the TA to allow presentation of sentences with minimal acoustic scanner noise.

Analysis of the fMRI data was performed using SPM5 software (Wellcome Trust Centre for Neuroimaging). For each participant, images were realigned to the first image, coregistered to the structural image, and normalized to Montreal Neurological Institute (MNI) space using unified segmentation (Ashburner and Friston, 2005), including resampling to 2 × 2 × 2 mm voxels, and spatially smoothed with a 10 mm full-width at half-maximum (FWHM) Gaussian kernel. Only sentences that resulted in a correct behavioral response were included in the analysis. Each event onset was convolved with a canonical hemodynamic response function to produce a predicted neural response; additional regressors were included to account for session effects. The modeled conditions for the effect of syntactic complexity were object-relative > subject-relative sentences. To assess the degree to which individual hearing ability predicted language-driven activity, a one-sample t test was used, with hearing level as a mean-centered covariate. For this analysis, a single contrast image reflecting the effect of syntactic complexity was entered for each participant, as well as their PTA. We then tested the hypothesis that hearing ability predicted the magnitude of this effect by examining the parameter estimate reflecting the PTA covariate relative to the error across participants.
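
Conceptually, this group-level test amounts to regressing each participant's complexity-contrast estimate on mean-centered PTA at every voxel and testing the PTA slope against its standard error. The sketch below illustrates the idea with simulated data; the actual analysis was carried out in SPM5.

```python
# Conceptual sketch of the group-level covariate test described above: at each
# voxel, regress participants' complexity-contrast estimates on mean-centered
# PTA and test the PTA slope. Data are hypothetical; the actual analysis used SPM5.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox = 16, 1000
contrast_maps = rng.normal(size=(n_subj, n_vox))   # object > subject contrast per voxel
pta = rng.uniform(10, 33, size=n_subj)             # better-ear PTA (dB HL)

X = np.column_stack([np.ones(n_subj), pta - pta.mean()])  # intercept + centered PTA
beta, _, _, _ = np.linalg.lstsq(X, contrast_maps, rcond=None)
resid = contrast_maps - X @ beta
dof = n_subj - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
# standard error of the PTA slope at each voxel
se_pta = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_map = beta[1] / se_pta                            # voxelwise t statistic for PTA
```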

For the VBM study, data were analyzed with SPM8. Before normalization using the diffeomorphic DARTEL approach, each individual's structural image was coregistered to an MNI-space average template distributed with SPM using normalized mutual information and segmented into tissue classes using unified segmentation as implemented in SPM8, making use of tissue probability maps for gray matter, white matter, and CSF. Segmented images were normalized to MNI space (preserving volume by Jacobian modulation) and smoothed using a 10 mm FWHM Gaussian kernel.
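
The sketch below illustrates only the final smoothing step under stated assumptions (a 10 mm FWHM kernel applied to a 2 mm isotropic modulated gray matter image); segmentation and DARTEL normalization themselves were performed in SPM8.

```python
# Sketch of the final smoothing step only (the actual pipeline used SPM8/DARTEL):
# apply a 10 mm FWHM Gaussian kernel to a modulated gray matter image.
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm = 10.0
voxel_size_mm = np.array([2.0, 2.0, 2.0])          # assumed isotropic voxel size
sigma_vox = (fwhm_mm / np.sqrt(8 * np.log(2))) / voxel_size_mm  # FWHM -> sigma, in voxels

gray_matter = np.random.rand(91, 109, 91)          # placeholder for a modulated GM image
smoothed = gaussian_filter(gray_matter, sigma=sigma_vox)
```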

Statistical maps for the MRI analyses were rendered on 3D MNI-space templates from SPM8 and overlaid on slices from an MNI-space template included with MRIcron (Rorden and Brett, 2000).

Results

Participants performed well on the task, showing the anticipated advantage for subject-relative sentences (mean accuracy = 93%, SD = 5.1) compared to the more difficult object-relative sentences (mean accuracy = 80%, SD = 10.5). To test for a relationship between comprehension performance and hearing ability on the more complex object-relative sentences, we ran bivariate Pearson correlations on participants' behavioral data. There was no significant relationship between hearing ability and either accuracy (Pearson r = 0.47, n.s.) or response times for correct responses (Pearson r = −0.17, n.s.). Thus, behaviorally, mild hearing loss did not appear to affect participants' performance.
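
As an illustration, the following sketch shows how these correlations could be computed; the values are hypothetical placeholders rather than the study data.

```python
# Sketch of the behavioral check described above: Pearson correlations between
# better-ear PTA and accuracy / response time for object-relative sentences.
# Values are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

pta = np.array([12, 15, 18, 22, 25, 30])                 # better-ear PTA (dB HL)
or_accuracy = np.array([0.85, 0.78, 0.82, 0.80, 0.75, 0.79])
or_rt = np.array([1.42, 1.55, 1.38, 1.60, 1.47, 1.52])   # seconds, correct trials only

r_acc, p_acc = stats.pearsonr(pta, or_accuracy)
r_rt, p_rt = stats.pearsonr(pta, or_rt)
print(f"accuracy: r = {r_acc:.2f}, p = {p_acc:.2f}; RT: r = {r_rt:.2f}, p = {p_rt:.2f}")
```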

The primary comparison of interest was to see whether hearing ability significantly predicted neural responses related to linguistic complexity. We operationally defined language-driven activity as increased activity for grammatically complex object-relative sentences compared to subject-relative sentences. We then conducted a whole-brain correlation analysis, entering each participant's hearing ability (PTA) as a mean-centered covariate against estimates of language-driven neural activity. We used a cluster-defining threshold of p < 0.005 (uncorrected), corrected at p < 0.05 across the whole brain using cluster extent (Worsley et al., 1992) (FWHM = 14.6 × 14.2 × 11.9 mm, resel count = 510.1). There were several regions in which participants with poorer hearing showed reduced linguistic activation, shown in Figure 2 and listed in Table 1. They were largely bilateral and included superior temporal gyri, thalamus, and brainstem. There were no regions in which listeners with worse hearing showed increased activity related to linguistic complexity.
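
The sketch below illustrates the logic of cluster-extent thresholding in simplified form: voxels exceeding the cluster-defining height threshold are grouped into contiguous clusters, and only clusters above an extent cutoff are retained. In the actual analysis the extent cutoff was derived from random field theory (Worsley et al., 1992) rather than the fixed placeholder used here.

```python
# Simplified illustration of cluster-extent thresholding: keep voxels exceeding
# the cluster-defining height threshold, then retain only contiguous clusters
# larger than an extent cutoff. The real extent cutoff was derived from random
# field theory; the value below is a placeholder.
import numpy as np
from scipy import ndimage, stats

t_map = np.random.default_rng(1).normal(size=(30, 30, 30))  # placeholder t map
dof = 14                                          # error degrees of freedom (assumed)
height_t = stats.t.ppf(1 - 0.005, dof)            # t value for p < 0.005, one-tailed

suprathreshold = t_map > height_t
labels, n_clusters = ndimage.label(suprathreshold)   # default face connectivity in 3D
sizes = ndimage.sum(suprathreshold, labels, index=range(1, n_clusters + 1))

extent_cutoff = 50                                # placeholder extent threshold (voxels)
surviving = np.isin(labels, np.flatnonzero(sizes >= extent_cutoff) + 1)
```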

Figure 2.

Regions in which language-driven activity (object-relative > subject-relative sentences) showed a significant correlation with hearing ability. A, Regions in which poorer-hearing listeners showed less language-driven brain activity. B, Overlap of these regions with cytoarchitectonically defined probable primary auditory cortex (blue outline). C, Thalamic cluster from A (outlined in white) displayed along with probable regions of cortical connectivity from the Oxford Thalamic Connectivity Atlas. The predominant connectivity is to prefrontal cortex, with additional projections to premotor and temporal cortices.

Table 1.

Maxima of clusters where poorer-hearing listeners showed decreased language-driven BOLD response

To assess the degree to which these differences included primary auditory regions, we used the SPM anatomy toolbox (Eickhoff et al., 2005) to delineate regions TE1.0 and TE1.1 of bilateral primary auditory cortex (Morosan et al., 2001) as regions of interest: voxels included had at least an 80% chance of belonging to these subdivisions based on cytoarchitectonic characteristics of postmortem samples. These auditory regions are outlined in blue in Figure 2B, and clearly encompass areas of significant hearing-related differences. To ensure that these effects were not driven by age differences, we extracted the mean parameter estimates from these primary auditory cortex regions of interest and performed a multiple regression analysis in which we included both age and PTA as predictors. For left auditory cortex, the overall model fit was significant, F(2,15) = 5.6, p = 0.017, explaining 47% of the total variance. Age did not show a significant contribution (β = −0.14, SE = 0.14, standardized β = −0.20, t = 0.99, p = 0.34), but hearing did (β = −0.31, SE = 0.09, standardized β = −0.63, t = 3.08, p < 0.01). A similar pattern held in right auditory cortex, with the overall model fit also being significant, F(2,15) = 5.5, p = 0.019, explaining 46% of the variance. Again, age did not contribute significantly (β = −0.14, SE = 0.17, standardized β = −0.17, t = 0.82, p = 0.43), but PTA did (β = −0.37, SE = 0.12, standardized β = −0.64, t = 3.09, p < 0.01).
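
The regression reported here follows a standard ordinary least squares model with age and PTA as simultaneous predictors of the ROI parameter estimates; the sketch below shows this structure with simulated values (it is not the study data or code).

```python
# Sketch of the region-of-interest regression described above: mean contrast
# estimates from an auditory cortex ROI regressed on both age and PTA.
# Data are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
age = rng.uniform(60, 77, size=16)
pta = rng.uniform(10, 33, size=16)
roi_estimate = 2.0 - 0.3 * pta + rng.normal(scale=1.0, size=16)  # simulated ROI means

X = sm.add_constant(np.column_stack([age, pta]))   # intercept, age, PTA
fit = sm.OLS(roi_estimate, X).fit()
print(fit.fvalue, fit.rsquared)                       # overall model fit
print(fit.params, fit.bse, fit.tvalues, fit.pvalues)  # per-predictor contributions
```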

To examine whether the observed changes in thalamic activity might impact frontal and temporal components of the speech processing network, we characterized the cortical connectivity of the thalamic cluster using the Oxford Thalamic Connectivity Atlas (Behrens et al., 2003), which provides probabilistic connectivity values based on diffusion tractography. Figure 2C shows the significant thalamic activation outlined in white overlaid with connectivity probabilities for each of seven cortical regions; the strongest cortical connectivity is to prefrontal cortex, followed by premotor and temporal cortices.

These initial results demonstrate a coupling between hearing ability and neural activity, suggesting more generally a link between sensory ability and cortical structural integrity—that is, given widespread age-related gray matter loss (Good et al., 2001), these functional changes may reflect an acceleration of underlying cortical atrophy. To explicitly test this possibility, we performed a voxel-based morphometric analysis of gray matter volume in 25 older adults. For all participants, we calculated PTA values for their better ear, and correlated these values with estimates of regional gray matter volume obtained from segmented structural MRI images. We focused our analysis on the same cytoarchitectonically defined probabilistic primary auditory regions of interest we used in interpreting the fMRI data. As control regions, we used left and right motor cortex (Brodmann area 4a) (Geyer et al., 1996). For all regions, we constructed a statistical model that included a second-order polynomial expansion of PTA, total gray matter, total intracranial volume (TIV), and a constant term. For display purposes, we plotted the gray matter residuals, having removed effects of total gray matter and total intracranial volume; statistics displayed are based on the full model, but just for the linear term of PTA (the quadratic component had a negligible contribution). As shown in Figure 3, this analysis showed a significant relationship between hearing ability and gray matter volume in right auditory cortex, and a nonsignificant trend in left auditory cortex. We found no relationship between hearing ability and gray matter volume in motor cortex.
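
The sketch below lays out the structure of this regional model (second-order polynomial expansion of PTA plus total gray matter, TIV, and a constant) using simulated values; inference in the text is based on the linear PTA term from the full model.

```python
# Sketch of the regional gray matter model described above: ROI gray matter
# volume modeled with a second-order polynomial expansion of PTA plus total
# gray matter, total intracranial volume (TIV), and a constant.
# Values are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 25
pta = rng.uniform(10, 38, size=n)                  # better-ear PTA (dB HL)
total_gm = rng.normal(600, 50, size=n)             # total gray matter (ml), placeholder
tiv = rng.normal(1400, 100, size=n)                # total intracranial volume (ml), placeholder
roi_gm = 0.6 - 0.004 * pta + 0.0003 * total_gm + rng.normal(scale=0.02, size=n)

pta_c = pta - pta.mean()
X = sm.add_constant(np.column_stack([pta_c, pta_c ** 2, total_gm, tiv]))
fit = sm.OLS(roi_gm, X).fit()
print(fit.tvalues[1], fit.pvalues[1])              # linear PTA term, as reported in the text
```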

Figure 3.

Relationship between regional gray matter volume and hearing ability. We extracted average gray matter values from 25 participants for four cortical regions. Poorer hearing was associated with reduced gray matter volume in right auditory cortex, and showed a similar nonsignificant trend in left auditory cortex; neither of the motor cortex control regions approached significance. Larger markers represent two participants with overlapping scores.

Finally, we performed an additional exploratory whole-brain analysis using the same approach to identify any other regions showing a similar relationship located outside auditory cortex. For this analysis, we used a voxelwise threshold of p < 0.001 (uncorrected), and included total gray matter and TIV as covariates. There were no regions outside of right superior temporal gyrus that had a significant negative relationship with peripheral hearing ability. This effect remained after including age as a covariate in the design matrix.

Discussion

The hierarchy of cortical regions that support speech processing begins with primary auditory cortex before branching into multiple parallel processing pathways (Davis and Johnsrude, 2003; Hickok and Poeppel, 2007; Rauschecker and Scott, 2009). Here we demonstrate that peripheral hearing acuity predicted both the neural response to speech and gray matter volume in these primary auditory regions. These findings provide some of the first measures of neural activity to support behavioral evidence of perceptual-cognitive interactions in connected speech processing, and provide a potential explanation for how they may contribute to age-related changes in spoken language comprehension.

Although it is well accepted that age-related hearing loss affects auditory processing, there has been increasing agreement that these sensory declines increase listening effort in the course of successful speech comprehension. Behaviorally, this increase in effort can exert influence on a variety of cognitive operations (Rabbitt, 1991; Stewart and Wingfield, 2009; Miller and Wingfield, 2010; Piquado et al., 2010; Tun et al., 2010; Heinrich and Schneider, 2011; Zekveld et al., 2011). Our current observation of changes in neural activity in core auditory processing areas makes a mechanistic neurobiological account of these interactions tractable. Importantly, we observed changes in activation that were associated specifically with grammatically complex sentences relative to simpler sentences. Thus, modulations of neural activity related to sensory loss appear to impact not simply the ability to process the perceptual aspects of the speech signal, but the resources required to perform higher-level cognitive operations. This is consistent with the view that, during speech comprehension, cognitive and neural resources are dynamically assigned based on acoustic and linguistic task demands and the individual abilities of listeners (Peelle et al., 2010b; Tyler et al., 2010; Obleser et al., 2011).

It is worth noting that increased listening effort has often been associated with increased activity in prefrontal and premotor cortices (Davis and Johnsrude, 2003, 2007; Peelle et al., 2010a), areas with high probabilistic connections to regions of the thalamus in which we found hearing-related changes in neural activity. The thalamus is essential for transfer of ascending auditory information to prefrontal and premotor areas, playing a crucial function in the network of regions supporting speech comprehension. In addition, poorer-hearing participants also showed less syntax-related activity in the left putamen, which in concert with frontal regions also plays an important role in prediction and timing for auditory tasks (Grahn and Rowe, 2009; Kotz and Schwartze, 2010). Thus, although we observed direct changes in neural activity primarily in auditory regions, it is likely these changes reverberate throughout a much broader network.

Our finding that gray matter volume in primary auditory areas is predicted by peripheral hearing ability suggests a link between sensory stimulation and cortical volume. In previous studies of single word recognition, decreases in gray matter volume in primary auditory cortex have been linked to both behavioral performance and neural activity elsewhere in the speech comprehension system (Eckert et al., 2008; Harris et al., 2009). Although we cannot make a definitive conclusion regarding the nature of this linkage, there is some support for interpreting the direction of causality in this relationship. Reorganization of sensory cortex is common when sensory input is altered (Rauschecker, 1995), including tonotopic reorganization of auditory cortex following peripheral hearing loss (Schwaber et al., 1993; Cheung et al., 2009). Thus, we think it plausible that changes in older adults' peripheral hearing ability had a causal role in reducing gray matter volume in auditory cortex.

To the extent that these functional and structural changes are associated with perceptual difficulty in processing speech sounds, these interactions fit well within a broader resource-allocation framework of speech comprehension. That is, the neural activity required to successfully navigate a cognitive task is a function of both task demands and individual cognitive ability. Perceptual demands can be increased externally (e.g., through acoustic properties of the stimuli) or internally (e.g., through biologically mediated sensory decline). From this vantage point, it may be less constructive to think about a single network of brain regions supporting any given task. Rather, the degree of activity—and in some instances, the set of regions active—form a family of networks that can be dynamically called upon as a function of both task demand and individual ability. Although in cases of more extreme individual differences (as when comparing young and older adults) these patterns may appear to be easily dichotomized, ultimately this may not prove to be the best way to characterize the underlying interindividual variability.

Taken together, our data provide a mechanism whereby changes in peripheral hearing impact cortical speech processing networks, and suggest that sensory acuity has cascading consequences for the neural processes supporting both perceptual and higher-level cognitive functions. Of particular note is the fact that these variations are seen within listeners with age-normal hearing ability; in cases of greater hearing loss, we would expect these findings to be exaggerated. Finally, our results also reinforce the modulatory influence of individual differences in sensory ability on brain activity, and thus the advantage of incorporating these sources of variability in models of speech and language processing.

Footnotes

  • This research was supported by the National Institutes of Health (NS54575, AG019714, AG17586, AG15116, AG32953, NS53488, and NS44266). V.T. is funded by a National Science Foundation Graduate Fellowship. We also gratefully acknowledge support from the W. M. Keck Foundation. We thank Michael Bonner for helpful comments on this manuscript, the radiographers at the Hospital of the University of Pennsylvania for their assistance with data collection, and our volunteers for their participation.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Jonathan Peelle, Department of Neurology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, peelle@gmail.com

References

  1. Adank P, Janse E (2010) Comprehension of a novel accent by young and older listeners. Psychol Aging 25:736–740.
  2. Ashburner J, Friston KJ (2005) Unified segmentation. Neuroimage 26:839–851.
  3. Behrens TEJ, Johansen-Berg H, Woolrich MW, Smith SM, Wheeler-Kingshott CAM, Boulby PA, Barker GJ, Sillery EL, Sheehan K, Ciccarelli O, Thompson AJ, Brady JM, Matthews PM (2003) Non-invasive mapping of connections between human thalamus and cortex using diffusion imaging. Nat Neurosci 6:750–757.
  4. Cheung SW, Bonham BH, Schreiner CE, Godey B, Copenhaver DA (2009) Realignment of interaural cortical maps in asymmetric hearing loss. J Neurosci 29:7065–7078.
  5. Davis MH, Johnsrude IS (2003) Hierarchical processing in spoken language comprehension. J Neurosci 23:3423–3431.
  6. Davis MH, Johnsrude IS (2007) Hearing speech sounds: top-down influences on the interface between audition and speech perception. Hear Res 229:132–147.
  7. Eckert MA, Walczak A, Ahlstrom J, Denslow S, Horwitz A, Dubno JR (2008) Age-related effects on word recognition: reliance on cognitive control systems with structural declines in speech-responsive cortex. JARO 9:252–259.
  8. Eickhoff S, Stephan K, Mohlberg H, Grefkes C, Fink G, Amunts K, Zilles K (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335.
  9. Geyer S, Ledberg A, Schleicher A, Kinomura S, Schormann T, Bürgel U, Klingberg T, Larsson J, Zilles K, Roland PE (1996) Two different areas within the primary motor cortex of man. Nature 382:805–807.
  10. Good CD, Johnsrude IS, Ashburner J, Henson RNA, Friston KJ, Frackowiak RSJ (2001) A voxel-based morphometric study of ageing in 465 normal adult human brains. Neuroimage 14:21–36.
  11. Grahn JA, Rowe JB (2009) Feeling the beat: premotor and striatal interactions in musicians and nonmusicians during beat perception. J Neurosci 29:7540–7548.
  12. Hall J, Mueller G (1997) Audiologist desk reference. San Diego: Singular.
  13. Harris KC, Dubno JR, Keren NI, Ahlstrom JB, Eckert MA (2009) Speech recognition in younger and older adults: a dependency on low-level auditory cortex. J Neurosci 29:6078–6087.
  14. Heinrich A, Schneider BA (2011) Elucidating the effects of ageing on remembering perceptually distorted word pairs. Q J Exp Psychol (Colchester) 64:186–205.
  15. Hickok G, Poeppel D (2007) The cortical organization of speech processing. Nat Rev Neurosci 8:393–402.
  16. Humes LE, Kewley-Port D, Fogerty D, Kinney D (2010) Measures of hearing threshold and temporal processing across the adult lifespan. Hear Res 264:30–40.
  17. Kotz SA, Schwartze M (2010) Cortical speech processing unplugged: a timely subcortico-cortical framework. Trends Cogn Sci 14:392–399.
  18. Miller P, Wingfield A (2010) Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory. Front Syst Neurosci 4:14.
  19. Morosan P, Rademacher J, Schleicher A, Amunts K, Schormann T, Zilles K (2001) Human primary auditory cortex: cytoarchitectonic subdivisions and mapping into a spatial reference system. Neuroimage 13:684–701.
  20. Morrell CH, Gordon-Salant S, Pearson JD, Brant LJ, Fozard JL (1996) Age- and gender-specific reference ranges for hearing level and longitudinal changes in hearing level. J Acoust Soc Am 100:1949–1967.
  21. Obleser J, Wise RJS, Dresner MA, Scott SK (2007) Functional integration across brain regions improves speech perception under adverse listening conditions. J Neurosci 27:2283–2289.
  22. Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. Neuroimage 56:2310–2320.
  23. Peelle JE, Johnsrude IS, Davis MH (2010a) Hierarchical processing for speech in human auditory cortex and beyond. Front Hum Neurosci 4:51.
  24. Peelle JE, Troiani V, Wingfield A, Grossman M (2010b) Neural processing during older adults' comprehension of spoken sentences: age differences in resource allocation and connectivity. Cereb Cortex 20:773–782.
  25. Piquado T, Cousins KAQ, Wingfield A, Miller P (2010) Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models. Brain Res 1365:48–65.
  26. Rabbitt PMA (1991) Mild hearing loss can cause apparent memory failures which increase with age and reduce with IQ. Acta Otolaryngol Suppl 476:167–176.
  27. Rauschecker JP (1995) Compensatory plasticity and sensory substitution in the cerebral cortex. Trends Neurosci 18:36–43.
  28. Rauschecker JP, Scott SK (2009) Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci 12:718–724.
  29. Rorden C, Brett M (2000) Stereotaxic display of brain lesions. Behav Neurol 12:191–200.
  30. Schneider BA (1997) Psychoacoustics and aging: implications for everyday listening. J Speech Lang Pathol Audiol 21:111–124.
  31. Schwaber MK, Garraghty PE, Kaas JH (1993) Neuroplasticity of the adult primate auditory cortex following cochlear hearing loss. Am J Otol 14:252–258.
  32. Stewart R, Wingfield A (2009) Hearing loss and cognitive effort in older adults' report accuracy for verbal materials. J Am Acad Audiol 20:147–154.
  33. Tun PA, Benichov J, Wingfield A (2010) Response latencies in auditory sentence comprehension: effects of linguistic versus perceptual challenge. Psychol Aging 25:730–735.
  34. Tyler LK, Shafto MA, Randall B, Wright P, Marslen-Wilson WD, Stamatakis EA (2010) Preserving syntactic processing across the adult life span: the modulation of the frontotemporal language system in the context of age-related atrophy. Cereb Cortex 20:352–364.
  35. Wingfield A, Stine-Morrow EAL (2000) Language and speech. In: The handbook of aging and cognition, Ed 2 (Craik FIM, Salthouse TA, eds), pp 359–416. Mahwah, NJ: Erlbaum.
  36. Wingfield A, Tun PA, McCoy SL (2005) Hearing loss in older adulthood: what it is and how it interacts with cognitive performance. Curr Dir Psychol Sci 14:144–148.
  37. Wingfield A, McCoy SL, Peelle JE, Tun PA, Cox LC (2006) Effects of adult aging and hearing loss on comprehension of rapid speech varying in syntactic complexity. J Am Acad Audiol 17:487–497.
  38. Worsley KJ, Evans AC, Marrett S, Neelin P (1992) A three-dimensional statistical analysis for CBF activation studies in human brain. J Cereb Blood Flow Metab 12:900–918.
  39. Zekveld AA, Kramer SE, Festen JM (2011) Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response. Ear Hear 32:498–510.