Articles, Behavioral/Cognitive

Musical Training Enhances Neural Processing of Binaural Sounds

Alexandra Parbery-Clark, Dana L. Strait, Emily Hittner and Nina Kraus
Journal of Neuroscience 16 October 2013, 33 (42) 16741-16747; DOI: https://doi.org/10.1523/JNEUROSCI.5700-12.2013
Alexandra Parbery-Clark,1,2 Dana L. Strait,1,3 Emily Hittner,1,4 and Nina Kraus1,2,3,5,6

1Auditory Neuroscience Laboratory, 2Institute for Neuroscience, 3Department of Communication Sciences, 4Weinberg College of Arts and Sciences, 5Department of Neurobiology and Physiology, and 6Department of Otolaryngology, Northwestern University, Evanston, Illinois 60208

Abstract

Hearing in noise is a complex task, yet humans demonstrate remarkable hearing ability even in high levels of noise. Binaural hearing, which involves the integration and analysis of incoming sounds from both ears, is an important mechanism that promotes hearing in complex listening environments. Analyzing inter-ear differences helps differentiate between sound sources, a key mechanism that facilitates hearing in noise. Even when both ears receive the same input, known as diotic hearing, speech intelligibility in noise is improved. Although musicians have better speech-in-noise perception than non-musicians, we do not know to what extent binaural processing contributes to this advantage. Musicians do, however, often demonstrate enhanced neural responses to sound, which may undergird their speech-in-noise perceptual enhancements. Here, we recorded auditory brainstem responses in young adult musicians and non-musicians to a speech stimulus for which there was no musician advantage when presented monaurally. When the stimulus was presented diotically, musicians demonstrated faster neural timing and greater intertrial response consistency than non-musicians. Furthermore, musicians' enhancements to the diotically presented stimulus correlated with speech-in-noise perception. These data provide evidence for musical training's impact on biological processes and suggest binaural processing as a possible contributor to more proficient hearing in noise.

Introduction

The mammalian auditory system adaptably encodes the sounds around us. One way it achieves this is through binaural processing: the ability to simultaneously integrate and analyze input from both ears. The auditory system is tuned to detect minute differences between the ears on the order of tens to hundreds of microseconds (Hudspeth, 1997; Grothe, 2003). Such precision is necessary for hearing in noise, which relies on timing mechanisms to segregate concurrent acoustic streams according to slight deviations in their locations, pitches, or sound qualities. Even when both ears receive identical auditory input (i.e., diotic stimuli that do not contain different level, timing, localization, or pitch cues), hearing thresholds in noise and intelligibility are improved (Plomp and Mimpen, 1979; Kaplan and Pickett, 1981; Davis and Haggard, 1982; Davis et al., 1990). Knowing how auditory experience refines diotic processing may help us better understand this facet of binaural hearing, thus guiding the development of habilitation and remediation approaches for communication abilities for which binaural hearing is a primary concern, including speech perception in noise.

While acoustic input is first relayed ipsilaterally, contralateral projections facilitate the integration of sensory input to both ears early in the central processing stream: binaural interactions are initiated within the superior olivary complex, the nuclei of the lateral lemniscus, and the inferior colliculus of the auditory brainstem (for review, see Moore, 1991). Binaural hearing proficiency does not rely on these subcortical mechanisms alone: during hearing, left and right ear inputs compete within both right and left auditory cortices (Fujiki et al., 2002). Degraded auditory experiences, such as those associated with prior reduced hearing ability (Hogan and Moore, 2003), and cognitive functions, such as attention (Shinn-Cunningham et al., 2005), influence how the inputs from both ears interact. Over the course of their training, musicians develop the ability to make sense of complex auditory environments and demonstrate enhanced perceptual learning abilities (Kühnis et al., 2013; Shook et al., 2013). Growing evidence indicates that musical experience is associated with strengthened perception and neural encoding of speech in the presence of noise (Zendel and Alain, 2009; Parbery-Clark et al., 2009a, 2011; Strait et al., 2012, 2013b; Strait and Kraus, 2013), in addition to cognitive abilities that modulate both perception and neural response properties (Parbery-Clark et al., 2009b, 2011; Kraus and Chandrasekaran, 2010; Strait et al., 2010, 2013a; Hanna-Pladdy and MacKay, 2011). Musicians' cognitive and speech-in-noise perceptual benefits have only been measured using binaural approaches, never directly compared across monaural and diotic conditions.

Despite the headway we have made toward delineating markers of musicianship on central auditory processing, the extent to which these enhancements involve diotic sound processing, if at all, remains unknown. To this aim, we compared musicians' and non-musicians' speech-evoked auditory brainstem responses (ABRs) across monaural and diotic listening conditions. While musician enhancements have previously been documented in ABRs collected in both monaural (Wong et al., 2007; Bidelman and Krishnan, 2010; Bidelman et al., 2011a,b; Strait et al., 2012, 2013a) and diotic conditions (Musacchia et al., 2007; Parbery-Clark et al., 2009a, 2012a,b; Strait et al., 2013a), we selected a stimulus for which there is no musician advantage when presented monaurally (see Results). In light of the importance of neural response timing for both binaural hearing and hearing in noise (Rance et al., 2007; Tzounopoulos and Kraus, 2009; Anderson et al., 2010), we centered our analyses on measures of neural timing (i.e., latency and consistency) in addition to neural response amplitude. We hypothesized that musicians' enhanced auditory processing reflects strengthened diotic processing. Accordingly, we predicted that musicians would demonstrate faster neural timing and increased response magnitudes as measured by the timing and amplitudes of discrete response peaks (i.e., earlier and larger peaks) and increased neural response consistency (i.e., higher between-trial response similarity) in the diotic relative to the monaural condition. We further predicted that these enhancements in diotic processing would relate to musicians' advantages for hearing in noise.

Materials and Methods

Participants

Thirty subjects (mean age 20 ± 2 years, 11 males) were recruited from the Chicago area. All subjects were native English speakers, had normal hearing thresholds (≤15 dB HL from 250 to 8000 Hz), and reported no histories of learning or neurological disorders. Participants were screened for normal IQ as measured by the Test of Nonverbal Intelligence (Brown et al., 1997). Subjects were categorized as musicians and non-musicians (N = 15 each). Musicians were self-categorized and had consistently practiced an instrument at least three times a week since 7 years of age. Non-musicians had <3 years of musical training at any point in their lives. See Table 1 for each group's musical experience. Groups were matched for age (F(1,29) = 0.855, p = 0.477), nonverbal IQ (F(1,29) = 2.13, p = 0.217), hearing thresholds (F(1,29) = 1.017, p = 0.552), and sex (χ²(1, N = 30) = 0.741, p = 0.389). Groups had equivalent neural timing to a 100 μs click stimulus (Wave V: p > 0.1) presented at 31.3 Hz and 70 dB sound pressure level (SPL). All subjects gave informed consent before participating in accordance with the Northwestern University Institutional Review Board and were paid for their participation.

Table 1.

Musical practice history for musicians: years of musical experience, age at which musical training began, and major instruments played are indicated for all musicians

Electrophysiology

Stimuli and recording parameters.

Auditory brainstem responses were elicited by a 40 ms speech syllable, /da/, at 70 dB SPL under three conditions: monaural right, monaural left, and diotic presentation. The five-formant speech stimulus was synthesized at a sampling rate of 10 kHz using a Klatt-based synthesizer (Klatt, 1980). The stimulus comprised an initial 10 ms onset burst and voiced formant transition between the consonant and the vowel with a fundamental frequency that linearly increased from 103 to 125 Hz. Voicing began at 5 ms. The first formant increased from 220 to 720 Hz. The second and third formants decreased from 1700 and 2580 to 1240 Hz and 2500 Hz, respectively. The fourth and fifth formants were constant at 3600 and 4500 Hz. While the stimulus was short and did not contain a steady-state vowel, it is perceived as a consonant-vowel syllable.

The responses were collected at a 20 kHz sampling rate using the NeuroScan Acquire 4.3 recording system (Compumedics) with four Ag-AgCl scalp electrodes in a vertical montage (Cz active, forehead ground, and linked-earlobe reference). Electrodes were coupled to the skin with Ten20 conductive paste (Weaver) and affixed with medical tape. Contact impedance was 2 kΩ or less across all electrodes. Responses were bandpass filtered off-line from 70 to 2000 Hz with a 12 dB/octave filter roll-off. Sweeps with activity exceeding ±35 μV were considered artifacts and excluded. The speech stimulus was presented through insert earphones (ER-3; Etymotic Research) in all three conditions with alternating polarities to limit the inclusion of stimulus artifact and cochlear microphonic (Gorga et al., 1985; Aiken and Picton, 2008; Skoe and Kraus, 2010). For each stimulus polarity in each condition, two subaverages representing 3000 artifact-free responses were averaged to generate an average response comprising 6000 response trials for each condition.
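The sweep-level artifact rejection and polarity averaging described above can be sketched as follows. This is an illustrative toy implementation, not the authors' recording pipeline: the data, function names, and array shapes are hypothetical; only the ±35 μV criterion and the alternating-polarity averaging come from the text.

```python
# Sketch of the sweep-level artifact rejection and polarity-averaged
# response described above. Data and helper names are illustrative.
from statistics import mean

ARTIFACT_UV = 35.0  # sweeps exceeding +/-35 microvolts are excluded

def clean_sweeps(sweeps):
    """Keep only sweeps whose every sample is within +/-35 uV."""
    return [s for s in sweeps if max(abs(x) for x in s) <= ARTIFACT_UV]

def average_response(pos_sweeps, neg_sweeps):
    """Average artifact-free sweeps across the two stimulus polarities.

    Averaging alternating polarities attenuates the stimulus artifact and
    cochlear microphonic, which invert with the stimulus polarity, while
    the neural response of interest does not.
    """
    kept = clean_sweeps(pos_sweeps) + clean_sweeps(neg_sweeps)
    n_samples = len(kept[0])
    return [mean(s[i] for s in kept) for i in range(n_samples)]

# Toy example: one artifact-laden sweep is dropped before averaging.
pos = [[1.0, 2.0, 3.0], [1.0, 2.0, 50.0]]   # second sweep exceeds 35 uV
neg = [[3.0, 2.0, 1.0]]
print(average_response(pos, neg))  # → [2.0, 2.0, 2.0]
```

In the actual study, each condition's average comprised 6000 artifact-free trials (3000 per polarity).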

Diotic and monaural conditions were randomized across participants, ruling out contributions of neural fatigue or adaptation to any between-condition effects observed. Participants watched captioned movies of their choice to facilitate a calm yet wakeful recording session.

Data analysis

Timing and response magnitude.

To assess neural response timing and magnitude, we identified major response peaks corresponding to the stimulus onset (peaks V and A, occurring at ∼7 and 8 ms), spectrotemporally dynamic consonant-vowel transition (i.e., frequency following response, or FFR; peaks D–F, occurring at ∼23, 31, and 40 ms, respectively) and offset (peak O, occurring at ∼47 ms; see Fig. 1). Peaks were first identified by two independent peak pickers, after which the first author compared their judgments. The two peak pickers' judgments were identical across all peaks in all subjects except two instances; in these cases, the first author arbitrated. All peak pickers were blind to subjects' groups. All peaks were easily identifiable in all participants in all conditions, with peak minima extending beyond the magnitudes of the interpeak intervals. Peak minima also extended beyond the noise floor (i.e., magnitude of the prestimulus period).

Response consistency.

The consistency of an individual's response to the speech stimulus was measured over the 11–42 ms portion of the recording period—a period encompassing the FFR. Three hundred iterations of randomly selected, subtracted-polarity pairs of 3000 subaverages were created and the degree of correlation (Pearson's r) between the pairs was calculated. These correlations were then averaged to form a composite response consistency score. All processing was performed in MATLAB (MathWorks). Higher correlation coefficients indicate greater neural response consistency. All statistical analyses were performed on Fisher transformed z-values.
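The split-half consistency procedure above (random pairs of subaverages, Pearson correlation, averaging in Fisher z-space) can be sketched in a few lines. The original analysis was performed in MATLAB; this Python version uses synthetic data and hypothetical names, and keeps only the structure of the computation described in the text.

```python
# Sketch of the response-consistency measure: correlate many randomly
# drawn split-halves of the trials and average the correlations as
# Fisher z-values. Trials here are synthetic (shared signal + noise).
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def subaverage(trials):
    """Point-by-point average of a set of single-trial responses."""
    n = len(trials[0])
    return [sum(t[i] for t in trials) / len(trials) for i in range(n)]

def response_consistency(trials, n_iter=300, seed=0):
    """Average Fisher-z correlation between random halves of the trials,
    returned as an r-value (higher = more consistent responses)."""
    rng = random.Random(seed)
    zs = []
    for _ in range(n_iter):
        shuffled = trials[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        r = pearson_r(subaverage(shuffled[:half]),
                      subaverage(shuffled[half:]))
        zs.append(math.atanh(r))  # Fisher transform before averaging
    return math.tanh(sum(zs) / len(zs))

# Synthetic "FFR" trials: a common periodic signal plus trial noise.
rng = random.Random(1)
signal = [math.sin(2 * math.pi * t / 20) for t in range(64)]
trials = [[s + rng.gauss(0, 0.5) for s in signal] for _ in range(40)]
rc = response_consistency(trials)
print(round(rc, 2))  # close to 1 when a strong shared signal is present
```

The statistical analyses in the paper were likewise run on the Fisher-transformed z-values, which are approximately normally distributed, rather than on the raw correlation coefficients.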

Speech in noise

Hearing in noise.

The Hearing in Noise Test (Biologic Systems; Nilsson et al., 1994) is an adaptive speech-in-noise test that uses Bamford–Kowal–Bench phonetically balanced sentences (Bench et al., 1979) superposed on a speech-shaped noise masker (65 dB SPL). The noise is acoustically fixed, being identical both within and across trials. Participants are instructed to ignore the noise and repeat 20 short semantically and syntactically simple sentences (e.g., “Sugar is very sweet”) that are presented from a loudspeaker placed 1 m directly ahead of the participant. Sentences are counted as correct only when all the words are repeated correctly. The intensity level of the target sentence varies based on the performance of the participant. Performance is assessed by determining the signal-to-noise ratio defined as the difference between the intensity of the target relative to the background noise at which a participant can repeat 50% of the target items correctly. A lower score reflects greater speech-in-noise ability.
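An adaptive track of the kind described above can be sketched as a simple simulation. This is a hypothetical 1-down/1-up staircase, not the published HINT scoring rules: the step size, starting SNR, threshold-estimation rule, and simulated listener are all illustrative.

```python
# Hypothetical sketch of an adaptive speech-in-noise track: the sentence
# level moves down after a correct repetition and up after an error,
# converging on the SNR at which 50% of sentences are repeated correctly.

def adaptive_srt(respond, start_snr=10.0, step=2.0, n_sentences=20):
    """Run a 1-down/1-up track; return the mean SNR over the later
    trials as the speech reception threshold estimate (lower = better)."""
    snr = start_snr
    history = []
    for _ in range(n_sentences):
        correct = respond(snr)   # True if all words repeated correctly
        history.append(snr)
        snr += -step if correct else step
    tail = history[n_sentences // 2:]  # discard the approach phase
    return sum(tail) / len(tail)

# Deterministic simulated listener: repeats the sentence correctly
# whenever the SNR is at or above their true threshold of -2 dB SNR.
listener = lambda snr: snr >= -2.0
srt = adaptive_srt(listener)
print(srt)  # → -3.0 (track oscillates between -2 and -4 dB)
```

The estimate settles near the simulated listener's threshold, mirroring how the test converges on the SNR supporting 50% correct sentence repetition.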

Statistical analysis

Statistical analyses were conducted in SPSS 19.0 (SPSS). Two 3 (condition) × 6 (peak) × 2 (group) repeated-measures ANOVAs (RMANOVAs) were conducted to assess effects of condition on (1) peak latencies and (2) peak amplitudes in musicians and non-musicians. A 3 (condition) × 2 (group) RMANOVA was conducted to assess effects of condition on response consistency in musicians and non-musicians. Post hoc one-way ANOVAs and paired t tests were conducted to define statistically significant (p < 0.05) interaction effects. A one-way ANOVA was used to compare musicians' to non-musicians' speech-in-noise perception. Relationships among speech-in-noise perception and neural response characteristics were explored using Pearson's correlations. Assumptions of normality, linearity, outliers, and multicollinearity were met for all analyses, assessed by normality plots, Shapiro–Wilks test, Mahalanobis distances, and formal Variance Inflation Factor, respectively. All reported statistics reflect two-tailed significance values (α = 0.05).

Results

Summary of results

While musicians demonstrated faster neural timing and more consistent auditory brainstem responses to diotically presented sounds, musicians and non-musicians were not distinct in response to the same sound when presented monaurally. Furthermore, musicians demonstrated greater enhancements from monaural to diotic conditions relative to non-musicians, evident in musicians' faster neural timing, greater response magnitudes, and more consistent responses to diotic relative to monaural stimulation. Musicians' better speech-in-noise perception correlated with faster neural timing and greater response consistency in the diotic but not the monaural conditions.

Brainstem response

Timing

There was a significant main effect of peak (F(5,24) > 166,500.58, p < 0.0001), with no main effects of condition or group. There were significant interactions between condition and peak (F(10,19) = 3.32, p < 0.02) and between condition and group (F(2,27) = 4.62, p < 0.02), as well as a three-way interaction between condition, peak, and group (F(10,19) = 5.76, p < 0.001). Musicians had faster responses than non-musicians to diotic but not to right- or left-monaural stimulation for all three peaks corresponding to the neural response to the formant transition (Fig. 1 and Tables 2 and 3; diotic peak D: F(1,29) = 2.38, p < 0.001; peak E: F(1,29) = 2.00, p = 0.06; peak F: F(1,29) = 1.83, p < 0.02; monaural transition peaks, all F < 0.5, p > 0.25). Musicians and non-musicians did not differ in response to the speech sound onset or offset in any of the three conditions (all F < 0.5, all p > 0.1). Post hoc paired t tests indicated that musicians' responses to two of the transition peaks (i.e., D and E) and the offset peak (i.e., O) in the response to diotically presented stimuli occurred earlier than those to monaurally presented stimuli (diotic vs right presentation, peak D: t(14) = 3.8, p < 0.005; peak E: t(14) = 2.6, p < 0.02; peak O: t(14) = 2.3, p < 0.05; diotic vs left presentation, peak D: t(14) = 3.6, p < 0.005; peak E: t(14) = 2.6, p < 0.02; peak O: t(14) = 2.3, p < 0.05). Non-musicians' responses did not differ for any peak other than F for the diotic versus right monaural comparison and E for the diotic versus left monaural comparison; in both cases, this peak occurred earlier in the diotic than the monaural condition (peak F, diotic vs right presentation: t(14) = 3.0, p < 0.02; peak E, diotic vs left presentation: t(14) = 2.8, p < 0.02). Onset peaks V and A did not differ between diotically and monaurally presented stimuli in either group, nor did right- and left-monaural responses (all t ≤ 2.0, p ≥ 0.07).

Figure 1.

ABR to the stimulus "da." The speech-evoked ABR for musicians (black) and non-musicians (gray) for right, left, and diotic presentation. A, B, Musicians and non-musicians had equivalent peak timing in both monaural paradigms. C, Musicians had earlier diotic peak timing than non-musicians in the FFR region of the response. D, Musicians had a greater average difference in formant transition peak timing than non-musicians between responses to diotically and monaurally presented stimuli (i.e., average monaural-to-diotic timing differences for peaks D–F), with earlier neural timing in the diotic condition (F(29) = 17.0, p < 0.001). Error bars represent ±1 standard error. ***p < 0.001.

Table 2.

Diotic presentation group differences: group means (SDs) and significance values for measures of neural timing (ms) and response consistency (r-value) in the diotic condition

Table 3.

Monaural presentation group differences: group means (SDs) and group difference p values for timing (ms) and response consistency (r-value) in right and left monaural conditions

We further quantified the degree of timing shift invoked by diotic stimulation by subtracting the latencies for peaks D–F and O in the left- and right-monaural conditions from the latencies of these same peaks in the diotic condition. Additionally, we computed a mean of D–F shifts to characterize a global formant-transition shift. Negative values indicate that responses to diotically presented stimuli precede those to monaurally presented stimuli. A one-way ANOVA confirmed greater timing shifts between monaural and diotic conditions in musicians relative to non-musicians (peak D: F(29) = 16.5, p < 0.001; peak E: F(29) = 5.7, p < 0.05; peak F: F(29) = 3.2, p = 0.08; transition composite: F(29) = 17.0, p < 0.001; peak O: F(29) = 6.5, p < 0.05). Figure 1D displays the transition composite shift for musicians and non-musicians.
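The shift computation above is simple arithmetic, sketched here with hypothetical latencies (the study's actual per-peak values appear in Tables 2 and 3):

```python
# Sketch of the monaural-to-diotic timing shift: subtract each peak's
# monaural latency from its diotic latency, then average the D-F shifts
# into a formant-transition composite. Latencies (ms) are illustrative.

PEAKS = ("D", "E", "F", "O")

def timing_shifts(diotic, monaural):
    """Per-peak latency shift (ms); negative = earlier diotic response."""
    return {p: diotic[p] - monaural[p] for p in PEAKS}

def transition_composite(shifts):
    """Mean shift across the formant-transition peaks D-F."""
    return sum(shifts[p] for p in ("D", "E", "F")) / 3

diotic   = {"D": 22.6, "E": 30.7, "F": 39.8, "O": 47.0}  # hypothetical
monaural = {"D": 23.0, "E": 31.0, "F": 40.0, "O": 47.2}  # hypothetical
shifts = timing_shifts(diotic, monaural)
print(round(transition_composite(shifts), 2))  # → -0.3
```

A more negative composite corresponds to a larger diotic speed-up, the quantity on which the group comparison in Figure 1D is based.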

Response magnitude

There were significant main effects of condition and peak (both F > 20.0, p < 0.001), with all six peaks having greater magnitudes in the diotic relative to either monaural condition and four of the six response peaks being larger with left-ear relative to right-ear stimulation. Furthermore, we observed a three-way interaction between condition, peak, and group (F(10,19) = 2.4, p < 0.05). Post hoc comparisons indicated that musicians had marginally significant enhancements in response magnitudes to diotically relative to right-ear but not left-ear monaurally presented stimuli relative to non-musicians for three of the six response peaks (diotic vs right presentation, peak A: F(1,29) = 3.7, p < 0.06; peak D: F(1,29) = 8.9, p < 0.01; peak O: F(1,29) = 3.9, p < 0.06; diotic vs left presentation, all F < 0.3, p > 0.1).

Response consistency

There were main effects of condition (F(2,27) = 32.1, p < 0.0001), with response consistency greater in the diotic than in the monaural conditions, and of group (F(1,28) = 6.6, p < 0.02), with musicians demonstrating greater neural response consistency (r = 0.24, SD = 0.132) across conditions than non-musicians (r = 0.16, SD = 0.116). There was also a significant group × condition interaction (F(2,27) = 13.5, p < 0.001). Post hoc one-way ANOVAs indicated that musicians' greater response consistency across conditions was driven by an enhancement in the diotic condition; musicians had greater neural response consistency than non-musicians in response to diotic but not monaural stimulation (Fig. 2; diotic: F(1,29) = 33.4, p < 0.001; right and left: both F < 0.5, p > 0.5).

Figure 2.

Response consistency measured across listening conditions. While musicians (black) and non-musicians (gray) had equally consistent responses in both monaural conditions, musicians had more consistent responses in the diotic condition. Error bars represent ±1 standard error. ***p < 0.001.

Relationships between neural measures and speech-in-noise perception

Musicians demonstrated superior hearing-in-noise ability compared with age-matched non-musicians (F(1,29) = 6.606, p < 0.01). Across all subjects, earlier peak timing and greater neural response consistency in response to diotically presented stimuli related to better speech-in-noise perception (peak E timing: r = 0.46, p < 0.02; response consistency: r = −0.591, p < 0.001). These relationships were exclusive to the diotic condition (right- and left-monaural conditions: all p > 0.1).

Discussion

We herein demonstrate musicians' enhancements for processing diotic sounds, with musicians demonstrating faster, larger, and more consistent responses to speech sounds presented simultaneously to both ears but not to the same sounds presented alone to the right or left ears. Furthermore, these physiological indices relate to speech-in-noise perception, suggesting that musicians' advantages may be driven, at least in part, by better processing of diotically presented sounds. Musicians' enhancements to the diotically presented stimulus may reflect music listening's persistent reliance on binaural sound processing; localization of a sound source in ensemble playing or conducting, for example, requires the precise and robust encoding of sounds by both ears to differentiate sounds presented closer to one ear from those presented closer to the other. This extensive auditory experience may equip musicians to preferentially process diotically presented sounds even in the absence of interaural differences providing timing or level cues.

Binaural processing and its malleability with sensory experience

Acoustic signals that arrive at the two ears are initially transmitted ipsilaterally along the auditory pathway. While sound input to each ear at first remains separate, it quickly reaches three points of convergence: the superior olive, the nuclei of the lateral lemniscus, and the inferior colliculus (Bocca, 1955; for review, see Moore, 1991; Wallace et al., 1996; McAlpine et al., 2000). At these points, the auditory system integrates information, detecting differences in the phase, intensity, and timing of the signals from the two ears (Moore, 1991). While cortical auditory structures receive integrated sound input from both ears, their respective weightings within auditory cortex are not predetermined—they can be modulated by auditory experiences early in life (Hogan and Moore, 2003) and the engagement of cognitive functions during the listening process (Shinn-Cunningham et al., 2005).

While the diotic stimulus presentation used here does not tap into traditional binaural processing cues such as timing and level differences, it may be that musicians' diotic advantage reflects binaural processing's known experience-related malleability. Although the structural development of the human auditory brainstem is thought to be complete within the first two years of life (Moore et al., 1995; Moore and Linthicum, 2007) and is guided by genetics (Clopton and Silverman, 1977; Taniguchi, 1981), how the auditory system makes use of this circuitry to achieve sound identification and localization is experience dependent. The functional organization of brainstem nuclei involved in binaural hearing is modulated by sensory experiences that occur during development (Clopton and Silverman, 1977; Knudsen et al., 1984a, b; Knudsen, 1985; for review, see King et al., 2000) and, to a lesser extent, adulthood (Kacelnik et al., 2006). Because of this, even the mature auditory system is capable of adapting to changes in the balance between the two ears (Bauer et al., 1966; Florentine, 1976; Hofman et al., 1998; Shinn-Cunningham et al., 1998; Van Wanrooij and Van Opstal, 2005). Experience-related changes in subcortical binaural hearing structures may stem, at least in part, from the structural and functional organization of auditory cortex that continues into adolescence (Gleeson and Walsh, 2000; Moore and Guan, 2001; Moore and Linthicum, 2007) via top-down modulation of neuronal response properties. In fact, deactivating descending inputs to inferior colliculus prevents sound localization using binaural hearing cues, evident in conditions where the balance in hearing between the two ears has been experimentally altered (Bajo et al., 2010).

While we have interpreted our results in the context of training-related changes in musicians' binaural sound processing, group comparisons cannot disentangle innate and training-related factors contributing to musicians' binaural processing enhancements. Future studies, most notably longitudinal work, should define both the developmental trajectory of monaural and binaural auditory processing advantages in musicians and their direct relationships to training over and above innate predispositions. Additionally, it is possible that our stimulus presentation mode—lacking interaural timing or level cues—enabled musicians to take advantage of their “better ear.” This alternative interpretation would support the possibility that, when presented with diotic stimulation, musicians preferentially benefited from the use of their dominant ear, resulting in an enhanced physiological response.

Interpretation according to previous monaural and binaural approaches in musicians

While these results evidence enhanced diotic but not monaural processing in musicians, musician enhancements have previously been documented in ABRs collected in both monaural (Wong et al., 2007; Strait et al., 2012, 2013a) and diotic conditions (Musacchia et al., 2007; Parbery-Clark et al., 2009a, 2012a,b; Strait et al., 2013a). To test our hypothesis that musicians' enhanced speech-in-noise perception reflects strengthened processing to diotically presented sounds, the present study intentionally used a stimulus that did not elicit a musician advantage when presented monaurally. This stimulus was unique in that it did not contain the acoustically rich vowel portion of the syllable, which is known to elicit more robust neural encoding of the spectral components of speech in musicians (Musacchia et al., 2007; Wong et al., 2007; Parbery-Clark et al., 2009a, 2012a; Strait et al., 2012; Strait and Kraus, 2013). The absence of this vowel region could account for the lack of monaural enhancement observed in musicians to this stimulus due to decreased acoustic content and less backward masking of the formant transition by the broadband vowel. Still, further work should disentangle acoustic parameters that successfully induce musician enhancements in both monaural and diotic conditions from those that do not. Outcomes may shed light on musical training as a model of experience-related neuroplasticity, specifying the markers of musicianship against aspects of auditory processing that remain unaffected.

Practical applications

These results stimulate further investigation into how auditory training might be used in clinical populations that demonstrate compromised binaural processing. Binaural hearing's relationship to speech-in-noise perception (Nábělek and Robinson, 1982; Ellermeier and Hellbrück, 1998; Strouse et al., 1998; Hawley et al., 2004) underscores its relevance for school-aged children, given that their everyday learning occurs in noisy classroom environments; in fact, noise levels in a typical in-session elementary school classroom are ∼60 dB (Rosenberg et al., 1999; Bradley, 2005; Cameron et al., 2006). Given that conversational voice levels are ∼50 dB, it is not surprising that low signal-to-noise ratios can make hearing what the teacher is saying a challenging task (Barton, 1989; Blair, 1990). Our results suggest that strengthening the neural pathways involved in binaural processing through auditory training, such as music lessons, may alleviate some of the classroom difficulties faced by children that involve understanding speech in noise.

Future directions

Future work aimed at defining additional behavioral and neural assessments of binaural processing differences in musicians and non-musicians, including assessments comprising more traditional interaural timing and level difference paradigms, will expand our understanding of the extent to which binaural processing is experience dependent. Furthermore, this work might consider subgroups of musicians, differentiated by the degree to which they depend on binaural hearing cues in their musical practice. While musicians have enhanced sound localization relative to non-musicians (Tervaniemi et al., 2006), there is also the possibility of between-musician group differences. Conductors, for example, must regularly locate sound sources within a busy orchestra and demonstrate strengthened sound localization (Münte et al., 2001); accordingly, conductors may demonstrate binaural processing enhancements above and beyond those reported here in performing musicians. Similarly, orchestral musicians (e.g., string players) may demonstrate enhancements relative to soloists (e.g., pianists). Throughout this work, musicians' cognitive and speech-in-noise perceptual benefits might be assessed using both binaural and monaural approaches, in addition to assessing binaural unmasking and sound localization, which are both dependent on hearing with both ears.

Notes

Supplemental information can be found online at http://www.soc.northwestern.edu/brainvolts/.

Footnotes

  • This work was supported by an Undergraduate Academic Research Grant to E.H., and by the Knowles Hearing Center and National Science Foundation Grant 0921275 to N.K. We thank the subjects for their participation in this study and Dylan Levy for his assistance with data analysis.

  • Correspondence should be addressed to Dr. Nina Kraus, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208. nkraus@northwestern.edu; www.brainvolts.northwestern.edu.

References

  1. Aiken SJ, Picton TW (2008) Envelope and spectral frequency-following responses to vowel sounds. Hear Res 245:35–47. doi:10.1016/j.heares.2008.08.004
  2. Anderson S, Skoe E, Chandrasekaran B, Kraus N (2010) Neural timing is linked to speech perception in noise. J Neurosci 30:4922–4926. doi:10.1523/JNEUROSCI.0107-10.2010
  3. Bajo VM, Nodal FR, Moore DR, King AJ (2010) The descending corticocollicular pathway mediates learning-induced auditory plasticity. Nat Neurosci 13:253–260.
  4. Barton L (1989) Sound levels in occupied elementary school classrooms. Unpublished master's thesis, Utah State University.
  5. Bauer RW, Matuzsa JL, Blackmer F, Glucksberg S (1966) Noise localization after unilateral attenuation. J Acoust Soc Am 40:441–444. doi:10.1121/1.1910093
  6. Bench J, Kowal A, Bamford J (1979) The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol 13:108–112. doi:10.3109/03005367909078884
  7. Bidelman GM, Krishnan A (2010) Effects of reverberation on brainstem representation of speech in musicians and non-musicians. Brain Res 1355:112–125. doi:10.1016/j.brainres.2010.07.100
  8. Bidelman GM, Gandour JT, Krishnan A (2011a) Cross-domain effects of music and language experience on the representation of pitch in the human auditory brainstem. J Cogn Neurosci 23:425–434. doi:10.1162/jocn.2009.21362
  9. Bidelman GM, Gandour JT, Krishnan A (2011b) Musicians and tone-language speakers share enhanced brainstem encoding but not perceptual benefits for musical pitch. Brain Cogn 77:1–10. doi:10.1016/j.bandc.2011.07.006
  10. Blair J (1990) Front-row seating is not enough for classroom listening. In: How the student with hearing loss can succeed in college: a handbook for students, families, and professionals. Washington, DC: Alexander Graham Bell Association for the Deaf.
  11. Bocca E (1955) Binaural hearing: another approach. Laryngoscope 65:1164–1171.
  12. Bradley JS (2005) Does the classroom assist or impede the learning process? Can Assoc Principals J 13:32–34.
  13. Brown L, Sherbenou RJ, Johnsen SK (1997) Test of nonverbal intelligence-3 (TONI-3). Austin, TX: Pro-Ed.
  14. Cameron S, Dillon H, Newall P (2006) The listening in spatialized noise test: an auditory processing disorder study. J Am Acad Audiol 17:306–320. doi:10.3766/jaaa.17.5.2
  15. Clopton BM, Silverman MS (1977) Plasticity of binaural interaction: II. Critical period and changes in midline response. J Neurophysiol 40:1275–1280.
  16. Davis AC, Haggard MP (1982) Some implications of audiological measures in the population for binaural aiding strategies. Scand Audiol Suppl 15:167–179.
  17. Davis A, Haggard M, Bell I (1990) Magnitude of diotic summation in speech-in-noise tasks: performance region and appropriate baseline. Br J Audiol 24:11–16. doi:10.3109/03005369009077838
  18. Ellermeier W, Hellbrück J (1998) Is level irrelevant in "irrelevant speech"? Effects of loudness, signal-to-noise ratio, and binaural unmasking. J Exp Psychol Hum Percept Perform 24:1406–1414. doi:10.1037/0096-1523.24.5.1406
  19. Florentine M (1976) Relation between lateralization and loudness in asymmetrical hearing losses. J Am Audiol Soc 1:243–251.
  20. Fujiki N, Jousmäki V, Hari R (2002) Neuromagnetic responses to frequency-tagged sounds: a new method to follow inputs from each ear to the human auditory cortex during binaural hearing. J Neurosci 22:RC205.
  21. Gleeson JG, Walsh CA (2000) Neuronal migration disorders: from genetic diseases to developmental mechanisms. Trends Neurosci 23:352–359. doi:10.1016/S0166-2236(00)01607-6
  22. Gorga M, Abbas P, Worthington D (1985) Stimulus calibration in ABR measurements. In: The auditory brainstem response (Jacobsen J, ed), pp 49–62. San Diego: College Hill Press.
  23. Grothe B (2003) New roles for synaptic inhibition in sound localization. Nat Rev Neurosci 4:540–550. doi:10.1038/nrn1136
  24. Hanna-Pladdy B, MacKay A (2011) The relation between instrumental musical activity and cognitive aging. Neuropsychology 25:378–386. doi:10.1037/a0021895
  25. Hawley ML, Litovsky RY, Culling JF (2004) The benefit of binaural hearing in a cocktail party: effect of location and type of interferer. J Acoust Soc Am 115:833–843. doi:10.1121/1.1639908
  26. Hofman PM, Van Riswick JG, Van Opstal AJ (1998) Relearning sound localization with new ears. Nat Neurosci 1:417–421. doi:10.1038/1633
  27. Hogan SC, Moore DR (2003) Impaired binaural hearing in children produced by a threshold level of middle ear disease. J Assoc Res Otolaryngol 4:123–129. doi:10.1007/s10162-002-3007-9
  28. Hudspeth AJ (1997) How hearing happens. Neuron 19:947–950. doi:10.1016/S0896-6273(00)80385-2
  29. Kacelnik O, Nodal FR, Parsons CH, King AJ (2006) Training-induced plasticity of auditory localization in adult mammals. PLoS Biol 4:e71. doi:10.1371/journal.pbio.0040071
  30. Kaplan H, Pickett JM (1981) Effects of dichotic/diotic versus monotic presentation on speech understanding in noise in elderly hearing-impaired listeners. Ear Hear 2:202–207. doi:10.1097/00003446-198109000-00004
  31. King AJ, Parsons CH, Moore DR (2000) Plasticity in the neural coding of auditory space in the mammalian brain. Proc Natl Acad Sci U S A 97:11821–11828. doi:10.1073/pnas.97.22.11821
  32. Klatt DH (1980) Software for a cascade/parallel formant synthesizer. J Acoust Soc Am 67:971–995. doi:10.1121/1.383940
  33. Knudsen EI (1985) Experience alters the spatial tuning of auditory units in the optic tectum during a sensitive period in the barn owl. J Neurosci 5:3094–3109.
  34. Knudsen EI, Knudsen PF, Esterly SD (1984a) A critical period for the recovery of sound localization accuracy following monaural occlusion in the barn owl. J Neurosci 4:1012–1020.
  35. Knudsen EI, Esterly SD, Knudsen PF (1984b) Monaural occlusion alters sound localization during a sensitive period in the barn owl. J Neurosci 4:1001–1011.
  36. Kraus N, Chandrasekaran B (2010) Music training for the development of auditory skills. Nat Rev Neurosci 11:599–605. doi:10.1038/nrn2882
  37. Kühnis J, Elmer S, Meyer M, Jäncke L (2013) Musicianship boosts perceptual learning of pseudoword-chimeras: an electrophysiological approach. Brain Topogr 26:110–125. doi:10.1007/s10548-012-0237-y
  38. McAlpine D, Jiang D, Shackleton TM, Palmer AR (2000) Responses of neurons in the inferior colliculus to dynamic interaural phase cues: evidence for a mechanism of binaural adaptation. J Neurophysiol 83:1356–1365.
  39. Moore DR (1991) Anatomy and physiology of binaural hearing. Int J Audiol 30:125–134. doi:10.3109/00206099109072878
  40. Moore JK, Guan YL (2001) Cytoarchitectural and axonal maturation in human auditory cortex. J Assoc Res Otolaryngol 2:297–311. doi:10.1007/s101620010052
  41. Moore JK, Linthicum FH Jr (2007) The human auditory system: a timeline of development. Int J Audiol 46:460–478. doi:10.1080/14992020701383019
  42. Moore JK, Perazzo LM, Braun A (1995) Time course of axonal myelination in the human brainstem auditory pathway. Hear Res 87:21–31. doi:10.1016/0378-5955(95)00073-D
  43. Münte TF, Kohlmetz C, Nager W, Altenmüller E (2001) Superior auditory spatial tuning in conductors. Nature 409:580. doi:10.1038/35054668
  44. Musacchia G, Sams M, Skoe E, Kraus N (2007) Musicians have enhanced subcortical auditory and audiovisual processing of speech and music. Proc Natl Acad Sci U S A 104:15894–15909. doi:10.1073/pnas.0701498104
  45. Nábělek AK, Robinson PK (1982) Monaural and binaural speech perception in reverberation for listeners of various ages. J Acoust Soc Am 71:1242–1248. doi:10.1121/1.387773
  46. Nilsson M, Soli SD, Sullivan JA (1994) Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am 95:1085–1099. doi:10.1121/1.408469
  47. Parbery-Clark A, Skoe E, Kraus N (2009a) Musical experience limits the degradative effects of background noise on the neural processing of sound. J Neurosci 29:14100–14107. doi:10.1523/JNEUROSCI.3256-09.2009
  48. Parbery-Clark A, Skoe E, Lam C, Kraus N (2009b) Musician enhancement for speech-in-noise. Ear Hear 30:653–661. doi:10.1097/AUD.0b013e3181b412e9
  49. Parbery-Clark A, Strait DL, Anderson S, Hittner E, Kraus N (2011) Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise. PLoS One 6:e18082. doi:10.1371/journal.pone.0018082
  50. Parbery-Clark A, Anderson S, Hittner E, Kraus N (2012a) Musical experience strengthens the neural representation of sounds important for communication in middle-aged adults. Front Aging Neurosci 4:30.
  51. Parbery-Clark A, Tierney A, Strait DL, Kraus N (2012b) Musicians have fine-tuned neural distinction of speech syllables. Neuroscience 219:111–119. doi:10.1016/j.neuroscience.2012.05.042
  52. Plomp R, Mimpen AM (1979) Improving the reliability of testing the speech reception threshold for sentences. Int J Audiol 18:43–52. doi:10.3109/00206097909072618
  53. Rance G, Barker E, Mok M, Dowell R, Rincon A, Garratt R (2007) Speech perception in noise for children with auditory neuropathy/dys-synchrony type hearing loss. Ear Hear 28:351–360. doi:10.1097/AUD.0b013e3180479404
  54. Rosenberg GG, Blake-Rahter P, Heavner J, Allen L, Redmond BM, Phillips J, Stigers K (1999) Improving classroom acoustics (ICA): a three-year FM sound field classroom amplification study. J Educ Audiol 7:22.
  55. Shinn-Cunningham BG, Durlach NI, Held RM (1998) Adapting to supernormal auditory localization cues. I. Bias and resolution. J Acoust Soc Am 103:3656–3666. doi:10.1121/1.423088
  56. Shinn-Cunningham BG, Ihlefeld A, Satyavarta LE (2005) Bottom-up and top-down influences on spatial unmasking. Acta Acustica 91:967–979.
  57. Shook A, Marian V, Bartolotti J, Schroeder SR (2013) Musical experience influences statistical learning of a novel language. Am J Psychol 126:95–104. doi:10.5406/amerjpsyc.126.1.0095
  58. Skoe E, Kraus N (2010) Auditory brainstem response to complex sounds: a tutorial. Ear Hear 31:302–324. doi:10.1097/AUD.0b013e3181cdb272
  59. Strait DL, Kraus N (2013) Biological impact of auditory expertise across the life span: musicians as a model of auditory learning. Hear Res. doi:10.1016/j.heares.2013.08.004
  60. Strait DL, Kraus N, Parbery-Clark A, Ashley R (2010) Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance. Hear Res 261:22–29. doi:10.1016/j.heares.2009.12.021
  61. Strait DL, O'Connell S, Parbery-Clark A, Kraus N (2013a) Musicians' enhanced neural differentiation of speech sounds arises early in life: developmental evidence from ages 3 to 30. Cereb Cortex. Advance online publication. doi:10.1093/cercor/bht103
  62. Strait DL, O'Connell S, Parbery-Clark A, Kraus N (2013b) Biological impact of preschool music classes on processing speech in noise. Dev Cogn Neurosci 6:51–60.
  63. Strait DL, Parbery-Clark A, Hittner E, Kraus N (2012) Musical training during early childhood enhances the neural encoding of speech in noise. Brain Lang 123:191–201. doi:10.1016/j.bandl.2012.09.001
  64. Strouse A, Ashmead DH, Ohde RN, Grantham DW (1998) Temporal processing in the aging auditory system. J Acoust Soc Am 104:2385–2399. doi:10.1121/1.423748
  65. Taniguchi I (1981) Plastic changes in inferior colliculus following cochlear destruction. In: Neuronal mechanisms of hearing (Syka J, Aitkin L, eds), pp 377–380. New York: Plenum.
  66. Tervaniemi M, Castaneda A, Knoll M, Uther M (2006) Sound processing in amateur musicians and nonmusicians: event-related potential and behavioral indices. Neuroreport 17:1225–1228. doi:10.1097/01.wnr.0000230510.55596.8b
  67. Tzounopoulos T, Kraus N (2009) Learning to encode timing: mechanisms of plasticity in the auditory brainstem. Neuron 62:463–469. doi:10.1016/j.neuron.2009.05.002
  68. Van Wanrooij MM, Van Opstal AJ (2005) Relearning sound localization with a new ear. J Neurosci 25:5413–5424. doi:10.1523/JNEUROSCI.0850-05.2005
  69. Wallace MT, Wilkinson L, Stein BE (1996) Representation and integration of multiple sensory inputs in primate superior colliculus. J Neurophysiol 76:1246–1266.
  70. Wong PC, Skoe E, Russo NM, Dees T, Kraus N (2007) Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat Neurosci 10:420–422.
  71. Zendel BR, Alain C (2009) Concurrent sound segregation is enhanced in musicians. J Cogn Neurosci 21:1488–1498. doi:10.1162/jocn.2009.21140
Journal of Neuroscience, 16 October 2013, 33(42):16741–16747; DOI: 10.1523/JNEUROSCI.5700-12.2013