Journal of Neuroscience
Articles, Behavioral/Systems/Cognitive

Own-Song Recognition in the Songbird Auditory Pathway: Selectivity and Lateralization

Colline Poirier, Tiny Boumans, Marleen Verhoye, Jacques Balthazart and Annemie Van der Linden
Journal of Neuroscience 18 February 2009, 29 (7) 2252-2258; https://doi.org/10.1523/JNEUROSCI.4650-08.2009

Abstract

The songbird brain is able to discriminate between the bird's own song and other conspecific songs. Determining where in the brain own-song selectivity emerges is of great importance because experience-dependent mechanisms are necessarily involved and because brain regions sensitive to self-generated vocalizations could mediate the auditory feedback that is necessary for song learning and maintenance. Using functional MRI, we show here that this selectivity is present at the midbrain level. Surprisingly, the selectivity was lateralized toward the right side, a finding reminiscent of the potential right lateralization of song production in zebra finches, but also of own-face and own-voice recognition in human beings. These results indicate that a midbrain structure can process subtle information about the identity of a subject through experience-dependent mechanisms, challenging the classical perception of subcortical regions as primitive and nonplastic structures. They also raise questions about the evolution of cognitive skills and lateralization in vertebrates.

  • birdsong
  • auditory
  • lateralization
  • midbrain
  • functional MRI
  • cognition

Introduction

Like human speech, birdsong is a learned behavior that supports species and individual recognition. Behavioral studies indicate that songbirds perceive their own song differently from other conspecific songs (Cynx and Nottebohm, 1992; Cynx, 1993). At the neuronal level, the forebrain regions involved in song production and learning, the so-called song control system, are also auditory responsive and display a strong selectivity for the bird's own song (BOS) (Doupe and Konishi, 1991; Margoliash and Fortune, 1992; Janata and Margoliash, 1999). BOS selectivity is commonly expected to emerge at least at an intermediate level in the ascending auditory system (Prather and Mooney, 2004; Theunissen et al., 2004). Understanding how and where this exquisite selective response arises in the auditory system is critical for understanding auditory-vocal integration necessary for song learning and song maintenance. Previous electrophysiological studies have looked for this selectivity in the primary and secondary auditory telencephalic regions, but without success (Lewicki and Arthur, 1996; Janata and Margoliash, 1999; Amin et al., 2004; Shaevitz and Theunissen, 2007).

Electrophysiological techniques require having an a priori hypothesis about the localization of the neuronal substrate supporting the investigated process. They are also limited by the number of locations that can be sampled in one experiment. In contrast, functional MRI (fMRI), which was recently adapted to starlings (Van Meir et al., 2005) and zebra finches (Boumans et al., 2007, 2008a,b; Voss et al., 2007), allows a whole-brain approach and can be used to perform assumption-free experiments. Taking advantage of this technique, we measured the blood oxygen level-dependent (BOLD) neural responses in anesthetized male zebra finches that were exposed to BOS, a conspecific song (CON) and a heterospecific song (HET) to determine where BOS selectivity occurs in the ascending auditory pathway. Comparison of the neural activity triggered by BOS versus CON and BOS versus HET revealed that BOS selectivity measured at the neural level (i.e., at the level of a large pool of neurons) is present in the right auditory midbrain.

Materials and Methods

Subjects.

Adult male zebra finches (Taeniopygia guttata) purchased from local suppliers were used in this experiment. Birds were housed together in a large aviary, had access to food and water ad libitum, and were maintained under a 12 h light/dark photoperiod throughout the experiment. Experimental procedures were in agreement with the Belgian laws on the protection and welfare of animals and were approved by the ethical committee of the University of Antwerp.

Stimuli and stimulation device.

Undirected song of each bird was recorded in a soundproof chamber with a Sony ECM MS 907 microphone (Sony) and a Phonic MU 502 amplifier (Phonic). BOS/CON stimuli consisted of several individual songs of each bird interleaved with silence periods of 0.5 s (see supplemental Fig. S1, available at www.jneurosci.org as supplemental material). This procedure preserved the natural intraindividual and interindividual variability of song length. The total length of each stimulus was 16 s. The intensity of each song was normalized (in terms of matched root-mean-square) before being integrated into the complete stimulus (songs and silence periods). These manipulations were done using Praat software (University of Amsterdam, The Netherlands). Heterospecific stimuli corresponded to songs recorded from two European starlings (Sturnus vulgaris) and two canaries (Serinus canaria), resulting in four different stimuli. These songs, in addition to other starling and canary songs, had been played to the experimental subjects on average 10 times per day for at least 4 weeks in an attempt to decrease their novelty. Songs were delivered by a D-NE240 CD player (Sony) and a VS 2220 amplified speaker system (Altec). The duration of each heterospecific stimulus was 16 s, and its intensity was matched to the intensity of the zebra finch songs (in terms of root-mean-square).
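The stimulus assembly described above (RMS-matched songs interleaved with 0.5 s of silence) was performed in Praat; the following NumPy sketch illustrates the same logic. The function names and the target RMS value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def normalize_rms(song, target_rms=0.1):
    """Scale a waveform so its root-mean-square amplitude equals target_rms.
    target_rms is an arbitrary illustrative value."""
    rms = np.sqrt(np.mean(song ** 2))
    return song * (target_rms / rms)

def build_stimulus(songs, silence_s=0.5, sr=22050, target_rms=0.1):
    """Interleave RMS-normalized songs with 0.5 s silence periods,
    mimicking the BOS/CON stimulus construction described in the text."""
    gap = np.zeros(int(silence_s * sr))
    parts = []
    for song in songs:
        parts.append(normalize_rms(song, target_rms))
        parts.append(gap)
    return np.concatenate(parts[:-1])  # drop the trailing silence
```

Normalizing each song before concatenation (rather than the whole stimulus) keeps the silence periods at zero while matching the loudness of the individual songs.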

During imaging experiments, auditory signals were presented to the birds with magnetless dynamic speakers connected to an amplifier as described by Van Meir et al. (2005). Stimulus application was controlled by Presentation software 0.76 (Neurobehavioral Systems). To test whether some sound frequencies were absorbed or enhanced inside the magnet, stimuli were first emitted with the magnetless dynamic speakers in the presence of a test bird and recorded with a nonmagnetic Electret microphone. Frequencies between 2500 and 5000 Hz were found to be enhanced. To compensate for this artificial enhancement, an equalizer function was applied to each stimulus using WaveLab software. The function consisted of a Gaussian kernel with the following parameters: maximum amplitude: −20 dB, centered on 3750 Hz, width: 0.05 octaves (corresponding to the range 2500–5000 Hz). During the experiments, stimuli were delivered monophonically at a sampling frequency of 22050 Hz, with a global intensity of ∼67 dB sound pressure level. By comparison, the magnet noise was ∼65 dB.
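An equalizer of this kind can be approximated in the frequency domain as a Gaussian dip in log-frequency (octave) space, using the stated parameters (−20 dB maximum attenuation at 3750 Hz, 0.05-octave width). This is a hedged reconstruction, not the original WaveLab function:

```python
import numpy as np

def equalize(signal, sr=22050, center_hz=3750.0, width_oct=0.05,
             max_atten_db=-20.0):
    """Attenuate a band around center_hz with a Gaussian dip whose width
    is expressed in octaves (log2-frequency distance from the center)."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    octaves = np.zeros_like(freqs)
    nz = freqs > 0
    octaves[nz] = np.log2(freqs[nz] / center_hz)  # octave distance from center
    gain_db = max_atten_db * np.exp(-0.5 * (octaves / width_oct) ** 2)
    gain_db[~nz] = 0.0                            # leave DC untouched
    spec *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n=len(signal))
```

A pure tone at 3750 Hz is attenuated by a factor of 10 in amplitude (−20 dB), while frequencies far outside the band pass through essentially unchanged.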

Experimental design.

The experiment consisted of an ON/OFF block design alternating auditory stimulation periods (ON blocks) with resting periods (OFF blocks). Each block (ON and OFF) lasted 16 s, which corresponds to the acquisition time of 2 images (see supplemental Fig. S2, available at www.jneurosci.org as supplemental material). Each stimulus type was presented 25 times, resulting in the acquisition of 50 images per stimulus and per subject. The presentation order of the conditions was randomized within and between subjects. Since the BOS is unique for each bird, each subject was tested with one BOS, one CON and one HET stimulus.
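A randomized ON/OFF schedule of this kind (25 presentations of each of the three stimulus types, 16 s blocks) might be generated as follows; the exact randomization used with the Presentation software is not specified, so this is only a plausible sketch:

```python
import random

def make_schedule(stimuli=("BOS", "CON", "HET"), reps=25, block_s=16, seed=0):
    """Randomized ON/OFF block design: each 16 s ON (stimulation) block is
    followed by a 16 s OFF (rest) block; each stimulus type appears `reps`
    times, in randomized order."""
    rng = random.Random(seed)
    order = [s for s in stimuli for _ in range(reps)]
    rng.shuffle(order)
    schedule, t = [], 0
    for stim in order:
        schedule.append((t, "ON", stim))
        t += block_s
        schedule.append((t, "OFF", None))
        t += block_s
    return schedule
```

With two images acquired per 16 s block, this yields the 50 images per stimulus and per subject mentioned in the text.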

Anesthesia and physiology monitoring.

During the experiment, birds were continuously anesthetized with isoflurane (anesthesia induction: 3%, maintenance: 2%). The anesthetic gas was delivered in a mixture of oxygen and nitrogen (at a flow rate of 100 and 200 cc/min, respectively) through a small mask over the bird's beak. Isoflurane is the most common anesthetic in clinical applications, and it has also been used in most imaging studies with monkeys and cats (Logothetis et al., 1999; Kim et al., 2000; Duong et al., 2001). Isoflurane has the great advantage of having relatively minor side effects, and, indeed, all of our tested birds recovered within a few minutes after the end of the fMRI experiment. Body temperature was continuously monitored with a cloacal temperature probe and maintained at 40°C by a feedback-controlled heating system (SA-Instruments). Respiration rate and amplitude were constantly monitored with a small pneumatic sensor (SA-Instruments) positioned under the bird.

Image acquisition.

Specifications of our 7T scanner and the coils used for the experiment can be found in Boumans et al. (2007). Functional MRI data were acquired using a T2-weighted spin-echo sequence [echo time (TE)/repetition time (TR): 60/2000 ms]. Fifteen contiguous sagittal slices of 0.75 mm thickness covering nearly the entire brain were acquired. Voxel size was 0.25 × 0.25 × 0.75 mm3. Anatomical 3D images required for localization of the functional data (see below) were obtained for each bird using a RARE T2-weighted sequence (TE/TR: 60/2000 ms, voxel size: 0.0625 × 0.0625 × 0.0625 mm3).

Image processing.

Intraindividual head motion was corrected using a six-parameter rigid-body spatial transformation. In parallel, each individual anatomical three-dimensional (3D) dataset was coregistered with the zebra finch brain atlas that we developed in our lab (Poirier et al., 2008), using mutual information implemented in custom MATLAB code. The transformation matrix of this coregistration was then applied to the realigned functional data, resulting in functional data precisely coregistered to the atlas dataset. Finally, functional data were smoothed with a Gaussian kernel of 0.5 mm width.
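Smoothing with a 0.5 mm Gaussian kernel on anisotropic voxels (0.25 × 0.25 × 0.75 mm) can be sketched as a separable convolution. Interpreting the 0.5 mm width as a FWHM is our assumption; the actual preprocessing used the authors' own MATLAB code.

```python
import numpy as np

def gaussian_kernel1d(fwhm_mm, voxel_mm):
    """1D normalized Gaussian kernel; FWHM given in mm, sampled at the
    voxel spacing so the same physical width applies along every axis."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    radius = max(1, int(np.ceil(3 * sigma)))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_volume(vol, fwhm_mm=0.5, voxel_mm=(0.25, 0.25, 0.75)):
    """Separable Gaussian smoothing of a 3D volume with anisotropic voxels."""
    out = vol.astype(float)
    for axis, vs in enumerate(voxel_mm):
        k = gaussian_kernel1d(fwhm_mm, vs)
        out = np.apply_along_axis(np.convolve, axis, out, k, mode="same")
    return out
```

Because the kernel is expressed in millimeters, fewer voxels are averaged along the thick (0.75 mm) slice direction than within a slice.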

Statistical analysis.

Statistical voxel-based analyses were performed using a mass-univariate approach based on General Linear Models, implemented in SPM5 (http://www.fil.ion.ucl.ac.uk/spm/). Because the shape of the hemodynamic response is not known in the zebra finch and because the temporal resolution of the spin-echo fMRI used in the present experiment was low (two time points per ON block), the data were simply modeled with a boxcar function and filtered with a high-pass filter of 360 s. Model parameters were then estimated using a classical restricted maximum likelihood algorithm.
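A minimal version of this boxcar GLM (two scans per 16 s block, i.e., one image every 8 s) can be written as ordinary least squares. The actual analysis used SPM5 with restricted maximum likelihood estimation and a 360 s high-pass filter, both omitted from this sketch:

```python
import numpy as np

def boxcar_regressor(n_scans, scan_spacing_s=8.0, block_s=16.0):
    """Alternating ON/OFF boxcar sampled at the scan times, starting with
    an ON block (an illustrative assumption about the timing)."""
    t = np.arange(n_scans) * scan_spacing_s
    return ((t // block_s) % 2 == 0).astype(float)  # 1 during ON, 0 during OFF

def fit_glm(y, regressor):
    """Ordinary least squares fit of y = beta * regressor + intercept,
    for one voxel's time series y."""
    X = np.column_stack([regressor, np.ones_like(regressor)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [effect size, baseline]
```

The estimated effect size (the first coefficient) is the ON-minus-OFF signal difference that the contrasts in Results are built from.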

The statistical analysis was restricted to a priori defined regions of interest (ROIs): the song control nuclei, i.e., HVC (used as a proper name; Reiner et al., 2004), the nucleus robustus of the arcopallium (RA), area X, and the LMAN (lateral magnocellular nucleus of the anterior nidopallium); and the auditory regions, i.e., the dorsal part of the lateral mesencephalic nucleus (MLd), Field L, the caudal mesopallium (CM), and NCM (caudomedial nidopallium). Because they were too small to be sampled by at least one sagittal slice, DLM (dorsolateral nucleus of the medial thalamus), Ov (nucleus ovoidalis), and the nucleus interface of the nidopallium (NIf) were not investigated. We first computed the mean effect of each stimulus (compared with the rest period) in each voxel of the ROIs, for each subject (fixed-effect analysis). In a second step, a group analysis was computed on the effects identified by the previous analysis with a repeated-measures ANOVA (one factor with three levels representing the three stimuli). The main "stimulus" effect was computed; post hoc one-tailed paired t tests were then performed only in the voxels found to be significant for the main effect. This second step corresponds to a mixed-effect analysis and allows inferences to be drawn about the population from which the group was sampled. Correction for multiple comparisons was then applied to these voxels using the Family Wise Error method based on random field theory. The combination of voxel-based statistics and mixed-effect analysis, implemented for the first time on nonhuman fMRI data, represents a major improvement in small-animal fMRI data analysis.
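The group-level step (a one-factor repeated-measures ANOVA on per-subject effect sizes in each voxel) reduces to the following F computation. This is a textbook sketch, not the SPM5 implementation, and it omits the post hoc paired t tests and the Family Wise Error correction:

```python
import numpy as np

def rm_anova_F(data):
    """One-way repeated-measures ANOVA F statistic.
    data: (n_subjects, n_conditions) array of per-subject effect sizes
    for one voxel (here, the three stimulus conditions)."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()  # stimulus effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()  # subject effect
    ss_tot = ((data - grand) ** 2).sum()
    ss_err = ss_tot - ss_cond - ss_subj                     # residual
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err)
```

Partialling out the subject term is what makes the design "repeated measures": between-subject baseline differences do not inflate the error term.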

For lateralization analyses, we first calculated for each subject the differential effect between stimuli (BOS vs CON and CON vs HET) in the two MLd clusters found to be significantly activated in the group analysis. These differential effects were then compared between hemispheres using two-tailed paired t tests. Differences were considered statistically significant when p < 0.05.
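The lateralization test amounts to a two-tailed paired t test on per-subject differential effects; a minimal sketch (with the p-value lookup left to a statistics table or library) could be:

```python
import numpy as np

def differential_effect(resp_a, resp_b):
    """Per-subject differential effect between two stimuli
    (e.g., BOS minus CON) in one cluster."""
    return np.asarray(resp_a) - np.asarray(resp_b)

def paired_t(x, y):
    """Paired t statistic (df = n - 1) comparing the same measure,
    e.g., a differential effect, between left and right MLd."""
    d = np.asarray(x) - np.asarray(y)
    n = len(d)
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

A negative t for left-minus-right (BOS minus CON) indicates the right-lateralized BOS selectivity reported in Results.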

Results

Neural substrates of bird's own song recognition

We first compared the BOLD response elicited by BOS versus CON stimuli. BOS stimuli were undirected songs produced by the tested bird, whereas CON stimuli were undirected songs produced by a cage mate. BOS perception induced greater activity in parts of two telencephalic song control regions, the right HVC and the right area X, and in one auditory midbrain region, the right MLd (Figs. 1, 2) (t ≥ 2.85, p ≤ 0.02, N = 15) (for detailed statistics and coordinates of activations, see Table 1). The anatomical resolution of these fMRI data did not allow us to determine whether the HVC cluster was located in HVC itself and/or in the shelf region around HVC. For simplicity, this cluster will be called HVC in the text that follows.

Figure 1.

Neural substrates of own-song recognition (BOS vs CON) in the song control and auditory regions. A, Superimposition of the statistical results onto anatomical sagittal images from a recently developed MRI zebra finch atlas (Poirier et al., 2008). Coordinates expressed in millimeters from the midline are indicated under each map. The + sign indicates that slices (and statistical results) are from the right hemisphere. t values are color coded according to the scale displayed on the right side of the panel. B, Rendered images of the whole zebra finch brain (gray), song control nuclei (blue), and auditory nuclei (yellow) (right sagittal view). For definitions of abbreviations, see text. C, Superimposition of the statistical results onto rendered images of the same nuclei as in B (right sagittal view). D, Position of the right MLd activation (indicated by the + sign) relative to the adjacent motor dorsomedial nucleus of the mesencephalon (DM) in a coronal view, illustrating the clear separation between these two nuclei and the specificity of the activation in the auditory MLd.

Figure 2.

Relative amplitudes (in percentage) of neural activations elicited by CON (open bars) and BOS (closed bars) stimuli. Amplitudes have been averaged across the voxels constituting each cluster illustrated in Figure 1A. The zero level corresponds to the mean activation level during rest periods (exposure to scanner noise only). The error bars correspond to SEs across subjects (and not across voxels of each cluster). Asterisks indicate statistical significance of CON versus Rest and BOS versus Rest comparisons, whereas circles indicate statistical significance of BOS versus CON comparisons (corrected p < 0.05).

Table 1.

Statistics of BOS versus CON comparison (N = 15)

To exclude the possibility that the BOS selectivity observed in the right HVC, right area X and right MLd was due to auditory characteristics specific to the BOS stimuli, we repeated the BOS versus CON comparison in a subgroup of 10 of the 15 birds in which BOS and CON stimuli were physically identical (each song was used as BOS in one bird and as CON in another bird). This analysis confirmed the greater recruitment of the right HVC, right area X and right MLd during BOS perception compared with CON (t ≥ 2.52, p ≤ 0.025, N = 10) (Fig. 3).

Figure 3.

Relative amplitudes (in percentage) of the MLd activation in two representative subjects. The zero level corresponds to the mean activation level during rest periods. This figure illustrates that the intensity of the neural activity induced by one given song depends on the receiver of the signal (i.e., the experimental bird) and not on the song itself. The activation elicited by song A (left side of the graph) is greater when the receiver is the owner of the song (BOS, Subj A, open bar) and weaker when the receiver is a cage mate of A (CON, Subj B, closed bar). A similar relationship applies to song B (right side of the graph).

To confirm the specific involvement of these three regions in own-song recognition, we also compared the BOLD responses elicited by BOS versus HET. The heterospecific stimulus, a familiar starling or canary song, was used as a natural complex auditory control stimulus. The difference between these stimuli was found to be significant in the right HVC, right area X and right MLd (t ≥ 2.25, p ≤ 0.037, N = 15; data not shown).

Neuronal activity can decrease in intensity with stimulus repetition, a phenomenon called habituation in the birdsong literature (Chew et al., 1995). This phenomenon could potentially occur at the neural (i.e., neuronal population) level, and the differential activations described so far could merely result from differential habituation rates between stimuli. To test for the potential effect of this confounding factor, we used two complementary methods. First, we fitted the data with a linear regression and tested whether the slope of this regression significantly differed between the three stimuli. In the three regions, the slopes were not significantly different (right MLd: F = 0.3, p = 0.62; right HVC: F = 1.2, p = 0.44; right area X: F = 2.9, p = 0.14, all N = 15). Because this approach requires defining an a priori linear shape of the trend, we also compared the activations between the first and the second half of the experiment with a two-way repeated-measures ANOVA (within factors: stimuli and period of imaging). This second approach, although less powerful, has the advantage of being assumption-free regarding the shape of the habituation curve. The interaction between the two halves of the experiment and the three stimuli was not significant in any of the three regions (F ≤ 1.25, p ≥ 0.39, N = 15). The differential response to BOS versus CON and BOS versus HET thus cannot be explained solely by a differential habituation rate to the stimuli.
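The first habituation check (a linear trend fitted to response amplitudes across repetitions) and the first-half/second-half comparison can be sketched as follows; the F tests comparing slopes between stimuli are omitted:

```python
import numpy as np

def trend_slope(amplitudes):
    """Slope of a linear fit of response amplitude against repetition index;
    a negative slope indicates habituation over the course of the experiment."""
    x = np.arange(len(amplitudes))
    return np.polyfit(x, amplitudes, 1)[0]

def half_means(amplitudes):
    """Mean response in the first and second half of the experiment
    (the assumption-free check on the habituation curve's shape)."""
    h = len(amplitudes) // 2
    return amplitudes[:h].mean(), amplitudes[h:].mean()
```

If habituation rates were equal across stimuli, both measures would differ between BOS, CON and HET only by noise, which is what the reported F tests found.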

Neural substrates of conspecific song recognition

To test whether BOS and conspecific song recognition are supported by the same or different neural substrates, we identified the neural substrates of conspecific song recognition by comparing the neural activation elicited by CON and HET. This comparison revealed the specific involvement in CON perception of the left MLd, the left Field L and a region at the border of the left area X (Figs. 4, 5) (t ≥ 2.6, p ≤ 0.003, N = 15) (for detailed statistics and coordinates of activations, see Table 2).

Figure 4.

Neural substrates of conspecific song recognition (CON vs HET) in the song control and auditory regions. A, Superimposition of the statistical results onto anatomical sagittal images from a recently developed MRI zebra finch atlas (Poirier et al., 2008). Coordinates expressed in millimeters from the midline are indicated under each map. The − sign indicates that slices (and statistical results) are from the left hemisphere. t values are color coded according to the scale displayed on the right side of the panel. B, Position in a coronal view of the left MLd activation (indicated by the + sign) in relation to the adjacent motor DM.

Figure 5.

Relative amplitudes (in percentage) of neural activations elicited by HET (closed bars) and CON (open bars) stimuli. Amplitudes have been averaged across the voxels constituting each cluster illustrated in Figure 4A. The zero level corresponds to the mean activation level during rest periods (exposure to scanner noise only). The error bars correspond to SEs across subjects. Asterisks indicate statistical significance of HET versus Rest and CON versus Rest comparisons, whereas circles indicate statistical significance of CON versus HET comparisons (corrected p < 0.05).

Table 2.

Statistics of CON versus HET comparison (N = 15)

To confirm the specific involvement of the left MLd, left Field L and left area X in conspecific song recognition, we also tested the BOS versus HET comparison, where BOS was considered a particular conspecific song. The difference between these stimuli was also found to be significant in these three regions (t ≥ 2.0, p ≤ 0.035, N = 15).

Potential differences in habituation rates between stimuli were controlled for with the two methods described previously. The habituation rates to these stimuli were not significantly different in the left MLd (F ≤ 2.1, p ≥ 0.27, N = 15), left Field L (F ≤ 0.1, p ≥ 0.61, N = 15) and left area X (F ≤ 3.14, p ≥ 0.14, N = 15).

Lateralization of selectivity in MLd

In the analyses presented above, BOS selectivity was significant in the right MLd only and was absent on the left side (BOS vs CON on the left side, p = 0.51). In contrast, conspecific song selectivity was significant in the left MLd only (CON vs HET on the right side, p = 0.31). To further test for the potential lateralization of these differential activations, we compared them directly between the left and right MLd (as defined functionally by the previous comparisons) (Fig. 6). The differential activations elicited by BOS versus CON and by CON versus HET were both significantly lateralized (BOS minus CON, left vs right: t = −2.7, p = 0.02, N = 15; CON minus HET, left vs right: t = 2.4, p = 0.03, N = 15).

Figure 6.

Relative amplitudes (in percentage) of conspecific and own-song recognition in left (closed bars) and right (open bars) MLd. Positive values indicate that the region was more activated by CON than by HET (or by BOS than by CON). Negative values indicate that the region was more activated by HET than by CON (or by CON than by BOS). The error bars correspond to SEs across subjects. Circles indicate that the difference between left and right MLd clusters is significant (corrected p < 0.05).

Discussion

Consistent with the electrophysiological (neuronal) literature (Margoliash and Fortune, 1992; Janata and Margoliash, 1999), the present fMRI study revealed BOS selectivity in HVC and area X at the neuronal population (neural) level. Additionally, a similar selectivity was discovered in the ascending auditory pathway at the level of MLd. Because BOS and CON stimuli were physically identical in a subgroup of subjects, BOS selectivity could in no way be biased by acoustic peculiarities of the signals. The absence of significant differential neural activation in the other auditory and song control regions that were investigated cannot be interpreted as an absence of neuronal BOS selectivity. To be detectable by fMRI, neuronal BOS selectivity has to be expressed by a substantial number of neurons concentrated at one location. The BOLD fMRI signal is best correlated with local field potentials (Logothetis et al., 2001) and does not distinguish between excitatory and inhibitory neurons. In a recent electrophysiological study investigating neuronal BOS selectivity in the secondary auditory region CM, Bauer et al. (2008) found that some neurons were inhibitory and not BOS selective, whereas others were excitatory, and that some of these excitatory neurons were BOS selective. This neuronal heterogeneity could explain why we did not observe any significant selectivity in CM at the neural level with fMRI.

Auditory feedback in song learning and production

Juvenile songbirds learn their song by listening to an adult conspecific male, the tutor (sensory phase), and by progressively matching their own vocalizations to this memorized model (sensory-motor phase). The first phase depends on auditory perception of the tutor song, while the second depends on auditory perception of the bird's own song via auditory feedback. Both types of auditory experience are necessary for normal song development. Auditory feedback is also important for adult song maintenance in zebra finches: adult deafening (Nordeen and Nordeen, 1992) or artificially distorted auditory feedback imposed during singing (Leonardo and Konishi, 1999) induces alterations of song structure.

For more than 25 years now, playback experiments have revealed that some neurons in telencephalic nuclei of the song control system fire selectively in response to playbacks of the bird's own song. These BOS-selective responses are thought to support the auditory feedback and/or the integration of this feedback with the motor circuitry for vocal learning and maintenance (Prather and Mooney, 2004; Theunissen et al., 2004). Perception of BOS playback through loudspeakers differs to some extent from the natural auditory feedback that the bird experiences when it is spontaneously singing (e.g., pitch distortion due to bone conduction). However, numerous experiments have demonstrated the specificity of BOS-selective responses: these auditory responses have been shown to be sensitive to specific temporal and spectral features found in the BOS. It is therefore quite unlikely, if not impossible, that the differential response observed here in birds exposed to BOS compared with CON was induced by these limited differences between the sounds associated with auditory feedback and with BOS playback through loudspeakers.

Song learning and maintenance are thought to depend on matching motor efference copies with reafferent auditory feedback. Whether MLd activation as measured in the present study represents a signal from the auditory input, the motor efference copies, the reafferent auditory feedback, or an integration of these signals remains difficult to determine.

MLd belongs to the ascending auditory pathway and sends ascending auditory information to the nucleus ovoidalis, which is suspected to project to NIf (Wild, 2004), a BOS-selective song control nucleus (Janata and Margoliash, 1999) representing a major auditory input to HVC (Fortune and Margoliash, 1995). MLd activation could thus reflect only these ascending inputs. However, the area surrounding the premotor nucleus RA, called the RA cup, projects to MLd, and these projections are thought to convey auditory information (Mello et al., 1998). MLd activation might thus equally reflect activity in these descending projections.

Finally, MLd activation might also reflect a potential motor efference copy signal. Indeed, the relatively weak responses elicited by BOS in awake zebra finches were found, quite surprisingly, to be enhanced in anesthetized or sleeping birds (Dave et al., 1998; Cardin and Schmidt, 2004). Additionally, it has been shown that spontaneous ongoing premotor activity can still be recorded when the birds are sleeping, a kind of song replay (Dave and Margoliash, 2000). Since BOS selectivity in MLd was detected in the present study in anesthetized birds, this selectivity possibly reflects, at least in part, mechanisms occurring when the bird is sleeping, i.e., the premotor activity that was previously identified. Activation of auditory areas has indeed already been found to be triggered by somatosensory (human and nonhuman primates: Schroeder et al., 2001; Dhanjal et al., 2008) or premotor (zebra finches: Keller and Hahnloser, 2009) signals. However, to be responsible for BOS selectivity, the premotor activity would need to occur only during BOS playback periods. Although we cannot exclude this hypothesis, there are, to our knowledge, no experimental data supporting it.

Lateralization in song and speech control

In humans, speech learning and maintenance are thought to be supported by feedforward and feedback control. Under feedback control, speech production is monitored during execution and deviations from the expected signal are corrected according to auditory information. Under feedforward control, speech is produced based on previously learned commands, without reliance on incoming sensory information. Speech learning and maintenance are thought to result from the tuning of the feedforward motor commands over time by the auditory feedback (Guenther, 2006). Recent studies suggest that feedforward control is lateralized to the left hemisphere and that auditory feedback control is, at least partially, lateralized to the right (Toyomura et al., 2007; Tourville et al., 2008). Making the link between these potential hemispheric dominances and speech lateralization, Tourville et al. (2008) suggested that while the left lateralization of speech production deduced from lesion studies would reflect the left lateralization of the feedforward control system, the right lateralization of auditory feedback control would explain the importance of the right hemisphere observed in numerous aspects of speech production. This auditory feedback could participate in self-recognition processes (Fu et al., 2006) that seem to be lateralized to the right hemisphere in the auditory (Nakamura et al., 2001; Rosa et al., 2008) as well as in the visual modality (Keenan et al., 2001; Platek et al., 2006; Uddin et al., 2007). Song lateralization in birds varies according to the species (Schmidt et al., 2004). In zebra finches, lesion studies suggest a moderate right lateralization of song production (Williams et al., 1992; Floody and Arnold, 1997). The right lateralization of BOS selectivity observed in the present study could thus reflect this potential motor lateralization. However, the rapid effect of lesions on song production, compared with the slow effects induced by auditory feedback disturbance, indicates two distinct phenomena. The more pronounced effects of peripheral and central lesions of the right motor system could thus rather reflect the right lateralization of the feedforward control system, whereas the right lateralization of BOS selectivity could be due to a right lateralization of the auditory feedback control system, suggesting an anatomo-functional convergence between birds and humans. If this second hypothesis is correct, BOS selectivity should be found lateralized to the right side even in bird species where song production is lateralized to the left.

Relationship between BOS and CON recognition

In addition to its role in song learning and song maintenance, BOS selectivity has been suggested to underlie conspecific song recognition and discrimination via motor encoding (Williams and Nottebohm, 1985). A conspecific song could also be discriminated from a heterospecific song on the basis of its closer perceptual proximity to the BOS. Our results suggest, however, that distinct neural substrates support BOS and conspecific song recognition, with a possible facilitating influence of the former on the latter: BOS recognition was found to involve the right MLd and species recognition the left MLd. Whereas the two hemispheres are anatomically independent at the telencephalic level (birds lack a corpus callosum), midbrain and thalamic regions possess interhemispheric connections. Such connections could drive a modulatory input from the right to the left MLd, mediating a facilitating effect of BOS recognition on conspecific song recognition.

Role of subcortical structures in learning processes

Subcortical structures are usually considered primitive, responsible for the simple decoding of the physical parameters of stimuli but not for signal integration, and experience-dependent plasticity is usually considered to be limited to cortical structures. In songbirds, for instance, innate call production is supported by midbrain structures, whereas learned call and song production are thought to rely on forebrain structures (Margoliash, 1997). This probably explains why previous investigations of BOS selectivity in the ascending auditory system focused on the telencephalic auditory regions (Lewicki and Arthur, 1996; Janata and Margoliash, 1999; Amin et al., 2004; Shaevitz and Theunissen, 2007; Bauer et al., 2008). However, a growing body of experimental results challenges this view. In barn owls, auditory experience has been shown to modify the tuning properties of inferior colliculus neurons (Gold and Knudsen, 2000). In humans, plasticity has been observed in the auditory brainstem after linguistic training (Song et al., 2008). Although the capacity to process complex information about self and others was thought to be computed by the recently evolved frontoparietal regions (Gallup, 1997; Uddin et al., 2007), the midbrain has been shown to be involved in own-face recognition (Sugiura et al., 2005). Previous electrophysiological studies in zebra finches revealed that MLd neurons respond to a wide range of complex sounds (Woolley and Casseday, 2005) and are tuned to auditory features that enhance the acoustic differences between conspecific songs (Woolley et al., 2005), already suggesting that these neurons are more complex than expected. Our results indicate that a subcortical structure can process subtle information about the identity of an individual through experience-dependent mechanisms.

Footnotes

  • This work was supported by a Francqui grant and a Bijzonder Onderzoeks Fonds–Klein Project grant from the University of Antwerp to C.P., a grant from the National Institute of Neurological Disorders and Stroke (NS035467) to J.B., grants from the Research Foundation–Flanders (FWO, project number G.0420.02), Concerted Research Actions (Geconcerteerde Onderzoeksactie funding) from the University of Antwerp, European Commission–Sixth Framework Program (EC–FP6) project DiMI (LSHB-CT-2005-512146), and EC–FP6 project EMIL (LSHC-CT-2004-503569) to A.V.d.L. T.B. is a doctoral fellow of the FWO. We thank G. F. Ball, M. Gahr, O. Güntürkün, P. Joris, and V. Van Meir for their comments on an earlier version of this manuscript.

  • Correspondence should be addressed to Colline Poirier, Bio-Imaging Laboratory, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp, Belgium. colline.poirier{at}ua.ac.be

References

  1. Amin N, Grace JA, Theunissen FE (2004) Neural response to bird's own song and tutor song in the zebra finch field L and caudal mesopallium. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 190:469–489.
  2. Bauer EE, Coleman MJ, Roberts TF, Roy A, Prather JF, Mooney R (2008) A synaptic basis for auditory–vocal integration in the songbird. J Neurosci 28:1509–1522.
  3. Boumans T, Theunissen FE, Poirier C, Van Der Linden A (2007) Neural representation of spectral and temporal features of song in the auditory forebrain of zebra finches as revealed by functional MRI. Eur J Neurosci 26:2613–2626.
  4. Boumans T, Gobes SM, Poirier C, Theunissen FE, Vandersmissen L, Pintjens W, Verhoye M, Bolhuis JJ, Van der Linden A (2008a) A functional MRI of auditory responses in the zebra finch forebrain reveals a hierarchical organisation based on signal strength but not selectivity. PLoS ONE 3:e3184.
  5. Boumans T, Vignal C, Smolders A, Sijbers J, Verhoye M, Van Audekerke J, Mathevon N, Van der Linden A (2008b) Functional magnetic resonance imaging in zebra finch discerns the neural substrate involved in segregation of conspecific song from background noise. J Neurophysiol 99:931–938.
  6. Cardin JA, Schmidt MF (2004) Auditory responses in multiple sensorimotor song system nuclei are co-modulated by behavioral state. J Neurophysiol 91:2148–2163.
  7. Chew SJ, Mello C, Nottebohm F, Jarvis E, Vicario DS (1995) Decrements in auditory responses to a repeated conspecific song are long-lasting and require two periods of protein synthesis in the songbird forebrain. Proc Natl Acad Sci U S A 92:3406–3410.
  8. Cynx J (1993) Conspecific song perception in zebra finches (Taeniopygia guttata). J Comp Psychol 107:395–402.
  9. Cynx J, Nottebohm F (1992) Role of gender, season, and familiarity in discrimination of conspecific song by zebra finches (Taeniopygia guttata). Proc Natl Acad Sci U S A 89:1368–1371.
  10. Dave AS, Margoliash D (2000) Song replay during sleep and computational rules for sensorimotor vocal learning. Science 290:812–816.
  11. Dave AS, Yu AC, Margoliash D (1998) Behavioral state modulation of auditory activity in a vocal motor system. Science 282:2250–2254.
  12. Dhanjal NS, Handunnetthi L, Patel MC, Wise RJ (2008) Perceptual systems controlling speech production. J Neurosci 28:9969–9975.
  13. Doupe AJ, Konishi M (1991) Song-selective auditory circuits in the vocal control system of the zebra finch. Proc Natl Acad Sci U S A 88:11339–11343.
  14. Duong TQ, Kim DS, Uğurbil K, Kim SG (2001) Localized cerebral blood flow response at submillimeter columnar resolution. Proc Natl Acad Sci U S A 98:10904–10909.
  15. Floody OR, Arnold AP (1997) Song lateralization in the zebra finch. Horm Behav 31:25–34.
  16. Fortune ES, Margoliash D (1995) Parallel pathways and convergence onto HVc and adjacent neostriatum of adult zebra finches (Taeniopygia guttata). J Comp Neurol 360:413–441.
  17. Fu CH, Vythelingum GN, Brammer MJ, Williams SC, Amaro E Jr, Andrew CM, Yágüez L, van Haren NE, Matsumoto K, McGuire PK (2006) An fMRI study of verbal self-monitoring: neural correlates of auditory verbal feedback. Cereb Cortex 16:969–977.
  18. Gallup GG Jr (1997) On the rise and fall of self-conception in primates. Ann N Y Acad Sci 818:72–82.
  19. Gold JI, Knudsen EI (2000) A site of auditory experience-dependent plasticity in the neural representation of auditory space in the barn owl's inferior colliculus. J Neurosci 20:3469–3486.
  20. Guenther FH (2006) Cortical interactions underlying the production of speech sounds. J Commun Disord 39:350–365.
  21. Janata P, Margoliash D (1999) Gradual emergence of song selectivity in sensorimotor structures of the male zebra finch song system. J Neurosci 19:5108–5118.
  22. Keenan JP, Nelson A, O'Connor M, Pascual-Leone A (2001) Self-recognition and the right hemisphere. Nature 409:305.
  23. Keller GB, Hahnloser RH (2009) Neural processing of auditory feedback during vocal practice in a songbird. Nature 457:187–190.
  24. Kim DS, Duong TQ, Kim SG (2000) High-resolution mapping of iso-orientation columns by fMRI. Nat Neurosci 3:164–169.
  25. Leonardo A, Konishi M (1999) Decrystallization of adult birdsong by perturbation of auditory feedback. Nature 399:466–470.
  26. Lewicki MS, Arthur BJ (1996) Hierarchical organization of auditory temporal context sensitivity. J Neurosci 16:6987–6998.
  27. Logothetis NK, Guggenberger H, Peled S, Pauls J (1999) Functional imaging of the monkey brain. Nat Neurosci 2:555–562.
  28. Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A (2001) Neurophysiological investigation of the basis of the fMRI signal. Nature 412:150–157.
  29. Margoliash D (1997) Functional organization of forebrain pathways for song production and perception. J Neurobiol 33:671–693.
  30. Margoliash D, Fortune ES (1992) Temporal and harmonic combination-sensitive neurons in the zebra finch's HVc. J Neurosci 12:4309–4326.
  31. Mello CV, Vates GE, Okuhata S, Nottebohm F (1998) Descending auditory pathways in the adult male zebra finch (Taeniopygia guttata). J Comp Neurol 395:137–160.
  32. Nakamura K, Kawashima R, Sugiura M, Kato T, Nakamura A, Hatano K, Nagumo S, Kubota K, Fukuda H, Ito K, Kojima S (2001) Neural substrates for recognition of familiar voices: a PET study. Neuropsychologia 39:1047–1054.
  33. Nordeen KW, Nordeen EJ (1992) Auditory feedback is necessary for the maintenance of stereotyped song in adult zebra finches. Behav Neural Biol 57:58–66.
  34. Platek SM, Loughead JW, Gur RC, Busch S, Ruparel K, Phend N, Panyavin IS, Langleben DD (2006) Neural substrates for functionally discriminating self-face from personally familiar faces. Hum Brain Mapp 27:91–98.
  35. Poirier C, Vellema M, Verhoye M, Van Meir V, Wild JM, Balthazart J, Van Der Linden A (2008) A three-dimensional MRI atlas of the zebra finch brain in stereotaxic coordinates. Neuroimage 41:1–6.
  36. Prather JF, Mooney R (2004) Neural correlates of learned song in the avian forebrain: simultaneous representation of self and others. Curr Opin Neurobiol 14:496–502.
  37. Reiner A, Perkel DJ, Bruce LL, Butler AB, Csillag A, Kuenzel W, Medina L, Paxinos G, Shimizu T, Striedter G, Wild M, Ball GF, Durand S, Güntürkün O, Lee DW, Mello CV, Powers A, White SA, Hough G, Kubikova L (2004) Revised nomenclature for avian telencephalon and some related brainstem nuclei. J Comp Neurol 473:377–414.
  38. Rosa C, Lassonde M, Pinard C, Keenan JP, Belin P (2008) Investigations of hemispheric specialization of self-voice recognition. Brain Cogn 68:204–214.
  39. Schmidt MF, Ashmore RC, Vu ET (2004) Bilateral control and interhemispheric coordination in the avian song motor system. Ann N Y Acad Sci 1016:171–186.
  40. Schroeder CE, Lindsley RW, Specht C, Marcovici A, Smiley JF, Javitt DC (2001) Somatosensory input to auditory association cortex in the macaque monkey. J Neurophysiol 85:1322–1327.
  41. Shaevitz SS, Theunissen FE (2007) Functional connectivity between auditory areas field L and CLM and song system nucleus HVC in anesthetized zebra finches. J Neurophysiol 98:2747–2764.
  42. Song JH, Skoe E, Wong PC, Kraus N (2008) Plasticity in the adult human auditory brainstem following short-term linguistic training. J Cogn Neurosci 20:1892–1902.
  43. Sugiura M, Watanabe J, Maeda Y, Matsue Y, Fukuda H, Kawashima R (2005) Cortical mechanisms of visual self-recognition. Neuroimage 24:143–149.
  44. Theunissen FE, Amin N, Shaevitz SS, Woolley SM, Fremouw T, Hauber ME (2004) Song selectivity in the song system and in the auditory forebrain. Ann N Y Acad Sci 1016:222–245.
  45. Tourville JA, Reilly KJ, Guenther FH (2008) Neural mechanisms underlying auditory feedback control of speech. Neuroimage 39:1429–1443.
  46. Toyomura A, Koyama S, Miyamaoto T, Terao A, Omori T, Murohashi H, Kuriki S (2007) Neural correlates of auditory feedback control in human. Neuroscience 146:499–503.
  47. Uddin LQ, Iacoboni M, Lange C, Keenan JP (2007) The self and social cognition: the role of cortical midline structures and mirror neurons. Trends Cogn Sci 11:153–157.
  48. Van Meir V, Boumans T, De Groof G, Van Audekerke J, Smolders A, Scheunders P, Sijbers J, Verhoye M, Balthazart J, Van der Linden A (2005) Spatiotemporal properties of the BOLD response in the songbirds' auditory circuit during a variety of listening tasks. Neuroimage 25:1242–1255.
  49. Voss HU, Tabelow K, Polzehl J, Tchernichovski O, Maul KK, Salgado-Commissariat D, Ballon D, Helekar SA (2007) Functional MRI of the zebra finch brain during song stimulation suggests a lateralized response topography. Proc Natl Acad Sci U S A 104:10667–10672.
  50. Wild JM (2004) Functional neuroanatomy of the sensorimotor control of singing. Ann N Y Acad Sci 1016:438–462.
  51. Williams H, Nottebohm F (1985) Auditory responses in avian vocal motor neurons: a motor theory for song perception in birds. Science 229:279–282.
  52. Williams H, Crane LA, Hale TK, Esposito MA, Nottebohm F (1992) Right-side dominance for song control in the zebra finch. J Neurobiol 23:1006–1020.
  53. Woolley SM, Casseday JH (2005) Processing of modulated sounds in the zebra finch auditory midbrain: responses to noise, frequency sweeps, and sinusoidal amplitude modulations. J Neurophysiol 94:1143–1157.
  54. Woolley SM, Fremouw TE, Hsu A, Theunissen FE (2005) Tuning for spectro-temporal modulations as a mechanism for auditory discrimination of natural sounds. Nat Neurosci 8:1371–1379.