Articles, Behavioral/Cognitive

Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery

Takako Fujioka, Bernhard Ross and Laurel J. Trainor
Journal of Neuroscience 11 November 2015, 35 (45) 15187-15198; DOI: https://doi.org/10.1523/JNEUROSCI.2397-15.2015
Takako Fujioka
1Centre for Computer Research in Music and Acoustics and 2Neuroscience Institute, Stanford University, Stanford, California 94305

Bernhard Ross
3Rotman Research Institute, Toronto, Ontario M6A 2E1, Canada, and 4Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada

Laurel J. Trainor
3Rotman Research Institute, Toronto, Ontario M6A 2E1, Canada, 5McMaster Institute for Music and the Mind and 6Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario L8S 4L8, Canada

Abstract

Dancing to music involves synchronized movements, which can be at the basic beat level or at higher hierarchical metrical levels, as in a march (groups of two basic beats, one–two–one–two …) or a waltz (groups of three basic beats, one–two–three–one–two–three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase-locked) β-band (13–30 Hz) oscillations in bilateral auditory and sensorimotor cortices. Here, we further examined whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats is reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of a beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat types in both the perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for the upbeat (the beat preceding the count "one") in the march and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory–motor coordination.

SIGNIFICANCE STATEMENT With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that β-band event-related desynchronization in the auditory cortex differentiates between beat positions, specifically between downbeats and the following beat. This is the first demonstration of β-band oscillations related to hierarchical and internalized timing information. Moreover, the meter representation in the β oscillations was widespread across the brain, including sensorimotor and premotor cortices, parietal lobe, and cerebellum. The results extend current understanding of the role of β oscillations in neural processing of predictive timing.

  • event-related desynchronization
  • ERD
  • magnetoencephalography
  • predictive coding
  • timing processing

Introduction

When people dance to music with isochronous beats, they synchronize their movements with each beat while emphasizing or accenting the downbeats (beat one) compared to upbeats of perceived musical meters, such as a march (perceived as one–two–one …) or a waltz (perceived as one–two–three–one …). Spontaneous dance movements to music involve different repetitive limb and trunk motions corresponding to different levels of the metrical hierarchy (Toiviainen et al., 2009). Thus, adults can extract hierarchical timing information and plan entrained movements in a predictive manner without specific training. Predictive timing behavior is illustrated in finger tapping to a metronome beat, wherein the beat-to-tap interval is notably shorter than the fastest reaction time (Repp, 2005; Repp and Su, 2013). Predictive timing also appears to be an integral part of dynamic attention allocation (Jones and Boltz, 1989), such that auditory discrimination is facilitated at predictable time points (Jones et al., 2002; Grube and Griffiths, 2009; Repp, 2010), and even enhanced during rhythmic finger tapping in synchrony with these time points (Morillon et al., 2014). Altogether, these findings suggest that privileged connections between auditory and motor systems underlie predictive timing processing.

As for neural processing of beat and meter, evoked (phase-locked) activity elicited by acoustically identical beats has been shown to reflect the subjective sense of meter in magnetoencephalography (MEG; Fujioka et al., 2010) and electroencephalography (EEG; Potter et al., 2009; Nozaradan et al., 2011; Schaefer et al., 2011). However, only our previous study examined the brain areas sensitive to differences in metrical structure (Fujioka et al., 2010). The results showed involvement of sensorimotor cortices and basal ganglia and hippocampal areas, in addition to auditory areas. We also demonstrated that the magnitude of induced (non-phase-locked) β-band (13–30 Hz) oscillations during metronome listening is modulated in a synchronized manner in bilateral auditory cortices (Fujioka et al., 2009) and motor-related areas, including sensorimotor cortices, supplementary motor areas, basal ganglia, and cerebellum (Fujioka et al., 2012). Because the observed induced β activity does not contain phase-locked evoked responses, and our listening task did not require any movements (unlike tapping studies; Pollok et al., 2005a,b), these findings strongly suggest that the auditory system is part of a functional sensorimotor network characterized by β-band oscillations (Neuper et al., 2006). Recently, β activity has been associated with anticipation and predictive timing (Arnal et al., 2011; van Ede et al., 2011) as part of a hierarchical network of oscillators involved in predictive sensory processing, including contributions from δ to θ (1–3 and 4–7 Hz, respectively) and α to β (8–12 and 13–30 Hz, respectively) frequencies (Saleh et al., 2010; Arnal et al., 2014).

The present study examined whether induced β oscillations reflect musical meter and, if so, which brain areas are involved. We hypothesized that a shared representation for auditorily perceived and subjectively imagined meter exists, although the latter would be more effortful, involving extended brain areas. Participants listened to alternations of 12-beat sequences of metrically accented beats (every second or third beat was louder) and unaccented beats (all beats at the same loudness), probing metrical perception and imagery, respectively. This design allowed us to examine induced β modulation, as shown previously for listening to beats (Fujioka et al., 2009, 2012), and extend it to metrical perception, while avoiding effects of movements.

Materials and Methods

Participants.

Fourteen healthy young adults (eight females; age, 19–32 years; mean, 22.8 years), all actively performing musicians trained at or above the university level (3–29 years of musical training; mean, 12.7 years; 3–28 h of weekly practice; mean, 18.5 h), participated in the study. All were right-handed except one ambidextrous participant. None of the participants reported any history of neurological, otological, or psychological disorders, and their hearing was tested with clinical audiometry between 250 and 4000 Hz. All gave signed informed consent before participation after receiving a detailed explanation of the nature of the study. The procedure was approved by the Ethics Board at the Rotman Research Institute, Baycrest Centre.

Stimuli and task.

Auditory stimuli were 250 Hz pure tones with 5 ms rise and fall times and a 15 ms steady-state duration, created in MATLAB (MathWorks) at a sampling rate of 44,100 Hz. The tones were repeated with a regular onset-to-onset interval of 390 ms and combined into a looping 24-tone sequence. During the first half of the sequence, every second or third tone was acoustically accented (+13 dB) to create either a march or a waltz metric structure, whereas in the second half, unaccented tones of equal intensity (40 dB above individual sensation threshold, measured immediately before each MEG recording) were repeated 12 times. The resulting 24-tone sequence was then repeated continuously 28 times (about 4.5 min) in the march condition and 43 times (about 7 min) in the waltz condition in separate blocks. The waltz block was made ∼50% longer than the march block to accommodate the same number of each beat type, so as to equate signal-to-noise ratios for each beat type in the MEG recordings. Stimulation was controlled by Presentation software (Neurobehavioral Systems). Participants were instructed to perceive the meter in the accented segments and to imagine the same meter in the unaccented segments. Occasionally, a 500 Hz pure-tone target with the same duration as the other tones was inserted in one of the unaccented 12-beat segments at a nominal downbeat or upbeat position. The target position and beat type (down vs up) were randomized by the stimulation program such that targets appeared in 10% of sequences (i.e., about three to five times per block) with equal probability in downbeat and upbeat positions; the actual number of targets varied across blocks and participants. For simplicity, no targets occurred at nominal middle-beat positions in the waltz condition. When hearing the high-pitched tone, participants indicated whether it occurred at a downbeat or an upbeat position by pressing one of two keypad buttons, one assigned to downbeats and the other to upbeats, with the left or right index finger. This target detection task was designed primarily to keep participants vigilant and attending to the respective metric structure rather than to assess behavioral performance, given the participants' level of musicianship. Keeping the number of targets extremely small also minimized contamination from movements. Other than the occasional button presses, participants were instructed to stay still, avoid any movements, and keep their eyes open and fixated on a visual target placed in front of them. The march and waltz blocks were alternated, and each condition was repeated three times. The order of the blocks, as well as the hand assignment for downbeat and upbeat targets, was randomized across participants. Before MEG testing, participants received detailed instructions about the stimuli and task and practiced until they felt confident. The sound was delivered binaurally through ER3A transducers (Etymotic Research), connected to the participant's ears via 3.4-m-long plastic tubes and foam earplugs.
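For concreteness, the MATLAB sketch below generates one 24-beat march sequence from the parameters above (a waltz sequence would accent every third tone instead). The raised-cosine ramp shape, the playback call, and the variable names are illustrative assumptions rather than details taken from the study.

```matlab
% Sketch: one 24-beat march sequence (accents on every second of the first 12 beats).
fs      = 44100;                 % sampling rate (Hz)
f0      = 250;                   % tone frequency (Hz)
soa     = 0.390;                 % onset-to-onset interval (s)
ramp    = 0.005;                 % rise/fall time (s)
plateau = 0.015;                 % steady-state duration (s)

t    = (0:round((2*ramp + plateau)*fs) - 1)/fs;
nr   = round(ramp*fs);
env  = [0.5*(1 - cos(pi*(0:nr-1)/nr)), ...       % raised-cosine ramps (assumption)
        ones(1, numel(t) - 2*nr), ...
        0.5*(1 + cos(pi*(0:nr-1)/nr))];
tone = sin(2*pi*f0*t) .* env;

gain         = ones(1, 24);
gain(1:2:12) = 10^(13/20);       % +13 dB accents; use gain(1:3:12) for the waltz
seq = zeros(1, round(24*soa*fs));
for k = 1:24
    i0 = round((k - 1)*soa*fs);
    seq(i0 + 1 : i0 + numel(tone)) = gain(k)*tone;
end
soundsc(seq, fs);                % audition one loop of the sequence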

MEG recording.

MEG was performed in a quiet, magnetically shielded room with a 151-channel whole-head axial gradiometer MEG system (VSM Medtech) at the Rotman Research Institute. The participants were seated comfortably in an upright position with the head resting inside the helmet-shaped MEG sensor. The magnetic field data were low-pass filtered at 200 Hz, sampled at 625 Hz, and stored continuously. The head location relative to the MEG sensors was registered at the beginning and end of each recording block using small electromagnetic coils attached to three fiducial points: the nasion and the left and right preauricular points. The mean of the repeated fiducial recordings defined the head-based coordinate system, with the origin at the midpoint between the bilateral preauricular points. The posteroanterior x-axis was oriented from the origin to the nasion, the mediolateral y-axis (positive toward the left ear) was perpendicular to x in the plane of the three fiducials, and the inferosuperior z-axis was perpendicular to the x–y plane (positive toward the vertex). A block was repeated when the fiducial locations deviated in any direction by more than ±5 mm from the mean. A surface electromyogram (EMG) was recorded with brass electrodes placed below the first dorsal interosseous muscle and the first knuckle of the index finger of the left and right hands, using two channels of a bipolar EMG amplifier system. The EMG signals, as well as the trigger signals from the stimulus computer, were recorded simultaneously with the magnetoencephalogram.
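As an illustration of the head-based coordinate convention just described, the following MATLAB sketch constructs the axes from three fiducial positions; nas, lpa, and rpa are hypothetical 1 × 3 inputs (in mm, device coordinates), not variables from the authors' pipeline.

```matlab
% Sketch: head-based coordinate frame from the three fiducial points.
origin = (lpa + rpa)/2;                 % midpoint between preauricular points
x = nas - origin;   x = x/norm(x);      % posteroanterior axis, toward the nasion
z = cross(x, lpa - rpa);                % normal to the fiducial plane
z = z/norm(z);                          % inferosuperior axis, toward the vertex
y = cross(z, x);                        % mediolateral axis, toward the left ear
R = [x; y; z];                          % rows map device coords into head coords
toHead = @(p) (R*(p(:) - origin(:)))';  % transform any point, e.g., toHead(coilPos)
```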

Data analysis.

Artifacts in the MEG recording were corrected using the following procedure. First, the time points of eye-blink and heartbeat artifacts were identified using independent component analysis (Ille et al., 2002). The first principal components of the averaged artifacts were then used as spatiotemporal templates to eliminate the artifacts from the continuous data (Kobayashi and Kuriki, 1999). Thereafter, the continuous data were parsed into epochs corresponding to experimental trials, each containing 24 beat intervals of 390 ms (9.36 s) plus preceding and succeeding intervals of 1.0 s, resulting in a total epoch length of 11.36 s.
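A minimal sketch of the epoching step, assuming data holds the artifact-corrected continuous recording (samples × channels) and trig holds the sample index of the first beat of each trial; both names are hypothetical.

```matlab
% Sketch: parse continuous data into 11.36 s epochs (1 s pad + 24 x 390 ms + 1 s pad).
fs  = 625;  soa = 0.390;  pad = 1.0;
len = round((24*soa + 2*pad)*fs);              % 7100 samples = 11.36 s
epochs = zeros(numel(trig), len, size(data, 2));
for k = 1:numel(trig)
    i0 = trig(k) - round(pad*fs);              % start 1 s before the first beat
    epochs(k, :, :) = data(i0 + 1 : i0 + len, :);
end
```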

We used two types of source analysis. First, to examine the dynamics of induced β-band oscillations in bilateral auditory cortices, we obtained a dipole source model and examined the time series of auditory evoked responses and its time-frequency decomposition representing event-related changes in oscillatory activity. Second, to examine the involvement of different brain areas in the β-band activities, we applied a model-free source analysis using a spatial filter based on an MEG beamformer to investigate β activity across the whole brain.

β-Band oscillations in bilateral auditory cortical sources.

To examine source activities in the bilateral auditory cortices, we used a source localization approach with an equivalent current dipole model. Short segments of MEG data within ±200 ms around the time points of the accented beats were averaged to obtain the auditory evoked response. At the N1 peak of the auditory evoked response, single equivalent dipoles were modeled in the left and right temporal lobes. In right-handed subjects, the auditory source in the right hemisphere is consistently found to be several millimeters more anterior than that in the left hemisphere (Pantev et al., 1998); we therefore compared locations between the two hemispheres in our data to verify the quality of the source localization. The accented beats were used for this modeling because their N1 peak was particularly enhanced, offering a superior signal-to-noise ratio for the dipole modeling. Dipole locations and orientations from all blocks were averaged to obtain individual dipole models.

Based on the individual dipole models, the source waveforms for all single trials were calculated (Tesche et al., 1995; Teale et al., 2013). The resulting dipole source waveforms, sometimes termed “virtual channel” or “virtual electrode” waveforms, served as estimates of the neuroelectric activity in the auditory cortices. The polarities of the dipoles were adjusted to follow the convention from EEG recording such that the N1 response showed negative polarity at frontocentral electrodes.

The single-trial source waveforms were submitted to time-frequency analysis. To obtain induced oscillatory activities, the time-domain-averaged evoked response was regressed out from all waveforms. The time-frequency decomposition used a modified Morlet wavelet (Samar et al., 1999) at 64 logarithmically spaced frequencies between 2 and 50 Hz. The half-maximum width of the wavelet was adjusted across the frequency range to contain two cycles at 2 Hz and six cycles at 50 Hz. This design accounted for the expectation of a larger number of cycles in a burst of oscillations at higher than at lower frequencies. The signal power was calculated for each time-frequency coefficient. For each frequency bin, the signal power was normalized to the mean across the 9.36 s epoch (i.e., the 24-beat cycle) and expressed as the percentage signal power change. This normalization was conducted separately for each stimulus interval and meter context. The percentage signal power changes were averaged across repeated trials and across participants. The resulting time-frequency map of the whole epoch was segmented according to the 390 ms beat interval, the two-beat interval in the march condition, and the three-beat interval in the waltz condition, separately for the intervals of meter perception (containing accented beats) and imagery (no physical accents present). Because the 13–30 Hz frequency range subsumes multiple functionally and individually different narrowband oscillations, signal analyses were performed on subsets of the β band (Kropotov, 2009). The power modulation of the auditory cortex β oscillations was first inspected using the aforementioned time-frequency decompositions for the average of responses across all beat types, and then selectively for each beat type, resulting in the TFRs shown in Figures 2–4. Based on the previous observation of the strongest event-related desynchronization (ERD) at 20 Hz, we examined the power modulation by averaging the wavelet coefficients across bins with center frequencies between 18 and 22 Hz. The combined signal had a bandpass characteristic with points of 50% amplitude reduction at 15.1 and 25.5 Hz, as determined by the properties of the short Morlet wavelet kernels. For the resulting β-band waveforms, the 95% confidence interval of the grand average, as a representation of subject variability, was estimated with bootstrap resampling (N = 1000). The magnitude of the power decrease at ∼200 ms after tone onsets relative to baseline was computed as the mean in a 120 ms window around the grand-average peak latency for each beat type. This magnitude of β-ERD was further examined by a repeated measures ANOVA with three within-subject factors (hemisphere: left vs right; beat type: downbeat vs upbeat, plus middle beat in the case of the waltz; stimulus interval: perception vs imagery), separately for the march and waltz meter conditions.
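A simplified MATLAB sketch of this decomposition is shown below. The paper specifies the wavelet width by its half-maximum (two cycles at 2 Hz, six at 50 Hz); the linear interpolation of the cycle count, the Gaussian parametrization, and the subtraction of the trial average in place of the regression step are approximations for illustration.

```matlab
% Sketch: induced-power TFR from single-trial source waveforms x (trials x samples).
fs    = 625;
freqs = logspace(log10(2), log10(50), 64);  % 64 log-spaced center frequencies
ncyc  = linspace(2, 6, 64);                 % ~2 cycles at 2 Hz up to 6 at 50 Hz
x     = x - mean(x, 1);                     % approximate removal of the evoked part
P     = zeros(numel(freqs), size(x, 2));
for fi = 1:numel(freqs)
    sd = ncyc(fi)/(2*pi*freqs(fi));         % temporal SD of the Gaussian envelope (s)
    tw = -4*sd : 1/fs : 4*sd;
    w  = exp(2i*pi*freqs(fi)*tw) .* exp(-tw.^2/(2*sd^2));
    w  = w/sum(abs(w));                     % amplitude normalization
    for tr = 1:size(x, 1)
        P(fi, :) = P(fi, :) + abs(conv(x(tr, :), w, 'same')).^2 / size(x, 1);
    end
end
P = 100*(P./mean(P, 2) - 1);                % percent change vs mean over the epoch
betaERD = mean(P(freqs >= 18 & freqs <= 22, :), 1);  % 18-22 Hz average
```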

β-Band oscillations in beamformer sources.

We identified areas in the whole brain that showed a contrast in responses to different beat types and thus likely contributed to the meter representation. This analysis was conducted through three major steps: first, an MEG source model was constructed as a spatial filter across the brain volume; second, β-band power of the spatially filtered source activity at each volume element was calculated; and finally, the brain areas were extracted at which β activities matched the prescribed contrast between beat types using a multivariate analysis.

First, to capture source activities across the brain volume, we constructed a spatial filter using a beamformer approach called synthetic aperture magnetometry (SAM) and applied it to the magnetic field data to calculate the time series of local source activity at 8 × 8 × 8 mm volume elements covering the whole brain. The SAM approach uses a linearly constrained minimum variance beamformer algorithm (Van Veen et al., 1997; Robinson and Vrba, 1999), normalizes source power across the whole cortical volume (Robinson, 2004), and is capable of revealing deep brain sources (Vrba and Robinson, 2001; Vrba, 2002). SAM source analysis has been applied successfully to identify activity in auditory (Ross et al., 2009) and sensorimotor cortices (Jurkiewicz et al., 2006) and in deep sources such as the hippocampus (Riggs et al., 2009), fusiform gyrus, and amygdala (Cornwell et al., 2008). The SAM spatial filter was computed from 15–25 Hz bandpass-filtered MEG data, following our previous SAM analysis of beat-related β oscillations (Fujioka et al., 2012). Based on the covariance matrix in this frequency range, this step yields a SAM filter that suppresses activity correlated across the brain volume (i.e., spurious signals appearing at different areas but likely originating from a shared source). For this computation, we used a template brain magnetic resonance image (MRI) in standard Talairach coordinates (positive axes toward the anterior, right, and superior directions) with the Analysis of Functional NeuroImages (AFNI) software package (Cox, 1996). MEG source analysis based on individual coregistration with a spherical head model, combined with group analysis based on a template brain, is sufficiently accurate (Steinstraeter et al., 2009), with spatial uncertainty equivalent to that of group analyses based on Talairach normalization of individual MRIs (Hasnain et al., 1998). This approach has therefore been used when individual MRIs are not available (Jensen et al., 2005; Ross et al., 2009; Fujioka et al., 2010).

As a next step, time series of β-power change were calculated at each volume element using the SAM virtual-channel data, after applying a bandpass filter to the artifact-corrected magnetic field data. The bandpass filter was constructed using a MATLAB filter design routine (fir1) to obtain frequency characteristics similar to those of the wavelet analysis of auditory β-band activity, with points of 50% amplitude reduction at 15.0 and 25.0 Hz. Epochs of magnetic field data were first transformed to SAM virtual-channel data for each single trial and normalized to the SD of the whole epoch segment. Thereafter, we averaged the one-beat onset-to-onset interval (0–390 ms), plus a short segment before and after (each about 48 ms), for each combination of beat type, stimulus interval, and meter condition. The baseline was adjusted using a time window in the latency range between −48 and 0 ms.
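A sketch of this bandpass step using the fir1 routine named in the text; the filter order, the zero-phase application via filtfilt, and the Hilbert envelope as the power measure are assumptions.

```matlab
% Sketch: 15-25 Hz bandpass and beta-power envelope for virtual-channel data x.
fs = 625;
b  = fir1(500, [15 25]/(fs/2));   % two-element band gives a bandpass FIR by default
xf = filtfilt(b, 1, x);           % zero-phase filtering avoids group delay (assumed)
pw = abs(hilbert(xf)).^2;         % one way to express beta power over time
```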

Finally, we compared the four-dimensional source data (3D maps × time) across beat types using partial least squares (PLS) analysis (McIntosh et al., 1996; Lobaugh et al., 2001; McIntosh and Lobaugh, 2004). This multivariate analysis, using singular value decomposition, is an extension of principal component analysis that identifies a set of orthogonal factors [latent variables (LVs)] to model the covariance between two matrices, such as spatiotemporal brain activities and contrasts between conditions. A latent variable consists of three components: (1) a singular value representing the strength of the identified differences; (2) a design LV, which characterizes a contrast pattern across conditions and indicates which conditions have different data-contrast correlations; and (3) a brain LV, which characterizes the time points and source locations that represent the spatiotemporal pattern most related to the contrast described by the design LV. Here we used a nonrotated version of PLS analysis (McIntosh and Lobaugh, 2004), which allows a hypothesis-driven comparison of the dependent variables using a set of predefined design contrasts across conditions. (For example, in the march case, we assigned −1 to the downbeat and +1 to the upbeat; in the waltz case, we added another design LV, assigning −1 to the middle beat and +1 to the upbeat.) Our main goal was to identify differences between beat types in acoustically and subjectively maintained meter processing. Accordingly, we conducted four separate nonrotated PLS analyses (march perception, march imagery, waltz perception, waltz imagery). In the march perception and march imagery conditions, a design LV contrasting downbeat and upbeat data was applied. For the waltz perception and waltz imagery conditions, downbeat, middle-beat, and upbeat data were compared as a combination of two pairwise comparisons (LV1, down vs up; LV2, middle vs up). The significance of the obtained LVs was validated through two types of resampling statistics. The first step, using random permutation, examined whether each latent variable represented a significant contrast between the conditions. The PLS analysis was repeatedly applied to the data set with conditions permuted within subjects, to estimate the probability that a permuted singular value exceeded the originally observed singular value. We used 200 permutations, and the significance level was set at 0.05. For each significant LV, the second step examined where and at which time points the corresponding brain LV (the obtained brain activity pattern) was significantly consistent across participants. At each volume element and each time point, the SD of the brain LV value was estimated with bootstrap resampling (N = 200), resampling participants with replacement. The results were expressed as the ratio of the brain LV value to its SD. Note that this ratio reflects the signal strength relative to the interindividual variability, corresponding to a z-score. Using this bootstrap ratio as a threshold, the locations at which the mean bootstrap ratio within the beat interval was larger than 2.0 (corresponding to the 95% confidence interval) were visualized using AFNI, as illustrated in Figures 5 and 6. Finally, the same data were further analyzed to extract local maxima and minima to determine the brain areas contributing to the obtained contrast, as indicated in Tables 1 and 2.
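The MATLAB sketch below outlines the nonrotated-contrast and bootstrap-ratio logic for a single design LV (march: downbeat = −1, upbeat = +1). Here Y is a hypothetical subjects × conditions × features array of β-power values; the actual PLS implementation involves additional scaling steps and the permutation test described above.

```matlab
% Sketch: nonrotated task PLS for one contrast, thresholded by bootstrap ratio.
c   = [-1; +1];                             % design LV: downbeat vs upbeat
X   = squeeze(mean(Y, 1));                  % conditions x features group mean
blv = X' * c;                               % brain LV: contrast expression per feature
nboot = 200;
B = zeros(numel(blv), nboot);
for k = 1:nboot
    idx = randi(size(Y, 1), [size(Y, 1) 1]);      % resample subjects with replacement
    B(:, k) = squeeze(mean(Y(idx, :, :), 1))' * c;
end
bsr = blv ./ std(B, 0, 2);                  % bootstrap ratio, comparable to a z-score
sig = abs(bsr) > 2.0;                       % ~95% confidence threshold from the paper
```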

Table 1.

Stereotaxic Talairach coordinates of brain area locations with a statistically significant effect of the beat-type contrast in β-ERD for the march condition

Table 2.

Stereotaxic Talairach coordinates of brain area locations with a statistically significant effect of the beat-type contrast in β-ERD for the waltz condition

Results

Behavioral performance in target detection

During the MEG recording, participants were instructed to pay attention to the metrical structure and to indicate with a button press whether an occasional high-pitched target tone in the imagery sections occurred at a downbeat or upbeat position. This task was intended more to keep participants alert than to compare behavioral performance across conditions. The number of targets was, on average, 21 per participant across the six blocks. All participants successfully maintained vigilance, as evident in the very small number of missed targets: seven of the 14 participants missed no targets, and the remaining seven missed no more than three. Participants were also quite successful in identifying the beat type, as indicated by the correct identification rates: 74.4% (SEM, 8.0) for the march downbeat, 75.3% (SEM, 8.1) for the march upbeat, 81.0% (SEM, 7.5) for the waltz downbeat, and 85.2% (SEM, 5.9) for the waltz upbeat. There were no significant differences in performance across beat types, as assessed by ANOVA and t tests.

Auditory source localization

Localization of equivalent current dipole sources for the N1 peak of the evoked response to the accented beats was successful in all participants in all six blocks. Mean dipole locations in the head-based coordinate system were x = −4.4 mm, y = −47.8 mm, z = 47.6 mm in the right hemisphere and x = −8.0 mm, y = 46.3 mm, z = 47.7 mm in the left hemisphere, corresponding to Talairach coordinates of the MNI–Colin27 template brain of x = 47 (right), y = −18 (posterior), and z = 9 (superior) in the right hemisphere and x = −42, y = −19, and z = 7 in the left hemisphere. The right hemispheric dipole location was significantly more anterior than the left across all acquired dipoles (t(83) = 5.7, p < 0.001), demonstrating the integrity of the obtained source localizations.

Evoked responses in the auditory cortex

Waveforms of auditory evoked responses were calculated for each experimental trial as the dipole moment of the estimated cortical sources. Grand-averaged waveforms across all participants for the left and right auditory cortices are shown in Figure 1A for the march condition and in Figure 1B for the waltz condition. The top traces show the time course of the stimuli, which were presented with a constant stimulus onset asynchrony of 390 ms. In the first 12-beat segment, the vertically longer bars indicate the accented beats, which were 13 dB higher in intensity (six in the march condition, four in the waltz condition), whereas the second 12-beat segment of the imagery condition contained only the unaccented, softer beats. Although each beat stimulus elicited a series of positive and negative deflections, the morphology of the responses changed systematically over the time course of the stimulus sequence. Most prominent were the early P1 waves, with latencies of ∼50–80 ms, and the N1 waves, with latencies of ∼100–130 ms. However, pronounced N1 waves were expressed only in response to the louder, accented stimuli in the perception interval; for the softer stimuli, there was only a subtle dip between the P1 and the following P2 peak. The first accented beat after the 12 equally soft stimuli of the imagery interval was the most perceptually salient stimulus. Accordingly, this first accented beat elicited the most pronounced N1 response in both hemispheres in both the march and waltz conditions, with source strengths of 17.4 and 17.8 nAm in the left hemisphere and 14.7 and 16.2 nAm in the right hemisphere for the march and waltz conditions, respectively. The N1 response to the second accented sound was already strongly attenuated compared to the first, as shown in Figure 1, A and B, and by the end of the physically accented 12-beat sequence, the last accented sound elicited a much smaller N1 peak. Such reduced N1 amplitudes at fast stimulation rates are consistent with the literature (Näätänen and Picton, 1987). In contrast to the varying N1 amplitude, the P1 peaks were more consistent across the stimulus sequence (Fig. 1A,B).

Figure 1.

Time courses of click stimuli and auditory evoked responses, grand averaged across left and right cortical sources and across all participants. Twenty-four isochronous clicks were used as auditory stimuli with an onset-to-onset interval of 390 ms. A, In the march condition, the first half of the stimulus sequence imposed the meter structure by acoustically accenting every second click. In this interval, the participants were instructed to perceive the march meter. The second half of the sequence remained at the softer intensity throughout. Here the participants had to imagine the meter structure subjectively. The evoked P1 was prominently expressed in response to each beat stimulus. In the perception interval, the auditory evoked N1 response was predominantly expressed for the accented downbeat stimuli only, whereas during the imagery interval, the evoked N1 was very small for all beats. B, For the waltz condition, every third stimulus was accented. Again, P1 responses were prominent to each beat and N1 responses followed the physically accented stimuli.

Meter representation in the auditory cortex β oscillations

Time-frequency representations (TFRs) in Figure 2 illustrate how each beat stimulus led to changes in oscillatory activity. The TFR in Figure 2A shows signal power changes grand averaged across all beats, regardless of position within the metric structure, for the perceived meter; Figure 2B shows the same for the imagined meter. For each frequency, the spectral power was normalized to the mean across each one-beat time interval (0–390 ms) and expressed as the percentage change, commonly termed event-related synchronization and desynchronization (Pfurtscheller and Lopes da Silva, 1999). The TFRs show temporal fluctuations following beat onsets predominantly at frequencies of ∼20 Hz and below. The amplitude of low-frequency oscillations was larger in the perception condition than in the imagery condition, corresponding to the higher stimulus intensity of the downbeat in the perception condition, which elicited enlarged N1 responses, as shown in Figure 1. To reduce such effects of the evoked response and capture solely induced oscillatory activities, the TFRs were recalculated (Fig. 2C,D) after the time-domain-averaged response was regressed out from each trial of the MEG signal. For these evoked (Fig. 2A,B) and induced (Fig. 2C,D) activities, the baseline was calculated and normalized separately for the perception and imagery conditions. The time courses of the β-band amplitude changes, shown in Figure 2, E and F, were referenced to the level at 50 ms after stimulus onset. The β modulation showed a steep decrease immediately after the stimulus, reached its minimum at 200 ms latency, and recovered with a shallow slope. The time courses of β-ERD for the perception and imagery conditions closely resembled each other.

Figure 2.

Oscillatory activities related to the beat in the left and right auditory cortices, obtained by averaging across all beat and meter types. Spectral power changes were referenced to the mean across the beat interval. A, The original TFR for all the beat types averaged from the accented “perception” stimulus interval. The signal power increase at ∼100 ms latency between 5 and 15 Hz reflects spectral power of the auditory evoked response, which is enhanced by the acoustically accented beats. B, The TFR during the unaccented imagery stimulus interval. This contains less contribution from the evoked response because all the stimuli are unaccented. C, D, Induced oscillatory activities expressed in the TFRs in which the spectral power of the averaged evoked response was subtracted before averaging, thus leaving only non-phase-locked signal power changes. E, F, Time series of β modulation in the 15–25 Hz band. The β-ERD was referenced to the maximum amplitude at ∼50 ms latency.

The TFRs with attenuated contribution of the evoked response were analyzed separately for the different beat types, stimulus intervals, and metric conditions. The TFRs for the march and waltz conditions are shown in Figures 3 and 4, respectively, for the perception and imagery conditions. Time 0 corresponds to the onset of the downbeat stimulus. The periodic modulation at β frequencies was visible in all beat-related intervals. However, in the march condition, the trough of β-ERD was noticeably deeper after the downbeat than after the upbeat in both hemispheres, not only in the perception condition, where metrical accents were physically present, but also in the imagery condition. The depths of the β-ERD around 200 ms were compared with a repeated measures ANOVA with three within-participant factors: beat type (down, up), stimulus interval (perception, imagery), and hemisphere (left, right). The ANOVA revealed a main effect of beat type (F(1,13) = 7.075, p = 0.0196) due to larger β-ERD after the downbeats compared to the upbeats (p < 0.001). Pairwise comparisons between the downbeat and upbeat in each stimulus interval revealed that, for both the perception and imagery conditions, the beat-type contrast was significant when the data from both hemispheres were combined (perception, t(27) = 2.74, p = 0.0108; imagery, t(27) = 2.28, p = 0.0307). In particular, for the perception condition, the beat-type contrast was significant in the right hemisphere (t(13) = 2.227, p = 0.0442); for the imagery condition, it was significant in the left hemisphere (t(13) = 3.356, p = 0.0052). No other main effects or interactions were significant.

Figure 3.

Induced oscillatory activity in the left and right auditory cortex for the march condition. A, B, Time-frequency representation of the auditory source activity in the perception and imagery conditions, respectively, in the left (A) and right hemispheres (B). C, D, Time course of modulation of β-band activity in the left (C) and right hemispheres (D). The shaded area represents the 95% confidence interval of the group mean.

Figure 4.

Induced oscillatory activity in the left and right auditory cortex for the waltz condition. A, B, Time-frequency representation of the auditory source activity in the perception and imagery conditions, respectively, in the left (A) and right hemispheres (B). C, D, Time course of modulation of β-band activity in the left (C) and right hemispheres (D). The shaded area represents the 95% confidence interval of the group mean.

In the waltz condition (Fig. 4), the trough of β-ERD at ∼200 ms after the downbeat was likewise deeper than those following the upbeat or the middle beat in both hemispheres. In the ANOVA, beat type was again the only factor showing a significant effect (F(2,26) = 10.257, p = 0.0005). This main effect reflected stronger β-ERD after the downbeat compared to the middle beat for perception and imagery combined (p < 0.01). This contrast was significant for perception in the left hemisphere (t(13) = 2.23, p = 0.0443). When the data from both hemispheres were combined, the contrast approached the significance level in the perception condition and reached it in the imagery condition (perception, t(27) = 1.79, p = 0.0842; imagery, t(27) = 2.36, p = 0.0258). The main effect of beat type also reflected larger β-ERD after the upbeat than after the middle beat (p < 0.05); this contrast was significant in the left hemisphere in the imagery condition (t(13) = 2.22, p = 0.0449). When both hemispheres were combined, the contrast approached the significance level in the imagery condition but not in the perception condition (perception, t(27) = 1.56, p = 0.129; imagery, t(27) = 1.79, p = 0.0842). There was no significant difference between the upbeat and downbeat, and no other main effects or interactions were found.

In summary, periodic modulation of induced β oscillation, with a minimum around 200 ms after the beat onset and a subsequent rebound, was the most prominent effect on brain oscillations regardless of whether the meter was imposed by acoustic accents or by subjective imagery. Furthermore, the β-ERD was larger for the downbeat than for the upbeat in the march condition, and larger for the downbeat and upbeat compared to the middle beat in the waltz condition.

Meter representation in β-band oscillation across the whole brain

Next, we examined the sources of β activity within each beat interval with a SAM beamformer analysis. The SAM spatial filter was constructed on an 8 × 8 × 8 mm lattice across the whole brain. The resulting source activity was bandpass filtered between 15 and 25 Hz and expressed as magnitude of β power, as source analysis does not preserve the signal polarity.

The spatiotemporal pattern of brain activity in the 0–390 ms latency interval that specifically expressed beat-type differences across acoustically accented and imagined meters was analyzed by a nonrotated task PLS. In the nonrotated PLS, planned comparisons are conducted by using a set of contrast patterns between the conditions of interest as the design LVs. Resampling of the identified brain LV resulted in estimates of bootstrap ratios, normalized to the SD of the group, thereby expressing the LV as sets of z-scores for each volume element. Note that through this conversion, the signal strength is expressed in reference to the intersubject variability. Static volumetric brain maps were obtained by averaging the four-dimensional data over the time interval corresponding to one beat (0–390 ms), as visualized in Figure 5 for the march perception and march imagery conditions and in Figure 6 for the waltz perception and waltz imagery conditions. Talairach coordinates of local minima and maxima were extracted among those voxels where the bootstrap score exceeded 2 and are listed in Tables 1 and 2.

Figure 5.

Results of the PLS analysis of β-band power characterizing the contrast between the different beat types in the march condition. LV1 represents the contrast between the downbeat and upbeat (left, top), which was significant in both the march perception and march imagery conditions. The corresponding brain areas in the march perception condition (right, top) show only cool-colored voxels, in which β power decreased more for the downbeat than for the upbeat. In the march imagery condition (bottom), the associated brain areas (right) show both cool-colored areas (downbeat β decrease > upbeat β decrease) and warm-colored areas (downbeat β decrease < upbeat β decrease). The locations and Talairach coordinates are listed in Table 1. Ins, Insula; MFG, middle frontal gyrus; MTG, middle temporal gyrus; pHppG, parahippocampal gyrus; PostCG, postcentral gyrus; PreCG, precentral gyrus; PreCu, precuneus; TTG, transverse temporal gyrus.

Figure 6.

Results of the PLS analysis of β-band power characterizing the contrast between the different beat types in the waltz condition. A, LV1, related to the contrast between the downbeat and upbeat (left), was significant only in the waltz imagery condition. The corresponding brain areas (right) show blue-colored voxels, in which β power decreased more for the downbeat than for the upbeat, and red-colored areas, representing the opposite pattern. B, LV2, related to the contrast between the middle beat and upbeat (left), was significant for both the waltz perception and waltz imagery conditions, yielding the associated brain areas (right). Note that for LV2, in both perception and imagery, the only brain areas above the significance level were associated with a larger β-power decrease for the upbeat compared to the middle beat. The locations and Talairach coordinates are listed in Table 2. Cg, Cingulate; Crb, cerebellum; FFG, fusiform gyrus; Ins, insula; MedFG, medial frontal gyrus; MFG, middle frontal gyrus; MTG, middle temporal gyrus; PCL, paracentral lobule; PostCG, postcentral gyrus; PreCG, precentral gyrus; PreCu, precuneus; SFG, superior frontal gyrus; Th, thalamus; TTG, transverse temporal gyrus.

In the march condition, the design LV expressed the contrast between the downbeat (−1) and upbeat (+1) for the perception and imagery conditions (Fig. 5, bar plot) to capture the beat-related power decrease. In the march perception condition, the LV was significant (p = 0.0249), explaining the difference in β power between the downbeat and upbeat. The map of associated brain areas, shown in Figure 5 (top right), indicates that only the downbeat-related β-ERD contributed to the contrast, involving the auditory cortex, in line with the results of our equivalent current dipole source analysis. However, the equivalent PLS for the march imagery condition revealed significant contributions from both downbeat- and upbeat-related β-ERDs in different brain areas. The downbeat-related ERD involved the right inferior parietal lobule (IPL), right superior temporal gyrus (STG), precuneus, right precentral gyrus, and middle frontal gyrus. The upbeat-related β-ERD was observed in the bilateral postcentral gyrus, right IPL and parahippocampal gyrus, and left transverse temporal gyrus.

In the waltz condition, the nonrotated PLS analysis examined data across the three beat types with two pairwise comparisons for the perception and imagery stimulus intervals. LV1 used the contrast between downbeats (−1) and upbeats (+1), and LV2 used the contrast between middle beats (−1) and upbeats (+1; Fig. 6, left, bar plots). LV1 explained 36.0% and 44.3% of the data variance in the perception and imagery conditions, respectively, but reached statistical significance only in the waltz imagery condition (p = 0.0199), even though the waltz perception condition used acoustically louder stimuli. Only brain areas in the right middle temporal gyrus were significantly involved in the downbeat-related β-ERD (Fig. 6A, right, blue-colored areas), whereas the upbeat-related β-ERD was associated with power decreases in widespread areas in the left STG, right cingulate gyrus, precentral gyrus, precuneus, and paracentral lobule. LV2, representing the contrast between the middle beats and upbeats, explained 63.9% and 55.7% of the variance in the waltz perception and waltz imagery conditions, respectively, and reached statistical significance in both (p = 0.005 and p < 0.0001, respectively). The brain areas exceeding the significance level in the bootstrap test were associated with β-ERD related to the upbeat (Fig. 6B, yellow) compared to the middle beat. In the waltz perception condition, these areas included bilateral auditory and sensorimotor sites such as the STG, IPL, and precentral and postcentral gyri; medial and lateral premotor cortex and anterior cingulate cortex also contributed to the contrast. The brain areas involved in the waltz imagery condition were similar, but included additional subcortical areas such as the right claustrum and bilateral cerebellum.

Altogether, the beamformer source analysis followed by PLS revealed that meter structure was reflected in the modulation of β power across a wide range of brain areas, including the temporal, frontal, and parietal lobes and the cerebellum. Changes in β activity were generally sensitive to the meter structure and involved widespread networks of brain areas that differed specifically between the march and waltz meters and between the perception and imagery conditions.

Discussion

Our study demonstrates four key findings. First, we replicated our previous finding of periodic modulation of induced β activity in bilateral auditory cortices, elicited by isochronous beats. Second, the amount of β-ERD 200 ms after beat onsets depended on whether beats were perceived as accented, regardless of whether the accents were physically present in the stimulus or imagined. Third, march and waltz metrical structures elicited different relationships between upbeats and downbeats. Fourth, despite the common metric representation in auditory cortex β activity across the perception and imagery conditions, the distributed brain areas representing the beat-type contrasts differed between the stimulus intervals and meter types. In general, compared to simply perceiving the meter, imagining the meter subjectively recruited a notably larger number of brain areas. Also, the waltz condition was associated with a wider range of sensorimotor and frontoparietal areas than the march condition, particularly for the middle beat/upbeat contrast. Altogether, the results demonstrate that meter processing likely involves orienting temporal attention to upcoming beats differently according to beat type. Such temporal processing systematically regulates the β-band network similarly to motor imagery tasks, but without the involvement of specific effectors or spatial attention.

The observed periodic β modulation synchronized with the beat interval regardless of beat type and meter (Figs. 2–4) extends our previous results from passive listening (Fujioka et al., 2012) to attentive listening. The robustness of this pattern regardless of metrical structure (physically present or imagined) further supports our previous interpretation that it reflects the automatic transformation of predictable auditory interbeat intervals into sensorimotor codes, coordinated by corticobasal ganglia–thalamocortical circuits. Previous studies corroborate this view (Merchant et al., 2015). For example, a similar dependency between auditory beat tempo and β modulation was found with EEG in 7-year-old children (Cirelli et al., 2014), and adults' center frequency of spontaneous β activity correlates with their preferred tapping tempo (Bauer et al., 2015). The premovement β-power time course predicts the subsequently produced time interval (Kononowicz and van Rijn, 2015). In primates, β oscillations in local field potentials from the putamen showed similar entrainment during internally guided tapping, and this entrainment was stronger than during auditory-paced tapping, suggesting the importance of internalized timing information for the initiation of movement sequences (Bartolo et al., 2014; Bartolo and Merchant, 2015). These findings, together with the current one, are in line with broader hypotheses on the role of β oscillations in timing and predictive sensory processing (Arnal and Giraud, 2012; Leventhal et al., 2012), specifically, that coupling between β oscillations and slower δ-to-θ modulatory activities regulates task-relevant sensory gating (Lakatos et al., 2005; Saleh et al., 2010; Cravo et al., 2011; Arnal et al., 2014). In this respect, it should be noted that the metric levels (rates of strong beats) of the march and waltz in the present study (1.28 and 0.85 Hz, respectively) fall within the δ band.

More importantly, the experience of meter was encoded in the β-ERD, which varied significantly across beat types in both the perception and imagery intervals (Figs. 3, 4). No significant interactions were found between beat type and whether the meter was given in the stimulus or imagined, indicating that listeners can endogenously generate internalized experiences of metric structure. Because of the cyclic nature of the stimuli, the modulation pattern likely contains contributions from both stimulus-induced β-ERD and the subsequent β rebound, which may relate to endogenous processes. Interestingly, the enhancement of β-ERD related to beat type differed between the march and waltz conditions. Specifically, for the march, larger β-ERD was observed after the downbeat compared to the upbeat. The pattern was more complex in the waltz, for which listeners showed larger β-ERD for both downbeats and upbeats compared to middle beats. For the target detection task during imagery, targets never occurred on middle beats; however, the different β-ERD pattern for middle beats is unlikely to be a consequence of this. First, it is only by performing the imagery while internalizing the metric structure that participants would even know which beats were middle beats; without internalizing the metric structure, the memory demands to differentiate all 12 beat positions during the imagery interval would be unrealistic. As well, attention to target positions cannot explain the similar pattern of middle beat/upbeat β-ERD in the perception interval, where there were no targets. The qualitative differences between the march and waltz meters, especially when guided by imagery, are similar to previous findings in auditory evoked responses with MEG (Fujioka et al., 2010) and EEG (Schaefer et al., 2011). For example, Fujioka et al. (2010) found significant differences between evoked responses to downbeats in march and waltz conditions across the brain, although that study did not explicitly analyze responses to middle beats. Schaefer et al. (2011) examined ERPs in patterns with accents every two, three, or four beats, both objectively and subjectively. Their principal component waveforms (Schaefer et al., 2011, their Fig. 7, bottom) show that the middle beat in the three-beat pattern differed more from the downbeat and upbeat than those two differed from each other. This result may well relate to another series of studies that investigated spontaneously and subjectively imposed binary (two-beat) meter processing on identical isochronous tones, in which auditory evoked responses to one of the tones and its deviations were enhanced at presumed downbeat positions (Brochard et al., 2003; Abecasis et al., 2005; Potter et al., 2009). These findings support the idea that the binary march meter is "more natural" than the ternary waltz meter, as the production and perception of ternary meters seem more difficult than those of binary meters (Drake, 1993; Desain and Honing, 2003), although this bias may be learned rather than universal across cultures (Hannon and Trainor, 2007; Trainor and Hannon, 2013). The novel finding here is that whereas evoked responses to metrically accented beats are much larger when the meter is acoustically defined than when it is imagined, the pattern of β-ERD differences across beat types is similar for perceived and imagined meters. Thus, this result further supports the role of β modulation in the representation of internalized timing.

The brain areas showing β-band meter representations paint a rather complex picture. The identified areas generally agree with those found previously for beat representation (Fujioka et al., 2012), including auditory cortex, sensorimotor cortex, medial frontal premotor cortex, anterior and posterior cingulate cortex, and portions of the medial temporal lobe, parietal lobe, basal ganglia/thalamus, and cerebellum. The sensorimotor and medial frontal premotor cortices, as well as the parietal lobe, basal ganglia, and cerebellum, have been repeatedly implicated in auditory rhythm and temporal attention tasks in neuroimaging studies (Lewis and Miall, 2003; Nobre et al., 2007; Zatorre et al., 2007; Kotz and Schwartze, 2010; Wiener et al., 2010). Our results also yield a number of interesting observations. First, although the left and right auditory cortices were both involved in meter processing, the hemispheric contributions were complex and appear to be affected by meter type (march, waltz) and stimulus interval (perception, imagery). Specifically, enhanced β-ERD was significant in the left auditory cortex for downbeat processing in the march perception condition, in the right auditory cortex for the march imagery condition (Fig. 5), and again in the left for upbeat processing in the waltz imagery condition (Fig. 6). Second, processing the march and waltz meters engaged similar areas in the parietal lobe, such as the inferior parietal lobule and precuneus, but more extended areas were observed in the waltz than in the march conditions, in line with the idea that the waltz rhythm is more complex and requires additional resources. Third, the perception and imagery conditions engaged overlapping but not identical brain areas, despite producing similar responses in auditory cortical areas. Imagery engaged additional brain regions, in line with the increased cognitive load, which may have resulted partly from the target detection task during the imagery interval. In sum, metrical processing is reflected in β-power modulation across a wide network, and the extent to which different brain regions are involved depends both on the complexity of the meter and on the task requirements related to mental effort. This also resonates with inconclusive results from lesion studies: meter processing has been reported to be impaired by right hemisphere lesions (Kester et al., 1991; Wilson et al., 2002) and by lesions of either hemisphere (Liégeois-Chauvel et al., 1998), and so far no single neural substrate or hemisphere has been tied to meter processing (Stewart et al., 2006). Future research, including animal models examining how global and local β oscillations reflect timing and its hierarchical representations, combined with neural computational models of metrical timing (Jazayeri and Shadlen, 2010; Vuust et al., 2014), will be needed to elucidate finer details.

The neural representation of auditory rhythm in β oscillations is relevant to clinical conditions. For example, metronome pacing stimuli can benefit people who stutter (Toyomura et al., 2015), patients with motor impairments caused by Parkinson's disease or stroke (Thaut et al., 2015), and children with dyslexia (Przybylski et al., 2013). β oscillations in the basal ganglia and sensorimotor cortex have been hypothesized to be associated with the dopamine levels available in corticostriatal circuits (Jenkinson and Brown, 2011; Brittain and Brown, 2014), in part because β amplitude changes rapidly with learning (Herrojo Ruiz et al., 2014). Timing mechanisms related to auditory rhythm could therefore provide useful biomarkers for rehabilitation and for learning in developmental disorders.

Footnotes

  • This work was supported by Canadian Institutes of Health Research Grant MOP 115043 to L.J.T. and T.F. We sincerely thank Brian Fidali and Panteha Razavi for assisting with recruiting and testing procedures.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr. Takako Fujioka, 660 Lomita Court, Stanford, CA 94305. takako@ccrma.stanford.edu

References

  1. Abecasis D, Brochard R, Granot R, Drake C (2005) Differential brain response to metrical accents in isochronous auditory sequences. Music Percept 22:549–562.
  2. Arnal LH, Giraud AL (2012) Cortical oscillations and sensory predictions. Trends Cogn Sci 16:390–398.
  3. Arnal LH, Wyart V, Giraud AL (2011) Transitions in neural oscillations reflect prediction errors generated in audiovisual speech. Nat Neurosci 14:797–801.
  4. Arnal LH, Doelling KB, Poeppel D (2014) Delta-beta coupled oscillations underlie temporal prediction accuracy. Cereb Cortex 25:3077–3085.
  5. Bartolo R, Merchant H (2015) β oscillations are linked to the initiation of sensory-cued movement sequences and the internal guidance of regular tapping in the monkey. J Neurosci 35:4635–4640.
  6. Bartolo R, Prado L, Merchant H (2014) Information processing in the primate basal ganglia during sensory-guided and internally driven rhythmic tapping. J Neurosci 34:3910–3923.
  7. Bauer AKR, Kreutz G, Herrmann CS (2015) Individual musical tempo preference correlates with EEG beta rhythm. Psychophysiology 52:600–604.
  8. Brittain JS, Brown P (2014) Oscillations and the basal ganglia: motor control and beyond. Neuroimage 85:637–647.
  9. Brochard R, Abecasis D, Potter D, Ragot R, Drake C (2003) The "ticktock" of our internal clock: direct brain evidence of subjective accents in isochronous sequences. Psychol Sci 14:362–366.
  10. Cirelli LK, Bosnyak D, Manning FC, Spinelli C, Marie C, Fujioka T, Ghahremani A, Trainor LJ (2014) Beat-induced fluctuations in auditory cortical beta-band activity: using EEG to measure age-related changes. Front Psychol 5:742.
  11. Cornwell BR, Johnson LL, Holroyd T, Carver FW, Grillon C (2008) Human hippocampal and parahippocampal theta during goal-directed spatial navigation predicts performance on a virtual Morris water maze. J Neurosci 28:5983–5990.
  12. Cox RW (1996) AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res 29:162–173.
  13. Cravo AM, Rohenkohl G, Wyart V, Nobre AC (2011) Endogenous modulation of low frequency oscillations by temporal expectations. J Neurophysiol 106:2964–2972.
  14. Desain P, Honing H (2003) The formation of rhythmic categories and metric priming. Perception 32:341–365.
  15. Drake C (1993) Reproduction of musical rhythms by children, adult musicians, and adult nonmusicians. Percept Psychophys 53:25–33.
  16. Fujioka T, Trainor LJ, Large EW, Ross B (2009) Beta and gamma rhythms in human auditory cortex during musical beat processing. Ann N Y Acad Sci 1169:89–92.
  17. Fujioka T, Zendel BR, Ross B (2010) Endogenous neuromagnetic activity for mental hierarchy of timing. J Neurosci 30:3458–3466.
  18. Fujioka T, Trainor LJ, Large EW, Ross B (2012) Internalized timing of isochronous sounds is represented in neuromagnetic beta oscillations. J Neurosci 32:1791–1802.
  19. Grube M, Griffiths TD (2009) Metricality-enhanced temporal encoding and the subjective perception of rhythmic sequences. Cortex 45:72–79.
  20. Hannon EE, Trainor LJ (2007) Music acquisition: effects of enculturation and formal training on development. Trends Cogn Sci 11:466–472.
  21. Hasnain MK, Fox PT, Woldorff MG (1998) Intersubject variability of functional areas in the human visual cortex. Hum Brain Mapp 6:301–315.
  22. Herrojo Ruiz M, Brucke C, Nikulin VV, Schneider GH, Kuhn AA (2014) Beta-band amplitude oscillations in the human internal globus pallidus support the encoding of sequence boundaries during initial sensorimotor sequence learning. Neuroimage 85:779–793.
  23. Ille N, Berg P, Scherg M (2002) Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies. J Clin Neurophysiol 19:113–124.
  24. Jazayeri M, Shadlen MN (2010) Temporal context calibrates interval timing. Nat Neurosci 13:1020–1026.
  25. Jenkinson N, Brown P (2011) New insights into the relationship between dopamine, beta oscillations and motor function. Trends Neurosci 34:611–618.
  26. Jensen O, Goel P, Kopell N, Pohja M, Hari R, Ermentrout B (2005) On the human sensorimotor-cortex beta rhythm: sources and modeling. Neuroimage 26:347–355.
  27. Jones MR, Boltz M (1989) Dynamic attending and responses to time. Psychol Rev 96:459–491.
  28. Jones MR, Moynihan H, MacKenzie N, Puente J (2002) Temporal aspects of stimulus-driven attending in dynamic arrays. Psychol Sci 13:313–319.
  29. Jurkiewicz MT, Gaetz WC, Bostan AC, Cheyne D (2006) Post-movement beta rebound is generated in motor cortex: evidence from neuromagnetic recordings. Neuroimage 32:1281–1289.
  30. Kester DB, Saykin AJ, Sperling MR, O'Connor MJ, Robinson LJ, Gur RC (1991) Acute effect of anterior temporal lobectomy on musical processing. Neuropsychologia 29:703–708.
  31. Kobayashi T, Kuriki S (1999) Principal component elimination method for the improvement of S/N in evoked neuromagnetic field measurements. IEEE Trans Biomed Eng 46:951–958.
  32. Kononowicz TW, van Rijn H (2015) Single trial beta oscillations index time estimation. Neuropsychologia 75:381–389.
  33. Kotz SA, Schwartze M (2010) Cortical speech processing unplugged: a timely subcortico-cortical framework. Trends Cogn Sci 14:392–399.
  34. Kropotov JD (2009) Beta rhythms. In: Quantitative EEG, event-related potentials and neurotherapy (Kropotov JD, ed), pp 59–76. London: Elsevier.
  35. Lakatos P, Shah AS, Knuth KH, Ulbert I, Karmos G, Schroeder CE (2005) An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. J Neurophysiol 94:1904–1911.
  36. Leventhal DK, Gage GJ, Schmidt R, Pettibone JR, Case AC, Berke JD (2012) Basal ganglia beta oscillations accompany cue utilization. Neuron 73:523–536.
  37. Lewis PA, Miall RC (2003) Distinct systems for automatic and cognitively controlled time measurement: evidence from neuroimaging. Curr Opin Neurobiol 13:250–255.
  38. Liégeois-Chauvel C, Peretz I, Babai M, Laguitton V, Chauvel P (1998) Contribution of different cortical areas in the temporal lobes to music processing. Brain 121(Pt 10):1853–1867.
  39. Lobaugh NJ, West R, McIntosh AR (2001) Spatiotemporal analysis of experimental differences in event-related potential data with partial least squares. Psychophysiology 38:517–530.
  40. McIntosh AR, Lobaugh NJ (2004) Partial least squares analysis of neuroimaging data: applications and advances. Neuroimage 23(Suppl 1):S250–S263.
  41. McIntosh AR, Bookstein FL, Haxby JV, Grady CL (1996) Spatial pattern analysis of functional brain images using partial least squares. Neuroimage 3:143–157.
  42. Merchant H, Grahn J, Trainor L, Rohrmeier M, Fitch WT (2015) Finding the beat: a neural perspective across humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 370:20140093.
  43. Morillon B, Schroeder CE, Wyart V (2014) Motor contributions to the temporal precision of auditory attention. Nat Commun 5:5255.
  44. Näätänen R, Picton T (1987) The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology 24:375–425.
  45. Neuper C, Wörtz M, Pfurtscheller G (2006) ERD/ERS patterns reflecting sensorimotor activation and deactivation. Prog Brain Res 159:211–222.
  46. Nobre A, Correa A, Coull J (2007) The hazards of time. Curr Opin Neurobiol 17:465–470.
  47. Nozaradan S, Peretz I, Missal M, Mouraux A (2011) Tagging the neuronal entrainment to beat and meter. J Neurosci 31:10234–10240.
  48. Pantev C, Oostenveld R, Engelien A, Ross B, Roberts LE, Hoke M (1998) Increased auditory cortical representation in musicians. Nature 392:811–814.
  49. Pfurtscheller G, Lopes da Silva FH (1999) Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol 110:1842–1857.
  50. Pollok B, Südmeyer M, Gross J, Schnitzler A (2005a) The oscillatory network of simple repetitive bimanual movements. Brain Res Cogn Brain Res 25:300–311.
  51. Pollok B, Gross J, Müller K, Aschersleben G, Schnitzler A (2005b) The cerebral oscillatory network associated with auditorily paced finger movements. Neuroimage 24:646–655.
  52. Potter DD, Fenwick M, Abecasis D, Brochard R (2009) Perceiving rhythm where none exists: event-related potential (ERP) correlates of subjective accenting. Cortex 45:103–109.
  53. Przybylski L, Bedoin N, Krifi-Papoz S, Herbillon V, Roch D, Léculier L, Kotz SA, Tillmann B (2013) Rhythmic auditory stimulation influences syntactic processing in children with developmental language disorders. Neuropsychology 27:121–131.
  54. Repp BH (2005) Sensorimotor synchronization: a review of the tapping literature. Psychon Bull Rev 12:969–992.
  55. Repp BH (2010) Do metrical accents create illusory phenomenal accents? Atten Percept Psychophys 72:1390–1403.
  56. Repp BH, Su YH (2013) Sensorimotor synchronization: a review of recent research (2006–2012). Psychon Bull Rev 20:403–452.
  57. Riggs L, Moses SN, Bardouille T, Herdman AT, Ross B, Ryan JD (2009) A complementary analytic approach to examining medial temporal lobe sources using magnetoencephalography. Neuroimage 45:627–642.
  58. Robinson SE (2004) Localization of event-related activity by SAM(erf). Neurol Clin Neurophysiol 2004:109.
  59. Robinson SE, Vrba J (1999) Functional neuroimaging by synthetic aperture magnetometry. In: Recent advances in biomagnetism (Yoshimoto T, Kotani M, Kuriki S, Karibe H, Nakasato N, eds), pp 302–305. Sendai, Japan: Tohoku UP.
  60. Ross B, Snyder JS, Aalto M, McDonald KL, Dyson BJ, Schneider B, Alain C (2009) Neural encoding of sound duration persists in older adults. Neuroimage 47:678–687.
  61. Saleh M, Reimer J, Penn R, Ojakangas CL, Hatsopoulos NG (2010) Fast and slow oscillations in human primary motor cortex predict oncoming behaviorally relevant cues. Neuron 65:461–471.
  62. Samar VJ, Bopardikar A, Rao R, Swartz K (1999) Wavelet analysis of neuroelectric waveforms: a conceptual tutorial. Brain Lang 66:7–60.
  63. Schaefer RS, Vlek RJ, Desain P (2011) Decomposing rhythm processing: electroencephalography of perceived and self-imposed rhythmic patterns. Psychol Res 75:95–106.
  64. Steinstraeter O, Teismann IK, Wollbrink A, Suntrup S, Stoeckigt K, Dziewas R, Pantev C (2009) Local sphere-based co-registration for SAM group analysis in subjects without individual MRI. Exp Brain Res 193:387–396.
  65. Stewart L, von Kriegstein K, Warren JD, Griffiths TD (2006) Music and the brain: disorders of musical listening. Brain 129:2533–2553.
  66. Teale P, Pasko B, Collins D, Rojas D, Reite M (2013) Somatosensory timing deficits in schizophrenia. Psychiatry Res 212:73–78.
  67. Tesche CD, Uusitalo MA, Ilmoniemi RJ, Huotilainen M, Kajola M, Salonen O (1995) Signal-space projections of MEG data characterize both distributed and well-localized neuronal sources. Electroencephalogr Clin Neurophysiol 95:189–200.
  68. Thaut MH, McIntosh GC, Hoemberg V (2015) Neurobiological foundations of neurologic music therapy: rhythmic entrainment and the motor system. Front Psychol 5:1185.
  69. Toiviainen P, Luck G, Thompson M (2009) Embodied metre: hierarchical eigenmodes in spontaneous movement to music. Cogn Process 10(Suppl 2):S325–S327.
  70. Toyomura A, Fujii T, Kuriki S (2015) Effect of an 8-week practice of externally triggered speech on basal ganglia activity of stuttering and fluent speakers. Neuroimage 109:458–468.
  71. Trainor LJ, Hannon EE (2013) Musical development. In: The psychology of music (Deutsch D, ed), pp 423–498. London: Elsevier.
  72. van Ede F, de Lange F, Jensen O, Maris E (2011) Orienting attention to an upcoming tactile event involves a spatially and temporally specific modulation of sensorimotor alpha- and beta-band oscillations. J Neurosci 31:2016–2024.
  73. Van Veen BD, van Drongelen W, Yuchtman M, Suzuki A (1997) Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Trans Biomed Eng 44:867–880.
  74. Vrba J (2002) Magnetoencephalography: the art of finding a needle in a haystack. Physica C Superconductivity Appl 368:1–9.
  75. Vrba J, Robinson SE (2001) Signal processing in magnetoencephalography. Methods 25:249–271.
  76. Vuust P, Gebauer LK, Witek MA (2014) Neural underpinnings of music: the polyrhythmic brain. Adv Exp Med Biol 829:339–356.
  77. Wiener M, Turkeltaub P, Coslett HB (2010) The image of time: a voxel-wise meta-analysis. Neuroimage 49:1728–1740.
  78. Wilson SJ, Pressing JL, Wales RJ (2002) Modelling rhythmic function in a musician post-stroke. Neuropsychologia 40:1494–1505.
  79. Zatorre RJ, Chen JL, Penhune VB (2007) When the brain plays music: auditory-motor interactions in music perception and production. Nat Rev Neurosci 8:547–558.

Keywords

  • event-related desynchronization
  • ERD
  • magnetoencephalography
  • predictive coding
  • timing processing
