Articles, Behavioral/Systems/Cognitive

Action Representation of Sound: Audiomotor Recognition Network While Listening to Newly Acquired Actions

Amir Lahav, Elliot Saltzman and Gottfried Schlaug
Journal of Neuroscience 10 January 2007, 27 (2) 308-314; https://doi.org/10.1523/JNEUROSCI.4822-06.2007
Figures

Figure 1.

    Action–listening illustration. A, Music performance can be viewed as a complex sequence of both actions and sounds, in which sounds are made by actions. B, The sound of music one knows how to play can be reflected, as if in a mirror, in the corresponding motor representations.

Figure 2.

Experimental setup. A, Learning times of the trained-music are shown for individual subjects (n = 9) over five daily piano-training sessions. Learning time was influenced by the number of errors made during a session, until error-free performance of the musical piece was reached. A performance plateau is evident in sessions 4–5, at the minimum possible learning time (12 min) set by the software. B, Examples of three fMRI experimental conditions. Sample short passages extracted from the trained-music that subjects learned how to play (blue) and two untrained control musical pieces are shown. These controls include an untrained-same-notes-music, in which the same notes were used in a different order to create a new melody (cyan), and an untrained-different-notes-music, composed of a completely different set of notes (red). C, A sample fMRI time series is shown between two imaging time points (TR, 18 s; TA, 1.75 s; red) within a single run. Images were acquired immediately after listening to short passages (5–8 s each; yellow) extracted from all three musical pieces. A behavioral control task (green) was used to control for listening attention; subjects heard three piano notes and had to press a button with their left hand if these notes had appeared as a subsequence in the preceding musical passage (yellow) they had heard. Intentionally, no images reflecting this task were acquired. We varied pause times between scans (gray) so that the TR remained fixed. D, Results of the behavioral control task are shown for all listening conditions. The graph shows the percentage of correct button presses from the group (n = 9) for each listening condition (24 trials per subject). No significant differences were found for mean correct responses across musical pieces (p = 0.489).

Figure 3.

Action–listening activation. A, B, Extensive bilateral activation in frontoparietal motor-related brain regions was observed when subjects listened to the trained-music they knew how to play (A), but not when they listened to the never-learned untrained-different-notes-music (B) (contrasted against rest baseline; p < 0.05, FDR corrected). C, Activation maps are shown in areas that were significantly more active during listening to the trained-music versus the untrained-different-notes-music. Surface projections of group mean activation (n = 9) are rendered onto an individual standardized brain (p < 0.05, FDR corrected). L, Left; R, right.

Figure 4.

A, Areas activated during listening to the untrained-same-notes-music contrasted against rest (p < 0.05, FDR corrected). B, A contrast image of group mean activation is presented in areas that were significantly more active during listening to trained-music compared with untrained-same-notes-music. These included the left premotor region as well as Broca's area and its right hemispheric homolog (green arrows), shown also in the corresponding coronal view (middle) (p < 0.05, FDR corrected). C, Parameter estimates (β values) of the left (−50, 18, 16; magenta) and right (52, 18, 16; cyan) IFG across listening conditions. Results indicate significant peak activations in the left IFG only when subjects listened to the trained-music they knew how to play (p = 0.001), whereas the right IFG remained fairly active across listening conditions (p = 0.973).

Figure 5.

Scores on the PRP test. The percentage of correct responses is shown for individual subjects, before the initial (red) and after the last (blue) piano-training session. Mean group scores (solid line) and SEs (dashed line) are presented. An improvement trend is evident, from 24% before training (around chance level) to 77% after training (mean relative improvement of 209%; p < 10−4).
