The discovery of trimodal (motor, visual, and auditory) mirror neurons in the monkey ventral premotor cortex (Kohler et al., 2002) has encouraged studies of the auditory properties of the human mirror-matching system. This putative mechanism is thought to map the acoustic representation of actions onto the motor plans necessary to produce those actions. Sounds of actions executed by the hand or the mouth do indeed activate premotor areas in humans (Gazzola et al., 2006). Other studies have investigated the brain areas activated by more complex action-related sounds such as music. Musicians have been a valuable population for such studies, revealing the functional plasticity elicited by musical training in somatosensory, auditory, and motor cortices, as well as anatomical differences in areas involved in music-related behaviors (Munte et al., 2002). In passive listening tasks, functional magnetic resonance imaging (fMRI) has revealed that left premotor regions are more active in musicians than in non-musicians (Bangert et al., 2006), whereas transcranial magnetic stimulation in musicians showed facilitation of primary motor cortex when they listened to a rehearsed piece compared with a control piece (D'Ausilio et al., 2006).
Recently, Lahav et al. (2007), in The Journal of Neuroscience, explored the brain areas recruited when subjects with no formal musical training listened to sounds associated with sequences of actions they had learned during a training period before scanning. The authors also demonstrated differential recruitment of the left and right inferior frontal gyrus (IFG) and of bilateral premotor areas.
The study was designed to control for the specificity of the functional mapping by comparing responses to well-controlled, newly acquired musical pieces. In a first session, non-musicians trained for 5 d to play a piece by ear with their right hand, thereby avoiding any learning of score reading. Their brain activity was then assessed by fMRI in a second session. Experimental conditions included passive listening to the practiced musical excerpt, to a second piece containing the same notes in a different order, and to a third piece with different notes. Behavioral tests measured learning curves during the training session and pitch recognition–production performance before and after training. A hand motion-tracking evaluation verified that subjects listened without moving after training.
Bilateral superior temporal gyrus activity was common to all three conditions, whereas the trained piece additionally recruited a bilateral frontoparietal network including the posterior IFG, the posterior middle premotor region, the inferior parietal lobule, and the left cerebellum [Lahav et al. (2007), their Fig. 3A,B (http://www.jneurosci.org/cgi/content/full/27/2/308/F3)]. The untrained same-notes piece instead showed more right-lateralized parietofrontal activity [Lahav et al. (2007), their Fig. 4A (http://www.jneurosci.org/cgi/content/full/27/2/308/F4)]. These results neatly describe how learning a novel musical piece for a few days induced, in naive subjects, a pattern of activity similar to that of expert musicians (Bangert et al., 2006). Such effects appeared specific to the trained piece and, to a lesser extent, to the individual sound–movement pairings.
A region-of-interest analysis of the bilateral IFG (pars opercularis) revealed stronger activity on the left when subjects listened to the trained piece, whereas the right IFG was comparably active across the three conditions [Lahav et al. (2007), their Fig. 4C (http://www.jneurosci.org/cgi/content/full/27/2/308/F4)]. These results indicate clearly different roles for the left and right posterior IFG: roles that go beyond pure language production/comprehension and might support a general multimodal representation of meaningful actions. A key question is whether those actions are encoded as their simple constituent movements or as overall action goals. The results of Lahav et al. (2007) indicate that the left posterior IFG codes for a global representation of actions, regardless of the single auditory events that constitute the piece.
At the behavioral level, subjects did improve their ability to associate trained sounds with the related key presses [Lahav et al. (2007), their Fig. 5 (http://www.jneurosci.org/cgi/content/full/27/2/308/F5)]. These trained associations were present in two of the three conditions, both of which showed compelling premotor activity. One possibility is that Broca's area codes for abstract multimodal representations of actions, whereas premotor regions build a one-to-one map between sensory events and the related motor programs. For the untrained same-notes piece, simple sound–movement pairings were evidently not sufficient to activate the learned piece–action association. This latter idea is supported by results showing premotor activity during lower-level mapping and anticipation of incoming auditory features (Schubotz et al., 2003).
This work clearly demonstrates an auditory mirror area in the left IFG for complex and newly acquired actions (Lahav et al., 2007). In addition to rote auditory–motor mapping for learning and online execution control, this mirror mechanism might subserve other evolutionarily critical functions such as action recognition (Lahav et al., 2007) and interindividual emotional resonance (Warren et al., 2006). This auditory mirror-like property thus seems to serve a wide range of functions that in turn elicit very different behaviors.
Footnotes
- Editor's Note: These short reviews of a recent paper in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to mimic the journal clubs that exist in your own departments or institutions. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.
- Correspondence should be addressed to Alessandro D'Ausilio, Sapienza University of Rome, Via dei Marsi, 78, 00100 Rome, Italy. alessandro.dausilio{at}uniroma1.it