The creation of robust percepts is a key function of the CNS necessary to overcome everyday life challenges. To this end, the CNS possesses the capacity to synthesize spatially related unimodal stimuli into multisensory events. More specifically, many neurons of the superior colliculus (SC) have the capacity to govern multisensory integration and mediate orienting behaviors toward multisensory targets (Stein, 1998). In true gestalt manner, many of these neurons' responses to different spatially congruent sensory stimuli (e.g., visual and auditory) exceed the larger of the unimodal responses (Meredith and Stein, 1986) or even the arithmetic sum of the unimodal responses (King and Palmer, 1985). Moreover, multisensory integration in single SC neurons is an important predictor of the animal's behavior. For example, cats trained to orient toward a briefly illuminated light-emitting diode showed an increased success rate when a spatially coincident auditory cue was presented simultaneously (Stein et al., 1988). However, many studies have demonstrated that this multimodal integration is not computed exclusively within the SC, but rather emerges from a distributed neural network involving the midbrain as well as cortical areas. Indeed, reversible cooling deactivation (Jiang et al., 2001) of the cat's anterior ectosylvian sulcus (AES) and rostral lateral suprasylvian sulcus (rLS) resulted in the loss of SC neurons' cross-modal integration abilities.
In a recent study, Alvarado et al. (2007a) observed that unisensory and multisensory integration are governed by different rules. Single-unit recording in the deep layers of the cat's SC revealed that, generally, within-modal (e.g., two visual stimuli) integration produces response depression, whereas cross-modal (e.g., visual and auditory stimuli) integration produces response enhancement. Segregating these two mechanisms brought new evidence supporting a parallel organization, in which within-modal depression could serve primarily to focus on a single target feature among potential distracters, whereas multimodal enhancement would more likely improve stimulus detection. This raised questions about whether and how the neuronal networks underlying unimodal and multimodal processing differ from each other.
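The enhancement-versus-depression distinction above is conventionally quantified with the interaction index of Meredith and Stein (1986): the percent change of the response to the combined stimuli relative to the larger of the unimodal responses. The sketch below illustrates that criterion with hypothetical spike counts (the numbers are illustrative, not data from the study):

```python
def enhancement_index(combined, uni_a, uni_b):
    """Percent change of the combined-stimulus response relative to the
    larger unimodal response (Meredith and Stein, 1986). Positive values
    indicate response enhancement; negative values, response depression."""
    best_unimodal = max(uni_a, uni_b)
    return 100.0 * (combined - best_unimodal) / best_unimodal

# Hypothetical mean spike counts for one deep-layer SC neuron:
# a cross-modal (visual + auditory) pairing typically exceeds the best
# unimodal response, whereas a within-modal (V1 + V2) pairing falls below it.
cross_modal = enhancement_index(combined=24, uni_a=10, uni_b=6)
within_modal = enhancement_index(combined=8, uni_a=10, uni_b=9)
print(cross_modal)   # +140.0 -> enhancement
print(within_modal)  # -20.0  -> depression
```

Note that a positive index only requires the combined response to exceed the best unimodal response; superadditivity (exceeding the arithmetic sum, as in King and Palmer, 1985) is a stricter criterion.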
Alvarado et al. (2007b) recently published in The Journal of Neuroscience an attempt to resolve this issue by exploring potential differences in cortical influences over multisensory and unisensory neuronal integration in the deep layers of the SC. For this purpose, the authors deactivated the AES and the rLS of anesthetized cats by using two cooling coils. Visual–auditory multisensory neurons (n = 45) and visual unisensory neurons (n = 25) were tested with two different light bars (V1 and V2), which were either moving or stationary, and a white noise burst (A). Each neuron was tested with the three modality-specific stimuli presented alone (V1, V2, and A) and with the presentation of combined stimuli: V1 with V2, V1 with A, and V2 with A. Because multisensory enhancement is most robust when the stimuli are presented at weak intensities [a phenomenon referred to as inverse effectiveness (Meredith and Stein, 1986)], the complete protocol was run at three levels of stimulation (threshold intensity, maximal response intensity, and an in-between intensity near the midpoint of the dynamic range) in each experimental condition (control, cortical deactivation, and cortical reactivation).
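The rationale for testing three stimulus intensities can be sketched numerically. Under inverse effectiveness, the proportional gain from pairing stimuli is largest when the unimodal responses are weakest. The figures below are hypothetical, chosen only to illustrate the pattern:

```python
def enhancement_index(combined, best_unimodal):
    """Percent gain of the multisensory response over the best unimodal one."""
    return 100.0 * (combined - best_unimodal) / best_unimodal

# Hypothetical (best unimodal, multisensory) mean spikes per trial at the
# three stimulation levels used in the protocol. The proportional gain
# shrinks as the unimodal response grows: inverse effectiveness.
conditions = {
    "threshold": (2.0, 5.5),
    "midpoint":  (8.0, 12.0),
    "maximal":   (15.0, 17.0),
}
for level, (uni, multi) in conditions.items():
    print(level, round(enhancement_index(multi, uni), 1))
# threshold 175.0
# midpoint 50.0
# maximal 13.3
```

Running the full protocol at each intensity is what later allows the authors to ask whether cortical deactivation abolishes this intensity dependence, not merely the enhancement itself.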
In the control condition, the majority of the multisensory neurons (89%, 40 of 45) showed cross-modal integration. Moreover, most of these neurons (77.5%, 31 of 40) followed the principle of inverse effectiveness. Finally, within-modal stimulation usually produced responses substantially lower than the sum of their components in both unisensory and multisensory neurons.
The novel results of the present study revealed two major advances in the general understanding of multisensory integration in the SC. First, AES and rLS deactivation altered the capacity of SC multisensory neurons to integrate cross-modal (V–A), but not within-modal (V1–V2), information. Corroborating the results of Jiang et al. (2001), most of the multisensory neurons (77.5%, 31 of 40) lost their cross-modal integration abilities after cortical deactivation and recovered them after cortical reactivation. In sharp contrast, cortical deactivation had little or no effect on within-modal integration in either multisensory or unisensory SC neurons. This suggests that AES and rLS specifically target cross-modal, but not within-modal, integration among multisensory SC neurons. Moreover, the results show that cortical deactivation nullified inverse effectiveness in cross-modal integrative neurons. Second, as illustrated by Alvarado et al. (2007b) [their Fig. 8 (http://www.jneurosci.org/cgi/content/full/27/47/12775/F8)], the authors observed different cortical deactivation effects on the modality-specific responses (V1, V2, and A) of multisensory and unisensory neurons. The figure shows that during the deactivation condition, multisensory neurons' firing rates decreased in response to modality-specific stimulation, whereas unisensory neurons' responses remained unchanged. Moreover, in contrast to the inverse effectiveness seen in cross-modal integration, the influence of cortical deactivation on modality-specific responses was most prominent for the most vigorous responses.
The present study demonstrates that AES and rLS modulate cross-modal but not within-modal integration and that modality-specific stimuli are affected by cortical deactivation in multisensory neurons but not in unisensory neurons. These observations further suggest that AES and rLS projections target multisensory neurons exclusively. The present study is of primary importance to understanding the wiring behind multisensory integration, not only in the SC but, more generally, across the CNS. Indeed, recent data have revealed new avenues of research in the extrastriate cortex of the cat, in which the presence of subthreshold auditory stimulation enhanced visual responses (Allman and Meredith, 2007). It would be of interest to study the effect of AES and rLS on such cortical areas to determine whether midbrain-style multisensory processing is structure specific. This would help to disclose whether cross-modal integration processed within primary cortices relies on projections from associative regions, and would thereby bring new evidence concerning the cortical network underlying multisensory integration.
Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.
- Correspondence should be addressed to Etienne Vachon-Presseau, Institut Universitaire de Gériatrie de Montréal, 4565 chemin Queen-Mary, Montréal, Quebec, Canada H3W 1W5.