Abstract
The functional organization of auditory cortex (AC) is still poorly understood. Previous studies suggest segregation of auditory processing streams for spatial and nonspatial information located in the posterior and anterior AC, respectively (Rauschecker and Tian, 2000; Arnott et al., 2004; Lomber and Malhotra, 2008). Furthermore, previous studies have shown that active listening tasks strongly modulate AC activations (Petkov et al., 2004; Fritz et al., 2005; Polley et al., 2006). However, the task dependence of AC activations has not been systematically investigated. In the present study, we applied high-resolution functional magnetic resonance imaging of the AC and adjacent areas to compare activations during pitch discrimination and n-back pitch memory tasks that were varied parametrically in difficulty. We found that anterior AC activations were increased during discrimination but not during memory tasks, while activations in the inferior parietal lobule posterior to the AC were enhanced during memory tasks but not during discrimination. We also found that wide areas of the anterior AC and anterior insula were strongly deactivated during the pitch memory tasks. While these results are consistent with the proposition that the anterior and posterior AC belong to functionally separate auditory processing streams, our results show that this division is also present between tasks using spatially invariant sounds. Together, our results indicate that activations of human AC are strongly dependent on the characteristics of the behavioral task.
Introduction
Neurophysiological studies in primates have shown that frequencies of sounds are represented in a tonotopic manner in auditory cortex (AC) (Merzenich et al., 1975; Kaas and Hackett, 2000), and a similar organizational pattern is seen in humans (Woods et al., 2009). However, in behaving animals, the spectrotemporal receptive fields of primary AC neurons (Fritz et al., 2005) and the topographic organization of primary and secondary AC (Polley et al., 2006) are not fixed but change depending on the behavioral task. Thus, AC processes are not simply determined by the physical features (e.g., frequency structure) of sounds but are quickly reshaped according to the current behavioral demands. Although the functional organization of AC is still poorly understood, accumulating evidence from studies in humans and other mammals suggests that AC is composed of functionally differentiated areas and that, analogously to the visual system, AC is organized in separate processing streams (Romanski et al., 1999; Kaas and Hackett, 2000; Rauschecker and Tian, 2000; Arnott et al., 2004; Lomber and Malhotra, 2008).
Several previous functional magnetic resonance imaging (fMRI) studies have shown that activations of human AC are strongly modulated by attention-engaging auditory tasks (Woodruff et al., 1996; Jäncke et al., 1999; Janata et al., 2002; Rinne et al., 2005; Degerman et al., 2006; Rinne et al., 2008; Mayer et al., 2009). The most pronounced attention-related modulations are typically seen in nonprimary parts of AC in the lateral superior temporal gyrus (STG) (Hall et al., 2000; Petkov et al., 2004; Woods et al., 2009). The functional significance of attention-related modulations observed in fMRI is not well understood, but it has been suggested that the increased STG activations during auditory tasks could be related to additional resources, such as working memory (Petkov et al., 2004; Brechmann et al., 2007), required by active listening tasks. Previous studies have also reported differences in the distribution of human AC activations during spatial and nonspatial auditory tasks. Typically these studies show stronger activations associated with spatial tasks in the posterior STG and inferior parietal lobule (IPL) as compared with nonspatial tasks (Alain et al., 2001; Arnott et al., 2004; Barrett and Hall, 2006). The distribution of activations during nonspatial tasks, however, seems less consistent in the literature. While the anterior AC appears to be involved in tasks requiring pitch or pitch pattern discrimination, some studies suggest that areas in the posterior STG are also activated in such nonspatial auditory tasks (Patterson et al., 2002; Warren and Griffiths, 2003; Arnott et al., 2004; Barrett and Hall, 2006).
Previous results clearly demonstrate that processing of sounds in AC depends on the characteristics of a behavioral task. However, the task dependence of AC activations has not been systematically studied. In the present study, we applied high-resolution imaging of the AC and adjacent areas to compare activations during pitch discrimination and n-back pitch memory tasks that required auditory attention and were varied parametrically in difficulty (see Fig. 1). We also measured activations during a visual task with sounds similar to those used in the auditory tasks.
a, b, During all auditory and visual task blocks, low (lowest frequency 200 Hz), medium (561 Hz), and high (1573 Hz) pitched tones were presented with six equidistant pitch levels in each category (the pitch range within a category depended on subjective pitch discrimination performance during training). The tones were 200 ms in duration and consisted of two successive 100 ms parts (each part included 5 ms linear onset and offset ramps). In the pitch discrimination tasks (a), the last half of each tone was slightly lower or higher in pitch than the first half (the magnitude of the pitch difference depended on the difficulty level). Subjects were required to press a button when the two halves of a tone had the same pitch (target). In the pitch memory tasks (b), the two halves of a tone always had the same pitch, and subjects were required to respond when a tone belonged to the same pitch category as the one presented one, two, or three trials before (a target in a 2-back task is illustrated). c–h, The task and its difficulty level were indicated by task instruction symbols presented on a screen from 6 s before each block onset until the end of the block. A letter “Λ” (lambda) or “V” (not shown in the figure) in the middle of the task instruction symbols indicated the task modality (auditory or visual, respectively). Pitch discrimination tasks were indicated by one red dot (c–e), while pitch memory tasks were indicated by two red dots (f–h). In the pitch discrimination tasks, the difficulty level was indicated by the position (yellow rectangles) of the red dot (leftmost position, easy; second position from the left, medium; rightmost position, hard). For the pitch memory tasks, the distance between the two red dots indicated the relative serial positions of the sounds to be compared. For the visual tasks, in addition to the letter V, two red dots were presented at the second and third positions from the left.
Materials and Methods
Subjects.
Subjects (N = 17, 13 women, all right-handed) were 23–30 years (mean 25 years) of age. All subjects had normal hearing, normal or corrected-to-normal vision, and no history of psychiatric or neurological illnesses. Informed written consent was obtained from each subject before the experiment. The study protocol was approved by the Ethical Committee of the Hospital District of Helsinki and Uusimaa, Finland.
Stimuli and tasks.
Subjects were presented with tones (diotic presentation, duration 200 ms, onset-to-onset interval 0.8–1 s) consisting of two successive 100 ms parts (each part included 5 ms linear onset and offset ramps). The tones formed low (lowest frequency 200 Hz), medium (561 Hz), and high (1573 Hz) pitch categories, each including six equidistant pitch levels. During pre-fMRI training, the pitch range within each category was adjusted individually for each subject depending on their pitch discrimination ability (the highest possible frequencies were 320, 898, and 2517 Hz in the low, medium, and high categories, respectively). Thus, the pitch ranges within a category varied depending on subjective performance, but each category always contained six equidistant pitch levels. In the pitch discrimination tasks (Fig. 1a), the second half of each tone was one, two, or three levels (depending on the difficulty level) lower or higher in pitch than the first half. Subjects were required to attend to the tones and to indicate by pressing a button with their right hand when the two halves of a tone had the same pitch. In the pitch memory tasks (Fig. 1b), the tones were otherwise similar, but the two halves of a tone always had the same pitch, and the subjects indicated with a button press when a tone belonged to the same pitch category as the one presented one, two, or three trials before.
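For concreteness, the following Python sketch illustrates how tones of this kind could be generated. Only the durations, ramps, number of levels, and the 200–320 Hz endpoints of the low category come from the text; the sample rate, sinusoidal carriers, and linear spacing of the six levels are assumptions made purely for illustration and are not the authors' stimulus code.

```python
# Illustrative sketch of the two-part tone stimuli (not the authors' code).
# Assumptions: 48 kHz sample rate, sinusoidal carriers, linear level spacing.
import numpy as np

FS = 48000                            # sample rate (assumption)
HALF_DUR, RAMP_DUR = 0.100, 0.005     # 100 ms halves, 5 ms linear ramps (from text)

def tone_half(freq_hz):
    """One 100 ms sinusoidal segment with 5 ms linear onset/offset ramps."""
    t = np.arange(int(HALF_DUR * FS)) / FS
    y = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(RAMP_DUR * FS)
    ramp = np.linspace(0.0, 1.0, n_ramp)
    y[:n_ramp] *= ramp
    y[-n_ramp:] *= ramp[::-1]
    return y

def pitch_category(lowest_hz, highest_hz, n_levels=6):
    """Six equidistant pitch levels; linear frequency spacing is an assumption."""
    return np.linspace(lowest_hz, highest_hz, n_levels)

# Low category at its widest possible range (200-320 Hz, from the text).
low_levels = pitch_category(200.0, 320.0)

def discrimination_tone(level_idx, step, levels):
    """Discrimination trial: second half is `step` levels above/below the first
    half (step = 0 would give a target, i.e., both halves identical)."""
    first = tone_half(levels[level_idx])
    second = tone_half(levels[level_idx + step])
    return np.concatenate([first, second])

stim = discrimination_tone(level_idx=2, step=1, levels=low_levels)  # one 200 ms tone
```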
The tones were delivered with an UNIDES ADU2a audio system (Unides Design) via plastic tubes through a porous EAR-tip (ER3, Etymotic Research) acting as an earphone. The noise of the scanner was attenuated by earplugs, circumaural ear protectors, and a viscoelastic mattress inside and around the head coil and under the subject. The experiment was controlled using Presentation software (Neurobehavioral Systems).
The sounds were presented in 15 s blocks alternating with 8 s breaks with no sound stimuli. During the breaks, the subjects focused on a fixation mark (a white X on a gray background; R 186, G 186, B 186) presented for 2 s in the middle of a screen (viewed through a mirror fixed to the head coil). The fixation mark was then replaced by nonverbal task-instruction symbols 6 s before the start of the next block (Fig. 1c–h). The task-instruction symbols were presented until the end of the block. For each task type and difficulty level (two auditory tasks with three difficulty levels and one visual task), 15 blocks were presented, resulting in 105 blocks altogether. There were 2–4 targets in each block.
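These timings imply that each 8 s break consists of 2 s of fixation followed by 6 s of task instructions (2 s + 6 s = 8 s), giving a 23 s block-plus-break cycle. The sketch below lays out such a timeline; the ordering of the 105 blocks is not specified in the text, so a simple random shuffle is used here only as a placeholder.

```python
# Minimal sketch of the block timeline implied by the text; block ordering is assumed.
import random

BREAK_DUR, FIX_DUR, BLOCK_DUR = 8.0, 2.0, 15.0   # seconds, from the text
conditions = ([("discrimination", lvl) for lvl in (1, 2, 3)]
              + [("memory", lvl) for lvl in (1, 2, 3)]
              + [("visual", 0)])
blocks = [c for c in conditions for _ in range(15)]   # 7 conditions x 15 = 105 blocks
random.shuffle(blocks)                                # assumed ordering

events, t = [], 0.0
for task, level in blocks:
    events.append(("fixation", t, t + FIX_DUR))                          # 2 s fixation
    events.append(("instruction", t + FIX_DUR, t + BREAK_DUR + BLOCK_DUR))  # until block end
    events.append((f"{task}-{level}", t + BREAK_DUR, t + BREAK_DUR + BLOCK_DUR))  # 15 s block
    t += BREAK_DUR + BLOCK_DUR                                           # 23 s cycle
```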
In the visual task, the subjects were instructed to ignore the sounds and to detect occasional slight luminance changes of a flickering gray rectangle (R 186, G 186, B 186) underlying the task-instruction symbols (Fig. 1c–h). The target rectangle was slightly brighter (R 194, G 194, B 194). The visual stimulus sequences delivered concurrently with auditory sequences were similar to auditory sequences (stimulus duration 200 ms, onset-to-onset interval 0.8–1 s, 2–4 targets in each block). The auditory sequences presented during the visual conditions were identical to the auditory sequences during pitch memory tasks (i.e., the two halves of each tone had the same pitch).
Pre-fMRI training.
To reveal differences in brain activity between easy and hard tasks and between pitch discrimination and pitch memory tasks, the hardest difficulty levels were intentionally made highly demanding. Before the fMRI session, subjects were carefully trained to perform the tasks (1–2 h of training in two sessions 1–5 d before scanning). During training, it was emphasized to the subjects that maximum effort, especially during the most difficult levels, was essential.
Analysis of the behavioral data.
Mean hit rates and reaction times were calculated separately for each task. Correct responses occurring between 200 and 1300 ms from target onset were accepted as hits. Hit rate was defined as the number of hits divided by the number of targets. Mean reaction time was calculated only for hits. Behavioral results were analyzed using one-way repeated-measures ANOVAs with the factor Task Difficulty (3 levels). Due to technical problems in data collection, the behavioral data of one subject were lost. However, based on this subject's good behavioral performance during training, self-reported performance after fMRI, and typical brain activation patterns, the subject's fMRI data were retained.
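A minimal sketch of this scoring scheme follows; the data layout (per-condition lists of target onsets and button-press times, in seconds) is assumed for illustration and is not taken from the paper.

```python
# Sketch of the hit-rate / reaction-time scoring described above.
import numpy as np

RT_WINDOW = (0.200, 1.300)   # responses 200-1300 ms after target onset count as hits

def score_condition(target_onsets, response_times):
    """Return (hit rate, mean RT over hits) for one task condition."""
    hits, rts = 0, []
    for onset in target_onsets:
        lags = [r - onset for r in response_times
                if RT_WINDOW[0] <= r - onset <= RT_WINDOW[1]]
        if lags:                      # earliest response within the window
            hits += 1
            rts.append(min(lags))
    hit_rate = hits / len(target_onsets)
    mean_rt = float(np.mean(rts)) if rts else np.nan   # RT defined only for hits
    return hit_rate, mean_rt
```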
fMRI data acquisition and analysis.
fMRI data were acquired with a 3.0 T GE Signa system retrofitted with an Advanced NMR operating console and a quadrature birdcage coil. Functional images were acquired using a T2*-weighted gradient-echo echo-planar (GE-EPI) sequence [repetition time (TR), 2000 ms; echo time (TE), 32 ms; flip angle, 90°; voxel matrix, 96 × 96; field of view (FOV), 20 cm; slice thickness, 2.1 mm with no gap; in-plane resolution, 2.1 mm × 2.1 mm; number of slices, 24]. Based on an anatomical scout image (sagittal slices, slice thickness 3 mm, in-plane resolution 0.94 mm × 0.94 mm), the middle EPI slices were aligned along the Sylvian fissures (Fig. 2). The functional scanning was divided into two 23 min runs, resulting in ∼2 × 690 images. After the first run, there was a short break during which subjects remained in the scanner and were instructed not to move their heads or speak. After the functional scans, a fluid-attenuated inversion recovery image using the same imaging slices but with denser in-plane resolution was acquired (FLAIR; TR, 10,000 ms; TE, 120 ms; voxel matrix, 320 × 192; FOV, 20 cm; slice thickness, 2.1 mm; in-plane resolution, 0.39 mm × 0.39 mm). Finally, at the end of the session, high-resolution anatomical images were acquired (voxel matrix, 156 × 256 × 256; resolution, 1 mm × 0.98 mm × 0.98 mm).
Inflated left-hemisphere cortical surface (light gray, gyri; dark gray, sulci). The areas covered in the present fMRI study are shown in lighter grayscale. The EPI slices were aligned along the Sylvian fissures to cover the STG, HG, anterior insula, and most of the IPL of both hemispheres. Due to anatomical differences between the left and right hemispheres and between subjects, the imaged area did not completely cover the IPL in all cases. This problem was even greater for the inferior frontal gyrus (IFG), which was therefore not considered in the present study.
Global voxel-wise analysis was performed using the tools developed by the Analysis Group at the Oxford Centre for Functional MRI of the Brain (FMRIB) and implemented within FMRIB's software library (FSL, release 4.1, www.fmrib.ox.ac.uk/fsl; Smith et al., 2004). First, data from the two runs were combined into one file for motion correction. The motion-corrected data were then split back into two separate files, high-pass filtered (cutoff 100 s), and spatially smoothed (Gaussian kernel of 7 mm full-width at half-maximum). First-level statistical analysis was performed using FMRIB's improved linear model. Based on the timing information recorded during the experiment, each functional image was labeled as pitch discrimination (3 difficulty levels), pitch memory (3 levels), visual task, or baseline (8 s breaks with no sound stimuli). The hemodynamic response function was modeled with a gamma function (mean lag 6 s, SD 3 s) and its temporal derivative. Contrasts were specified to create Z-statistic images testing for task and difficulty effects. A second-level statistical analysis using fixed effects combined the data from the two runs.
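The following sketch shows how block regressors of this kind can be constructed: a gamma HRF with mean lag 6 s and SD 3 s corresponds to shape 4 and scale 1.5, since a gamma distribution has mean shape × scale and variance shape × scale². This is only a simplified stand-in for the FEAT/FILM model (prewhitening and high-pass filtering are omitted), and the example onset, duration, and run length are taken from the text solely for illustration.

```python
# Sketch of first-level regressor construction: block boxcar convolved with a
# gamma HRF (mean lag 6 s, SD 3 s) plus its temporal derivative.
import numpy as np
from scipy.stats import gamma

TR = 2.0                                   # s, from the acquisition parameters
MEAN_LAG, SD = 6.0, 3.0
shape, scale = (MEAN_LAG / SD) ** 2, SD ** 2 / MEAN_LAG   # shape = 4, scale = 1.5

t = np.arange(0, 30, TR)                   # HRF sampled over 30 s at the TR
hrf = gamma.pdf(t, a=shape, scale=scale)
hrf /= hrf.sum()                           # unit-sum kernel

def block_regressor(onsets, duration, n_vols):
    """Boxcar sampled at the TR, convolved with the gamma HRF."""
    boxcar = np.zeros(n_vols)
    for onset in onsets:
        i0, i1 = int(onset / TR), int((onset + duration) / TR)
        boxcar[i0:i1] = 1.0
    reg = np.convolve(boxcar, hrf)[:n_vols]
    return reg, np.gradient(reg)           # regressor and its temporal derivative

# e.g., a 15 s task block starting at 100 s in a 690-volume run
reg, dreg = block_regressor([100.0], 15.0, n_vols=690)
```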
For analysis across participants (third-level analysis), the data were anatomically normalized in the following steps. First, cortical surfaces were extracted from the high-resolution anatomical images, transformed to spherical standard space, and anatomically normalized on the basis of the cortical gyral and sulcal patterns using FreeSurfer (v4.0.5, http://surfer.nmr.mgh.harvard.edu). Next, the three-dimensional (3D) spherical cortical surfaces were rotated and projected to a two-dimensional (2D) space separately for each hemisphere using an equal-area Mollweide projection (Python libraries matplotlib and basemap, http://matplotlib.sourceforge.net). This procedure produced a 3D-to-2D anatomical transformation for each subject, which was then applied to transform the results of that subject's 3D second-level statistical analysis to 2D.
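As an illustration of this flattening step, the sketch below converts points on a spherical surface to longitude/latitude and displays them on an equal-area Mollweide projection. The paper used the basemap toolkit; this sketch substitutes matplotlib's built-in Mollweide axes, and the vertex coordinates are random placeholders rather than FreeSurfer output.

```python
# Sketch of the 3D-to-2D flattening: spherical vertices -> lon/lat -> Mollweide plot.
import numpy as np
import matplotlib.pyplot as plt

# Placeholder "spherical surface": random unit vectors standing in for vertices.
xyz = np.random.randn(5000, 3)
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)

lon = np.arctan2(xyz[:, 1], xyz[:, 0])    # longitude in [-pi, pi]
lat = np.arcsin(xyz[:, 2])                # latitude in [-pi/2, pi/2]

ax = plt.subplot(projection="mollweide")  # equal-area 2D projection
ax.scatter(lon, lat, s=1)
ax.grid(True)
plt.show()
```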
Finally, the group analysis (FMRIB's local analysis of mixed effects with automatic outlier deweighting, N = 17) was run on these flattened data. Z-statistic images were thresholded using clusters determined by Z > 2.3 and a (corrected) cluster significance threshold of p < 0.05 (using Gaussian random field theory).
Results
Behavior
Hit rate (HR) across all auditory tasks was 64%, and mean reaction time (RT) was 705 ms (N = 16; see Materials and Methods). HRs decreased with increasing task difficulty both in the pitch discrimination tasks (F(2,30) = 21, p < 0.0001; linear trend, F(1,15) = 44, p < 0.0001) and in the pitch memory tasks (F(2,30) = 62, p < 0.0001; linear trend, F(1,15) = 365, p < 0.0001), in which increasing difficulty also decreased RTs (F(2,30) = 3.3, p = 0.050; linear trend, F(1,15) = 5.6, p = 0.032).
Voxel-wise analysis of fMRI data
AC activations to sounds in the absence of auditory attention were isolated by contrasting activations during the visual task with activations during the 8 s breaks with no sounds. In both hemispheres, widespread AC regions, including the anterior and posterior STG and anterior insula, were activated by unattended sounds (Fig. 3a, blue and yellow; for anatomical labels, see Fig. 3g; for results in individual subjects, see supplemental Fig. 1, available at www.jneurosci.org as supplemental material). General effects of auditory tasks were isolated by contrasting all auditory tasks with the visual task. Distinct activation clusters were detected in the posterior STG, IPL, and insula in both hemispheres and in the left anterior STG (Fig. 3a, red and yellow).
Activations (a–f) shown on a flattened mean 2D cortical surface (N = 17, threshold Z > 2.3, cluster-corrected p < 0.05 unless otherwise specified). a, Activations to sounds in the absence of auditory attention (blue) were isolated by contrasting activations during the visual task with activations during the 8 s breaks with no sounds. General effects of auditory tasks were isolated by contrasting all auditory tasks with the visual task (red). Areas showing significant activations in both contrasts are shown in yellow. b, Activations specific to the pitch discrimination (blue) and pitch memory (red) tasks were extracted by comparing each auditory task (all difficulty levels) with activations during the visual task. c, Areas where activations were stronger during pitch discrimination than pitch memory tasks (blue) and areas where activations were stronger during pitch memory than pitch discrimination tasks (red). d, Effects of task difficulty (linear contrasts) in the pitch discrimination (blue, threshold Z > 1.64, corresponding to uncorrected p = 0.05) and pitch memory (red, Z > 2.3, cluster-corrected p < 0.05) tasks. e, Areas where activations were higher during the visual task than during the pitch discrimination tasks (blue) and during the pitch memory tasks (red). f, Results of inverse linear contrasts revealing areas where activations decreased with increasing task difficulty during pitch discrimination (blue) and pitch memory (red) tasks. g, Anatomical labels: STG (superior temporal gyrus), HG (Heschl's gyrus), IPL (inferior parietal lobule, consisting of the angular and supramarginal gyri).
Activations specific to the two auditory tasks were extracted by comparing each auditory task (all difficulty levels) with activations during the visual task. Enhanced activations related to pitch discrimination were detected bilaterally in the anterior STG, posterior STG, and anterior insula (Fig. 3b, blue and yellow). Activation increases during pitch memory tasks, in turn, were found bilaterally in the posterior STG, IPL, and anterior insula (Fig. 3b, red and yellow). Direct comparison of the two types of auditory tasks with each other revealed stronger activations in wide temporal and insular areas during pitch discrimination than during pitch memory tasks (Fig. 3c, blue), while areas in the IPL and insula were more activated during pitch memory than during pitch discrimination tasks (Fig. 3c, red; for results in individual subjects, see supplemental Fig. 2, available at www.jneurosci.org as supplemental material).
Effects of auditory task difficulty were examined with linear contrasts. Significant activation increases with increasing task difficulty were detected in the anterior insula and IPL for pitch memory (Fig. 3d, red and yellow), whereas for pitch discrimination, significant activations (not shown) were detected only in the right insula. However, with a more lenient threshold (Z > 1.64, corresponding to uncorrected p = 0.05), distinct clusters showing higher activity for more difficult discrimination tasks (Fig. 3d, yellow) were seen within the insula and IPL, in areas where activations were also modulated by memory load. (Note that in Fig. 3d different thresholds are used for the pitch discrimination and pitch memory tasks.)
Separate contrasts were conducted to detect areas where activations were higher during the visual task than during the auditory tasks. As compared with pitch discrimination, activations were higher during the visual task bilaterally in the IPL and in parts of the insula (Fig. 3e, blue and yellow). As compared with pitch memory tasks, activations were higher during the visual task also in the IPL and, in addition, in wide areas of the anterior temporal lobe and insula bilaterally (Fig. 3e, red and yellow). Apparently, these differences were not due to activation increases during the visual task but to activation decreases during the auditory tasks. This was revealed by an inverse linear contrast indicating that activations decreased with increasing memory load (Fig. 3f, red and yellow) in the anterior insula, anterior STG, Heschl's gyrus (HG), and posterior STG. Correspondingly, activations decreased in the IPL with increasing pitch discrimination difficulty (Fig. 3f, blue).
Discussion
As expected, task-irrelevant sounds during the visual task activated areas in the supratemporal cortex including HG (where the primary AC is located) and STG bilaterally (Fig. 3a). As compared with the visual task, the auditory tasks in general were associated with enhanced activations especially in the posterior/lateral STG (Fig. 3a). However, activations in the AC and adjacent areas were strongly dependent on whether subjects performed pitch discrimination or pitch memory tasks. We found that activations in the anterior STG increased during pitch discrimination but not during pitch memory tasks while activations in the IPL were increased during pitch memory tasks but not during pitch discrimination, and areas in the posterior STG showed enhanced activations during both kinds of tasks (Fig. 3b).
There is abundant evidence supporting the role of the lateral HG and anterior STG in pitch processing (Patterson et al., 2002; Warren and Griffiths, 2003; Penagos et al., 2004; Barrett and Hall, 2006; Krumbholz et al., 2007). Consistently, the present results implicate these areas in active pitch discrimination (Fig. 3b,c). However, we also found activations associated with pitch discrimination posterior to HG, in the STG including its superior plane, the so-called planum temporale (PT). These activations overlapped partly with activations during the pitch memory tasks. The activations shared by the pitch discrimination and pitch memory tasks may be related to attentive processing of sounds in general (Jäncke et al., 1999; Petkov et al., 2004; Rinne et al., 2005). However, as both tasks required processing of pitch information, the overlapping activations in the posterior STG may also be related to some aspect of active pitch processing. Thus, consistent with the model proposed by Griffiths and Warren (2002), STG areas posterior to HG, including the PT, might act as a “computational hub” engaged in the analysis and segregation of sound patterns and in matching incoming and previously stored patterns. Our results are also in line with the suggestion that the AC areas form a hierarchy in which the processing of pitch and pitch variation (including melody) continues, after initial processing in HG, at higher stages in the anterior and posterior STG (Patterson et al., 2002; Hall and Plack, 2009; McLachlan, 2009). The present results show that activations within this hierarchy are strongly dependent on the behavioral task.
We detected activations related to pitch discrimination also in the anterior insula bilaterally (Fig. 3b) (Wong et al., 2004; Gaab et al., 2006). These activations were more distinct during the pitch memory tasks than during the pitch discrimination tasks (Fig. 3c). However, during both kinds of tasks, these activations increased with increasing task demands (Fig. 3d). The anterior insula was also activated by the sounds when attention was directed to the visual task (Fig. 3a). Together, these results suggest that the anterior insula has a complex role in auditory processing and that its function is probably not directly related to pitch processing but to some more general aspects of active listening (Bamiou et al., 2003; Lawrence et al., 2003; Deary et al., 2004; Mutschler et al., 2009).
Based on a comparison of activations during auditory 2-back and 0-back memory tasks, it has been suggested that activations in the posterior STG are associated with auditory working memory (Brechmann et al., 2007). In the present study, we found small activation clusters in the posterior STG that showed higher activity during the pitch memory tasks (Fig. 3b). However, we found no STG areas that were significantly modulated by the memory load (Fig. 3d). Therefore, the posterior STG activations during the present pitch memory tasks were presumably related to attentive processing of sounds in general and not to auditory working memory per se. Previous studies have also implicated IPL in auditory working memory (Martinkauppi et al., 2000; Gaab et al., 2006). One study applied spatial n-back tasks and reported activation increases with increasing memory load in various parietal lobe locations including IPL (Martinkauppi et al., 2000). Consistently, in the present study, we found bilateral IPL activation clusters that were strongly associated with the pitch memory tasks. These IPL clusters showed higher activity during the pitch memory tasks, but not during the pitch discrimination tasks as compared with the visual task (Fig. 3b), greater activations during the pitch memory tasks than during the pitch discrimination tasks (Fig. 3c), and increasing activity with an increasing memory load (Fig. 3d).
Unexpectedly, we found that wide areas of the anterior AC and anterior insula were strongly deactivated during the pitch memory tasks. In these areas, activations were lower during the memory tasks than during the visual task (Fig. 3e) and decreased with increasing memory load (Fig. 3f). Interestingly, the areas where activations decreased with memory load included HG and areas in the posterior STG. To our knowledge, such task-related decreases in primary and nonprimary human AC activations have not been reported before. These results might be explained by the characteristics of the present auditory tasks, which employed a fast presentation rate (one sound every 0.8–1 s; see Materials and Methods). In a previous study applying auditory (spatial) n-back tasks with a slower rate (one sound every 3.1 s), RTs were significantly longer in the 3-back condition than in the 1-back condition (Martinkauppi et al., 2000). In the present study, responses became faster with increasing memory load, and the shortest RTs were observed in the 3-back memory task. In addition to an unusually fast presentation rate, the present n-back tasks were complicated by the requirement to categorize sounds as “low,” “medium,” or “high.” Perhaps in the difficult 2-back and 3-back tasks, the subjects had to give their response as quickly as possible in order to return to rehearsing the categorized sequence of past sounds before perceiving and categorizing the next sound. Although both the pitch discrimination and pitch memory tasks required pitch processing, the present discrimination tasks required detailed pitch analysis of both the first and the last 100 ms halves of each tone, while the memory tasks required the categorization of tones into low, medium, or high pitch groups (Fig. 1). During these memory tasks, detailed pitch analysis may have been halted actively; that is, brain processes involved in pitch analysis may have been deactivated as soon as the pitch category was resolved, to save resources and time for the actual memory task. In the literature, task-induced deactivation is commonly defined as a relative decrease of activation during active tasks as compared with a “resting” baseline. One theory explaining deactivations observed in such comparisons postulates that they are caused by the interruption of processing that occurs “as default” during the resting state (McKiernan et al., 2006). In line with this theory, we suggest that the present task-dependent deactivations observed during the active auditory memory tasks are due to the interruption of pitch processing that occurs as default in the AC for each incoming sound. However, these accounts of the present unexpected findings are unavoidably post hoc in nature and should be carefully tested in future fMRI experiments.
Our results are in line with the suggestion that anterior and posterior AC belong to functionally separate auditory processing streams. We found that activations of AC areas anterior to HG were enhanced during pitch discrimination and decreased during pitch memory tasks. Furthermore, areas showing increased activations during pitch discrimination and memory tasks seemed at least partially segregated: while HG and areas of the anterior STG showed enhanced activations only during pitch discrimination, activity in the IPL was more strongly associated with the pitch memory task. However, unlike in previous studies suggesting spatial and nonspatial processing in the posterior and anterior processing streams, respectively, the present anterior–posterior differences in AC activations were observed between pitch discrimination and pitch memory tasks both requiring analysis of pitch information in spatially stationary sounds.
As discussed above, in the present and previous studies, IPL activations were modulated by memory load (Martinkauppi et al., 2000). Interestingly, in the present study, IPL activations seemed to increase also with increasing demands in the pitch discrimination task (Fig. 3d), although this effect was seen only with a more lenient statistical threshold. Together, these results suggest that the present IPL activations were related to auditory task difficulty.
In conclusion, while the present results are partly consistent with the prevailing dual-stream model of auditory processing, which assumes an anterior “what” and a posterior “where” pathway, our results suggest that this division is not necessarily only between spatial and nonspatial processing but that a similar anterior–posterior division is also present between auditory tasks with spatially invariant sounds. The present results suggest that areas in the posterior AC are involved in tasks that cannot be completed within the default-mode processing performed in the anterior AC processing stream. Subsequent studies are needed to investigate systematically how different nonspatial and spatial auditory tasks with varying task demands affect the distribution of activations in the human AC. Our results indicate that the dynamics of task-dependent modulations within human AC are considerably more complex than generally assumed.
Footnotes
This work was supported by the Academy of Finland (Grant #210587) and Research Funds of the University of Helsinki. We thank Drs. G. Christopher Stecker and David L. Woods for insightful comments and Ms. Suvi Talja for her skilled assistance in programming data analysis tools. T.R. is the Principal Investigator and carried the main responsibility for designing the experiment, analyzing the fMRI data, and writing the article; S.K. participated in designing the experiment, implemented and performed the experiment, analyzed the behavioral data, and participated in writing; O.S. participated in the analysis of anatomical MRI data and in writing; K.A. participated in designing the experiment and writing the article.
Correspondence should be addressed to Teemu Rinne, Department of Psychology, PO Box 9, FI-00014 University of Helsinki, Finland. E-mail: teemu.rinne@helsinki.fi