We used functional magnetic resonance imaging to examine how modality of input affects functional network organization beyond the sensory cortices for simple working memory tasks. The stimuli were auditory or visual bandpass-filtered white noise. On a given trial, three stimuli with differing center frequencies were presented in succession. For temporal sequencing tasks, participants indicated when the stimulus with the highest frequency content appeared. For comparison tasks, participants indicated whether the frequency content of the last stimulus was lower, intermediate, or higher than that of the first two stimuli. Task difficulty was equated by establishing equivalent accuracy thresholds across subjects. We used behavioral spatiotemporal partial least squares (ST-bPLS) analysis to identify neural patterns capturing the optimal association between brain images and reaction time. Because of statistical instabilities, subjects were divided into a SLOW group and a FAST group based on a median split of reaction times. ST-bPLS identified a significant interaction between stimulus modality and task demands for both groups, indicating that task-dependent brain–behavior correlations changed with stimulus modality. The large-scale activity pattern associated with this effect included prefrontal and parietal cortex for the SLOW group and parietal cortex and cingulate for the FAST group. For the FAST group only, ST-bPLS also identified a significant main effect that differentiated tasks independent of modality; the pattern associated with this effect included prefrontal and parietal cortex. These results confirm that modality of input affects network configuration even outside of the sensory cortices but that network configuration may vary with behavior.
The notion that cognition results from large-scale neural network operation has been proposed in various forms throughout the history of neuroscience (Finger, 1994; Bressler, 1995, 2002). Advances in functional neuroimaging have provided empirical validation of this proposition. One principle that developed from the idea of large-scale neural network operation is neural context, in which regional contribution to a cognitive operation is determined by other coactive brain regions (McIntosh, 1999; Bressler, 2003). A different form of context is situational context. Situational context represents environmental factors, including sensory input and response demands of both the external and internal milieux. In most circumstances, neural context is shaped by situational context (Bressler and McIntosh, 2007).
Often, neuroimaging research attempts to map the brain in terms of one aspect of situational context, such as the type of information processed, or the process elicited by the task. For example, several alternatives have been offered to explain the neural organization of working memory. Although many parts of the brain show activity patterns consistent with working memory operations, most neurophysiological and lesion studies have emphasized the contribution of regions in the prefrontal cortex (PFC) (Fuster, 1990; Goldman-Rakic, 1990). Specifically, there appears to be a dorsal/ventral division of the lateral PFC, related to either spatial versus nonspatial information (Wilson et al., 1993) or maintenance versus manipulation of stored information (Petrides, 1994; Owen et al., 1996). Alternatively, following from the idea that both the auditory and visual system are organized into dorsal and ventral processing streams, several studies have looked for input modality-based dissociations in prefrontal cortex (Cavada and Goldman-Rakic, 1993; Distler et al., 1993; Rauschecker et al., 1997; Belin and Zatorre, 2000; Romanski et al., 2000; Alain et al., 2001).
Given that all three aforementioned proposals about the organization of working memory have received support, perhaps looking at each of these mappings independently is too simplistic. A study by Rama and Courtney (2005) tested two input modalities (i.e., auditory and visual) within one information type (i.e., nonspatial). They measured brain activity with functional magnetic resonance imaging (fMRI) while participants performed a delayed recognition task with faces and voices. The dorsal aspect of the ventral PFC was more active during the face delay, and the inferior aspect of the ventral PFC was more active during the voice delay. The authors concluded that there was a subtle cross-modal dissociation in the ventral PFC during working memory maintenance of face and voice stimuli within a nonspatial domain. However, they used stimuli that differed both physically and semantically. Therefore, it is possible that activation differences attributed to sensory modality were actually caused by differing semantic representations.
The current experiment examined input modality and task demand effects on neural network organization in simple working memory tasks using semantically devoid stimuli (Bennett and Cortese, 1996). We constructed matrices of one-dimensional bandpass-filtered noise and presented them auditorily as noise bursts or visually as grayscale images. Participants performed four experimental tasks: auditory temporal sequencing, visual temporal sequencing, auditory comparison, and visual comparison. These tasks were as similar as possible between modality, allowing us to look at the effect of input modality on neural network organization. Within modality, the temporal sequencing and comparison tasks had different task demands, allowing us to examine the effect of performance strategy.
Materials and Methods
Seventeen participants took part in the study. Data from five participants were excluded because of improper task performance or technical difficulties. Data from the 12 remaining participants (six males; mean age, 27.4 years; range, 20–36 years) were used in the analyses. All were right handed, reported no history of major medical, neurological, or psychiatric disorders, had normal hearing, and had normal or corrected-to-normal vision. All participants gave informed consent in accordance with the Institutional Review Board of the University of Toronto and Baycrest Centre.
Each participant performed the following experimental tasks: auditory temporal sequencing, auditory comparison, visual temporal sequencing, and visual comparison with bandpass-filtered white-noise stimuli. In the auditory conditions, the noise stimuli were played as sound bursts. In the visual conditions, the noise stimuli were displayed as visual textures. These tasks were as similar as possible between modality and as different as possible within modality (given the constraint that we used identical stimulus presentations for each task). On the day of scanning, observers also performed an auditory and a visual control task.
In each experimental trial, three noise stimuli appeared successively for 500 ms, with a blank (silent/gray) interstimulus interval of 500 ms. The center frequency of the bandpass filter differed for each stimulus. Figure 1 illustrates the sequence of events for each trial. After the onset of the third stimulus, participants pressed one of three response keys to indicate their response. For auditory temporal sequencing, participants indicated when the tone with the highest pitch sounded: first, second, or third. For auditory comparison, participants compared the third sound with the first two sounds. They indicated whether the third sound was lower, intermediate, or higher in pitch compared with the first two sounds. For visual temporal sequencing, participants indicated when the visual texture with the highest spatial frequencies appeared. For visual comparison, participants compared the last texture with the first two textures. They indicated whether the spatial frequency content of the last texture was lower, intermediate, or higher than the first two textures. Control trials were identical to experimental trials, except that the center frequency of the bandpass filter was the same for all three stimuli, and participants pressed all three response buttons after the third stimulus was presented. On the day of scanning, the intertrial interval was chosen pseudorandomly and lasted 3, 5, 7, 9, or 11 s.
Experimental participation took place across 4 d. To ensure that practice effects did not play a role in our fMRI findings, all participants performed the tasks outside of the scanner on the first 3 d of testing. Control tasks were performed during scanning only. On the first day, participants completed 210 trials of each experimental condition, blocked by condition. Participants completed a few trials before data collection to familiarize themselves with the tasks. The order of presentation for each task was random. Participants were instructed to respond as quickly and as accurately as possible by pressing one of three keys on the computer keyboard. They did not receive feedback on their performance in any session. Based on the data collected, psychometric functions for center frequency ratio were determined (see stimulus description below), and 80% correct thresholds were estimated from best fitting Weibull curves using the QUEST adaptive staircase procedure (Watson and Pelli, 1983; Press et al., 1989). This method ensured that the experimental tasks were equally difficult for all participants. In the event that QUEST was unable to calculate an 80% threshold for an observer, that observer repeated one block of the task for which they had no threshold. On the second day, participants performed 50 trials of each experimental task at their 80% thresholds. Again, the order of presentation for each task was random. The third day took place the day before scanning in a simulated MRI environment. Each participant performed 100 trials of each experimental task, blocked by condition, and presented in random order. Again, 80% thresholds were calculated using QUEST. In the event that QUEST was unable to calculate an 80% threshold for an observer, that observer repeated one block of the task for which they had no threshold. On the day of scanning, each participant performed 40 trials of each experimental task as well as 40 trials of each control task. 
The trials were presented in four runs, with two blocks of five trials of each task. Blocks within runs were presented in random order across participants. Before each block, participants were presented with an instruction image indicating which task they would perform next and which response the first response key indicated.
The visual stimuli were generated by filtering one-dimensional Gaussian white-noise fields with a two-octave frequency filter (Fig. 1). During each trial, three textures appeared, differing in center frequency. The base frequency was jittered around ∼2 cycles/degree (c/°) by ±20%. The remaining two center frequencies increased by a constant ratio (e.g., at a ratio of 2 and a base frequency of 2 c/°, the center frequencies were 2, 4, and 8 c/°). Task difficulty could be increased by decreasing the center frequency ratio. The order of presentation for the textures was chosen randomly. Each texture was 256 × 256 pixels in size and was generated randomly. Peak Michelson contrast was 38% and was modulated with a two-dimensional Gaussian envelope. The average luminance of the stimulus, the gray background, and the gray interstimulus display was 15 cd/m2.
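The texture-generation procedure can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual stimulus code: the FFT-based filter, the expression of frequency in cycles per image, and the omission of the contrast modulation and luminance scaling steps are our simplifications.

```python
import numpy as np

def bandpass_noise_texture(center_freq, size=256, octaves=2.0, seed=None):
    """One-dimensional Gaussian white noise, bandpass-filtered around
    center_freq (here in cycles per image; mapping to cycles/degree
    depends on viewing geometry, which we leave out). The filtered row
    is replicated vertically to form a size x size texture."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(size)
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(size, d=1.0 / size)        # cycles per image
    lo = center_freq / 2 ** (octaves / 2)              # two-octave band,
    hi = center_freq * 2 ** (octaves / 2)              # centered geometrically
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0        # zero out-of-band power
    filtered = np.fft.irfft(spectrum, n=size)
    return np.tile(filtered, (size, 1))                # 1-D noise as a texture
```

A trial's three textures would then use center frequencies `base`, `base * ratio`, and `base * ratio**2`, with `base` jittered by ±20%.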
The auditory stimuli were generated and presented in the same manner as the visual stimuli, except that the 500 ms stimulus presentation included a rise and decay of 50 ms, and the base frequency was jittered around 600 Hz. The stimuli were presented at a sound level that was comfortable for the participant.
A Macintosh iBook (Apple Computers, Cupertino, CA) controlled stimulus presentation and response recording. On the first 2 d of testing, participants viewed the stimuli binocularly on the iBook monitor or listened to them binaurally through the iBook speakers from a comfortable distance. Participants indicated their responses by pressing one of three specified keys (F, J, or K) on the keyboard of the iBook. The experiment took place in a dimly lit, quiet room. On the third day, participants performed the tasks in an MRI simulator. Visual stimuli were projected using a Boxlight 6000 projector (Boxlight, Poulsbo, WA) onto a rear-projection screen and viewed by the participants through a mirror mounted in the head coil of the simulator. Auditory stimuli were delivered to the participant at a comfortable sound level by fMRI-compatible, acoustically padded headphones (Avotech, Jensen Beach, FL). Participants used their right index, middle, or ring fingers to indicate their responses on a Lumitouch Reply System response box (Lightwave Medical Industries, Burnaby, British Columbia, Canada). On the fourth day, participants performed the tasks during fMRI acquisition. Stimulus presentation and response recording were performed in the same manner as in the MRI simulator.
Regional cerebral activity was measured using a 1.5 T Signa MR scanner with a standard quadrature head coil (CV/I hardware, LX8.3 software; GE Medical Systems, Waukesha, WI). For each participant, a structural MRI was obtained by using a three-dimensional T1-weighted pulse sequence [repetition time (TR), 12.4 ms; echo time (TE), 5.4 ms; flip angle, 35°; 22 × 16.5 field of view; 256 × 192 acquisition matrix; 124 axial slices, 1.4 mm thick]. Functional imaging measured brain activation by means of the blood oxygen level-dependent effect with optimal signal contrast. Eighteen axial slices were acquired, each with a thickness of 7 mm. Functional scans were obtained using a single-shot T2*-weighted pulse sequence with spiral readout, off-line gridding, and reconstruction (TR, 2000 ms; TE, 40 ms; flip angle, 80°; 90 × 90 effective acquisition matrix).
Data processing was performed using Analysis of Functional NeuroImaging (AFNI) software (http://afni.nimh.nih.gov/) (Cox, 1996). Time series data were spatially coregistered to correct for head motion by using a three-dimensional Fourier transform interpolation, and motion-corrected images were then spatially transformed to an fMRI spiral scan template generated from 30 subjects scanned locally. This template was registered to the Montreal Neurological Institute MNI305 template. The transformation of each subject to the spiral template was achieved using a 12-parameter affine transform with sinc interpolation as implemented in SPM99 (http://www.fil.ion.ucl.ac.uk/spm/) (Friston et al., 1995). Images were smoothed with an 8 mm isotropic Gaussian filter before analysis. For each subject, “brain” voxels in a specific image were defined as voxels with an intensity >15% of the maximum value in that image. The union of masks was used for group analyses.
The primary image analysis was done with spatiotemporal partial least squares (ST-PLS) (McIntosh et al., 2004). ST-PLS operates on the entire data structure at once, which requires that the data be in matrix form. One data matrix is made for each group. Within a group, the rows of the data matrix are arranged as follows: condition blocks are stacked, and each subject has a row of data within each condition block. With n subjects and k conditions, there are n × k rows in the matrix. The columns of the data matrix contain the signal intensity measure at each voxel at each time point: the first column has the intensity for the first voxel at the first time point, the second column has the intensity for the first voxel at the second time point, and so on. With m voxels and t time points, there are m × t columns in the matrix. The hemodynamic response function (HRF) for any given condition normally lasts for several scans; thus, a “lag window” is defined for a short signal segment within a trial that represents the response of each voxel. In the current experiment, the lag window was 8 TRs (TR = 2 s; 16 s total), beginning at the onset of each trial. The HRF for each trial is expressed as the intensity difference from trial onset.
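The matrix arrangement just described can be sketched as follows. The data structures, the function name, and the averaging over trials are illustrative assumptions, not details of the actual PLS implementation:

```python
import numpy as np

def stpls_data_matrix(bold, onsets, lag_window=8):
    """Arrange fMRI data for ST-PLS (sketch).

    bold:   dict (condition, subject) -> array (n_voxels, n_scans)
    onsets: dict (condition, subject) -> trial-onset scan indices
    Rows: condition blocks stacked, one row per subject within a block.
    Columns: voxel-major, lag-minor (voxel 1 at lags 1..8, then voxel 2,
    ...), each trial expressed as the signal difference from trial onset
    and averaged over trials within a condition.
    """
    rows = []
    for key in sorted(bold):                         # condition blocks stacked
        data = np.asarray(bold[key], dtype=float)
        segs = [data[:, t0:t0 + lag_window] - data[:, [t0]]  # change from onset
                for t0 in onsets[key]]
        rows.append(np.mean(segs, axis=0).ravel())   # (n_voxels * lag_window,)
    return np.vstack(rows)
```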
Two forms of ST-PLS were performed. The first, behavioral ST-PLS, was the primary analysis to test the hypothesis of an interaction between task demands and stimulus modality in terms of brain–behavior correlations. The second, nonrotated ST-PLS, assessed whether there were differences between groups in task-dependent brain activity. These analyses are explained further below.
We used behavioral ST-PLS (ST-bPLS) to test whether strategy and modality interact in terms of brain–behavior correlations. ST-bPLS identifies latent variables (LVs) that capture task- and group-dependent patterns of brain–behavior correlations. The correlation of behavior measures and the fMRI signal is computed across subjects within each task, producing within-task brain–behavior correlations. Singular value decomposition of the brain–behavior correlation matrix produces three new matrices: voxel saliences, singular values, and task saliences. The variation across the task saliences indicates whether a given LV represents a similarity or difference in the brain–behavior correlation across tasks. This can also be shown by calculating the correlation between the brain scores (dot product of the voxel saliences and fMRI data) and behavior data for each task (see scatter plots in Figs. 3–5). The voxel saliences give the corresponding spatiotemporal activity pattern. They are displayed as a singular image, in which voxels are weighted in proportion to the strength and direction (positive or negative) of their brain–behavior correlation.
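A minimal sketch of this correlation-and-SVD step, assuming a single behavioral measure (reaction time) per task and illustrative data shapes:

```python
import numpy as np

def behavioral_pls(X_by_task, rt_by_task):
    """ST-bPLS core (sketch). X_by_task: per-task (n_subjects x
    n_voxels*lags) data matrices; rt_by_task: per-task z-scored RT
    vectors. Names and the single-behavior simplification are ours."""
    R = []
    for X, rt in zip(X_by_task, rt_by_task):
        Xz = (X - X.mean(0)) / X.std(0)        # z-score columns across subjects
        rtz = (rt - rt.mean()) / rt.std()
        R.append(Xz.T @ rtz / len(rt))         # Pearson r: one row per task
    R = np.vstack(R)                           # tasks x (voxels * lags)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    task_saliences, singular_values, voxel_saliences = U, s, Vt
    # brain scores: dot product of a voxel salience with each subject's data
    brain_scores = [X @ voxel_saliences[0] for X in X_by_task]
    return task_saliences, singular_values, voxel_saliences, brain_scores
```

Correlating the brain scores with the per-task RTs then reproduces the task-dependent correlation profile summarized by the task saliences.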
Only experimental tasks (auditory comparison, auditory temporal sequencing, visual comparison, and visual temporal sequencing) were used in the ST-bPLS analyses because we did not have RT measures for the control tasks. (Because of a software bug, we were unable to register reaction times when more than one response button was pressed at the same time; because participants in the control conditions responded by pressing all three buttons simultaneously, their reaction times could not be recorded.) Trials with outlying reaction times (>3 SD from the subject's mean RT) were excluded from the subject averages used in the analyses. Reaction time measures were expressed as z scores from each subject's mean and SD computed for all experimental conditions. The z-score transformation allowed us to compare changes in reaction time with equalized means and SDs, without being overly influenced by large differences in mean reaction time across subjects (Ben-Shakhar, 1985). The correlations between these z scores and the task-specific fMRI data were computed across subjects and served as the input for ST-bPLS.
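The RT preprocessing for one subject can be sketched as follows; whether the subject's mean and SD were computed from pooled trials or from condition means is our assumption, as are the condition labels:

```python
import numpy as np

def rt_condition_zscores(trial_rts):
    """trial_rts: dict condition -> list of one subject's trial RTs.
    Drop trials >3 SD from the subject's overall mean, then z-score the
    per-condition mean RTs against the subject's own mean and SD across
    all experimental conditions (sketch of the procedure described)."""
    pooled = np.concatenate([np.asarray(v, dtype=float)
                             for v in trial_rts.values()])
    mu, sd = pooled.mean(), pooled.std()
    cond_means = {}
    for cond, rts in trial_rts.items():
        rts = np.asarray(rts, dtype=float)
        keep = np.abs(rts - mu) <= 3 * sd          # exclude outlying trials
        cond_means[cond] = rts[keep].mean()
    m = np.array(list(cond_means.values()))
    return {c: (v - m.mean()) / m.std() for c, v in cond_means.items()}
```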
Statistical assessment for ST-bPLS is done using permutation tests for the LVs and bootstrap estimation of SEs for the voxel saliences. The permutation test assesses whether the effect represented in a given LV is sufficiently strong, in a statistical sense, to be different from random noise. The SE estimates of the saliences from the bootstrap tests are used to assess the reliability of the nonzero saliences on significant LVs.
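The two resampling assessments can be illustrated with a simplified single-task version, in which the "LV strength" is just the first singular value of a one-row brain–behavior correlation matrix. Iteration counts and function names are illustrative, not those of the actual PLS software:

```python
import numpy as np

def _saliences(X, y):
    """Brain-behavior correlations of y with each column of X."""
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    return Xz.T @ yz / len(y)

def corr_sv(X, y):
    """First singular value; for a single-row matrix this is the norm."""
    return np.linalg.norm(_saliences(X, y))

def permutation_p(X, y, n_perm=1000, seed=0):
    """Shuffle behavior across subjects; p = how often the resampled
    singular value meets or exceeds the observed one."""
    rng = np.random.default_rng(seed)
    observed = corr_sv(X, y)
    count = sum(corr_sv(X, rng.permutation(y)) >= observed
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

def bootstrap_ratios(X, y, n_boot=500, seed=0):
    """Resample subjects with replacement; bootstrap ratio per column =
    observed salience / bootstrap SE. Large |ratio| = reliable salience."""
    rng = np.random.default_rng(seed)
    n = len(y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        boots.append(_saliences(X[idx], y[idx]))
    return _saliences(X, y) / np.std(boots, axis=0)
```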
We expected ST-bPLS to identify the following brain–behavior correlations. In the event that input modality does not interact with strategy, we expected LVs to identify any or all of the following: (1) brain–behavior relationships that are common for all four tasks, (2) brain–behavior relationships that differentiate performance strategy (i.e., comparison tasks vs temporal sequencing tasks), and (3) brain–behavior relationships that differentiate input modality (i.e., auditory vs visual tasks). Alternately, if modality of input does interact with putative cognitive networks, the important pattern for ST-bPLS to identify would be the interaction between strategy and modality.
Nonrotated task ST-PLS.
ST-PLS is able to identify time-varying distributed activity patterns, or LVs, that differentiate experimental conditions. To look at whether there were differences between the groups in terms of task-dependent or group-dependent interactions, we used a two-group, ST-PLS analysis. For the sake of consistency with the ST-bPLS analyses, only experimental tasks (auditory comparison, auditory temporal sequencing, visual comparison, and visual temporal sequencing) were used in these analyses.
We used a nonrotated version of ST-PLS, in which a priori contrasts restrict the patterns derived from PLS (McIntosh and Lobaugh, 2004). This version has the advantage of allowing a direct assessment of hypothesized experimental effects. There is, however, no guarantee that these effects are the strongest, which can be identified using the original version of PLS with singular value decomposition. The effects in which we were interested were a main effect of modality, a main effect of task demands, an interaction between group and modality, and an interaction between group and task demands. Statistical assessment is similar to that used for ST-bPLS.
Measures of reaction time, accuracy, and threshold from the day of scanning are summarized in Figure 2. We performed a 2 (group: FAST vs SLOW) × 2 (modality) × 2 (task) repeated-measures ANOVA on the reaction time, accuracy, and threshold data from the day of scanning. For percentage correct data (Fig. 2A), all effects were statistically nonsignificant, indicating that task difficulty, as indexed by response accuracy, was equated across all tasks for FAST and SLOW participants. For threshold data (Fig. 2B), the main effects of modality (F(1,10) = 35.30; p < 0.0001) and task (F(1,10) = 37.92; p < 0.0001) were significant. The main effect of modality indicates that auditory thresholds were generally lower than visual thresholds. The main effect of task indicates that comparison thresholds were generally higher than temporal sequencing thresholds. The interaction between modality and task was also significant (F(1,10) = 8.48; p = 0.016), indicating that the difference between comparison and temporal sequencing thresholds was larger for visual than for auditory tasks. All other effects were statistically nonsignificant. For reaction time data (Fig. 2C), the main effects of group (F(1,10) = 26.57; p < 0.0001), modality (F(1,10) = 52.26; p < 0.0001), and task (F(1,10) = 12.83; p = 0.005) were significant. The main effect of group confirmed that FAST participants were indeed faster than SLOW participants. The main effect of modality indicates that visual tasks were performed more quickly than auditory tasks. Finally, the main effect of task indicates that comparison tasks were associated with longer reaction times than sequencing tasks. All other effects were statistically nonsignificant.
Initially, we conducted ST-bPLS analyses using thresholds, but these analyses yielded unstable results. Bootstrap tests from a whole-group ST-bPLS analysis with reaction time showed that there were instabilities in the data when the group was considered as a whole. There were behavioral and neural differences between subjects who performed the tasks with fast reaction times and those who performed the tasks with slow reaction times. Specifically, the brain scores of subjects who performed the tasks with fast reaction times appeared as outliers in the whole-group ST-bPLS. Therefore, subjects were divided into a SLOW and a FAST group. We obtained a mean reaction time for each subject based on reaction times measured during scanning for each task. We then used the median split of mean reaction times to divide subjects into the SLOW and FAST groups.
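The grouping procedure amounts to a median split of each subject's mean in-scanner reaction time, as in this sketch (subject labels are hypothetical):

```python
def median_split_groups(mean_rts):
    """mean_rts: dict subject -> mean RT across tasks during scanning.
    Returns (FAST, SLOW) subject lists via a median split; with an even
    sample (n = 12 here), the two halves are equal in size."""
    subjects = sorted(mean_rts, key=mean_rts.get)   # fastest first
    half = len(subjects) // 2
    return subjects[:half], subjects[half:]
```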
A two-group ST-bPLS using reaction time as the behavioral measure identified three significant LVs, all of which reflected interactions between task demands, modality, and group. To clarify these interactions, we present the results from the analyses conducted within each group separately.
The ST-bPLS for the SLOW group identified only one significant LV: an interaction between task demands and modality (interaction LV; p = 0.002) (Fig. 3; for a list of local maxima, see Table 1), indicating that task-dependent brain–behavior correlations differ with modality of input. None of the LVs representing main effects was significant by permutation test (commonality LV, p = 0.96; task-demands LV, p = 0.25; modality LV, p = 0.72). The large probability values for the main effects suggest that, despite the relatively small sample size, the lack of significance was likely not a statistical power issue.
Dominant-negative weights (related to faster reaction times in visual temporal sequencing and auditory comparison) were located in the left dorsal parietal cortex and bilateral occipital cortex. Dominant-positive weights (related to faster reaction times in visual comparison and auditory temporal sequencing) were located in the left frontal operculum, right caudate, and bilateral cerebellum.
ST-bPLS analysis for the FAST group produced two significant LVs. The first LV differentiated brain–behavior correlations in task demands for comparison and temporal sequencing (task demands LV; p < 0.0001) (Fig. 4; for a list of local maxima, see Table 2). The singular image identified dominant-negative weights (related to faster reaction times for auditory and visual temporal sequencing) in right inferior prefrontal cortex, left superior prefrontal cortex, anterior cingulate, left motor cortex, left lateral temporal parietal cortex, left dorsal parietal cortex, and left lateral occipital cortex. Dominant-positive weights (related to faster reaction times for auditory and visual comparison) were in the left dorsal prefrontal cortex, left inferior prefrontal cortex, left middle temporal cortex, and right premotor cortex.
The second LV was an interaction between task demands and modality (interaction LV; p = 0.048) (Fig. 5; for a list of local maxima, see Table 2), indicating that task-dependent brain–behavior correlations differ with modality of input. The correlation profile (data not shown) indicates that the regions identified in the singular image do not show a reliable correlation with visual temporal sequencing. This is likely because most of the covariance associated with visual temporal sequencing is captured in the task demands LV (LV1). Dominant-negative weights (related to faster reaction times in auditory comparison) were located in right inferior temporal cortex, bilateral medial occipital cortex, and bilateral cerebellum. Dominant-positive weights (related to faster reaction times in auditory temporal sequencing and visual comparison) were located in left medial frontal cortex, left posterior superior temporal cortex, cuneus, and left lateral occipital cortex. The remaining two LVs for the FAST group were nonsignificant main effects (modality LV, p = 0.60; commonality LV, p = 0.98).
In summary, the neural patterns that support reaction time in the SLOW group had both frontal and posterior cortical involvement as seen in the interaction LV. The interaction LV in the FAST group had relatively less prefrontal and more posterior involvement. The strongest frontal engagement in the FAST group was observed in the task demands LV.
The two-group nonrotated ST-PLS identified two significant LVs, a main effect of modality, and an interaction between modality and group. All other effects were statistically nonsignificant.
The LV that showed the strongest effect was the contrast between auditory and visual tasks (modality LV; p < 0.001) (Fig. 6; for a list of local maxima, see Table 3). Dominant-negative weights (related to increased activation during visual tasks) were located in right middle prefrontal cortex, right posterior superior temporal cortex, left middle cingulate, left inferior parietal cortex, right dorsal occipital cortex, right ventral occipital cortex, and bilateral cerebellum. Dominant-positive weights (related to increased activation during auditory tasks) were located in bilateral anterior middle temporal cortex, right posterior superior temporal cortex, right temporoparietal cortex, right putamen, right superior colliculus, left posterior cingulate, and bilateral medial occipital cortex. Regional contributions to this pattern can be characterized by extracting the HRF for a dominant voxel over the selected time window. Figure 6 depicts one such profile, a superior temporal cortex voxel (near primary auditory cortex). Both groups show a stronger response to auditory versus visual stimuli, although the response is somewhat smaller in the SLOW group.
The second significant LV was an interaction between modality and group, indicating that there were group differences in the modality effect (modality × group interaction LV; p = 0.03) (Fig. 6; for a list of local maxima, see Table 3). Dominant-negative weights were located in left ventral prefrontal cortex, left anterior inferior temporal cortex, right posterior superior temporal cortex, left putamen, left insula, and left cerebellum. Dominant-positive weights were located in left medial occipital cortex and right cerebellum. Figure 6 depicts an HRF plot for a superior temporal cortex voxel (posterior to the voxel plotted for the main effect of modality). The voxel shows a strong auditory response in the FAST group but not in the SLOW group. When considered with the main effect, the nonrotated ST-PLS results indicate that both groups show a similar modality response but that the spatial extent is reduced in the SLOW group.
We examined the effect of input modality and task demands on neural network organization for simple working memory. Because of statistical instabilities, participants were divided into a SLOW and a FAST group based on the median split of reaction times measured during scanning. Both groups showed an interaction between modality and task in terms of the neural systems supporting behavior, but the regions recruited into these patterns were substantially different between groups.
Behavioral analyses from the current study confirmed that we were able to equate difficulty, as measured by response accuracy, across participants and tasks. Also, the behavioral analyses indicated that there were no group differences other than reaction time.
We used nonrotated ST-PLS to examine whether there were brain activity differences between the groups in terms of modality-dependent, task-dependent, or group-dependent interactions. We identified two significant LVs. The strongest LV was a main effect of modality. The second LV depicted an interaction between modality and group and was far more spatially restricted than the modality LV. Considered together, these patterns suggest that both SLOW and FAST participants showed similar stimulus modality responses but that the effect is smaller in SLOW participants. In other words, the modality-based difference between the groups is quantitative rather than qualitative.
The brain–behavior analysis (ST-bPLS) suggested that, in addition to the quantitative differences in the modality effect between groups, there were qualitative differences in the neural systems that support behavior. For the SLOW group, ST-bPLS identified one reliable pattern of activity: an interaction in brain–behavior correlations between modality and task demands. For the FAST group, ST-bPLS identified two reliable patterns of activity. The first pattern of brain–behavior relationships was a main effect of task demands. This LV suggests that, unlike SLOW participants, FAST participants were able to use similarities across modality to perform the tasks. The second reliable pattern of activity for the FAST group was an interaction in brain–behavior correlations between modality and task demands. The statistical reliability of LV1 and LV2 in the FAST group emphasizes that both patterns coexist and contribute to behavior.
Our results lead to two conclusions. The first is that modality of input interacts with putative cognitive networks, even outside the sensory cortices. The current experiment used semantically devoid auditory and visual stimuli that were as similar as possible between modality (cf. Rama and Courtney, 2005). In addition, it is unlikely that our stimuli activated semantic processes because our stimuli had objectively defined physical dimensions that were difficult to represent symbolically.
The second conclusion from the current study is that neural network organization varies with response speed. Questions asked during debriefing did not identify any overt strategy differences between the SLOW and FAST groups. However, the existence of the task demands LV for FAST participants suggests that the difference between groups is that FAST participants are able to use the similarities across modalities when they perform the tasks. This result is in line with the findings of Speer et al. (2003), who showed that covert cognitive strategies play an important role in modulating both behavior and brain activity during memory tasks. Speer et al. implicitly biased participants to use either a short-term or a long-term memory strategy while performing a word-list memory task. They found that the different biases led to different behavior and differential activation in the lateral and medial frontal cortices. The current results show that there are neural differences between FAST and SLOW participants. Although we cannot differentiate the groups based on overt strategy use, these data are consistent with the idea that there are multiple ways that a task can be instantiated at a neural level and that there are behavioral consequences of using one set of regions as opposed to another.
Current theories about the neural organization of working memory (Wilson et al., 1993; Petrides, 1994; Owen et al., 1996) assume that each location has a static function. According to this view, areas identified in the task demands LV should not overlap with the interaction LV. In Table 4, we highlight areas with high spatial proximity between the task demands LV and the interaction LVs. Spatial proximity was defined by whether significant clusters from the two LVs overlapped, regardless of the lag at which each cluster was defined. Noteworthy regions include middle prefrontal cortex [Brodmann area (BA) 9], anterior cingulate (BA 32), inferior occipital gyrus (BA 18), and superior temporal gyrus (BA 22). The high spatial proximity could indicate that there are small neural populations with varied function that are obscured by the limited spatial resolution of fMRI (Grill-Spector et al., 1999; Grill-Spector and Malach, 2001). However, if this were the case, one also would expect the anatomical connections within these sectors to vary dramatically. Nonhuman primate anatomical research has shown that this is not the case (Petrides and Pandya, 1999). There appears to be a general principle of brain anatomical connectivity wherein connectional changes vary slowly across space (Felleman and Van Essen, 1991).
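The proximity criterion just described reduces to a simple set operation: pool each LV's significant cluster voxels across all temporal lags and test whether any coordinates coincide. A minimal sketch, with hypothetical voxel coordinates chosen purely for illustration:

```python
# Hypothetical (x, y, z) voxel coordinates of significant clusters for two
# LVs, pooled across all temporal lags, as in the criterion described above.
task_lv_voxels = {(10, 20, 30), (11, 20, 30), (40, 5, 12)}
interaction_lv_voxels = {(11, 20, 30), (7, 7, 7)}

def clusters_overlap(a, b):
    """Two LVs are 'spatially proximal' if any of their significant
    cluster voxels coincide, regardless of the lag of origin."""
    return len(a & b) > 0

clusters_overlap(task_lv_voxels, interaction_lv_voxels)  # True for this toy example
```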
The spatial overlap between LVs may instead indicate that the function of a region is dynamic, depending on the interaction between neural context (i.e., functional connections) and situational context (e.g., the input modality and the type of response required for a given situation) (Bressler and McIntosh, 2007). In this light, perhaps it is not surprising that anterior cingulate and prefrontal cortex are engaged in both the task demands LV and the interaction LV. Broad psychological processes, such as attention and memory, will activate anterior cingulate and prefrontal cortex. This common activation is best explained by changes in the functional connections of a region. For instance, an fMRI study by Lenartowicz and McIntosh (2005) suggests that anterior cingulate activation can contribute to both memory and attentional processes when there are changes in the brain regions with which the anterior cingulate interacts. Subjects performed a standard version of a two-back working memory task with strong attentional demands and a cued version that promoted memory retrieval. Although both tasks activated the anterior cingulate, the functional connections of the anterior cingulate and the relationship of these activity patterns to memory performance were completely different in the two tasks. Therefore, the contribution of the anterior cingulate to memory- and attention-driven tasks was determined by other coactive brain regions. Similar task-dependent functional connectivity patterns have been reported for the medial temporal lobe (McIntosh et al., 2003) and for middle PFC (McIntosh et al., 1997).
The idea of dynamic function is perhaps more surprising for secondary visual cortex and auditory association cortex. Although such regions are typically considered part of distinct sensory systems, each has the anatomical capacity for interacting with areas that are part of other systems. For example, the primary (V1) and secondary (V2) visual cortices are generally believed to be visual areas that receive cortical connections from visually related areas only. However, primate anatomical studies suggest that V1 and V2 receive projections from auditory association and parietal areas (Rockland and Ojima, 2003; Schroeder et al., 2003). The capacity for sensory systems to interact with areas that are part of other systems enables regions to participate in more than one functional network, depending on the pattern of interactions with other regions (i.e., the neural context).
Overall, these results emphasize the interactive nature of brain processing, in which neural context allows the response properties of one element in a network to be profoundly affected by the status of other elements in that network. Furthermore, our results suggest that it is too restrictive to attempt to map the brain in terms of only one aspect of situational context. Environmental factors, including sensory input and response demands of both the external and internal milieux, interact to determine how a task is instantiated at the neural level.
This work was supported by Canadian Institutes of Health Research and Natural Sciences and Engineering Research Council of Canada grants (A.R.M.).
Correspondence should be addressed to Anthony R. McIntosh, Rotman Research Institute of Baycrest Centre, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1.