Although the lateral prefrontal cortex (lPFC) and dorsal premotor cortex (PMd) are thought to be involved in goal-directed behavior, the specific role of each area remains elusive. To characterize and compare neuronal activity in two sectors of the lPFC [dorsal (dlPFC) and ventral (vlPFC)] and the PMd, we designed a behavioral task for monkeys to explore the differences in their participation in four aspects of information processing: encoding of visual signals, behavioral goal retrieval, action specification, and maintenance of relevant information. We initially presented a visual object (an instruction cue) to instruct a behavioral goal (reaching to the right or left of potential targets). After a subsequent delay, a choice cue appeared at various locations on a screen, and the animals could specify an action to achieve the behavioral goal. We found that vlPFC neurons amply encoded object features of the instruction cues for behavioral goal retrieval and, subsequently, spatial locations of the choice cues for specifying the actions. By contrast, dlPFC and PMd neurons rarely encoded the object features, although they reflected the behavioral goals throughout the delay period. After the appearance of the choice cues, the PMd held information for action throughout the specification and preparation of reaching movements. Remarkably, lPFC neurons represented information for the behavioral goal continuously, even after the action specification as well as during its execution. These results indicate that area-specific representation and information processing at progressive stages of the perception–action transformation in these areas underlie goal-directed behavior.
To accomplish a goal-directed behavior, we often must initially determine a behavioral goal at an abstract level and then select an action to achieve that goal. For instance, if asked on the phone to come into an unfamiliar room, look for two objects placed side by side, and pick up the object on the left, you first establish the behavioral goal of capturing the left object. Later, when actually observing the two objects in the room, you select an action to reach the left object. Clearly, the determination of behavioral goals and action specification constitute two separate stages of a goal-directed behavior. Given this distinction, an interesting question is how each cortical area contributes to each of these processes. Although a growing body of evidence has indicated the involvement of the frontal cortex in diverse aspects of goal-directed behavior (Passingham, 1993; Wise et al., 1996; Miller and Cohen, 2001; Fuster, 2008; Tanji and Hoshi, 2008), more studies are required to determine the exact role of the individual frontal areas. In this report, we focus on two sectors of the lateral prefrontal cortex (lPFC) and dorsal premotor cortex (PMd) to compare their participation in behavioral goal determination and action specification.
Based on anatomical and functional studies, the lPFC is believed to have two major sectors: the dorsolateral part (dlPFC) (area 46/9) and the ventrolateral part (vlPFC) (area 45/12). Early studies stressed the involvement of the vlPFC in processing visual object information and of the dlPFC in visuospatial processing (Goldman-Rakic, 1988). However, later studies highlighted the importance of across-domain integration (Rao et al., 1997) and dynamic changes in multiple-domain representations (Meyer et al., 2011). Another series of experiments proposed that the vlPFC is involved in task-relevant signal selection and selective sensory attention (Rushworth et al., 2005), whereas the dlPFC is involved in information monitoring and abstract rule processing (Petrides, 1995; Owen et al., 1996; Wallis et al., 2001; Barraclough et al., 2004; Buckley et al., 2009). The PMd, in contrast, is anatomically distinct from the lPFC (Petrides and Pandya, 1999, 2002; Luppino et al., 2003). As for functional roles, both areas are involved in multiple aspects of motor behavior (Mitz et al., 1991; Deiber et al., 1997; Asaad et al., 1998; Toni et al., 1999; Rowe et al., 2000; Wise and Murray, 2000; Cisek and Kalaska, 2005), although only a few studies have proposed preferential involvement of each area (Boussaoud and Wise, 1993; di Pellegrino and Wise, 1993; Wallis and Miller, 2003; Hoshi and Tanji, 2004, 2006). Building on this foundation, we examined the specific roles of the three areas from the perspective of behavioral goal retrieval and action specification in goal-directed behavior, analyzing neuronal activity in each area while macaques were engaged in a behavioral task specifically designed to dissociate these two behavioral factors (Nakayama et al., 2008; Yamagata et al., 2009).
Materials and Methods
Animals and experimental conditions.
Two male monkeys (Macaca fuscata, monkeys 1 and 3) and one female monkey (Macaca fuscata, monkey 2), weighing 5.0–8.0 kg, were cared for in accordance with the National Institutes of Health guidelines and the guidelines of Tokyo Metropolitan Institute of Medical Science and Tamagawa University. In monkey 1, neurons were recorded from the lPFC and PMd; in the other two monkeys, neurons were recorded from either the lPFC (monkey 3) or the PMd (monkey 2). During the experimental sessions, each monkey sat in a chair with its head and left arm restrained. We installed a button in front of the chair at waist level. A 19 inch video monitor equipped with a touch-sensitive screen was placed in front of the monkey. Eye positions were monitored with a 240 Hz infrared eye-tracking system (resolution, 0.25° visual angle; R-HS-S6; Applied Science Laboratories).
We devised a task for the monkeys in which four behavioral phases were separated in time (see Fig. 1A): (1) encoding a visual object presented as an instruction cue, (2) representing a behavioral goal associated with the visual object, (3) specifying an action based on the combination of information (remembered behavioral goal and a choice cue signal), and (4) preparing and executing the action. After an intertrial interval of 3 s, each trial commenced when the monkey pushed the button with the right hand and gazed at a fixation point (a white circle, 1.4° visual angle) that appeared at the center of the touch-sensitive screen. If the monkey continued to gaze at the fixation point for 1200 ms, an instruction cue (11°) was presented for 800 ms to signal the animal to select either the right or left target (i.e., the behavioral goal). For monkeys 1 and 3, four instruction cues were presented: a green circle and a yellow square instructed the animal to select the target on the right, whereas a red diamond and a blue cross instructed selection of the left target (see Fig. 1B). For monkey 2, only two instruction cues were presented: a green circle and a red diamond instructed selection of the right and left targets, respectively (see Fig. 1B). These cues were presented randomly. Because no information about future targets was available at this stage, the monkeys were required to select right or left without specifying a forthcoming action. If the monkey continued to gaze at the fixation point for 1200 ms during the subsequent delay, a choice cue consisting of two gray squares (size of each box, 7°; 11° between the centers) appeared at one of six different locations on the screen (see Fig. 1C). At this point, the animal could specify what to do (i.e., action) for the first time. After 1500–2500 ms, the color changed from gray to white (the GO signal) if the monkey had already fulfilled the requirement of gazing at the choice cue for ≥300 ms. 
(The time for the fixation requirement at the end of the choice cue period was varied between 300 and 600 ms to prevent the monkey from predicting the onset of the GO signal.) If not, the onset of the GO signal was postponed until the fixation requirement was satisfied. After the GO signal, if the monkey reached for the target with the right arm within the next 1.5 s, the animal received a fruit juice reward 500 ms after touching the correct square.
Surgery and physiological recordings.
After completing the behavioral training, aseptic surgery was performed under pentobarbital sodium anesthesia (20–25 mg/kg, i.v.) with atropine sulfate. Antibiotics and analgesics were used to prevent postsurgical infection and pain. Polycarbonate screws were implanted in the skull, and two plastic pipes, rigidly attached with acrylic resin, were used to securely fix the head during the daily recording sessions. A part of the skull was removed, and a recording chamber was implanted.
Neuronal activity was recorded with glass-insulated Elgiloy-alloy microelectrodes (0.5–2.0 MΩ at 1 kHz), which were inserted through the dura mater using a hydraulic microdrive (MO-81-S; Narishige). Single-unit potentials were amplified with a multichannel processor and sorted using a multispike detector (MCP Plus 8 and ASD; Alpha Omega Engineering). The TEMPONET system (Reflective Computing) was used to control the behavioral task and save data for off-line analyses at 1000 Hz.
Neuronal activity classification.
In this study, we analyzed neurons whose activity was recorded in ≥50 successful trials. To detect neurons with task-related activity, we first counted the number of spikes from each neuron during successive 200 ms bins (38 bins in total) involving five task events: the onset of the instruction cue (10 bins; four before, one during, and five after onset); the onset of the choice cue (10 bins; four before, one during, and five after onset); the onset of the GO signal (five bins; four before and one around onset); the onset of the movement (four bins; one before, one during, and two after onset); and the onset of the reward (nine bins; four before, one during, and four after onset). We classified a neuron as “task related” if the discharge rate distribution (in spikes/second) significantly differed in at least one of the 10 trial types (five positions of the target on the screen, T1 to T5 in Fig. 1C, each of which could be associated with a left or right target; ANOVA, p < 0.001, not corrected).
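The screening step above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the data layout (one row of binned spike counts per trial), and the synthetic inputs are our own assumptions; only the decision rule (one-way ANOVA across the 10 trial types, p < 0.001 in at least one bin) comes from the text.

```python
import numpy as np
from scipy import stats

def is_task_related(bin_counts, trial_types, alpha=1e-3):
    """Flag a neuron as task related if spike counts in at least one
    200 ms bin differ across the 10 trial types (one-way ANOVA,
    p < 0.001, uncorrected), mirroring the screening criterion above.

    bin_counts  : (n_trials, n_bins) spike counts, one row per trial
    trial_types : (n_trials,) labels, e.g. 'T1-L' ... 'T5-R'
    """
    types = np.unique(trial_types)
    for b in range(bin_counts.shape[1]):
        groups = [bin_counts[trial_types == t, b] for t in types]
        _, p = stats.f_oneway(*groups)
        if p < alpha:
            return True
    return False

# Hypothetical example: a neuron whose count in one bin depends on the target
rng = np.random.default_rng(0)
labels = np.repeat([f"T{i}-{s}" for i in range(1, 6) for s in ("L", "R")], 20)
counts = rng.poisson(5.0, (labels.size, 38)).astype(float)
counts[np.char.startswith(labels, "T5"), 10] += 20.0  # strong effect in bin 10
print(is_task_related(counts, labels))
```

In practice one would also guard against bins with too few spikes, but the sketch conveys the bin-wise screening logic.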
For the activity during each of the successive 200 ms bins (38 bins in total as described above), we initially used a one-way ANOVA (p < 0.01) to analyze the spike count data in each bin by examining the four visual stimuli to identify activity selective for the visual objects used as the instruction cues (i.e., cue-selective activity). For the cue-selective neurons with the significant main effect of OBJECT (p < 0.01), we used a t test with Bonferroni's correction (p < 0.01) in which we compared activities in response to the left-instruction cue (RED DIAMOND vs BLUE CROSS) or to the right-instruction cue (GREEN CIRCLE vs YELLOW SQUARE). If the activities differed in either comparison, the cue-selective neuron was judged as OBJECT selective.
For neurons without OBJECT selectivity, we first applied a two-way ANOVA to examine the behavioral goals (i.e., left or right side of the choice cue) and the actions (i.e., the five target positions on the screen). Based on these analyses, we classified neuronal activity into four categories: (1) behavioral goal only (behavioral goal, <0.01; action, ≥0.01; and behavioral goal by action, ≥0.01), (2) action only (behavioral goal, ≥0.01; action, <0.01; and behavioral goal by action, ≥0.01), (3) selective for both (behavioral goal, <0.01, and action, <0.01; or behavioral goal by action, <0.01), and (4) nonselective (behavioral goal, ≥0.01; action, ≥0.01; and behavioral goal by action, ≥0.01). Neuronal activity classified as both may play a role in representing the choice cue location or in integrating it with the behavioral goal. To investigate these possibilities, we applied a second two-way ANOVA to neuronal activity classified as both to examine two main factors: the choice cue location and the behavioral goal. For these analyses, we used the data obtained with the cues at positions 1–5 in Figure 1C. We excluded data from trials in which the choice cue included positions 0 and 6 (see Fig. 1C), which were never used as reach targets. The second analysis resulted in the identification of two classes of activity: (1) activity selective only for the choice cue locations (choice cue, <0.01; behavioral goal, ≥0.01; and choice cue by behavioral goal, ≥0.01) and (2) activity selective for both the choice cue locations and the behavioral goal (choice cue, <0.01, and behavioral goal, <0.01; or choice cue by behavioral goal, <0.01). The results of the two sets of ANOVAs classified activity without object selectivity into five categories: (1) selective only for the behavioral goal, (2) selective only for the action, (3) selective only for the choice cue location, (4) selective for the choice cue location and the behavioral goal (mixed selectivity), and (5) selective for neither.
Overall, the above analyses led to the classification of neuronal activity into six categories: (1) selective for the visual objects, (2) selective only for the behavioral goals, (3) selective only for the actions, (4) selective only for the choice cue locations, (5) selective for the choice cue locations and the behavioral goals (mixed selectivity), and (6) selective for neither.
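As a concrete illustration of the first two-way ANOVA, the sketch below classifies one neuron's activity in one 200 ms bin from a table of spike counts organized as goals × target positions × repetitions. It is a simplified sketch under the assumption of a balanced design (equal repetitions per cell), which the actual trial counts need not satisfy; the function name and data layout are ours, while the p < 0.01 decision rules follow the criteria listed above.

```python
import numpy as np
from scipy import stats

def classify_goal_action(counts, alpha=0.01):
    """Classify one bin of one neuron's activity by a balanced two-way
    ANOVA with factors GOAL and ACTION (target position).

    counts : array of shape (n_goals, n_actions, n_reps), spike counts.
    """
    g, a, r = counts.shape
    grand = counts.mean()
    m_goal = counts.mean(axis=(1, 2))          # goal marginal means
    m_act = counts.mean(axis=(0, 2))           # action marginal means
    m_cell = counts.mean(axis=2)               # cell means
    ss_goal = a * r * ((m_goal - grand) ** 2).sum()
    ss_act = g * r * ((m_act - grand) ** 2).sum()
    ss_cell = r * ((m_cell - grand) ** 2).sum()
    ss_int = ss_cell - ss_goal - ss_act        # interaction sum of squares
    ss_err = ((counts - m_cell[:, :, None]) ** 2).sum()
    df_goal, df_act = g - 1, a - 1
    df_int, df_err = df_goal * df_act, g * a * (r - 1)

    def pval(ss, df):  # F test against the residual mean square
        return stats.f.sf((ss / df) / (ss_err / df_err), df, df_err)

    p_goal = pval(ss_goal, df_goal)
    p_act = pval(ss_act, df_act)
    p_int = pval(max(ss_int, 0.0), df_int)
    if (p_goal < alpha and p_act < alpha) or p_int < alpha:
        return "both"
    if p_goal < alpha:
        return "behavioral goal only"
    if p_act < alpha:
        return "action only"
    return "nonselective"
```

Activity classified as "both" would then go through the second two-way ANOVA (choice cue location × behavioral goal) in the same fashion.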
Detection of activity onset.
To investigate the onset of activity reflecting the visual objects and the behavioral goals after the instruction cue onset, we first applied ANOVA to find neurons whose activity during the 400 ms after the cue onset was selective for the visual objects or the behavioral goals. If the activity increased from the baseline period (the 500 ms period preceding the cue onset) more than it decreased, the activity was classified as excited; if it decreased more than it increased, the activity was considered inhibited. For object-selective neurons, the activity modulation was calculated as the activity difference between the visual-object presentations leading to the greatest and the smallest changes in activity. The activity modulation for the behavioral goals was calculated as the activity difference between the right- and left-instruction cues. Latency was defined as the point at which the activity modulation exceeded the mean ± 5 SD of the baseline period. This criterion was adopted because our preliminary analyses showed that lower threshold values often yielded implausibly short latencies for the frontal cortex, and because the neuronal responses were strong enough to rise well above this larger threshold (5 SD).
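The latency criterion above amounts to a simple threshold crossing on the modulation trace, which might be sketched as follows; the function name, the time representation (milliseconds relative to cue onset), and the toy trace are our own assumptions.

```python
import numpy as np

def selectivity_latency(modulation, t, t_base=(-500, 0), n_sd=5.0):
    """Latency at which an activity-modulation trace first exceeds the
    baseline mean + 5 SD, as in the criterion described above.

    modulation : difference trace (e.g., preferred minus nonpreferred
                 condition), one sample per time point
    t          : sample times in ms relative to cue onset
    t_base     : baseline window (the 500 ms preceding cue onset)
    """
    base = modulation[(t >= t_base[0]) & (t < t_base[1])]
    thresh = base.mean() + n_sd * base.std()
    above = (t >= 0) & (modulation > thresh)
    idx = np.flatnonzero(above)
    return t[idx[0]] if idx.size else None  # None if never crossed
```

A stricter implementation might additionally require the trace to stay above threshold for several consecutive samples, but the 5 SD cutoff alone captures the stated rule.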
Results

While two monkeys (monkeys 1 and 3) performed the task, we recorded extracellular neuronal activity in and around the principal sulcus (dlPFC) and in the inferior convexity region (vlPFC) in the left hemisphere (Fig. 1D) (see also Fig. 14). Recording sites, first determined by observing the principal and arcuate sulci during craniotomy, were verified by examining magnetic resonance images (1.5 T; Sonata; Siemens) (Saga et al., 2011). At the end of each experimental session, we applied intracortical microstimulation (ICMS) through the tip of an inserted electrode (11–44 pulses of 200 μs width at 333 Hz; current, <50 μA) to map the frontal eye field (FEF) (Bruce et al., 1985). Saccadic eye movements were elicited with ICMS in the anterior bank of the arcuate sulcus with currents of <40 μA. We operationally defined this area as the FEF and excluded cells recorded from it. We placed the border between the dlPFC and vlPFC at 3 mm ventral to the principal sulcus. Thus, the dlPFC and vlPFC largely corresponded to area 46d/46v/9 and area 45/12, respectively (Petrides and Pandya, 1999, 2002; Borra et al., 2011; Gerbella et al., 2012). In total, we found 1149 task-related lPFC neurons: 358 in the vlPFC (255 in monkey 1 and 103 in monkey 3) and 791 in the dlPFC (397 in monkey 1 and 394 in monkey 3). In addition, we found 773 task-related neurons in the PMd (330 in monkey 1 and 443 in monkey 2). The success rate exceeded 95% for the three animals; therefore, we analyzed the responses in successful trials.
Basic visuospatial response properties of neurons in the lPFC
We first examined the basic visuospatial response properties using a control task (Fig. 2A) in which a visual signal was presented in one of five horizontal arrays (Fig. 2B) while the monkeys gazed at a fixation point. We found that the visual responses of neurons in the vlPFC and dlPFC were biased toward a contralateral hemifield from the recorded hemisphere. Additionally, a significant fraction of the vlPFC neurons responded best to stimuli presented at the central visual field (Fig. 2C). These observations agree with previous reports (Suzuki and Azuma, 1983; Scalaidhe et al., 1999), confirming that we sampled neurons from the vlPFC and dlPFC.
Neuronal activity in the lPFC in response to the instruction cue
We found that many cue-selective lPFC neurons responded selectively to the four visual objects used as the instruction cue (p < 0.01, one-way ANOVA and t test with Bonferroni's correction). For example, the neuron shown in Figure 3A exhibited a burst of activity predominantly when a yellow square was presented. We also detected cue-selective neurons preferentially responding to two objects, either during the cue presentation (Fig. 3B) or in the delay period (Fig. 3C). It was possible that these neurons merely responded to two of the four objects. However, our subsequent analyses led to a different interpretation. The two objects that led to the highest and second highest activity turned out to be the combination of a red diamond and a blue cross or the combination of a green circle and a yellow square for >80% of dlPFC neurons and >60% of vlPFC neurons (Fig. 4A), and both combinations occurred more often than expected by chance (33%). Because both of these combinations of objects are associated with a particular behavioral goal (left or right), we suggest that cue-selective neuronal responses in the vlPFC and dlPFC are generally biased toward the behavioral goal.
To study the time course of the selectivity distribution in each area, neuronal activity was classified using ANOVA (see Materials and Methods) for every 200 ms bin (Fig. 5, A for vlPFC, B for dlPFC; Table 1). Neurons selective for the visual objects (green parts) were found primarily in the vlPFC (Figs. 5A, 14). In the first bin (101–300 ms) after the instruction cue onset, only 1% of dlPFC neurons were selective for the visual objects, whereas 8% of vlPFC neurons were selective (p < 0.001, Fisher's exact test between vlPFC and dlPFC). However, in the third bin (501–700 ms) and thereafter, the fraction of the object-selective vlPFC neurons was continuously smaller than the fraction in the first bin (101–300 ms) (p < 0.01, Fisher's exact test), suggesting that object selectivity was most obvious in the initial encoding phase. In contrast to neurons selective for the visual objects, neurons selective for the behavioral goals were commonly observed in the vlPFC and dlPFC soon after the instruction cue onset (Fig. 5, blue bars in A for vlPFC, B for dlPFC; Table 1). In the first bin (101–300 ms), 7% of vlPFC and 8% of dlPFC neurons were selective for the behavioral goals (p = 0.55, Fisher's exact test; Table 1). However, in the second bin (301–500 ms) and thereafter, neurons in the dlPFC were more often selective for the behavioral goals than were those in the vlPFC (p < 0.05, Fisher's exact test).
We next compared visual-object selectivity and instructed behavioral-goal selectivity in the vlPFC and dlPFC. For each cue-selective neuron responding differentially to the four objects used as the instruction cues (p < 0.01, one-way ANOVA), we first computed the activity difference between all six pairs of four objects. Subsequently, we computed the average difference of each neuron in response to four pairs of objects instructing a different behavioral goal [across-instruction difference (AID)] and to two pairs of objects instructing an identical behavioral goal [within-instruction difference (WID)]. If activity was correlated more with the instructed behavioral goals, AID would be greater than WID. In Figure 4B, we plotted AID against WID for each cue-selective neuron at three different time points (t1, 101–300 ms relative to the instruction cue onset; t2, 301–500 ms relative to the instruction cue onset; t3, −299 to −100 ms relative to the choice cue onset). The density functions revealed that neurons with object selectivity (green) were clustered in the proximity of the unity slope line, whereas the neurons selective for the behavioral goals (blue) were above the unity slope line. These results showed that there are two groups of neurons selective for the behavioral goals and for the objects and that their proportions differ between vlPFC and dlPFC. In the dlPFC (middle column), AID was greater than WID (p < 0.01, Kolmogorov–Smirnov test), indicating that activity of dlPFC neurons amply reflected the instructed behavioral goals. In contrast, in the vlPFC (left column), AID was not different from WID (p > 0.05, Kolmogorov–Smirnov test), indicating that vlPFC neuronal activity was also considerably influenced by the object features.
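The AID/WID computation reduces to averaging pairwise activity differences within and across the two goal-instructing pairs of objects, which can be sketched as below; the function name, object labels, and example rates are hypothetical.

```python
import numpy as np
from itertools import combinations

def aid_wid(rates, goal_of):
    """Across-instruction difference (AID) and within-instruction
    difference (WID) for one cue-selective neuron, per the definition
    above: the mean absolute activity difference over the four object
    pairs instructing different goals (AID) and over the two pairs
    instructing the same goal (WID).

    rates   : dict mapping object -> mean firing rate (spikes/s)
    goal_of : dict mapping object -> instructed goal ('left'/'right')
    """
    across, within = [], []
    for a, b in combinations(rates, 2):  # all six pairs of four objects
        d = abs(rates[a] - rates[b])
        (within if goal_of[a] == goal_of[b] else across).append(d)
    return np.mean(across), np.mean(within)
```

A purely goal-coding neuron (identical rates within each goal pair) gives WID = 0 and large AID, landing above the unity line in Figure 4B; an object-coding neuron gives comparable AID and WID, near the line.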
Overall, the above findings suggest that dlPFC neurons are primarily involved in representing the behavioral goal without encoding the visual objects, whereas vlPFC neurons initially encode the visual objects but also participate in representing the behavioral goals. We thus investigated whether the object-selective vlPFC neurons preferentially turned into behavioral-goal-selective neurons. The result revealed that the probability with which a vlPFC neuron gained behavioral-goal selectivity did not depend on its history of object selectivity (Fisher's exact test, p > 0.05).
Subsequently, we examined the onset of activity reflecting the visual objects and the behavioral goals. In the vlPFC, the object-selective excited activity began 130 ms after the cue onset (150 ms for inhibited activity; Fig. 6A), and the behavioral-goal-selective excited activity began at 150 ms (Fig. 6B). Thus, vlPFC activity reflecting the behavioral goals developed while activity reflecting the visual objects already existed. In the dlPFC, the behavioral-goal-selective activity began at 170 ms for excited responses and at 330 ms for inhibited responses (Fig. 6B). Comparison of the two areas suggests that behavioral-goal selectivity tended to develop later in the dlPFC than in the vlPFC. Examination of the peak selectivity of excited responses supports this view, because the peak was at 190 ms in the vlPFC and at 270 ms in the dlPFC.
Neuronal activity in the lPFC in response to the appearance of the choice cue
After the choice cue onset, the monkeys could specify a forthcoming action toward a correct target on the screen based on the combination of two sets of information: behavioral goal and choice cue location. During this period, three types of neuronal activity were newly observed in the lPFC. The first type reflected the location of the choice cues, which appeared at one of the six locations on the screen (Fig. 1C). For example, the neuron shown in Figure 7 was active when the choice cue was located at positions 4 and 5. The magnitude of the choice cue response was similar regardless of whether the right- or left-side choice cue was selected (as indicated by the two arrows). The second type of activity reflected the chosen actions regardless of the behavioral goals. For example, the neuron shown in Figure 8 was strongly active when the correct target was located at position 5 (as indicated by the two arrows). The third type reflected both the behavioral goal and the cue location or selected action (mixed selectivity). For example, activity of the neuron shown in Figure 9 was greatest when the behavioral goal was “right” and the selected action was to reach toward the rightmost target on the screen (as indicated by the arrow).
ANOVAs revealed that the three types of activity developed in an area-dependent manner (Fig. 5). In the first bin (101–300 ms), neurons selective for the choice cue locations (gray bars) were found more often in the vlPFC (12%) than in the dlPFC (7%; Fisher's exact test, p < 0.01; Table 2), whereas neurons selective for the behavioral goal (blue bars) were found more often in the dlPFC (8%) than in the vlPFC (2%). In contrast, neurons selective for both the behavioral goals and the choice cue location or selected action (yellow bars) were commonly found in both the dlPFC and the vlPFC (Table 2).
These results suggest that, upon specifying action, vlPFC neurons were strongly influenced by the choice cue location, whereas dlPFC neurons were biased toward the behavioral-goal information stored during the delay period.
To identify the development of action-selective information with higher temporal resolution, we applied the same ANOVA to the activity after calculating inverse interspike intervals with a resolution of 10 ms (Hoshi and Tanji, 2006). Figure 10 shows the fraction of action-selective neurons around the choice cue onset. The latency was 100 ms in the dlPFC and 110 ms in the vlPFC (p < 0.01 by Fisher's exact test, compared with the median of the five 10 ms bins before choice cue onset), indicating that activity reflecting the actions developed almost simultaneously in the dlPFC and vlPFC.
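The inverse-interspike-interval rate estimate used here can be illustrated with a short sketch: at each point of a fine (e.g., 10 ms) grid, the instantaneous rate is taken as the reciprocal of the interval between the surrounding spikes. The function name and time conventions (milliseconds) are our own assumptions; this is a generic sketch of the technique, not the authors' implementation.

```python
import numpy as np

def iisi_rate(spike_times, t_grid):
    """Instantaneous firing rate as the inverse interspike interval
    (1/ISI, converted to spikes/s) evaluated on a fine time grid,
    giving better temporal resolution than 200 ms bin counts.

    spike_times : sorted spike times in ms
    t_grid      : evaluation times in ms (e.g., every 10 ms)
    """
    rate = np.zeros_like(t_grid, dtype=float)
    for i, t in enumerate(t_grid):
        prev = spike_times[spike_times <= t]   # last spike at or before t
        nxt = spike_times[spike_times > t]     # first spike after t
        if prev.size and nxt.size:
            rate[i] = 1000.0 / (nxt[0] - prev[-1])  # ms -> spikes/s
    return rate
```

The resulting single-trial rate traces can then be fed to the same ANOVA, bin by 10 ms bin, to time the emergence of action selectivity.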
Neuronal activity in the lPFC during the preparation and execution of action
While the monkeys prepared to initiate the reaching movements during the choice cue period, the numbers of neurons reflecting the choice cue location (Fig. 5, gray parts) and neurons with mixed selectivity (Fig. 5, yellow parts) gradually decreased. By the time the GO signal appeared, neurons in the dlPFC and vlPFC rarely reflected the choice cue location (Table 2). In contrast, 10–11% of dlPFC neurons and 5–9% of vlPFC neurons continuously reflected the selected action (red bars). Neurons selective for the actions were found more frequently in the dlPFC than in the vlPFC (p < 0.01, Fisher's exact test; Table 2). During execution of the action, 13% of dlPFC neurons and 8% of vlPFC neurons reflected the intended action. These results suggest that lPFC neurons selectively represented the information for action until the action was executed with appropriate movements, whereas the selectivity for the choice cue location disappeared.
Moreover, lPFC neurons were found to represent the behavioral-goal information throughout the action preparation and action execution periods. The neuron shown in Figure 11 was active while the monkey prepared and executed the intended action under the behavioral goal of the left-side reach, regardless of the target position. Remarkably, the behavioral-goal-selective activity was present during the period when the intended action was prepared: 10–12% of dlPFC neurons and 5–9% of vlPFC neurons reflected the behavioral goals at this stage of action preparation. Neurons selective for the behavioral goals were found more frequently in the dlPFC than in the vlPFC (p < 0.05, Fisher's exact test; Table 2). Another unexpected finding was that the behavioral-goal-selective activity persisted during actual execution of the action: 13% of dlPFC neurons and 15% of vlPFC neurons reflected the behavioral goal during motor execution. Notably, behavioral-goal-selective neurons in the vlPFC and dlPFC were no less frequent than action-selective neurons during the preparatory period for action. Furthermore, during the execution of motor acts, the behavioral goals were more strongly represented than the specified actions among vlPFC neurons (p < 0.001, Fisher's exact test), although the two sets of behavioral selectivity were comparable among dlPFC neurons.
Given this abundance of behavioral-goal-selective activity, we examined whether individual lPFC neurons represented behavioral-goal information throughout the task phases, from the presentation of the instruction cue to the action execution, or whether separate groups of neurons held the behavioral-goal information during specific phases of the task period. We first found that only a minority of lPFC neurons exhibited behavioral-goal selectivity both during a period preceding (Fig. 12, −299 to −100 ms, t3) and following (501–700 ms, t5) the choice cue appearance. Conversely, 97% of vlPFC and 83% of dlPFC neurons selective for the behavioral goal after the choice cue (during t5) did not show selectivity before the choice cue onset (t3). We also found that 92% of vlPFC and 77% of dlPFC neurons that were behavioral-goal selective during the action execution period (t7) did not exhibit goal selectivity in the period preceding the choice cue appearance (t3). We then applied the comparison to all possible pairs of 200 ms bins before and after the choice cue onset. In 90–100% of vlPFC cases and in 73–83% of dlPFC cases, the behavioral-goal selectivity was reflected in only one bin of the pair. In addition, we found no tendency for a goal-selective neuron to gain a particular selectivity after the choice cue onset (p > 0.05, Fisher's exact test). We next expanded the time range of the comparison to 600 ms, looking for neurons with behavioral-goal selectivity in at least one of three bins (−699 to −500, −499 to −300, and −299 to −100 ms) before the choice cue onset and neurons with goal selectivity in at least one of three bins (301–500, 501–700, and 701–900 ms) after the choice cue onset (Fig. 13). In the vlPFC, 33 of the 43 neurons (77%) with goal selectivity before the choice cue onset lost that selectivity after the cue appeared (Fig. 13A); in the dlPFC, 125 of 178 such neurons (70%) did so (Fig. 13B).
Conversely, in the vlPFC, 56 of the 66 neurons (85%) that were goal selective after the choice cue onset gained that selectivity newly (Fig. 13A); in the dlPFC, 130 of 183 such neurons (71%) did so (Fig. 13B). These results indicate that a majority of neurons showing behavioral-goal selectivity before the choice cue onset ceased to represent that information after its onset, and that a large amount of goal-selective activity emerged de novo after the choice cue appearance.
Comparison of the response properties of PMd and lPFC neurons
Finally, we compared the response properties of lPFC neurons with those of PMd neurons (n = 773; 330 in monkey 1 and 443 in monkey 2) (Nakayama et al., 2008). Figure 5C [adapted from Nakayama et al. (2008)] shows the classification of the PMd neurons into the five categories (without the object-selectivity category). For the PMd neurons recorded from monkey 1, to which four visual objects were presented, we applied the same set of ANOVAs as for the lPFC neurons to classify the activity into the six categories, including object selectivity. This analysis revealed that only a small number of PMd neurons reflected the visual objects (3%; n = 9 of 330 during the 101–300 ms after the instruction cue onset; Fig. 5C, inset). We subsequently examined the selectivity of the PMd neurons in monkey 1 for the instructed behavioral goals and for the visual objects in the same manner as for the vlPFC and dlPFC neurons (Fig. 4). In >90% of cases, the visual responses were classified according to the behavioral goals (Fig. 4A, right). Furthermore, AID was greater than WID (Fig. 4B, right column). Remarkably, PMd neurons started to strongly reflect the instructed behavioral goals in the first bin (101–300 ms, t1) after the instruction cue onset.
The virtual absence of object selectivity in the PMd is consistent with the fact that the PMd does not interconnect with the inferotemporal cortex or vlPFC (Barbas and Pandya, 1987; Matelli et al., 1998; Luppino et al., 2003). We reported previously that the behavioral-goal selectivity appeared at 150 ms in the PMd (Fig. 6B, gray lines) (Yamagata et al., 2009). Thus, the goal selectivity in the PMd developed in parallel with that in the vlPFC (latency, 150 ms) and earlier than that in the dlPFC (latency, 170 ms).
After the choice cue onset, action selectivity began at 110 ms in the PMd (Fig. 10, right), revealing that it developed in parallel with that in the vlPFC (110 ms) and dlPFC (100 ms). It was noteworthy that the number of lPFC neurons exhibiting a variety of behavioral selectivity increased greatly just after the choice cue onset. In Figure 5D, we overlaid the fractions of task-selective neurons in the three areas, normalized with respect to the values obtained before the choice cue onset (−299 to −100 ms; Fig. 5A–C). The number of task-selective neurons increased 4.2-fold (in the vlPFC) and 2.4-fold (in the dlPFC) after the choice cue onset, whereas the increase was 1.7-fold in the PMd. A closer look at the selectivity distribution revealed that lPFC neurons began to show selectivity for multiple behavioral factors: the choice cue location (Fig. 5A,B, gray bars), the behavioral goal together with the choice cue location (yellow bars), and the action (red bars), while continuing to represent the behavioral goal (Fig. 5A,B, blue bars). In contrast, the information reflected in the PMd switched from the behavioral goal to the action (Fig. 5C); a majority of the goal-selective neurons (80%; 219 of 275 neurons) lost their goal selectivity after the choice cue onset (Fig. 13C), whereas a smaller number of neurons newly gained goal selectivity (n = 56). A considerable fraction of PMd neurons also integratively represented the behavioral goal and the choice cue location soon after the choice cue onset (yellow bars at 101–300 ms; Fisher's exact test, p < 0.001). These results imply that the lPFC represents diverse sets of task-relevant information, whereas the PMd transforms the behavioral-goal information into action-specific information toward the preparation and execution of action.
This notion is supported by the fact that lPFC neurons reflecting the action were more abundant just after the choice cue onset, whereas the action selectivity gradually increased in the PMd toward execution of the reaching movements (Figs. 5, 10).
During the behavioral stages of action preparation and execution, the number of PMd neurons reflecting the behavioral goals decreased, and neurons reflecting the action became dominant (Fig. 5C) (Nakayama et al., 2008). In contrast, lPFC neurons continued to represent information specifying both the behavioral goal and the action (Fig. 5A,B). Fisher's exact test confirmed that the distribution of PMd neurons selective for the behavioral goal and action differed from that of dlPFC and vlPFC neurons during motor preparation (at t6 in Fig. 5A; p < 0.001; Table 3) and motor execution (at t7; p < 0.001), indicating that the parallel representation of behavioral goals and actions in the lPFC persisted even at this behavioral stage, whereas the behavioral-goal representation faded away in the PMd.
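For a 2 × 2 contingency table (two areas by two selectivity classes), the Fisher's exact test used above can be computed directly from hypergeometric probabilities. The sketch below is a minimal two-sided implementation; the neuron counts in the example are made up and are not the counts reported in this study.

```python
# Minimal sketch of the two-sided Fisher's exact test for a 2x2 table
# [[a, b], [c, d]], e.g., goal- vs action-selective counts in two areas.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Hypergeometric probability of a table with top-left cell x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum probabilities of all tables at least as extreme as observed
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical counts: area 1 has 30 goal- and 70 action-selective
# neurons; area 2 has 70 and 30. The distributions clearly differ.
print(fisher_exact_2x2(30, 70, 70, 30) < 0.001)  # True
```

Fisher's exact test is preferred over a chi-square test here because it makes no large-sample approximation, which matters when some cells of the table hold few neurons.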
Localization of neurons
Figure 14 shows the distribution of the three classes of neurons at six different time points. The visual-object neurons were located primarily in the vlPFC, just after the instruction cue onset (left column). The behavioral-goal neurons were found in the vlPFC, dlPFC, and PMd until the choice cues were presented. Thereafter, they decreased in the PMd, whereas they were represented in the vlPFC and dlPFC until the movements were executed (middle column). The action-selective neurons in the vlPFC and dlPFC were prevalent just after the choice cue onset, whereas in the PMd, they increased during the preparation and execution of reaching movements (right column).
In the present study, we compared neuronal activity in the vlPFC, dlPFC, and PMd of monkeys performing a behavioral task that initially required the retrieval of a behavioral goal and, subsequently, the specification of an action. As summarized in Figure 15, although all three areas participated in the task, each showed area-selective participation in the representation and processing of task-relevant information at the individual task stages constituting the goal-directed behavior.
Involvement of the three areas in retrieving behavioral goals associated with visual objects
When the instruction cue was presented, a sizeable number of vlPFC neurons, but fewer dlPFC neurons, exhibited responses that were selective for object features of the visual cue (Fig. 5, green bars); PMd neurons showed little, if any, feature-selective activity. This finding is in agreement with the anatomical report that the vlPFC is the major receiving area of input from the inferotemporal cortex (IT) (Webster et al., 1994), suggesting that vlPFC neurons substantially participate in encoding visual-object feature signals for subsequent use. In contrast, participation of the dlPFC in encoding visual objects was limited. This finding appears to contradict a previous study reporting significant object encoding in the dlPFC (Meyers et al., 2012). This apparent discrepancy is likely because the previous study required monkeys to memorize the visual objects themselves for some time, whereas our study required prompt transformation of visual-object information into behavioral-goal information.
The object-feature selectivity in the vlPFC was rapidly replaced with activity selective for the behavioral goal (Figs. 4, 5). Time-course analyses of selectivity development revealed that behavioral-goal selectivity developed promptly in the PMd and vlPFC. In the PMd, activity reflecting the behavioral goal developed almost simultaneously with that in the vlPFC, at ∼150 ms. Because the vlPFC and PMd are not directly interconnected (Barbas and Pandya, 1987; Luppino et al., 2003), the behavioral-goal signal appears to develop in parallel in these two areas. The behavioral goal in the PMd may be retrieved through the basal ganglia (BG) (Nixon et al., 2004). It is probable that the two networks (i.e., IT–vlPFC and BG–PMd) take part in determining the behavioral goal associated with the visual signal (Kurata and Hoffman, 1994; Miyachi et al., 1997; Murray et al., 2000; Toni et al., 2002; Brasted and Wise, 2004). The dlPFC, in contrast, may retrieve the behavioral-goal information through corticocortical connections with the vlPFC or PMd, or via connections with the BG (Middleton and Strick, 2000).
As stated before, the visual-object information was promptly processed to reflect the behavioral goal in the lPFC, indicating that the visual signal was readily processed to provide information necessary to fulfill the requirement imposed by the behavioral task. This finding of transformation of visual information into behavior-specific information agrees with previous reports showing that the lPFC is involved in categorizing visual objects (Freedman et al., 2001), integrating visual objects and saccade directions (Asaad et al., 1998), determining GO/NOGO responses based on visual object signals (Sakagami et al., 2001), shifting cognitive set according to sensory feedback (Konishi et al., 1996; Nakahara et al., 2002), retrieving or selecting task-relevant information based on sensory signals (Thompson-Schill et al., 1998; Rushworth et al., 2005), and selecting the direction of saccades in a strategic manner based on visual stimuli (Genovesio et al., 2005). In other reports, sensory signals used as an instruction cue were found to generate information that specifies the behavioral rule [i.e., to follow either a spatial rule or a conditional rule (White and Wise, 1999) or to select a matching-to-sample rule or a non-matching-to-sample rule (Wallis et al., 2001; Bunge et al., 2003) in performing a forthcoming behavioral task]. The present study extends these studies by showing that the lPFC is involved in promptly encoding the abstract behavioral goal after the instruction cue onset.
Maintenance of behavioral-goal information in early- and late-task phases
During the delay period between the instruction cue and the choice cue, the dlPFC maintained the behavioral-goal information more than the vlPFC did, which agrees with previous reports indicating that the dlPFC is involved in monitoring information stored in working memory (Petrides, 1995). PMd neurons reflected the behavioral goal during the delay period even more strongly than lPFC neurons did. This activity is useful for the transformation of behavioral-goal information into action-selective information, as reported in our previous study (Nakayama et al., 2008).
It is remarkable that the behavioral-goal information maintained by lPFC neurons during the instructed delay period largely terminated soon after the appearance of the choice cue, and a new population of lPFC neurons began maintaining that information during the late phases of the behavioral task, throughout the action specification, preparation, and execution periods (Figs. 12, 13). This finding suggests that the early and late phases of behavioral-goal representation serve different objectives. It is hypothesized that the early behavioral-goal signal was integrated with the choice cue information to provide the basis for selecting the action, whereas the late signal was used to keep the behavioral-goal information on line for a variety of potential behavioral demands. The behavioral-goal representation in preparation for action agrees with previous reports indicating that lPFC activity may reflect the relative spatial position (Niki, 1974) or shape of targets to be captured or the rules governing the target selection (Hoshi et al., 1998, 2000). The behavioral-goal representation could further be used for behavioral-state monitoring (Milner, 1963; Petrides and Milner, 1982), long-term behavioral planning (Luria, 1966; Shallice, 1982), goal/subgoal structuring (Goel and Grafman, 1995), or willed behavioral control (Norman and Shallice, 1986; Frith et al., 1991; Lau et al., 2004) (Fig. 15).
In contrast, the time course of the behavioral-goal representation in the PMd appeared different and declined considerably after the choice cue. As reported earlier, this time course points to the transition of behavioral-goal information into the representation of selected action (Nakayama et al., 2008).
Representation and processing of information for action specification
When the choice cue was presented, we first observed responses encoding its spatial location (Fig. 7). This visuospatial response was greater in the vlPFC than in the dlPFC, suggesting that the vlPFC depends more on visual signals in specifying the action. Second, activity selective for the action appeared promptly in both areas (starting ≤110 ms), indicating readiness for action specification. This readiness seems to be due to the ample representation of the behavioral goal, especially in the dlPFC. Third, action selectivity in the PMd developed in parallel with that in the dlPFC and vlPFC, which may appear incompatible with previous studies reporting that activity reflecting a motor command tends to develop earlier in the PMd than in the lPFC (Wallis and Miller, 2003; Hoshi and Tanji, 2004, 2006). However, this discrepancy may be explained by differences in task design. In the current study, actions could be specified through direct interactions between the neurons maintaining the behavioral goals and those encoding the choice cue locations.
On considering the nature of the action specification signal in the lPFC, it is important to refer to the literature examining the aspects of action representation in this area. In an object-copying task (Averbeck et al., 2003), lPFC neurons were found to represent the shape of objects to be copied or the temporal sequences of actions for the process of copying. Subsequently, in a series of studies using a spatial maze task (Mushiake et al., 2006), lPFC neurons were found to represent predominantly the future direction of the motion of the objects to be manipulated rather than the arm movement to achieve that operation. Furthermore, lPFC neurons coded the spatial endpoints during tracking of a maze more often than the individual movements to be performed (Saito et al., 2005). More recently, lPFC neurons were found to represent the category of temporal patterns constituting a series of actions (Shima et al., 2007). In view of the wealth of abstract-level coding of actions demonstrated in previous reports, the action-selective activity of lPFC neurons found in this study likely codes the action substantially in an abstract sense; in particular, it likely represents an intentional aspect of action to capture the target on the screen rather than specifying movement itself. In addition, the possible involvement of lPFC neurons in coding spatial attention for action should also be considered (Lebedev et al., 2004; Rushworth et al., 2005).
In contrast, previous studies have repeatedly demonstrated that PMd neurons are involved in planning attributes of motor elements to be performed (Wise et al., 1996; Cisek and Kalaska, 2005). Thus, the action-selective activity in the PMd, directly connected to the primary motor cortex (Dum and Strick, 2005), most likely codes the direction of arm reach movements.
We compared neuronal activity in the vlPFC, dlPFC, and PMd during performance of a behavioral task that dissociated the following behavioral factors: retrieval of behavioral-goal information from a visual cue, maintenance of the goal-selective information, transformation of the behavioral-goal signal into action-specification signals, and preparation/execution of the action. We found that the magnitude of activity representing each of the behavioral factors and their time courses varied greatly among the three areas. Such selectivity in the representation and processing of information at progressive stages encompassing the perception–action transformation underlies the goal-directed behavior exemplified in this study (Fig. 15).
This work was supported by Grant-in-Aid for Young Scientists 19670004 (E.H.), Global COE Program of Tamagawa University (T.Y., Y.N.), and CREST (E.H.). We thank Y. Saga, M. Hashimoto, and N. Arimura for assistance and discussions, and T. Ogata and S. Hoffman for technical assistance.
The authors declare no competing financial interests.
Correspondence should be addressed to Dr. Eiji Hoshi, Frontal Lobe Function Project, Tokyo Metropolitan Institute of Medical Science, Kamikitazawa 2-1-6, Setagaya-ku, Tokyo 156-8506, Japan.