Articles, Behavioral/Cognitive

Action Preparation Shapes Processing in Early Visual Cortex

Tjerk P. Gutteling, Natalia Petridou, Serge O. Dumoulin, Ben M. Harvey, Erik J. Aarnoutse, J. Leon Kenemans and Sebastian F.W. Neggers
Journal of Neuroscience 22 April 2015, 35 (16) 6472-6480; DOI: https://doi.org/10.1523/JNEUROSCI.1358-14.2015
Author affiliations: 1Departments of Psychiatry, 2Radiology, and 3Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, 3508 GA, Utrecht, the Netherlands; 4Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, 6525 HR, Nijmegen, the Netherlands; 5Department of Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, the Netherlands. Affiliations by author: T.P. Gutteling (1, 4), N. Petridou (2), S.O. Dumoulin (5), B.M. Harvey (5), E.J. Aarnoutse (3), J.L. Kenemans (5), S.F.W. Neggers (1).

Abstract

Preparation for an action, such as grasping an object, is accompanied by an enhanced perception of the object's action-relevant features, such as orientation and size. Cortical feedback from motor planning areas to early visual areas may drive this enhanced perception. To examine whether action preparation modulates activity in early human visual cortex, subjects grasped or pointed to oriented objects while high-resolution fMRI data were acquired. Using multivoxel pattern analysis techniques, we could decode with >70% accuracy whether a grasping or pointing action was prepared from signals in visual cortex as early as V1. These signals in early visual cortex were observed even when actions were only prepared but not executed. Anterior parietal cortex, on the other hand, showed the clearest modulation for actual movements. This demonstrates that preparation of actions, even without execution, modulates relevant neuronal populations in early visual areas.

  • action preparation
  • feature perception
  • grasping
  • high-field fMRI
  • MVPA
  • visual cortex

Introduction

When planning to pick up an object, the brain needs accurate information about the location, size, and orientation of the target object. It is therefore essential that these relevant visual properties be examined as accurately as possible before a grasping action is initiated. Enhanced perception is thus triggered by the mere intention to grasp. Several behavioral studies have provided evidence for an “action-modulated perception” mechanism that automatically enhances relevant features during action preparation (Craighero et al., 1999; Bekkering and Neggers, 2002; Hannus et al., 2005; Fagioli et al., 2007), that is, when an action is intended but not yet executed. Recently we showed that orientation perception, an action-relevant feature, is enhanced during preparation of a grasping action compared with a pointing action, for which object orientation is not important (Gutteling et al., 2011, 2013).

Improved object feature perception shortly before action performance could be mediated by feedback connections between cortical motor control areas and early visual areas in the brain (Neggers et al., 2007; Van Ettinger-Veenstra et al., 2009; Gutteling et al., 2010, 2011, 2013). The close link between eye movement preparation and spatial visual attention depends on neuronal connections between the frontal eye fields and the visual cortex (Moore and Fallah, 2004; Ruff et al., 2006; Neggers et al., 2007; Van Ettinger-Veenstra et al., 2009; Gutteling et al., 2010; van Elk et al., 2010), supporting this hypothesis. Furthermore, an EEG study by van Elk et al. (2010) observed preparatory activity in occipital areas during grasping preparation, suggesting that a similar mechanism exists for manual motor actions. Using fMRI, increased activity was found in visuomotor areas of the parietal cortex when grasping was compared with simple reaching with the arm (Culham et al., 2003; Cavina-Pratesi et al., 2007), further confirming this notion.

A likely source of top-down feedback in case of grasping is the anterior intraparietal area (aIPS), which is involved in the planning and updating of grasping actions (Binkofski et al., 1999; Murata et al., 2000; Culham et al., 2003; Tunik et al., 2005; Baumann et al., 2009). Consistent with this, we recently demonstrated that the action-specific enhancement of orientation perception during grasping preparation is modulated when stimulating aIPS with TMS (Gutteling et al., 2013) just before action execution.

As orientation information is extracted in primary visual cortex (Hubel and Wiesel, 1974), we hypothesize that orientation processing in the early visual cortex is altered during grasping preparation, possibly as a consequence of feedback from aIPS, supporting subsequent visually guided action.

Here we aim to uncover influences of grasping preparation (the phase after a go cue but before execution) on early visual areas using high-resolution fMRI at 7 tesla (7T) and a surface-based multivoxel pattern analysis (MVPA) searchlight approach. Given that orientation perception is enhanced during grasping but not pointing preparation (Gutteling et al., 2011), we hypothesize that preparing to grasp an object enhances orientation sensitivity in the orientation representation of early visual cortex, whereas preparing to point at the object does not.

Materials and Methods

Subjects

Six healthy subjects (2 male, mean age 25.8 years, SD 2.5) with normal or corrected-to-normal vision participated in this study. Five of the six subjects were right handed. This study was approved by the Medical Ethical Committee of the University Medical Center Utrecht. All subjects gave written informed consent before participation.

Tasks

Subjects performed three tasks in two separate fMRI sessions. In one session, subjects performed a grasping and pointing task to oriented bars. In the other session, subjects engaged in a passive viewing orientation task, to isolate voxels selective for the orientations used in the grasping/pointing task, and a standard retinotopic mapping task to delineate early visual areas.

Grasping/pointing task.

Subjects were placed in the 7T MRI scanner, and a custom-made MR-compatible “grasping/pointing device,” shaped as a small table, was placed over the lap. This device enabled subjects to perform grasping or pointing actions toward two black bar objects mounted on a polystyrene foam extension with a vertical surface, against a white background, facing the subject. The two protruding bars were placed on either side of a fixation cross (6 cm from fixation cross to bar center, bar width 1.5 cm, length 6 cm), within easy reaching distance for the subject. The bars were oriented either at −45° (left) and +45° (right), or vice versa; this was counterbalanced across subjects. See Figure 1 for a graphical depiction of the setup. The device was visible through prism glasses worn by the subject while lying supine, providing the same perspective as when looking down at one's hands in a normal situation. Visual color cues were given by a back-projection system illuminating the white background of the device. Auditory cues (tones) were generated by speakers built into the MRI system and delivered through soft tubes inserted near the ear canal and closed off by moldable earplugs. The upper part of the right arm was restrained by a soft elastic band wrapped around the upper torso, to minimize head movement.

Figure 1.

Paradigm. A, Graphic depiction of the grasping/pointing setup in the MRI scanner. Subjects lay supine in the MRI bore. A posterior volume was scanned (indicated in red), here depicted on a single-subject anatomy (right). The partial volume was chosen to encompass both the early visual areas and the anterior parietal areas. Subjects were able to see the grasping/pointing display (bottom) through prism glasses. Color cues were given through a back projection system, illuminating the grasping/pointing display. When not giving cues, the back projection illuminated the grasping/pointing display (white). Every trial started with an instruction color cue (red/green, 1 s duration) indicating whether the action should be performed on the left or right bar. After a 2.5–3.5 s interval, an auditory cue instructed the subject to either perform the action (single beep) or withhold it (double beep). In case of a go cue, the subject performed the action that was instructed at the start of the block of trials. B, The retinotopic mapping stimuli consisted of a rotating checkerboard wedge (left) and an expanding checkered ring stimulus (middle). The black arrows did not appear in the actual stimulus presentation. The orientation grating stimuli (rightmost picture) consisted of two gratings of equal orientation, which appeared on either side of a fixation point for 2 s, followed by 8 s of rest.

Hand movements were recorded with an MR compatible dataglove (Data Glove 5 Ultra MRI, 5DT) that tracked the bend fraction of the individual fingers through fiber optic tubes in the dataglove.

Subjects performed four grasping and four pointing blocks, each lasting 4 min, with a 20 s pause between blocks and a longer break halfway. Total duration was ∼35 min. Every block started with an instruction for the action to be performed in the upcoming block (grasping or pointing), indicated by a color cue (red or green). Actions alternated between blocks. Within each 4 min block, the subject performed 20 trials (each containing one grasping or pointing movement) of 12 s each. Each trial started with an instructional color cue, indicating whether subjects should perform their action toward the left or right bar object. After 3.5–4.5 s (random interval), an auditory cue instructed the subject to perform the action (single beep, “go” cue). When a second beep sounded shortly afterward (400 ms), the prepared movement had to be withheld (double beep, “no-go” cue). This resulted in six possible events per trial: grasping instruction, grasping go, grasping no-go, pointing instruction, pointing go, and pointing no-go. Subjects were asked to maintain fixation on a central cross throughout the experiment. When pointing, subjects were asked to point to the center of the bar object. When instructed to grasp, thumb and index finger were to be placed at the short sides of the bar. All visual and auditory stimuli were presented by the stimulus presentation software (Presentation v14.9, Neurobehavioral Systems), which also controlled the dataglove recordings using a custom workspace extension.

Care was taken to ensure that grasping versus pointing classification results in the visual areas could not be explained by visual differences between grasping and pointing trials. The color cue used to indicate the target of the action was the same for grasping and pointing, and left and right cues were averaged together during data analysis. The (no-)go cue was chosen to be an auditory signal only.

Orientation task.

In a separate session, subjects performed a passive viewing orientation task. Stimuli were presented using a back-projection system and a Plexiglas screen mounted onto the receive coil, visible through a mirror system and prism glasses. Subjects viewed a fixation spot flanked by two oriented gratings, both either 45° or −45° (see Fig. 1B). The spatial frequency of the grating patterns was chosen to approximately match that of the oriented bars in the grasping/pointing task. The task lasted 8 min and contained 48 trials (10 s each), equally divided between 45° and −45° gratings. At the start of each trial, the gratings were presented for 2 s, followed by 8 s of rest.

Retinotopy.

Subjects also completed a standard retinotopic mapping protocol (Engel et al., 1994; Sereno et al., 1995), consisting of a rotating checkerboard wedge for polar angle mapping and an expanding checkered ring for eccentricity mapping (Fig. 1B). These tasks were done in a single session. For the polar angle map, the protocol started with 30 s of rest, followed by seven rotations of the wedge (angular width 48°), each 1 min long, and ended with another 30 s of rest. The eccentricity mapping was identical, except that the main stimulus did not rotate but expanded from the center outward once per cycle.

Acquisition

A high resolution 3-D segmented echo planar imaging sequence (voxel size: 1.25 mm isotropic; acquisition time: 2.88 s; 35 slices, echo time 25 ms) was used for all experiments. A posterior field-of-view was scanned, positioned such that the primary visual areas up to the anterior parietal regions were included (Fig. 1A). Therefore, the volume included the visual cortex and aIPS for all subjects. The volume was locally shimmed per subject to minimize geometric distortions in the functional volumes. All functional scans were acquired using a Philips 7T scanner (Philips Medical Systems) equipped with a 32-channel head coil. For the grasping/pointing task, 720 volumes were acquired and for the orientation task and each retinotopy task, 165 volumes. Whole brain T1-weighted anatomy scans were acquired on a Philips 3T Achieva scanner (Philips Medical Systems; voxel size 0.875 × 0.875 × 1 mm, 8 channel head coil). This anatomical scan was acquired at lower field due to intensity inhomogeneities at higher field, which interfered with the flat-mapping procedure. For registration purposes, a partial volume T1 was acquired for each subject at the 7T scanner (voxel size: 0.97 × 0.97 mm; slice thickness 1 mm, 44 slices), that encompassed the same volume as the functional images.

Analysis

GLM analysis was used to extract activation patterns from the grasping/pointing task, which were subjected to a surface-based searchlight MVPA analysis to find areas in the posterior brain that discriminate between grasping and pointing (preparation). Subsequently, a region-of-interest (ROI) analysis was performed to further examine the effects of action preparation on the early visual areas and aIPS, for orientation-selective patches. These orientation selective patches were obtained from the MVPA searchlight analysis of the orientation task. The ROIs were determined from the retinotopy and orientation task results.

Grasping/pointing task.

All acquired functional volumes were realigned using rigid body transformations. The partial T1 was coregistered with the functional scans using normalized mutual information. Subsequently, the whole brain T1-weighted anatomy obtained at the 3T scanner was coregistered to the partial T1 recorded at the 7T scanner. For MVPA analysis, multiple instances of an activation pattern were obtained from separate GLM analyses for all conditions of interest, i.e., the instruction, go and no-go conditions. The onset of the “instruction” regressor was at t = 0 (trial start), where a cue instructed the subject to perform an action to a left or right target. The go regressors were aligned to the first beep of the go cue, 3.5–4.5 s later. For no-go trials, the regressor was also aligned to the first beep, as this is where the action preparation starts, which we aim to capture. High-pass filter cutoff was 128 s. Each of these GLM analyses (e.g., the go-condition) contained a regressor for each of the conditions of no-interest (e.g., the instruction and no-go events) and multiple regressors for the condition of interest (e.g., the go grasping and pointing events). One regressor was included in the GLM for every four repetitions of one condition (e.g., four grasping-go trials) to reduce noise by averaging. These repetitions per regressor were spread out over the different blocks, to avoid biasing the pattern toward a certain block. Thus, one regressor would, for instance, consist of trial 1, 6, 11, and 16. With 160 repetitions, this resulted in 10 β images per action (grasping/pointing) for every condition (go/no-go). These 20 β images were used in the MVPA analysis to evaluate discrimination of grasping from pointing. Movement parameters and two white matter regressors (one for each hemisphere) were also added as nuisance regressors to the GLM to eliminate possible movement noise.
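The grouping of repetitions into regressors described above can be sketched as follows (an illustrative reconstruction, not the authors' code; the function name and zero-based indexing are our own):

```python
# Sketch (not the authors' code): assigning repetitions of one
# condition to regressors so that each regressor's four trials are
# spread across blocks rather than adjacent, as described above.
def group_trials(n_trials, trials_per_regressor):
    n_regressors = n_trials // trials_per_regressor
    # regressor r gets trials r, r + n_regressors, r + 2*n_regressors, ...
    return [list(range(r, n_trials, n_regressors))
            for r in range(n_regressors)]

groups = group_trials(20, 4)  # e.g., 20 repetitions, 4 per regressor
print(groups[0])  # [0, 5, 10, 15], i.e., trials 1, 6, 11, 16 in 1-indexed terms
```

Averaging four spread-out trials per regressor trades temporal resolution for a less noisy activation pattern per β image.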

A surface based searchlight (Chen et al., 2011) MVPA was used. That is, a circular patch from which voxel values were sampled moved over the gray matter sheet, and these voxels were fed to the MVPA classifier for the various conditions (instruction, go and no-go). To accomplish this, two tissue segmentations of the whole brain anatomical 3T scan were made. First, the anatomical T1 weighted 3T scan was segmented using CARET software (Van Essen et al., 2001). This created a gray matter mask. Second, the same T1 scan was also segmented using unified segmentation (Ashburner and Friston, 2005) in SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/) to obtain a probabilistic gray matter map for each hemisphere. The CARET based segmentation was used as a mask for the probabilistic gray matter map, to restrict surfaces to a single hemisphere, remove the cerebellum, and remove any overlap between gyri. This combined probabilistic gray matter map was subsequently used to create a surface using SPM8 routines, expressed as a triangulation of the gray matter surface consisting of nodes (3D coordinates) and edges (connections, forming triangles).

For every node in the gray matter surface, a circular surface patch (6 mm radius) was calculated. The patches, defined as sets of connected nodes, were interpolated to sets of voxels using a nearest neighbor algorithm. The surface-based searchlight was iteratively moved over the entire gray matter surface coinciding with the imaged volume of the fMRI acquisition. The MVPA analysis was performed separately for every condition (instruction/go/no-go). The data instances (the grasping and pointing activation patterns described above) were divided into a “training set” (18 patterns) and a “test set” (2 patterns). A linear support vector machine (LIBSVM implementation with a constant penalty parameter C = 1; Chang and Lin, 2011) was trained on the 18 training patterns (9 grasping, 9 pointing) and tested on the test set. Twenty-fold cross-validation (Kohavi, 1995) was performed to assess classifier performance. The resulting accuracy was attributed to the center voxel of each patch. This procedure was repeated for every voxel in the scanned volume to obtain a classification accuracy map for all conditions (instruction/go/no-go) and all subjects.
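The per-patch classification step can be sketched as follows. This is a simplified stand-in on synthetic data, using scikit-learn's SVC (which wraps LIBSVM); the voxel count, variable names, injected effect size, and the stratified leave-two-out folding are our assumptions, while the 18-train/2-test splits and C = 1 follow the text.

```python
# Sketch (not the authors' code): linear-SVM decoding of grasping vs
# pointing beta patterns within a single searchlight patch.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_voxels = 50                        # voxels sampled by one 6 mm patch (assumed)
X = rng.normal(size=(20, n_voxels))  # 10 grasping + 10 pointing beta patterns
y = np.array([0] * 10 + [1] * 10)    # 0 = grasping, 1 = pointing
X[y == 1] += 0.8                     # inject a decodable condition difference

# Each fold leaves out one grasping and one pointing pattern,
# reproducing the 18-train / 2-test splits described in the text
accs = []
for train, test in StratifiedKFold(n_splits=10).split(X, y):
    clf = SVC(kernel="linear", C=1).fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))

# The mean accuracy is attributed to the patch's center voxel
patch_accuracy = float(np.mean(accs))
```

Repeating this for every surface node yields the accuracy map; here the injected difference makes the synthetic patch decodable well above chance.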

Orientation task.

The surface based searchlight procedure was also applied to the functional volumes obtained from the orientation task. Here, β images were obtained from a GLM analysis, where regressors for −45° and 45° presentation were used (aligned at the onset of presentation, duration 2 s). As in the grasping/pointing task, these were also estimated based on four repetitions, but as there were 48 repetitions in total, this resulted in a total of 12 β images (6 for −45°, 6 for 45°). Surface based searchlight classification was performed on the categories −45°/45°.

Retinotopy.

The eccentricity and polar angle functional data were processed in the same way. The volumes were realigned and coregistered with the full T1 anatomical scans (acquired at 3T), using the partial T1 from the 7T scanner as an intermediate. The data were smoothed with a 4 mm FWHM Gaussian kernel. In SPM, the data were fitted to a sine and a cosine regressor. For every pair of voxel β values (corresponding to the sine and cosine regressors), the arctangent was calculated to obtain an eccentricity/polar angle phase map (Engel et al., 1994; Sereno et al., 1995).
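The phase-map computation amounts to taking the four-quadrant arctangent of the sine and cosine betas per voxel. A minimal sketch, with hypothetical beta values:

```python
# Sketch (hypothetical beta values): converting the sine and cosine
# regressor betas of the travelling-wave stimulus into a per-voxel
# phase map, as in the arctangent step described above.
import numpy as np

beta_sin = np.array([1.0, 0.0, -1.0])  # hypothetical per-voxel sine betas
beta_cos = np.array([0.0, 1.0, 0.0])   # hypothetical per-voxel cosine betas

# The four-quadrant arctangent gives the response phase, which maps to
# the polar angle (or eccentricity) each voxel prefers
phase = np.arctan2(beta_sin, beta_cos)    # radians, in (-pi, pi]
amplitude = np.hypot(beta_sin, beta_cos)  # response strength at the stimulus frequency
```

Using arctan2 rather than a plain arctangent preserves the full 360° of phase, which is needed to distinguish, e.g., upper from lower visual field in the polar angle map.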

These polar angle and eccentricity maps were projected to the CARET generated surface representation and warped to a flat map. Using the polar angle phase reversals and direction of eccentricity, ROIs were manually created for early visual areas V1–V3. Furthermore, an ROI of the aIPS was created based on cortical landmarks (the confluence of the anterior end of the intraparietal sulcus and the postcentral gyrus; Culham et al., 2003; Cavina-Pratesi et al., 2007).

ROI analysis.

The ROIs described above were used to analyze the grasping/pointing task. For every ROI, only those voxels were selected that exhibited significantly above-chance classification performance in the orientation task (and can hence discriminate between ±45° orientations). To this end, the classification accuracy level representing “significantly above chance” was first established using permutation testing (Golland and Fischl, 2003; Bonferroni-corrected α threshold of 0.05). Second, a mask was created for each ROI containing only those voxels that classified orientation significantly above chance. Third, these masks were used to extract the surface-based searchlight accuracy values from the grasping/pointing task. That is, only accuracies from grasping/pointing-task searchlight center voxels that were significant in the orientation coding task were selected. These grasping versus pointing discrimination values were averaged to obtain the mean accuracy for each ROI, condition, and subject separately.
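The permutation test for the above-chance threshold can be sketched as follows. This is illustrative only: a nearest-centroid classifier stands in for the SVM, and the data shapes, fold structure, and permutation count are our assumptions.

```python
# Sketch (illustrative only): estimating the accuracy level that counts
# as "significantly above chance" by shuffling condition labels and
# rebuilding a null distribution of cross-validated accuracies.
import numpy as np

rng = np.random.default_rng(1)

def cv_accuracy(X, y):
    # leave-two-out cross-validation with a nearest-centroid classifier
    correct = 0
    for i in range(0, len(y), 2):
        test = [i, i + 1]
        train = [j for j in range(len(y)) if j not in test]
        m0 = X[[j for j in train if y[j] == 0]].mean(axis=0)
        m1 = X[[j for j in train if y[j] == 1]].mean(axis=0)
        for t in test:
            pred = int(np.linalg.norm(X[t] - m1) < np.linalg.norm(X[t] - m0))
            correct += pred == y[t]
    return correct / len(y)

X = rng.normal(size=(20, 50))  # pure-noise beta patterns
y = np.array([0, 1] * 10)

# Null distribution: accuracy after permuting the condition labels
null = [cv_accuracy(X, rng.permutation(y)) for _ in range(500)]

# Uncorrected 95th percentile; the paper additionally Bonferroni-corrects
# the alpha of 0.05 across the tested voxels
threshold = float(np.quantile(null, 0.95))
```

Any voxel whose observed accuracy exceeds the (corrected) threshold would then enter the ROI mask.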

Laterality analysis.

To check whether the effects of action preparation are lateralized, i.e., whether actions performed toward a target on the right show a different lateralization pattern than actions performed toward a target on the left, the ROI analysis was repeated for each action direction. The analysis was identical to the standard ROI analysis, except that it used only the half of the data corresponding to each action direction (left/right).

Results

Decoding of action intention

We obtained up to 80% classification scores for grasping versus pointing actions in the superior parietal areas, extending into the intraparietal sulcus and in the occipital lobe, along the calcarine sulcus, lateral occipital areas, and the parieto-occipital junction. Figure 2 provides an overview of the results for grasping and pointing (go/no-go). In general, high grasping versus pointing decoding performance was found throughout the parietal and occipital areas for both go and no-go conditions. These regions thus encode different information for grasping and pointing movements during action preparation, whether the action is executed or not. Figure 3 shows that timing in the go condition was very similar for grasping and pointing, and that subjects were well able to abort execution of the action when they perceived a no-go cue. The parietal areas that decode grasping versus pointing correspond well with established areas of the parietal reaching and grasping network (Culham and Valyear, 2006; Andersen and Cui, 2009). Importantly, high grasping versus pointing decoding performance was also found in early occipital areas, demonstrating that action preparation affects early visual processing, as hypothesized.

Figure 2.

Grasping/pointing Searchlight results. Group results for the surface based searchlight grasping versus pointing analysis on a normalized brain. A, Classification results for the cue go condition, where action was executed. B, Classification results for the cue no-go condition, where the action had to be withheld. C, Classification scores during the instruction phase. The individual searchlight results were normalized, averaged across subjects, and smoothed for display purposes. The left hemisphere is depicted on the left (lateral and medial view) and the right hemisphere on the right. The color map represents classification scores.

Figure 3.

Kinematics. Grand averages of the obtained hand movements during the grasping/pointing task, recorded using a dataglove during the experiment. The left graph shows finger bend fractions during grasping for the index finger and thumb, in both go (solid line) and no-go (dotted line) conditions. Time 0 represents the auditory go-cue. In the no-go condition, an auditory stop-cue followed shortly after the first. The right graph depicts the movements during the pointing movement for the index finger in the go (solid line) and no-go conditions (dotted line).

Decoding of grating orientation

The orientation decoding experiment was intended to isolate voxels that can discriminate rightward tilted from leftward tilted gratings. Here, classification performance was generally lower than for grasping versus pointing classification. Orientation decoding results from a single subject are shown in Figure 4. All subjects showed similar decoding performance, although the locations that yielded good classification performance varied considerably between subjects, yielding little overlap in the mean surface map. The significance threshold for orientation classification was 72.2% accuracy and varied only marginally between subjects (SD 0.4%). In addition to sites in early visual cortex, high orientation decoding performance was seen in superior parietal and intraparietal areas. Only voxels that could classify rightward from leftward grating orientation were used for the subsequent ROI analysis of data from the grasping/pointing run, for each subject separately, to ensure that we were looking at units that process orientation information.

Figure 4.

Orientation decoding results. Single-subject results of the surface based searchlight analysis of the orientation task in a medial view and flat map. A classifier was trained to discern gratings of 45° or −45° (leftward vs rightward tilt) for every surface based searchlight patch. The dotted red line indicates the edge of the scanned volume. Individual regions of interest of early visual areas V1–V3 are marked. The color map represents classification scores.

ROI analysis of V1–V3 and aIPS

An ROI analysis was performed to further examine the effects of action preparation on the early visual areas and aIPS, for orientation-selective patches. V1, V2, and V3 ROIs for each subject were created using conventional retinotopic mapping procedures (Engel et al., 1994; Sereno et al., 1995). Within these ROIs, searchlight classification accuracy values of grasping versus pointing were extracted for voxels that showed significantly above-chance orientation classification performance, and those values were averaged across voxels within the ROIs. Results averaged over subjects are shown in Figure 5. Classification performance for discriminating grasping from pointing actions is significantly above chance (50%) for all regions of interest [Bonferroni-corrected α of 0.0021, one-sample t tests (24), all p < 0.00071].
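The reported significance test can be sketched as follows. The per-subject accuracy values here are hypothetical; only the chance level (50%), the correction over 24 tests, and the resulting α of ∼0.0021 follow the text.

```python
# Sketch (hypothetical per-subject values): testing whether ROI-mean
# grasping vs pointing accuracy exceeds chance (50%), with the
# Bonferroni correction over 24 tests reported in the text.
import numpy as np
from scipy import stats

# hypothetical ROI-mean accuracies for six subjects
accs = np.array([0.68, 0.72, 0.65, 0.70, 0.66, 0.71])

t, p = stats.ttest_1samp(accs, popmean=0.5)

n_tests = 24                      # number of ROI/condition tests (assumed breakdown)
alpha_corrected = 0.05 / n_tests  # ~0.0021, as reported in the paper
significant = bool(p < alpha_corrected)
```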

Figure 5.

ROI analysis. Classification scores of the grasping/pointing task for every ROI, thresholded to include only surface patches that can significantly classify orientation. For both hemispheres, mean classification values are depicted for the go condition (where a grasping or pointing action was performed), no-go condition (where the action was withheld), and instruction condition (the cue instructed the action target). Error bars denote the SEM. All classification scores are significantly above chance level (50%) corrected for multiple comparisons (Bonferroni corrected α of 0.0021, one sample t tests, all p < 0.00071).

When an actual movement was executed (the go condition), classification scores are higher for aIPS than the early visual areas. The early visual areas V1–V3 show high classification accuracy bilaterally.

When the action was prepared but not executed (no-go condition), classification scores are generally lower than for actual actions (go condition), especially in the aIPS. Interestingly, when looking at grasping versus pointing classification performance during the instruction phase, high classification scores are also observed for V1, V2, and V3. Instruction events indicate the grasping/pointing target location, ∼3 s before the go or no-go cue. However, in aIPS the grasping versus pointing classification performance is lower at the time of instruction than during the go or no-go cue. Classification scores in the aIPS differ significantly between the go and instruction phases (one-tailed paired sample t tests; left hemisphere: p = 0.048, right hemisphere: p = 0.043), whereas there is no significant difference between these phases in V1 (left hemisphere: p = 0.153, right hemisphere: p = 0.182). This suggests that aIPS activity reflects information more closely related to the actual motor control signals, whereas the early visual areas' activity reflects visual information processing required for the upcoming act. This is in agreement with the general notion that aIPS is at the interface of visual perception and action (Culham et al., 2003; Cavina-Pratesi et al., 2007).

Laterality analysis

We repeated the ROI analysis for each action direction, i.e., for targets presented on the left and right, see Figure 6. Note that all actions are still performed with the right hand. Although it would be expected that the effect of action preparation would be strongest in the contralateral visual area, suggesting a retinotopic organization, no such consistent pattern was found. Classification scores in the no-go condition are notably lower for targets on the left, but the opposite pattern is not present for targets on the right. As would be expected, high decoding performance is found in the contralateral aIPS, for targets on the right.

Figure 6.

Laterality analysis. Classification scores of the grasping/pointing task for every ROI, as in Figure 5, but split between actions performed to targets to the left (top) and right (bottom). The ROIs are denoted by their hemisphere (e.g., “Left aIPS” refers to the aIPS in the left hemisphere). Error bars denote the SEM.

Discussion

We were able to predict whether a subject was going to grasp or point to a bar object from fMRI signal modulations in early visual cortex, including V1, using MVPA. These results support the hypothesis that action preparation modulates visual processing of action-relevant features as early as V1.
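The decoding approach can be illustrated in miniature. The study used a linear support vector machine (LIBSVM); the sketch below substitutes a much simpler nearest-centroid rule and fabricated two-voxel patterns, but keeps the essential leave-one-run-out cross-validation structure, in which each scanning run serves once as the held-out test set.

```python
import math

def nearest_centroid_predict(train_X, train_y, x):
    """Predict by smallest Euclidean distance to the per-class mean pattern.

    A simple stand-in for the linear SVM used in the study.
    """
    best, best_d = None, math.inf
    for c in sorted(set(train_y)):
        pts = [p for p, lab in zip(train_X, train_y) if lab == c]
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        d = math.dist(x, centroid)
        if d < best_d:
            best, best_d = c, d
    return best

def leave_one_run_out_accuracy(X, y, runs):
    """Cross-validated accuracy: each run is held out once as the test set."""
    correct = total = 0
    for held_out in sorted(set(runs)):
        train_X = [x for x, r in zip(X, runs) if r != held_out]
        train_y = [lab for lab, r in zip(y, runs) if r != held_out]
        for x, lab, r in zip(X, y, runs):
            if r == held_out:
                correct += nearest_centroid_predict(train_X, train_y, x) == lab
                total += 1
    return correct / total

# Fabricated "voxel" patterns: two conditions (grasp/point) over two runs.
X = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9],
     [1.1, 0.0], [0.8, 0.1], [0.0, 1.1], [0.1, 0.8]]
y = ["grasp", "grasp", "point", "point"] * 2
runs = [1, 1, 1, 1, 2, 2, 2, 2]
acc = leave_one_run_out_accuracy(X, y, runs)
```

Training and test data never share a run, which prevents the temporal autocorrelation within a run from inflating the accuracy estimate.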

As an additional constraint, only voxels that responded significantly to the orientations of grasping targets (tested using Gabor patches of 45° and −45°) were included in the analysis. As orientation perception improves when preparing a grasping action (Craighero et al., 1999; Gutteling et al., 2011), we expected that groups of voxels that are highly sensitive to the orientations used in this study would show the hypothesized action preparation effects. Our results show that all regions included in the ROI analysis yielded above chance classification scores, during the preparation of an action and even during target instruction several seconds before that. This supports the notion that changes in activity in these early visual areas underlie action induced perceptual modulations. Unfortunately, our design precluded testing whether the bar orientations could be classified more accurately when grasping than when pointing (i.e., classify −45° vs 45° bars), as would be expected. The orientation of the bars were confounded with the direction of the action (left/right), hence further studies are needed to test this hypothesis.

Classification scores are highest in the go condition, where classification is performed on data acquired during the preparation phase of actual movements. This is the most engaging condition, requiring maximal visuomotor processing. Here, the aIPS shows the highest classification scores. Furthermore, aIPS shows a specific pattern of classification scores across conditions (go/no-go/instruction) that differs from the pattern shown by the early visual area V1. This indicates that the grasping/pointing signals classified in the aIPS are directly related to action execution. Note, however, that the hand was visible during action execution. Although the difference in visual input was minimal, this may also be a source of information used by the classifier.

In the no-go condition the action was prepared but not executed. This enables analysis of neuronal processes related to action preparation while excluding interpretations in which activity differences result from action performance or vision. Note that here we refer to "action preparation" as the period just after a go cue, when the action is prepared but not yet executed, as we found this to be a critical time period for the proposed feedback mechanism (Gutteling et al., 2011, 2013). Even though the action is cancelled by the second (no-go) cue, action preparation is present in this condition. For the no-go condition, classification scores were also clearly above chance level. Although the classification scores are generally lower for no-go trials, in occipital ROIs they are very similar to the go condition, demonstrating that actual action performance has little effect on early visual cortex. However, the classification score in the aIPS was significantly lower when no action was actually executed. This is consistent with the hypothesis that aIPS activity depends more on motor execution processes, similar to areas in the dorsal processing stream, which are known to carry both visual and direct action control signals in monkeys (Murata et al., 2000). The grasping versus pointing classification performance observed in aIPS could therefore be related to the hypothesized feedback signal to the visual areas or to direct action control. The observation that the grasping and pointing actions can still be robustly classified in the absence of the actual action indicates that action preparation itself is sufficient to enhance processing of action-relevant information in occipital visual areas. Previous findings have shown connections from frontoparietal areas providing feedback to early visual areas (Moore and Fallah, 2004; Ruff et al., 2006, 2008), and have revealed a topographic organization of this feedback.
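"Above chance" in this context is commonly established with a label-permutation test (Golland and Fischl, 2003): condition labels are shuffled many times, the accuracy is recomputed each time, and the p-value is the fraction of shuffles that match or exceed the observed score. Below is a minimal sketch with a toy decision rule and fabricated patterns, not the study's classifier or data.

```python
import random

def accuracy(patterns, labels):
    """Toy decision rule: predict class 1 when the first feature is positive."""
    preds = [1 if p[0] > 0 else 0 for p in patterns]
    return sum(pr == la for pr, la in zip(preds, labels)) / len(labels)

def permutation_test(patterns, labels, n_perm=2000, seed=42):
    """Shuffle labels, recompute accuracy, and count how often chance
    matches or beats the observed score (Golland and Fischl, 2003)."""
    observed = accuracy(patterns, labels)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if accuracy(patterns, shuffled) >= observed:
            hits += 1
    # Add-one correction keeps the estimated p-value strictly above zero.
    return observed, (hits + 1) / (n_perm + 1)

# Fabricated, perfectly separable data: the first feature carries the label.
patterns = [[+1.0], [+0.8], [+1.2], [+0.9], [-1.0], [-0.7], [-1.1], [-0.9]]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
observed, p = permutation_test(patterns, labels)
```

In practice the accuracy function would be the full cross-validated classifier, so the permutation distribution reflects the same analysis pipeline as the observed score.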
In the current study, we did not observe a consistent lateralization pattern when separating targets on the left and right. This may be due to the increased noise when using half of the dataset. Alternatively, the retinotopic organization of feedback for eye movements (e.g., originating from the frontal eye fields) may be stronger than for hand actions (e.g., originating from aIPS), as shown by Ruff et al. (2008).

The classification scores in aIPS are notably lower during the instruction phase than immediately following the go or no-go cues occurring later. Although in this case there may not be a direct driving input modulating the activity in V1, there may be a baseline shift in visual processing favoring processing of grasping-relevant orientations at the expense of other information. Grasping and pointing were performed in separate blocks of trials lasting 4 min in total, instead of an intermixed design. Without the need to switch, subjects may have been in a "grasping mode," i.e., a state of awareness related to having to grasp an object. This would entail a steady state of improved perceptual sensitivity for grasping-relevant features, allowing for classification of action intention several seconds before the go (or no-go) cue. Such "action modes" for our perceptual system might be a neuronal implementation of the persistent biases in perception that seem to exist for persons skilled in a certain action (Witt, 2011). Alternatively, subjects may have engaged in motor imagery during the instruction phase. Although the early visual areas studied here are not typically associated with motor imagery activity (Mizuguchi et al., 2013, 2014), the (preparatory) activation of more motor-related areas may have provided early feedback to the visual areas regarding the action-relevant features, even before the go signal.

The surface-based searchlight analysis of the entire scanned volume revealed that large parts of the superior parietal and intraparietal areas yielded grasping versus pointing classification scores equal to or higher than those obtained for the visual areas. These areas are involved in the dorsal stream of the frontoparietal reaching and grasping network (Culham et al., 2003; Fogassi and Luppino, 2005; Tunik et al., 2008; Cohen et al., 2009). As aIPS is our main parietal area of interest because of its involvement in grasping preparation (Culham et al., 2003; Cavina-Pratesi et al., 2007; Gutteling et al., 2013), we included it in the ROI analysis. However, any area whose activity pattern differs between grasping and pointing actions will likely show above-chance classification. Thus, areas (specifically) involved in pointing movements, such as the "parietal reach region" (Andersen and Buneo, 2002; Calton et al., 2002), V6A (Galletti et al., 2003), and the superior parieto-occipital complex (Gallivan et al., 2011), will also show high classification accuracy. aIPS showed high classification scores, supporting a possible role in generating modulatory feedback signals to early visual cortex. Recent studies have shown projections from parietal areas, including IPS, to early visual areas (Borra and Rockland, 2011; Greenberg et al., 2012). This notion is further supported by our recent study in which aIPS was stimulated with TMS (Gutteling et al., 2013), effectively abolishing the enhanced orientation perception during grasping preparation that we observed in earlier work (Gutteling et al., 2011).

Importantly, pattern analysis techniques are very sensitive to any kind of consistent difference in the fMRI signals between movement conditions, whether of neuronal or non-neuronal origin. First, the direct effect of movement or body position in the magnetic field of the scanner, causing inhomogeneities of the B0 field, can affect the signal in the acquired volume. In the go condition, the actual movement following shortly after the go cue was unlikely to co-occur with the BOLD response of interest, which is delayed by several seconds (Logothetis et al., 2001), and subjects returned to the start position immediately after action performance. Furthermore, in the no-go condition the movement was almost fully suppressed, as evidenced by the dataglove recordings, yet clear grasping versus pointing decoding performance was still observed in all ROIs. Second, care was taken during acquisition and analysis to eliminate head movements that often co-occur with arm movements. The upper arm was strapped to the torso to avoid movement being carried over to the head. In the analysis, both the realignment parameters and two white matter regressors (one for each hemisphere) were added as nuisance regressors, effectively removing head-movement biases. We also ensured that there was no visual difference between the stimuli used in grasping and pointing trials, which might otherwise have explained the results. Unfortunately, eye tracking could not be performed during the experiment, as the prism glasses precluded this. Although subjects were instructed to maintain fixation at all times, eye movements may have been present during action execution. However, it is unlikely that these are a significant source of classification information in the no-go and instruction conditions, where only fixation was required.
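Adding nuisance regressors, as described above, amounts to removing the best least-squares fit of each regressor from every voxel time course before classification. The single-regressor sketch below uses fabricated signals (the actual analysis included the realignment parameters plus two white-matter regressors).

```python
def regress_out(signal, nuisance):
    """Remove the best-fitting scaled copy of one nuisance regressor
    (and the signal mean) from a voxel time course via least squares."""
    n = len(signal)
    mx = sum(nuisance) / n
    my = sum(signal) / n
    x = [v - mx for v in nuisance]  # mean-centered regressor
    y = [v - my for v in signal]    # mean-centered time course
    beta = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return [yi - beta * xi for yi, xi in zip(y, x)]  # residual time course

# Fabricated example: a "neural" signal contaminated by a head-motion trace.
motion = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
neural = [1.0, 1.0, -1.0, -1.0, 1.0, 1.0, -1.0, -1.0]
signal = [ne + 2.0 * mo for ne, mo in zip(neural, motion)]
clean = regress_out(signal, motion)
```

With several regressors, the same operation is performed jointly by projecting the time course onto the orthogonal complement of the full nuisance design matrix, which is what standard GLM software does internally.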

The phenomenon of action-preparation induced visual processing enhancements should not be considered in isolation. In general, the extensive body of literature on selective visual attention (Posner, 1980) indicates that our visual field is not processed equally for all locations and features. Rather, by means of visual saliency (bottom-up attention) or task requirements (top-down attention), different parts of the visual information will be selectively processed (Treisman and Gelade, 1980). Also, a link between attention and action preparation has been suggested in influential theories, such as the premotor theory of attention (Rizzolatti et al., 1987) and selection-for-action (Allport, 1987). Furthermore, neuronal signals in early visual areas can be modulated by top-down task demands originating elsewhere in the brain (McAdams and Maunsell, 2000). Interestingly, recent studies that investigated action planning using multivariate techniques have uncovered extensive action planning activity, not only in dorsal stream areas (Gallivan et al., 2013b) but also in ventral stream areas (Gallivan et al., 2013a) not previously found using univariate techniques. This finding fits well with the idea of a neuronal mechanism proposed here, which reflects a novel and specialized form of top-down signal related to the preparation of an imminent grasping action, specifically and functionally aimed at perceiving object orientation optimally to facilitate the upcoming grasp. Whether feedback reaches the ventral stream through direct connections or through the early visual areas remains unclear.

In conjunction with other recent reports (van Elk et al., 2010; Gutteling et al., 2013), a coherent picture emerges in which several specific neuronal top-down signals related to different forms of motor control exist. Such action preparation signals modulate visual processing and perception according to that action's needs. These signals most likely originate from activity in higher-order visuomotor control brain areas downstream from early visual cortex in the dorsal processing stream.

In summary, we conclude that the high grasping versus pointing classification scores in V1 and the other early visual areas V2 and V3 are due to modulatory influences related to action preparation. This modulation may be instantiated by a shift in orientation tuning of neurons in the orientation columns (Kamitani and Tong, 2005; Norman et al., 2006; Jehee et al., 2011), or, on a larger cortical scale, by changes in the radial bias of the orientation representation (Sasaki et al., 2006; Freeman et al., 2011). Both explanations, however, imply neural changes in the orientation representation in early visual cortex due to action preparation. The current results provide strong evidence that grasping preparation modulates orientation selectivity in the early visual areas, enabling action-modulated perception effects that improve processing of action-relevant visual features.

Footnotes

  • This work was supported by a Netherlands Foundation for Scientific Research (NWO; http://www.nwo.nl/) Open Competition Grant NWO 400-05-134.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Dr Tjerk Gutteling, Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, the Netherlands. t.gutteling{at}donders.ru.nl

References

  1. ↵
    1. Allport A
    (1987) in Perspectives on perception and action, Selection for action: some behavioral and neurophysiological considerations of attention and action, eds Heuer H, Sanders AF (Erlbaum, Hillsdale, NJ), pp 395–419.
  2. ↵
    1. Andersen RA,
    2. Buneo CA
    (2002) Intentional maps in posterior parietal cortex. Annu Rev Neurosci 25:189–220, doi:10.1146/annurev.neuro.25.112701.142922, pmid:12052908.
    OpenUrlCrossRefPubMed
  3. ↵
    1. Andersen RA,
    2. Cui H
    (2009) Intention, action planning, and decision making in parietal-frontal circuits. Neuron 63:568–583, doi:10.1016/j.neuron.2009.08.028, pmid:19755101.
    OpenUrlCrossRefPubMed
  4. ↵
    1. Ashburner J,
    2. Friston KJ
    (2005) Unified segmentation. Neuroimage 26:839–851, doi:10.1016/j.neuroimage.2005.02.018, pmid:15955494.
    OpenUrlCrossRefPubMed
  5. ↵
    1. Baumann MA,
    2. Fluet MC,
    3. Scherberger H
    (2009) Context-specific grasp movement representation in the macaque anterior intraparietal area. J Neurosci 29:6436–6448, doi:10.1523/JNEUROSCI.5479-08.2009, pmid:19458215.
    OpenUrlAbstract/FREE Full Text
  6. ↵
    1. Bekkering H,
    2. Neggers SF
    (2002) Visual search is modulated by action intentions. Psychol Sci 13:370–374, doi:10.1111/j.0956-7976.2002.00466.x, pmid:12137141.
    OpenUrlAbstract/FREE Full Text
  7. ↵
    1. Binkofski F,
    2. Buccino G,
    3. Stephan KM,
    4. Rizzolatti G,
    5. Seitz RJ,
    6. Freund HJ
    (1999) A parieto-premotor network for object manipulation: evidence from neuroimaging. Exp Brain Res 128:210–213, doi:10.1007/s002210050838, pmid:10473761.
    OpenUrlCrossRefPubMed
  8. ↵
    1. Borra E,
    2. Rockland KS
    (2011) Projections to early visual areas v1 and v2 in the calcarine fissure from parietal association areas in the macaque. Front Neuroanat 5:35, doi:10.3389/fnana.2011.00035, pmid:21734867.
    OpenUrlCrossRefPubMed
  9. ↵
    1. Calton JL,
    2. Dickinson AR,
    3. Snyder LH
    (2002) Non-spatial, motor-specific activation in posterior parietal cortex. Nat Neurosci 5:580–588, doi:10.1038/nn0602-862, pmid:12021766.
    OpenUrlCrossRefPubMed
  10. ↵
    1. Cavina-Pratesi C,
    2. Goodale MA,
    3. Culham JC
    (2007) FMRI reveals a dissociation between grasping and perceiving the size of real 3D objects. PLoS One 2:e424, doi:10.1371/journal.pone.0000424, pmid:17487272.
    OpenUrlCrossRefPubMed
  11. ↵
    1. Chang CC,
    2. Lin CJ
    (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:1–27. Software available at http://www.csie.ntu.edu.tw/∼cjlin/libsvm.
  12. ↵
    1. Chen Y,
    2. Namburi P,
    3. Elliott LT,
    4. Heinzle J,
    5. Soon CS,
    6. Chee MW,
    7. Haynes JD
    (2011) Cortical surface-based searchlight decoding. Neuroimage 56:582–592, doi:10.1016/j.neuroimage.2010.07.035, pmid:20656043.
    OpenUrlCrossRefPubMed
  13. ↵
    1. Cohen NR,
    2. Cross ES,
    3. Tunik E,
    4. Grafton ST,
    5. Culham JC
    (2009) Ventral and dorsal stream contributions to the online control of immediate and delayed grasping: a TMS approach. Neuropsychologia 47:1553–1562, doi:10.1016/j.neuropsychologia.2008.12.034, pmid:19168086.
    OpenUrlCrossRefPubMed
  14. ↵
    1. Craighero L,
    2. Fadiga L,
    3. Rizzolatti G,
    4. Umiltà C
    (1999) Action for perception: a motor-visual attentional effect. J Exp Psychol Hum Percept Perform 25:1673–1692, doi:10.1037/0096-1523.25.6.1673, pmid:10641315.
    OpenUrlCrossRefPubMed
  15. ↵
    1. Culham JC,
    2. Valyear KF
    (2006) Human parietal cortex in action. Curr Opin Neurobiol 16:205–212, doi:10.1016/j.conb.2006.03.005, pmid:16563735.
    OpenUrlCrossRefPubMed
  16. ↵
    1. Culham JC,
    2. Danckert SL,
    3. DeSouza JF,
    4. Gati JS,
    5. Menon RS,
    6. Goodale MA
    (2003) Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Exp Brain Res 153:180–189, doi:10.1007/s00221-003-1591-5, pmid:12961051.
    OpenUrlCrossRefPubMed
  17. ↵
    1. Engel SA,
    2. Rumelhart DE,
    3. Wandell BA,
    4. Lee AT,
    5. Glover GH,
    6. Chichilnisky EJ,
    7. Shadlen MN
    (1994) fMRI of human visual cortex. Nature 369:525, doi:10.1038/369525a0, pmid:8031403.
    OpenUrlCrossRefPubMed
  18. ↵
    1. Fagioli S,
    2. Hommel B,
    3. Schubotz RI
    (2007) Intentional control of attention: action planning primes action-related stimulus dimensions. Psychol Res 71:22–29, doi:10.1007/s00426-005-0033-3, pmid:16317565.
    OpenUrlCrossRefPubMed
  19. ↵
    1. Fogassi L,
    2. Luppino G
    (2005) Motor functions of the parietal lobe. Curr Opin Neurobiol 15:626–631, doi:10.1016/j.conb.2005.10.015, pmid:16271458.
    OpenUrlCrossRefPubMed
  20. ↵
    1. Freeman J,
    2. Brouwer GJ,
    3. Heeger DJ,
    4. Merriam EP
    (2011) Orientation decoding depends on maps, not columns. J Neurosci 31:4792–4804, doi:10.1523/JNEUROSCI.5160-10.2011, pmid:21451017.
    OpenUrlAbstract/FREE Full Text
  21. ↵
    1. Galletti C,
    2. Kutz DF,
    3. Gamberini M,
    4. Breveglieri R,
    5. Fattori P
    (2003) Role of the medial parieto-occipital cortex in the control of reaching and grasping movements. Exp Brain Res 153:158–170, doi:10.1007/s00221-003-1589-z, pmid:14517595.
    OpenUrlCrossRefPubMed
  22. ↵
    1. Gallivan JP,
    2. McLean A,
    3. Culham JC
    (2011) Neuroimaging reveals enhanced activation in a reach-selective brain area for objects located within participants' typical hand workspaces. Neuropsychologia 49:3710–3721, doi:10.1016/j.neuropsychologia.2011.09.027, pmid:21958649.
    OpenUrlCrossRefPubMed
  23. ↵
    1. Gallivan JP,
    2. Chapman CS,
    3. McLean DA,
    4. Flanagan JR,
    5. Culham JC
    (2013a) Activity patterns in the category-selective occipitotemporal cortex predict upcoming motor actions. Eur J Neurosci 38:2408–2424, doi:10.1111/ejn.12215, pmid:23581683.
    OpenUrlCrossRefPubMed
  24. ↵
    1. Gallivan JP,
    2. McLean DA,
    3. Valyear KF,
    4. Culham JC
    (2013b) Decoding the neural mechanisms of human tool use. Elife 2:e00425, doi:10.7554/eLife.00425, pmid:23741616.
    OpenUrlAbstract/FREE Full Text
  25. ↵
    1. Golland P,
    2. Fischl B
    (2003) Permutation tests for classification: towards statistical significance in image-based studies. Inf Process Med Imaging 18:330–341, pmid:15344469.
    OpenUrlCrossRefPubMed
  26. ↵
    1. Greenberg AS,
    2. Verstynen T,
    3. Chiu YC,
    4. Yantis S,
    5. Schneider W,
    6. Behrmann M
    (2012) Visuotopic cortical connectivity underlying attention revealed with white-matter tractography. J Neurosci 32:2773–2782, doi:10.1523/JNEUROSCI.5419-11.2012, pmid:22357860.
    OpenUrlAbstract/FREE Full Text
  27. ↵
    1. Gutteling TP,
    2. van Ettinger-Veenstra HM,
    3. Kenemans JL,
    4. Neggers SF
    (2010) Lateralized frontal eye field activity precedes occipital activity shortly before saccades: evidence for cortico-cortical feedback as a mechanism underlying covert attention shifts. J Cogn Neurosci 22:1931–1943, doi:10.1162/jocn.2009.21342, pmid:19702472.
    OpenUrlCrossRefPubMed
  28. ↵
    1. Gutteling TP,
    2. Kenemans JL,
    3. Neggers SF
    (2011) Grasping preparation enhances orientation change detection. PLoS One 6:e17675, doi:10.1371/journal.pone.0017675, pmid:21408131.
    OpenUrlCrossRefPubMed
  29. ↵
    1. Gutteling TP,
    2. Park SY,
    3. Kenemans JL,
    4. Neggers SF
    (2013) TMS of the anterior intraparietal area selectively modulates orientation change detection during action preparation. J Neurophysiol 110:33–41, doi:10.1152/jn.00622.2012, pmid:23596329.
    OpenUrlAbstract/FREE Full Text
  30. ↵
    1. Hannus A,
    2. Cornelissen FW,
    3. Lindemann O,
    4. Bekkering H
    (2005) Selection-for-action in visual search. Acta Psychol (Amst) 118:171–191, doi:10.1016/j.actpsy.2004.10.010, pmid:15627415.
    OpenUrlCrossRefPubMed
  31. ↵
    1. Hubel DH,
    2. Wiesel TN
    (1974) Sequence regularity and geometry of orientation columns in the monkey striate cortex. J Comp Neurol 158:267–293, doi:10.1002/cne.901580304, pmid:4436456.
    OpenUrlCrossRefPubMed
  32. ↵
    1. Jehee JF,
    2. Brady DK,
    3. Tong F
    (2011) Attention improves encoding of task-relevant features in the human visual cortex. J Neurosci 31:8210–8219, doi:10.1523/JNEUROSCI.6153-09.2011, pmid:21632942.
    OpenUrlAbstract/FREE Full Text
  33. ↵
    1. Kamitani Y,
    2. Tong F
    (2005) Decoding the visual and subjective contents of the human brain. Nat Neurosci 8:679–685, doi:10.1038/nn1444, pmid:15852014.
    OpenUrlCrossRefPubMed
  34. ↵
    1. Kohavi R
    (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pp 1137–1143.
  35. ↵
    1. Logothetis NK,
    2. Pauls J,
    3. Augath M,
    4. Trinath T,
    5. Oeltermann A
    (2001) Neurophysiological investigation of the basis of the fMRI signal. Nature 412:150–157, doi:10.1038/35084005, pmid:11449264.
    OpenUrlCrossRefPubMed
  36. ↵
    1. McAdams CJ,
    2. Maunsell JH
    (2000) Attention to both space and feature modulates neuronal responses in macaque area V4. J Neurophysiol 83:1751–1755, pmid:10712494.
    OpenUrlAbstract/FREE Full Text
  37. ↵
    1. Mizuguchi N,
    2. Nakata H,
    3. Hayashi T,
    4. Sakamoto M,
    5. Muraoka T,
    6. Uchida Y,
    7. Kanosue K
    (2013) Brain activity during motor imagery of an action with an object: a functional magnetic resonance imaging study. Neurosci Res 76:150–155, doi:10.1016/j.neures.2013.03.012, pmid:23562793.
    OpenUrlCrossRefPubMed
  38. ↵
    1. Mizuguchi N,
    2. Nakata H,
    3. Kanosue K
    (2014) Effector-independent brain activity during motor imagery of the upper and lower limbs: an fMRI study. Neurosci Lett 581:69–74, doi:10.1016/j.neulet.2014.08.025, pmid:25150928.
    OpenUrlCrossRefPubMed
  39. ↵
    1. Moore T,
    2. Fallah M
    (2004) Microstimulation of the frontal eye field and its effects on covert spatial attention. J Neurophysiol 91:152–162, doi:10.1152/jn.00741.2002, pmid:13679398.
    OpenUrlAbstract/FREE Full Text
  40. ↵
    1. Murata A,
    2. Gallese V,
    3. Luppino G,
    4. Kaseda M,
    5. Sakata H
    (2000) Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J Neurophysiol 83:2580–2601, pmid:10805659.
    OpenUrlAbstract/FREE Full Text
  41. ↵
    1. Neggers SF,
    2. Huijbers W,
    3. Vrijlandt CM,
    4. Vlaskamp BN,
    5. Schutter DJ,
    6. Kenemans JL
    (2007) TMS pulses on the frontal eye fields break coupling between visuospatial attention and eye movements. J Neurophysiol 98:2765–2778, doi:10.1152/jn.00357.2007, pmid:17699696.
    OpenUrlAbstract/FREE Full Text
  42. ↵
    1. Norman KA,
    2. Polyn SM,
    3. Detre GJ,
    4. Haxby JV
    (2006) Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends Cogn Sci 10:424–430, doi:10.1016/j.tics.2006.07.005, pmid:16899397.
    OpenUrlCrossRefPubMed
  43. ↵
    1. Posner MI
    (1980) Orienting of attention. Q J Exp Psychol 32:3–25, doi:10.1080/00335558008248231, pmid:7367577.
    OpenUrlCrossRefPubMed
  44. ↵
    1. Rizzolatti G,
    2. Riggio L,
    3. Dascola I,
    4. Umiltá C
    (1987) Reorienting attention across the horizontal and vertical meridians: evidence in favor of a premotor theory of attention. Neuropsychologia 25:31–40, doi:10.1016/0028-3932(87)90041-8, pmid:3574648.
    OpenUrlCrossRefPubMed
  45. ↵
    1. Ruff CC,
    2. Blankenburg F,
    3. Bjoertomt O,
    4. Bestmann S,
    5. Freeman E,
    6. Haynes JD,
    7. Rees G,
    8. Josephs O,
    9. Deichmann R,
    10. Driver J
    (2006) Concurrent TMS-fMRI and psychophysics reveal frontal influences on human retinotopic visual cortex. Curr Biol 16:1479–1488, doi:10.1016/j.cub.2006.06.057, pmid:16890523.
    OpenUrlCrossRefPubMed
  46. ↵
    1. Ruff CC,
    2. Bestmann S,
    3. Blankenburg F,
    4. Bjoertomt O,
    5. Josephs O,
    6. Weiskopf N,
    7. Deichmann R,
    8. Driver J
    (2008) Distinct causal influences of parietal versus frontal areas on human visual cortex: evidence from concurrent TMS-fMRI. Cereb Cortex 18:817–827, doi:10.1093/cercor/bhm128, pmid:17652468.
    OpenUrlAbstract/FREE Full Text
  47. ↵
    1. Sasaki Y,
    2. Rajimehr R,
    3. Kim BW,
    4. Ekstrom LB,
    5. Vanduffel W,
    6. Tootell RB
    (2006) The radial bias: a different slant on visual orientation sensitivity in human and nonhuman primates. Neuron 51:661–670, doi:10.1016/j.neuron.2006.07.021, pmid:16950163.
    OpenUrlCrossRefPubMed
  48. ↵
    1. Sereno MI,
    2. Dale AM,
    3. Reppas JB,
    4. Kwong KK,
    5. Belliveau JW,
    6. Brady TJ,
    7. Rosen BR,
    8. Tootell RB
    (1995) Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268:889–893, doi:10.1126/science.7754376, pmid:7754376.
    OpenUrlAbstract/FREE Full Text
  49. ↵
    1. Treisman AM,
    2. Gelade G
    (1980) A feature-integration theory of attention. Cogn Psychol 12:97–136, doi:10.1016/0010-0285(80)90005-5, pmid:7351125.
    OpenUrlCrossRefPubMed
  50. ↵
    1. Tunik E,
    2. Frey SH,
    3. Grafton ST
    (2005) Virtual lesions of the anterior intraparietal area disrupt goal-dependent on-line adjustments of grasp. Nat Neurosci 8:505–511, doi:10.1038/nn1430, pmid:15778711.
    OpenUrlCrossRefPubMed
  51. ↵
    1. Tunik E,
    2. Ortigue S,
    3. Adamovich SV,
    4. Grafton ST
    (2008) Differential recruitment of anterior intraparietal sulcus and superior parietal lobule during visually guided grasping revealed by electrical neuroimaging. J Neurosci 28:13615–13620, doi:10.1523/JNEUROSCI.3303-08.2008, pmid:19074035.
    OpenUrlAbstract/FREE Full Text
  52. ↵
    1. van Elk M,
    2. van Schie HT,
    3. Neggers SF,
    4. Bekkering H
    (2010) Neural and temporal dynamics underlying visual selection for action. J Neurophysiol 104:972–983, doi:10.1152/jn.01079.2009, pmid:20538783.
    OpenUrlAbstract/FREE Full Text
  53. ↵
    1. Van Essen DC,
    2. Drury HA,
    3. Dickson J,
    4. Harwell J,
    5. Hanlon D,
    6. Anderson CH
    (2001) An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc 8:443–459, doi:10.1136/jamia.2001.0080443, pmid:11522765.
    OpenUrlAbstract/FREE Full Text
  54. ↵
    1. Van Ettinger-Veenstra HM,
    2. Huijbers W,
    3. Gutteling TP,
    4. Vink M,
    5. Kenemans JL,
    6. Neggers SF
    (2009) fMRI-guided TMS on cortical eye fields: the frontal but not intraparietal eye fields regulate the coupling between visuospatial attention and eye movements. J Neurophysiol 102:3469–3480, doi:10.1152/jn.00350.2009, pmid:19812293.
    OpenUrlAbstract/FREE Full Text
  55. ↵
    1. Witt JK
    (2011) Action's effect on perception. Curr Dir Psychol Sci 20:201–206, doi:10.1177/0963721411408770.
    OpenUrlAbstract/FREE Full Text
Action Preparation Shapes Processing in Early Visual Cortex
Tjerk P. Gutteling, Natalia Petridou, Serge O. Dumoulin, Ben M. Harvey, Erik J. Aarnoutse, J. Leon Kenemans, Sebastian F.W. Neggers
Journal of Neuroscience 22 April 2015, 35 (16) 6472-6480; DOI: 10.1523/JNEUROSCI.1358-14.2015


Keywords

  • action preparation
  • feature perception
  • grasping
  • high-field fMRI
  • MVPA
  • visual cortex

Copyright © 2023 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
