Research Articles, Behavioral/Cognitive

Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasp Programs: Evidence for Nonhierarchical Evolution of Grasp Control

Lin Lawrence Guo, Yazan Shamli Oghli, Adam Frost and Matthias Niemeier
Journal of Neuroscience 3 November 2021, 41 (44) 9210-9222; DOI: https://doi.org/10.1523/JNEUROSCI.0992-21.2021
1Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada (L.L.G., Y.S.O., A.F., M.N.)
2Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada (M.N.)
3Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada (M.N.)

Abstract

Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher to lower level representations and, relatedly, visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate and to lower levels, given that this knowledge importantly relies on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct participants about which effector to use for the grasp, the right or the left hand. We also asked participants to grasp with both hands because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors, to motor representations that distinguished between effectors. However, we found that intermediate-level representations, which only partially distinguished between effectors, arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.

SIGNIFICANCE STATEMENT A long-standing assumption about grasp computations is that grasp representations progress from higher to lower level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute a unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower level effector representations emerged before intermediate levels of grasp representations, thereby suggesting a partially noncanonical progression from higher to lower and then to intermediate-level grasp control.

  • EEG
  • grasp program
  • motor
  • MVPA
  • precision grasp
  • visual

Introduction

The human brain computes sensorimotor representations and processes in multiple stages of sensorimotor transformations (Flash and Hogan, 1985) to interact with the world. For example, to grasp an object, the brain extracts sensory information about the object and the intended effector to generate motor programs that then guide the movements of the grasping hand. To this end the brain relies on lateral and medial pathways within a parietofrontal network (Fattori et al., 2009, 2017; Gallivan and Culham, 2015; Janssen and Scherberger, 2015) that computes grasp-relevant visuomotor features (Tunik et al., 2005; Davare et al., 2006; Baumann et al., 2009; Cavina-Pratesi et al., 2010; Fabbri et al., 2016; Schaffelhofer and Scherberger, 2016).

Two (partially related) properties of the grasp network are suggestive of the multistage dynamics of the sensorimotor transformations that it performs: hierarchical organization and visual-to-motor gradients. The grasp network is hierarchically organized: at higher (or upstream) levels, areas represent grasp features regardless of effector; at lower levels, areas encode grasp features more specifically for the right or left hand (Davare et al., 2006; Gallivan et al., 2013; Turella et al., 2016, 2020; Michaels and Scherberger, 2018). Furthermore, some areas share neural resources at intermediate levels, downstream from effector-independent representations but upstream from the control of individual effectors. For example, right parietal areas, more specialized for grasping with the left than the dominant right hand, also control bimanual grasps (Le et al., 2014, 2017). Other examples of intermediate-level sensorimotor control can be found in Kadmon Harpaz et al. (2014) and Turella et al. (2020).

A concept related to hierarchical organization is the visual-to-motor gradient (Gallivan and Culham, 2015; Janssen and Scherberger, 2015): parietal regions are more associated with representations of visual object properties, upstream of motor control and effector representations, whereas premotor and motor regions are more associated with downstream motor representations for movement planning and execution (Fabbri et al., 2016; Schaffelhofer and Scherberger, 2016), operating in a relatively modular fashion (Michaels et al., 2020).

Because of the hierarchical structure of the frontoparietal grasp network and its sensory-to-motor organization, it appears obvious to assume that information flows from sensory to motor stages and from higher to lower levels of planning and executing grasp movements. Yet, empirical evidence remains limited. Functional magnetic resonance imaging (fMRI), the dominant method for studying human brain function, is handicapped by its coarse temporal resolution, although fMRI paradigms that slow down grasp processes offer some, albeit limited, insights into timing (Monaco et al., 2011; Gallivan et al., 2011, 2013; Ariani et al., 2018).

To unravel the time course of sensorimotor grasp processes in detail, two studies have begun to use magnetoencephalography (MEG) or electroencephalography (EEG) together with multivariate decoding techniques. Turella et al. (2016) identified action planning processes ∼750 ms after object presentation and upstream from effector-related processes. Guo et al. (2019) revealed the time course of object shape and grasp orientation representations during grasp planning and execution.

Here, we used multivariate EEG analysis to test whether the sensorimotor processes underlying grasping evolve in a strictly canonical fashion from higher to lower levels. To tease apart higher, intermediate, and lower levels of computations, we mapped neural representations that were effector independent, partially differentiated between effectors, or completely differentiated between effectors. To this end, we extracted representations of grasp orientation to map grasp programs before and after effector specification. Participants knew which grasp orientation to perform while viewing objects but learned only later whether to grasp the object with the left or right hand or whether to grasp bimanually. We included bimanual grasps to identify intermediate-level action planning because bimanual and left-hand grasping are computationally and physiologically similar but different from right-hand grasping (Le et al., 2014, 2017, 2019). We found that effector-independent visual representations emerged before motor representations. Surprisingly, however, representations partially differentiating between effectors emerged after representations that completely differentiated between effectors, thereby revealing a noncanonical progression from higher to lower and then to intermediate-level grasp control.

Materials and Methods

Participants

Fifteen participants (eight females; median age, 20 years; range, 18–35) from the University of Toronto community gave their written and informed consent to participate in the experiment in exchange for cash payments. Whereas we were (and are) not aware of a power analysis for the analysis that we intended to conduct here (i.e., representational dissimilarity analysis, see below), we determined our sample size a priori based on similar EEG or MEG studies using multivariate methods. First, in the main experiment of a study from our lab (Guo et al., 2019) we had found that using support vector machines to analyze 6 h of data for each of 15 participants yielded a significant grasping effect, including a significant effect for 15 of 15 individual participants for one of the independent variables. (Nemrodov et al., 2016, 2018, have similar sample sizes in two support vector machine studies on face perception.) In addition, a literature search for related articles that conducted representational dissimilarity analysis on MEG/EEG data produced an average sample size of 13.1 (standard deviation = 5.2, range = 5–20; Cichy et al., 2014; Cichy and Pantazis, 2017; Kaneshiro et al., 2015; Kassraian-Fard et al., 2016; Kietzmann et al., 2019; Mohsenzadeh et al., 2019; Wardle et al., 2016; Sburlea et al., 2021).

The participants included in the present study were all right-handed (Oldfield, 1971) and had normal or corrected-to-normal vision. All procedures were approved by the Human Participants Review Subcommittee of the University of Toronto and conformed to the ethical standards in the Declaration of Helsinki.

Procedures and apparatus

Participants were seated in a light-sealed room at a table, waiting to make visually guided grasp movements. Before each block of trials, an experimenter sitting beside them told them whether grasps should be clockwise (CW) or counterclockwise (CCW). Also, the experimenter practiced with them (two or three trials) as they waited for a high- or low-pitched tone as a signal for which effector to use. For left-handed and bimanual (LB) blocks, one pitch (high or low in different blocks) signaled grasps with the left hand using the index finger and the thumb, and the other pitch signaled grasps with both hands using the index fingers of both hands and the middle fingers for support; for right-handed and bimanual (RB) blocks, the pitch of the tone cued grasps with the right hand or with both hands; and for left-handed and right-handed (LR) blocks, it cued grasps with the left or right hand. We tested only two types of grasping in any block to avoid confusion; learning to associate three different pitches with three grasp types, as well as relearning the association after each block of trials, would have been too difficult for participants. The contingency between pitch and effector changed randomly from block to block to ensure that our analyses did not falsely include representations of auditory frequencies when decoding effector representations.

The participants then placed their left and right hand on a button box so that each index finger blocked a beam of infrared light. Earplugs and an opaque shutter glass screen (Smart Glass Technologies) ensured that the participants could not hear or see how the experimenter prepared each trial. To that end, the experimenter turned on a set of LEDs that illuminated a black-clad grasp space with a slanted platform and a square-shaped peg in the middle. On the peg the experimenter mounted objects always with the same position and orientation, 43 cm away from the participant with the surface of the object tilted toward the participant's line of sight.

All objects were made from 2 cm thick wooden blocks and were either shaped like a pillow with four concave edges or like a flower with four convex edges (Fig. 1A). All objects measured 6 cm across opposing edges, thus with identical grip sizes, and they were painted middle gray on the sides. The top surfaces were covered either with a grid or checkerboard texture. All combinations of textures and shapes were equally likely to occur. However, only object shapes were relevant for grasping. Texture was irrelevant for the task and merely helped create a greater variety of objects to better engage the attention of the participants.

Figure 1.

Methods. A, Objects used in the experiment. Note that the pillow and flower shapes provide equal grip sizes. B, Combination of grasp orientation and effector. C, Timeline of a trial. The auditory Go cue (high- or low-pitched tone) informed participants of the effector to use for the grasp.

Next, the experimenter pressed a key to start the trial (Fig. 1C). The LEDs switched off, and the shutter glass screen turned transparent. After 750–1250 ms, the LED lights turned on to illuminate the object for the participant to see for a Preview time of 500–1000 ms. Then the pitch of the auditory Go signal (loud enough to be heard through the earplugs) instructed participants which hand or hands to use to grasp the object.

Once participants moved their hand or hands, the infrared beams on the button box were unblocked, marking the time of movement onset (note that only at that time did the participant's hand or hands come into view through the shutter glass). As the participants reached to grasp the object, their fingers crossed a curtain of infrared beams created by two 15-cm-tall pillars located 40 cm apart from one another and directly in front of the object. The participant's hand or hands crossing the beam defined the end of the reach-to-grasp movement (i.e., immediately before the participant touched the object). The trial ended with the participant picking up the object and placing it on the table near the experimenter. Trials with incorrect grasps or dropped objects were marked as invalid before the start of the next trial. In total there were 40 trials in one block (2 shapes × 2 textures × 2 effectors × 5 repetitions in random order) and a total of 42 blocks (2 orientations × 3 effector pairings, i.e., LB, RB, or LR × 7 repetitions in random order) across two 3 h sessions conducted on different days; in addition, there was one practice block at the beginning of the experiment. Breaks were provided between blocks as requested by the participants.

Data acquisition and preprocessing

EEG data were recorded using a 64-electrode BioSemi ActiveTwo recording system, digitized at a rate of 512 Hz with 24-bit analog-to-digital conversion. The electrodes were arranged according to the International 10/20 System. The electrode offset was kept below 40 mV.

EEG preprocessing was performed offline in MATLAB using the EEGLAB Toolbox (Delorme and Makeig, 2004) and ERPLAB Toolbox (Lopez-Calderon and Luck, 2014). Signals from each block were bandpass filtered (noncausal Butterworth impulse response function, 12 dB/octave roll-off) with half-amplitude cutoffs at 0.1 and 40 Hz and downsampled to 256 Hz (to improve statistical power). Noisy electrodes (correlation with nearby electrodes < 0.6) were interpolated (on average 0.78 electrodes per subject), and all electrodes were rereferenced to the average of all electrodes. Next, independent component analysis (ICA) was performed, and ICLabel (Pion-Tonachini et al., 2019) was used to help identify and remove components associated with blinks, eye movements, muscle activity, and channel noise. The ICA-corrected data were segmented relative to the onset of Preview (−100 to 500 ms) and Go signal (−100 to 800 ms). In addition, invalid trials and epochs containing abnormal reaction times (<100 ms or >1000 ms) or incorrect grasps were removed. As a result, an average of 9.03% of trials from each subject was removed from further analyses.

As an additional preprocessing procedure, we conducted multivariate noise normalization on EEG patterns separately for Preview and Go. The procedure is recommended for multivariate pattern analysis of electrophysiological signals to avoid any individual electrode value overly influencing dissimilarity calculations (Guggenmos et al., 2018). To this end we computed covariance matrices based on electrode activation patterns of all trials for each condition and time point separately. The obtained covariance matrices were subsequently averaged across time points and then across conditions. The averaged covariance matrix was then inverted and multiplied with EEG patterns at each epoch and time point.
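To make the whitening step concrete, the following is a minimal NumPy sketch of multivariate noise normalization as described above. It is an illustrative reconstruction, not the authors' MATLAB pipeline; the array layout and the function name are assumptions.

```python
import numpy as np

def noise_normalize(epochs_by_cond):
    """Sketch of multivariate noise normalization (Guggenmos et al., 2018).

    epochs_by_cond: list with one array per condition, each shaped
                    (n_trials, n_electrodes, n_times); n_trials >= 2.
    Returns the same structure with whitened electrode patterns.
    """
    cond_covs = []
    for ep in epochs_by_cond:
        # Electrode-by-electrode covariance at each time point,
        # averaged across time points within the condition.
        covs_t = [np.cov(ep[:, :, t], rowvar=False) for t in range(ep.shape[2])]
        cond_covs.append(np.mean(covs_t, axis=0))
    # Average across conditions, then invert (pseudo-inverse for stability).
    sigma_inv = np.linalg.pinv(np.mean(cond_covs, axis=0))
    # Multiply every epoch and time point with the inverted matrix
    # (x[k, :, t] -> x[k, :, t] @ sigma_inv).
    return [np.einsum('ket,ef->kft', ep, sigma_inv) for ep in epochs_by_cond]
```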

Next, the preprocessed epochs were averaged into ERP traces to increase the signal-to-noise ratio of spatiotemporal patterns (Grootswagers et al., 2017). Specifically, up to five epochs within a given block that corresponded to the same condition (i.e., shape, texture, effector, and grasp orientation) were averaged together, resulting in 14 separate ERP traces for each of the 24 conditions (2 orientations × 3 effectors × 4 objects) for Preview and Go, respectively.
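The averaging into ERP traces can be sketched in the same vein; here the `block_ids` array marking the block of each same-condition epoch is a hypothetical input.

```python
import numpy as np

def block_erps(epochs, block_ids):
    """Average same-condition epochs within each block into one ERP trace.

    epochs:    (n_epochs, n_electrodes, n_times) epochs of one condition.
    block_ids: (n_epochs,) block label of each epoch.
    Returns one trace per block (up to 5 epochs each, giving the
    14 traces per condition described above).
    """
    blocks = np.unique(block_ids)
    return np.stack([epochs[block_ids == b].mean(axis=0) for b in blocks])
```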

Representational dissimilarity analysis

Multivariate pattern analysis of ERP traces was conducted using representational dissimilarity analysis (RDA; Kriegeskorte et al., 2008). RDA captures the representational structure among conditions based on dissimilarities in EEG patterns. The obtained structure can then be compared with the expected representational structure derived according to the shared characteristics (i.e., shape, texture, effector, orientation) among conditions to test for the presence of certain representations. A benefit of RDA over decoding approaches (Guo et al., 2019) is the ability to statistically remove influences of artifacts (e.g., eye movements) from the representational structure (see below, Main effect RDM models).

RDA was performed at each time point using spatial features from all electrodes to assess the time course of representations. We used a cross-validated Euclidean distance to index dissimilarity between each pair of conditions. Here, cross-validation was performed by averaging the estimated distance $\hat{D}_{i,j}$ between the spatial features $x$ (i.e., the 64 electrodes) of two conditions $(i, j)$ across all pairs $(m, n)$ of blocks $B$, resulting in 91 folds:

$$\hat{D}_{i,j} = \frac{\sum_{m=1}^{B-1} \sum_{n=m+1}^{B} \left(x_i^m - x_j^m\right)^{T} \left(x_i^n - x_j^n\right)}{B(B-1)/2} \tag{1}$$

Cross-validation ensured that the resulting Euclidean distance was unbiased; that is, the expected value of the distance is zero if two patterns are not statistically different from one another, and greater than zero otherwise (Nili et al., 2014; Walther et al., 2016; Guggenmos et al., 2018). Note that because of the nature of cross-validation the estimated Euclidean distance $\hat{D}_{i,j}$ can be negative.
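A direct NumPy transcription of Equation 1 may help; the input shapes are assumptions for the sketch.

```python
import numpy as np

def cv_euclidean(x_i, x_j):
    """Cross-validated (cross-block) Euclidean distance of Equation 1.

    x_i, x_j: (B, n_features) arrays holding one pattern per block
              (here B = 14 blocks, n_features = 64 electrodes) for
              conditions i and j at a single time point.
    Returns an unbiased estimate: ~0 when the conditions do not differ,
    > 0 when they do (and possibly negative by chance).
    """
    B = x_i.shape[0]
    diffs = x_i - x_j                      # per-block difference patterns
    total = 0.0
    for m in range(B - 1):
        for n in range(m + 1, B):
            total += diffs[m] @ diffs[n]   # cross-block inner product
    return total / (B * (B - 1) / 2)       # average over the 91 folds
```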

RDA produced a 24 × 24 representational dissimilarity matrix (RDM) at each time point during Preview and Go (Fig. 2A, RDMs obtained at selected time points). Aside from RDMs obtained from ERPs, we also obtained a set of eye movement RDMs that reflect the degree to which conditions differed from one another based on eye movement artifacts. That is, based on the eye movement artifacts as identified by the independent components analysis during preprocessing, we calculated RDMs applying the same procedure.

Figure 2.

RDM models. A, Main effect models. B, Similar representation models of effectors (e.g., the left = bimanual model or, more precisely, the left = bimanual ≠ right model). Note that a complete similarity model where left = bimanual = right would mean that all cells are similar and so would not be suitable for RDA; instead, to identify effector-independent representations, we used the grasp orientation model as a proxy. The diagonal elements in the model RDMs are excluded because the corresponding elements in the RDMs derived from ERPs are always zero; excluding them avoids inflating the β weights from multiple regression.

Main effect RDM models

To identify the dissimilarity information contained in the RDMs of the ERPs, they were compared with model matrices (Fig. 2) designed to test the presence of specific representations. To test for visual representations, the shape model took into account that flower and pillow shapes are represented differently regardless of texture and visuomotor properties, and the texture model assumed that grid and checkerboard textures are represented differently regardless of shape and visuomotor properties. To test for visuomotor and motor representations, the grip orientation model assumed that CW and CCW grasps are represented differently regardless of effector and visual properties, and the hand model assumed that left, right, and bimanual grasps are represented differently regardless of orientation and visual properties. The four models, together with the eye movement RDMs, then simultaneously entered a multiple regression to serve as predictors of the RDMs of the ERPs at each time point. This way, we obtained β weights that reflected the unique contribution of each model while partialing out other models as well as the artifactual contribution of eye movements. The resulting β weights were compared against zero using one-sample t tests.
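To make the model-based regression concrete, the sketch below constructs binary main effect model RDMs for the 24 conditions and fits them jointly to an observed RDM by ordinary least squares, excluding the diagonal as in Figure 2. It is a schematic reconstruction; in the actual analysis the eye movement RDM enters as one more predictor, and the condition labels and helper names are ours.

```python
import itertools
import numpy as np

# 24 conditions: 2 orientations x 3 effectors x 2 shapes x 2 textures
conds = list(itertools.product(['CW', 'CCW'], ['L', 'R', 'B'],
                               ['flower', 'pillow'], ['grid', 'checker']))

def model_rdm(feature_idx):
    """1 where two conditions differ on a feature, 0 where they share it."""
    n = len(conds)
    return np.array([[float(conds[a][feature_idx] != conds[b][feature_idx])
                      for b in range(n)] for a in range(n)])

def vectorize(rdm):
    """Off-diagonal cells only; the diagonal (always zero) is excluded."""
    return rdm[~np.eye(rdm.shape[0], dtype=bool)]

def fit_betas(observed_rdm, model_rdms):
    """Regress one observed RDM on all model RDMs simultaneously."""
    cols = [vectorize(m) for m in model_rdms]
    X = np.column_stack([np.ones(cols[0].size)] + cols)   # intercept first
    betas, *_ = np.linalg.lstsq(X, vectorize(observed_rdm), rcond=None)
    return betas[1:]                       # one beta per model, no intercept

# Per participant and time point: orientation, effector, shape, texture
models = [model_rdm(0), model_rdm(1), model_rdm(2), model_rdm(3)]
```

The resulting β weights, one per model, time point, and participant, are then tested against zero across participants.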

Effector-dependent RDM models

The models above tested for the presence of single representations independent of other features, analogous to testing for main effects in ANOVAs. Additionally (and similar to linear contrasts of ANOVAs), we tested for effector-dependent processes using models constructed based on a priori hypotheses. Specifically, we hypothesized that left-handed and bimanual grasps might be similarly represented given their similarities in neural (Le et al., 2014, 2017) and computational processes (Le and Niemeier, 2013a, b; Le et al., 2019). This hypothesis resulted in a model that assumes that grasp representations for left-handed and bimanual grasps are the same, whereas grasp representations for right-hand grasping are different (Fig. 2B). For completeness, we also tested the possibility that other effector pairs might share similar grasp representations, and so a second model took into account more similar representations between right-handed and bimanual grasps, and finally a third model took into account more similar representations between left- and right-handed grasps. Significant time courses of these special models would indicate that some aspects of grasp representations depend on shared neural processes between effectors. We tested each special model in a separate multiple regression using the special model together with the effector model as well as eye movement RDMs as regressors (shape and texture models were statistically entirely orthogonal and therefore not included).
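Under the same assumptions as the previous sketch (reusing its `conds` list), the L = B ≠ R contrast can be built directly from the effector labels; in the analysis it would be entered into a multiple regression together with the effector model and the eye movement RDMs.

```python
import numpy as np

def lb_model():
    """L = B != R model: a cell is dissimilar (1) exactly when one of the
    two conditions is a right-hand grasp and the other is not."""
    n = len(conds)
    return np.array([[float((conds[a][1] == 'R') != (conds[b][1] == 'R'))
                      for b in range(n)] for a in range(n)])
```

The RB and LR models follow by swapping which effector is singled out.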

Electrode informativeness

To assess the informativeness of electrodes for the RDA of ERP patterns, we performed a searchlight analysis across electrodes. Specifically, we defined a 50 mm radius neighborhood around each electrode and conducted separate RDAs on spatiotemporal features obtained from each electrode neighborhood across 100 ms time bins. The selected radius sufficiently captures the nearest surrounding electrodes. The cross-validation procedure here followed Equation 1 with the exception that x now reflects spatiotemporal features (∼8 electrodes in a neighborhood × 40 time points) rather than entirely spatial features. The resulting RDMs from this searchlight analysis were compared with all models previously described using the same multiple regression approach.
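A minimal sketch of the neighborhood construction, assuming a hypothetical `(64, 3)` array of electrode positions in mm; a separate RDA is then run on the spatiotemporal features of each neighborhood.

```python
import numpy as np

def searchlight_neighborhoods(elec_xyz, radius_mm=50.0):
    """Indices of all electrodes within radius_mm of each electrode.

    elec_xyz: (n_electrodes, 3) positions in mm.
    Each neighborhood (~8 electrodes here) supplies features of
    roughly 8 electrodes x 40 time points per 100 ms bin.
    """
    d = np.linalg.norm(elec_xyz[:, None, :] - elec_xyz[None, :, :], axis=-1)
    return [np.flatnonzero(row <= radius_mm) for row in d]
```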

Statistics

For all tests conducted on EEG data, statistical significance was assessed using a nonparametric, cluster-based approach to determine clusters of time points (or electrodes for searchlight analyses) in which there was a significant effect at the group level (Nichols and Holmes, 2002). For time-resolved analyses, we defined clusters as consecutive time points that exceeded a statistical threshold (cluster-defining threshold) defined as the 95th percentile of the distribution of t values at each time point obtained using sign permutation tests computed 10,000 times (equivalent to p < 0.05, one tailed). Significant temporal clusters were then defined as cluster sizes that are equal to or greater than the 95th percentile of maximum cluster sizes across all permutations (equivalent to p < 0.05, one tailed). In addition, we calculated 95% confidence intervals for the onset times of the first significant cluster of the observed effects. This was accomplished by bootstrapping across participants (i.e., by selecting datasets randomly with replacement) 10,000 times and conducting the same data analysis including the permutation tests. For searchlight analyses (conducted across electrodes), cluster-based correction was conducted on each 100 ms time window separately on spatial clusters defined as nearby electrodes within a 50 mm radius.
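For illustration, the time-resolved cluster correction can be sketched as follows. This schematic NumPy version mirrors the description above (per-subject sign flips, a pointwise 95th percentile cluster-defining threshold, and a maximum cluster size criterion); all names are ours, and the details of the published implementation may differ.

```python
import numpy as np

def cluster_sign_permutation(betas, n_perm=10000, alpha=0.05, seed=0):
    """betas: (n_subjects, n_times) beta weights; H0: zero mean.
    Returns a boolean mask of time points in significant clusters."""
    rng = np.random.default_rng(seed)
    n_sub, n_times = betas.shape

    def t_stat(x):                        # one-sample t across subjects
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    def clusters(sig):                    # contiguous supra-threshold runs
        runs, start = [], None
        for t, s in enumerate(sig):
            if s and start is None:
                start = t
            elif not s and start is not None:
                runs.append((start, t)); start = None
        if start is not None:
            runs.append((start, len(sig)))
        return runs

    # Permutation distribution: random per-subject sign flips
    perm_t = np.empty((n_perm, n_times))
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        perm_t[p] = t_stat(betas * flips)

    # Cluster-defining threshold: 95th percentile of t per time point
    thresh = np.percentile(perm_t, 100 * (1 - alpha), axis=0)

    # Null distribution of the maximum cluster size
    max_sizes = [max([b - a for a, b in clusters(perm_t[p] > thresh)] or [0])
                 for p in range(n_perm)]
    size_crit = np.percentile(max_sizes, 100 * (1 - alpha))

    mask = np.zeros(n_times, dtype=bool)
    for a, b in clusters(t_stat(betas) > thresh):
        if (b - a) >= size_crit:
            mask[a:b] = True
    return mask
```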

Statistical tests for behavioral data (reaction time and movement time) were performed using four-way repeated-measures ANOVAs (Effector × Grasp Orientation × Shape × Texture) and adjusted for sphericity violations using the Greenhouse–Geisser (GHG) correction when needed. Additional post hoc analyses were conducted using repeated-measures t tests and corrected for multiple comparisons using the false discovery rate (Benjamini and Hochberg, 1995).

Results

Behavioral results

Average reaction time (RT; defined as the time between Go onset and movement onset) was 456 ms (SD = 89 ms). RTs submitted to a four-way repeated-measures ANOVA (Effector × Grasp Orientation × Shape × Texture) yielded a main effect of Effector (F(1.782, 32) = 19.654, p < 0.001, ηp2 = 0.551, GHG corrected) such that reaction times were longer during bimanual grasps (average RT = 490 ms) than left-handed (t(16) = 3.607, q = 0.003, d = 0.875, average RT = 470 ms) and right-handed grasps (t(16) = 7.373, q < 0.001, d = 1.788, average RT = 460 ms). In addition, there was an interaction between Effector and Orientation (F(1.695, 32) = 10.910, p = 0.007, ηp2 = 0.287, GHG corrected). Specifically, right-handed grasps were slower when performed in a counterclockwise orientation (t(16) = 2.729, q = 0.045, d = 0.662, average CW RT = 450 ms, average CCW RT = 470 ms), whereas other grasps had similar reaction times across orientations (t values < 1.981, q values > 0.098). No other comparison was significant (F values < 1.198, p values > 0.313).

Average movement time (MT; defined as the time between movement onset and movement end) was 249 ms (SD = 44 ms). The four-way repeated-measures ANOVA of MTs showed a main effect of Effector (F(1.968, 32) = 21.230, p < 0.001, ηp2=0.570, GHG corrected), so that bimanual grasps (average MT = 280 ms) were slower in comparison to left-handed (t(16) = 5.299, q < 0.001, d = 1.285, average MT = 249 ms) and right-handed grasps (t(16) = 5.644, q < 0.001, d = 1.369, average MT = 245 ms). No other comparison was significant (F values < 4.450, p values > 0.051).

Time course of visual and visuomotor representations and electrode informativeness

The middle rows in Figure 3 show the group-averaged 24 × 24 RDMs obtained at four different sample time points relative to Preview and Go onset, respectively. These RDMs captured dissimilarity information as can be visualized with multidimensional scaling (MDS; top and bottom rows). MDS was applied to the group-averaged RDMs, and the top three dimensions accounting for the most variance in the data are plotted as 3D plots of abstract representational space. Visual inspection of the plots shows that data during Preview are organized primarily based on shape and orientation (the organization based on orientation is more subtle with filled symbols for clockwise orientation plotted somewhat higher in the plots than nonfilled symbols for counterclockwise orientation), whereas during Go, data are organized according to orientation and effector.

Figure 3.

Group-averaged RDMs (second and third rows) and visualization by MDS (top and bottom rows) from four selected time points aligned to Preview and Go. Note that texture conditions were not graphed in the MDS plots given the weak effect of texture.

To quantify the extent to which the obtained RDMs captured the representation of visual object and of motor features, we conducted multiple regression on the obtained RDMs at each time point for each participant using the four RDM models (shape, texture, orientation, and effector). This allowed us to examine the unique contribution of each model to explaining the obtained RDMs. Including all four RDM models within the same analysis was not necessary because they were statistically independent of one another; the purpose was to obtain the same data format as in subsequent analyses that included statistically dependent RDMs.

Beta weights for the shape model reached significance 85 ms after object onset (Fig. 4A, top row). The weights peaked at 140 ms, dropping to lower levels thereafter. Beta weights for shape aligned to (shape irrelevant) Go were around zero (Fig. 4B).

Figure 4.

Representations of shape, texture, grasp orientation, and hand. A, B, Time course of representations aligned to the onset of Preview (A) and Go (B). Shaded envelopes around the curve indicate ±1 SEM. Shaded areas under the curve indicate time points that were significant (cluster-based sign permutation test with cluster-defining and cluster-size thresholds of p < 0.05). Horizontal colored error bars mark 95% bootstrapped confidence intervals of representation onset during Preview for shape (80 ms, 105 ms) and grasp orientation (70 ms, 155 ms), and during Go for grasp orientation (25 ms, 195 ms) and hand (105 ms, 145 ms). Note that during Go, shape representations reached significance from 550 to 670 ms but were excluded from the bootstrapping analysis given that this time period occurred during movement. C, Electrode informativeness for representations. Open circles indicate significant electrodes (cluster-based sign permutation test with cluster-defining and cluster-size thresholds of p < 0.05). Note that in A and B, β-weight curves are differently scaled along the y-axis.

Beta weights for texture did not reach significance (Fig. 4A,B, second row) as expected (Guo et al., 2019), and so texture models were not included in any subsequent analyses.

Grasp orientation representations formed during Preview (80–450 ms) and thus before effector specification (Fig. 4A, third row). During Go, β weights for grasp orientation became significant after 115 ms (with a brief interruption between 265 and 275 ms). Note that a small cluster of β weights (∼50–100 ms) was slightly too brief to reach significance but contributed to the wide confidence interval (25–195 ms).

Hand representations rose rapidly 115 ms after the Go signal and maintained significance afterward (Fig. 4B, fourth row; hand representations were absent before effector specification during Preview, as expected; Fig. 4A).

To explore the spatial profiles of shape, grasp orientation, and hand representations, we applied RDA to neighborhoods of electrodes (50 mm around each electrode, ∼8 electrodes) for temporal bins of 100 ms. Shape information during Preview peaked at posterior electrodes (Fig. 4C, first row) starting early (0–100 ms), then extending to nearly all electrodes for the 100–200 ms bin, and gradually declining thereafter. During Go, no electrodes reached significance, consistent with the results from time-resolved RDA.

Grasp orientation representations during Preview peaked at posterior and central electrodes and became gradually more prominent with a maximum at 200–300 ms (Fig. 4C, second row). By contrast, during Go, grasp orientation information mostly came from parietal, central, and temporal electrodes with relatively less involvement of occipital electrodes compared with Preview during 0–200 ms and 300–500 ms.

Hand representations during Preview showed a small effect at right frontal electrodes during the 400–500 ms bin, perhaps reflecting a previous-hand effect (because each block of trials tested only two effectors, there was a 50% chance that the same hand was used in two consecutive trials). Hand representations after Go especially involved a peak at central electrodes from 100 ms on but also included posterior electrodes (Fig. 4C, third row). After 200 ms they involved nearly all electrodes.

In sum, RDA allowed us to extract information about the time course and the informativeness of electrodes with regard to representations of grasp actions. Next we used two strategies to map in detail how neural programs for grasp actions unfold. First, to identify abstract or high-level (visual) processes, we looked for representations of grasp orientation before effector specification to then show that these representations re-emerged after effector specification yet earlier than effector representations. Second, to identify intermediate-level processes, we searched for action representations that partially differentiated between effectors. That is, we looked for representations shared by left-hand and bimanual effector representations, finding them to arise after representations that distinguished among all effectors.

Effector-independent representations of grasp orientation before and after effector specification

As illustrated in Figure 5 we conducted a series of analyses to demonstrate (1) that Preview representations of grasp orientation reflected effector-independent grasp processes (Fig. 5A–C) and (2) that these representations re-emerged during Go (Fig. 5D). Regarding the former, we first tested whether Preview grasp orientation representations were truly effector independent. That is, even before the Go signal specified the effector, participants might have generated motor plans based on the effector they had used during the respective previous trial. Behavioral evidence for such motor priming was reflected in slower reaction times when the effector changed from one trial to the next than when it did not (different effector, 474 ms; same effector, 443 ms; t(14) = 5.058; p < 0.001; d = 1.306). Therefore, we calculated ERPs sorting together trials that shared the same previous-trial effector condition (regardless of which effector would be used in the current trial) and reran RDA. We observed significant representations of the previous-trial effector from ∼400 ms to the end of Preview (Fig. 5A). This shows that motor priming did influence brain processes before effector specification but well after the representations of grasp orientation had formed. Thus, Preview orientation representations between 80 and 400 ms did not depend on effectors.

Figure 5.

Effector independence of grasp orientation representations during Preview and Go. A, Representations of the primed hand. Shaded area under the curve indicates significant time periods. B, Time course of grasp orientation representations during Preview. Red curve shows results baselined to −200 to −100 ms superimposed onto the original data (baseline, −100 to 0 ms). C, Representations of grasp orientation before (black lines) and after (colored lines) adding random jitter to the temporal alignment. Shaded areas between the black and colored lines indicate time periods of significant differences. Open circles on the scalp plots indicate significant changes in electrode involvement during periods of significant differences. C1, Preview data. C2, Go data. D, Dynamics of grasp orientation representations during Preview (D1), during Go (D2), and across the two events (D3) as reflected by temporal generalization of RDA. Representational dissimilarity matrices were first computed using ERP patterns across two time points (e.g., Preview at 100 ms vs Go at 150 ms) and were subsequently submitted to multiple regression to test for grasp orientation representations. Note that in D1 and D2 significant β weights yield symmetrical patterns because if a representation, say, 300 ms after object onset (or Go) is similar to a representation 120 ms after object onset (or Go), then the reverse must be true too. In contrast, in D3 the pattern of significant β values is asymmetrical because a representation 300 ms after object onset might be similar to a representation 120 ms after Go, but the reverse is not necessarily true: Preview representations at 120 ms do not have to be similar to Go representations at 300 ms. Blue dashed lines and brackets illustrate how data in C and D correspond to one another in time. All statistics in A–D were computed using cluster-based sign permutation tests with cluster-defining and cluster-size thresholds of p < 0.05.

Second, we tested whether Preview orientation representations emerged 80 ms after object onset because they were triggered by the object or because of an artifact of our choice of baseline (−100 to 0 ms), which would have eliminated any information right before object onset. However, we found that with an earlier baseline (−200 to −100 ms) orientation representations exhibited essentially the same trajectory (Fig. 5B). Crucially, this shows that the grasp orientation representations studied here formed based on visual and visuomotor processes rather than on the verbal instructions, given at the start of each block, to grasp clockwise or counterclockwise (participants obviously followed the verbal instructions, but the respective representations were invisible to our ERP data analysis).

Third, to further show that orientation representations reflected visual processes, we tested whether these representations were sensitive to the precise timing of object onset to illustrate how important correct temporal alignment is for RDA of ERPs; for example, in Fig. 4A Preview shape representations persist after 500 ms, but in Fig. 4B, with the same data aligned to Go, shape representations disappear. Therefore, we added random temporal jitter (±250 ms) to the time of visual object onset of individual trials to recalculate ERPs and rerun RDA. We found that orientation representations were significantly reduced between 200 and 400 ms (Fig. 5C1). Also, occipital electrode involvement largely disappeared during that time. This suggests that grasp orientation representations during Preview relied on processes that were tightly linked to visual object onset.

Curiously, orientation representations during Go showed a similar sensitivity to the timing of the auditory signal. Orientation representations as well as electrode involvement during two clusters between 50 and 200 ms were also significantly reduced (Fig. 5C2). This shows that orientation representations during Go were triggered by the auditory Go signal even though that signal carried no information relevant for grasp orientation. A possible reason is that the sound served as an impulse that pinged visual orientation representations reflexively (Wolff et al., 2017). Alternatively, the Go signal specifying the effector might have caused visual preparatory grasp computations to repeat. If so, orientation representations during Preview and Go should be similar. Indeed, this is what we found in the next section.

To demonstrate that Preview orientation representations re-emerged during Go, we used cross-temporal and cross-event generalization analysis (King and Dehaene, 2014). That is, we computed RDMs using ERP patterns from different time points during Preview and Go and then used multiple regression in the same way as before to test for the presence of grasp orientation representations. This analysis produced time-by-time matrices in which the diagonals of the matrices reflect time-specific representations, the same as the results discussed above, and the off-diagonals reflect generalizability of representations from one time point to another. During Preview (Fig. 5D1) the analysis revealed a chain of consecutive representations ∼70–110 ms (significant weights along the diagonal) followed by a mix of sustained and reactivating representations (weights form a roughly square-shaped cluster with some armlike patterns; see King and Dehaene, 2014, for a discussion of the different activation patterns). During Go (Fig. 5D2), a first cluster from ∼0–300 ms showed a similar mix of sustained activation and reactivation. A second cluster (∼300–800 ms) mainly showed sustained activation coinciding with movement execution. Crucially, comparing Preview and Go through cross-event generalization (Fig. 5D3) revealed similarities in orientation representations. That is, Go representations from 50 to 120 ms generalized to ∼150–500 ms during Preview, and Go representations from ∼150 to 300 ms generalized to ∼150–300 ms during Preview. Note that these times of generalization correspond well with the times of jitter-sensitive representations in Figure 5, C1 and C2. Hence, Preview orientation representations sensitive to visual object onset transiently reactivated during Go. Note that these representations preceded motor representations of the hand (Fig. 4B).
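For intuition, one cell of such a generalization matrix can be computed by applying the cross-validated distance of Equation 1 across two time points (or events); the shapes and names below are assumptions for the sketch.

```python
import numpy as np

def cross_time_rdm(patterns_a, patterns_b):
    """RDM comparing ERP patterns at one time point (event A) with
    patterns at another time point (event B).

    patterns_a, patterns_b: (n_conditions, B, n_electrodes) block-wise
    patterns at the two fixed time points.
    """
    n_cond, B, _ = patterns_a.shape
    rdm = np.zeros((n_cond, n_cond))
    for i in range(n_cond):
        for j in range(n_cond):
            total = 0.0
            for m in range(B - 1):
                for n in range(m + 1, B):
                    total += (patterns_a[i, m] - patterns_a[j, m]) @ \
                             (patterns_b[i, n] - patterns_b[j, n])
            rdm[i, j] = total / (B * (B - 1) / 2)
    return rdm
```

Each such RDM would then be regressed on the grasp orientation model to fill one cell of the time-by-time β matrix.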

Similar representation models of effectors

Our second strategy to map unfolding grasp actions in detail sought to identify intermediate-level effector representations that partially differentiated between effectors. To this end we moved away from RDAs investigating main effect-like representations of visual and motor processes (Fig. 2A). Instead, we tested a special model (analogous to an ANOVA linear contrast) that assumed grasp processes to be similar for left-handed and bimanual grasps as shown previously (Le et al., 2014, 2017, 2019) but dissimilar for right-handed grasps (L = B ≠ R model or LB model for short, Fig. 2B). For completeness we also tested models that assumed similar grasp representations between right-handed and bimanual grasps (RB model), as well as left- and right-handed grasps (LR model; RB and LR models were not predicted by previous studies, nevertheless their existence was not ruled out either).

As shown in Figure 6, similar effector representations arose after the Go signal that specified the effector to be used, as expected. LB representations rose rapidly and maintained significance from 165 ms onward (Fig. 6A, top row). Analyzing electrode informativeness showed that mainly central electrodes, with a bias for the right scalp, were involved starting 100 ms after Go (Fig. 6B). These results are consistent with previous behavioral and transcranial magnetic stimulation studies on bimanual grasping (Le et al., 2014, 2017, 2019).

Figure 6.

Similar effector representations dependent on shared neural processes between LB grasps, RB grasps, and LR grasps. A, Time course of representations aligned to the onset of Preview and Go. Shaded envelopes around the curve indicate ±1 SEM. Shaded areas under the curve indicate time points that were significant (cluster-based sign permutation test with cluster-defining and cluster-size thresholds of p < 0.05). Error bars indicate 95% bootstrapped confidence intervals of representation onset during Go for LB (130 ms, 175 ms) and RB (95 ms, 230 ms) representations of grasp orientation. Inset, The two plots above the LB model in the top row present MDS results to visualize how effector representations separate. The MDS plot on the left (130 ms) shows that first all effectors separate from one another, right-hand (red), left-hand (green), and bimanual grasping (blue). The MDS plot on the right (350 ms) illustrates that later a similarity of left-hand and bimanual grasping emerges. Note that evidence for the RB model did not emerge within the first three dimensions of the MDS analysis. The LR model attained negative β weights after Go (light green) and zero weights when tested as sole predictor (dark green), indicating that it served as a suppressor variable. B, Electrode informativeness for representations. Open circles indicate significant electrodes (cluster-based sign permutation test with cluster-defining and cluster-size thresholds of p < 0.05). C, Time delay between similar effector representations and hand representations (Fig. 4B, bottom). Top row, LB model. Bottom row, RB model. Shaded areas mark time delays down to the fifth percentile of the 10,000 bootstrapped comparisons.

Interestingly, RB representations also reached significance, although with smaller effect sizes (150–290 ms; Fig. 6A, second row; the β weight curve for RB representations is somewhat similar to the LB curve probably because both curves reflect visuomotor processes unfolding in time in a similar way). Electrode involvement for RB representations was less pronounced and mostly came from the left hemisphere with peaks around central electrodes after 200 ms.

Furthermore, LR representations obtained negative β weights (Fig. 6A, third row, light green curve) because the regression analysis turned the LR model into a suppressor variable to filter out irrelevant variability (i.e., a separate regression using the LR model as the only predictor variable yielded no significant β values; Fig. 6A, third row, dark green curve).

Finally and crucially, we were interested in the timing of the LB model relative to the representations of individual effectors (Fig. 4B, bottom row). If sensorimotor control of grasps evolved in a strictly canonical fashion from higher to intermediate and then lower level representations, then LB representations should be computed before effector representations (Fig. 4B, bottom row). Instead, we found that LB representations arose later (Fig. 6A, MDS plots). The median delay of 47 ms relative to individual effector representations was significant (Fig. 6C). To test this statistically, we subtracted the bootstrapped onset times of effector representations from the bootstrapped onset times of LB representations to obtain one-tailed 95% confidence intervals (CI lower bound, 15 ms). Likewise, RB representations arose with a median delay of 40 ms after effector representations (bootstrapped CI lower bound, 4 ms). When we realigned ERPs to movement onset, we observed similar trends of time differences (hand vs LB, 82 ms; CI lower bound, −65 ms; CI lower bound at α = 0.1, 12 ms, p = 0.095; hand vs RB, 110 ms; CI lower bound, 35 ms). Finally, it should be noted that these time differences are, if anything, conservative estimates because statistically the hand model for individual effector representations (Fig. 2A) had a slightly poorer signal-to-noise ratio than the effector similarity models (Fig. 2B) because of more uneven numbers of similar versus dissimilar cells (∼1/3–2/3 vs ∼5/9–4/9). In contrast, a systematic difference in computation times (e.g., left-hand grasping might take longer to compute) had no effect on time differences because all models included all effector conditions.

Contribution of eye movement artifacts

Eye movements can differ systematically across grasp conditions (Brouwer et al., 2009) and can contaminate EEG signals and, thus, contribute to multivariate analyses (Quax et al., 2019). To address this, we partialed out eye movement artifacts captured by ICA in our multiple regression analyses of RDMs. However, it is still possible that residual eye movements not captured by ICA contaminated neural-based ERPs. To test this possibility, we ran RDA using only frontal electrodes (Fp1, Fpz, Fp2, AF7, AF3, AFz, AF4, AF8). However, β weights only showed spurious significance for shape during Preview (Fig. 7, top row), effector representations during Go (Fig. 7, fourth row), and RB representations during Go (Fig. 7, sixth row). These results are also consistent with our observation that frontal electrodes carried little information in our main analyses (Figs. 4C, 6B). In sum, our analyses show that eye movement artifacts cannot sufficiently account for the results of our multivariate analyses.

Figure 7.

RDA using frontal-most electrodes (Fp1, Fpz, Fp2, AF7, AF3, AFz, AF4, AF8). Shaded envelopes around the curve indicate ±1 SEM. Shaded areas under the curve indicate time points that were significant (cluster-based sign permutation test with cluster-defining and cluster-size thresholds of p < 0.05).

Discussion

We investigated the temporal evolution of grasp programs using multivariate analysis of ERPs recorded from human participants. As a proxy of grasp computations, we used representations of grasp orientation, relating them in time to the development of high-level, effector-independent visual object information and lower level effector representations. In addition, to identify intermediate levels of grasp programming we included a bimanual grasping task. The results provide novel insights into the hierarchical structure underlying the control of human precision grasps. Notably, they suggest that grasp programs evolve in a partially noncanonical manner.

We studied grasp programs during a preview and a movement execution phase. During Preview, grasp programs emerged despite effectors not being specified yet. This is consistent with previous findings that effector-independent representations in posterior parietal and dorsal premotor cortex distinguish between grasp and reach plans (Gallivan et al., 2013; Turella et al., 2016, 2020). Extending these findings, here we show that effector-independent grasp programs already incorporate fine-grained information about grasp orientation. Of further significance, our EEG data allow us to determine that grasp programs emerge 80 ms after object onset, which is similar to when shape representations arose in the present study. It is also similar to orientation representations in a previous study where the effector was known ahead of time (Guo et al., 2019). This shows that grasp programs with or without delayed effector specification commence immediately after objects become visible, suggesting that early grasp programs reflect visual processes that are modulated by action intentions. In support of this idea, here we show that the grasp programs did not reflect the verbal instructions about grasp orientations given at the start of each experimental block because orientation representations did not form before object onset. Also, orientation representations during Preview did not form based on motor priming. Instead, they were temporally yoked to visual object onset, just like shape representations, especially at occipital and parietal electrodes. Our results suggest that early grasp orientation representations reflect high-level grasp programs that rely on object-based visual processes, regardless of whether effectors are specified or not.

The visual nature of orientation representations is consistent with the fact that visually guided grasping requires vision-based grasp point computations (Blake, 1992) and causes the attentional spotlight to split into two regions near the grasp points (Schiegg et al., 2003). Further, the involvement of occipital and parieto-occipital electrodes in orientation representations suggests that the underlying processes recruited occipitotemporal and occipitoparietal areas, all of which play a role in extracting grasp-relevant object information and action selection (Astafiev et al., 2004; Rice et al., 2007; Monaco et al., 2014; Fabbri et al., 2016) or abstract action representation (Tucciarelli et al., 2015). Further, visual representations of grasp orientation might reflect sensory predictions during grasp planning associated with contact points (Flanagan et al., 2006). In sum, the results highlight the time course and dynamics of visual grasp-goal representations that are modulated by action intentions independent of effector-related processes.

Grasp orientation representations re-emerged after the auditory Go signal announced the hand (or hands) with which to grasp, warranting two key observations. First, orientation representations were temporally yoked to the auditory signal between 50 and 200 ms. At about the same time (50–300 ms after Go), orientation representations were like those during Preview (150–300 ms after object onset, Fig. 5D3), possibly because the Go signal caused visual preparatory responses to repeat. Similar forms of reactivation of visual processes have been observed in fMRI studies where delayed grasping in darkness is associated with reactivation of object area lateral occipital cortex (Singhal et al., 2013; Monaco et al., 2017). Interestingly however, in the present study, shape representations, as opposed to vision-based orientation representations, did not reactivate. This suggests that object-based visual processes reactivated selectively for object features that were relevant for grasp planning consistent with the known selectivity of visuomotor control of grasps (Ganel and Goodale, 2003). Second, effector-independent visual orientation representations re-emerged during Go before the Go signal was converted into lower level representations of individual effectors, as expected from a canonical progression from higher to lower level visuomotor computations.

However, we found that grasp computations did not unfold in a strictly canonical fashion when we looked at hierarchies in more detail, mapping intermediate-level grasp control. Here, we defined intermediate-level computations as grasp representations that had partially, yet not fully, incorporated effector choices. We expected partially shared grasp representations because previous studies have shown that left-hand and bimanual (but not right hand) grasping overlap in computational and neural resources in the right hemisphere (Le and Niemeier, 2013a, b; Le et al., 2014, 2017, 2019). Indeed, we found that a model that assumed left-hand and bimanual grasps to be the same produced significant representations and that these representations relied on right-lateralized parietal, central, and frontocentral electrodes, indicating the involvement of the right frontoparietal grasp network as expected (Le et al., 2014, 2017). Also, the model produced β weights that were more prominent than those of a model assuming right-hand and bimanual grasping to be similar. These partially effector-dependent grasp representations reflected processes not entirely abstract from effector choice. For example, effector choice might have routed grasp point computations for left-hand and bimanual grasps to shared neural resources in the right hemisphere, such as the right anterior intraparietal sulcus (Le et al., 2014). In addition, right-hand and bimanual grasps might have shared neural resources in the left hemisphere. At any rate, right-hand, left-hand, and bimanual grasping implicate different patterns of muscle activation and thus partially shared grasp representations must have reflected intermediate control processes upstream from control circuits for arm and finger movements.

Therefore, given that partially shared grasp representations mark a level of motor control where left-hand grasping is different from right-hand grasping but not yet different from bimanual grasping, do these intermediate LB model representations arise before downstream effector representations distinguish between all three effector choices? Intriguingly, we found that this was not the case. Partially shared grasp representations emerged not before but 47 ms after effector representations formed.

Perhaps we found intermediate representations to be delayed because representations of individual effectors included the processes necessary to map the pitch of the Go signal onto a given effector. However, aligning ERPs to movement onset should have filtered out such stimulus-response mapping, revealing only effector representations related to action preparation. Even so, downstream effector representations arose earlier than shared representations, not later.
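
To make the movement-onset alignment concrete, the sketch below re-epochs hypothetical Go-locked single-trial data around each trial's movement onset, so that subsequent decoding indexes movement preparation rather than sensory processing of the auditory cue. Sampling rate, window, and onset times are assumptions for illustration.

```python
# Minimal sketch (made-up data and sampling rate) of re-aligning single-trial
# EEG epochs from the Go cue to each trial's movement onset.
import numpy as np

fs = 500                                    # assumed sampling rate in Hz
pre, post = int(0.3 * fs), int(0.5 * fs)    # window: -300 ms to +500 ms
rng = np.random.default_rng(3)

n_trials, n_channels, n_samples = 100, 64, 2000
go_locked = rng.standard_normal((n_trials, n_channels, n_samples))
# Hypothetical per-trial movement onsets, in samples relative to the Go cue.
move_onset = rng.integers(400, 900, n_trials)

realigned = np.stack([
    go_locked[t, :, onset - pre : onset + post]
    for t, onset in enumerate(move_onset)
])                                          # trials x channels x (pre + post)
```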

Perhaps LB model representations did not reflect an intermediate stage of processing but a specialized system for planning atypical or awkward actions. Left-hand grasping is indeed a less common and more awkward action than right-hand grasping. For example, unlike right-hand grasps, left-hand and awkward (and thus little-practiced) right-hand grasps are susceptible to size-contrast illusions (Gonzalez et al., 2008). Do bimanual grasps, then, resemble left-hand grasps in being uncommon or awkward? This appears unlikely. Bimanual actions are not uncommon; they are frequently used in daily life (Kilbreath and Heard, 2005). Of course, it could be argued that bimanual precision grasps (as tested here) are less common than bimanual grasping with the whole hand. Nevertheless, previous research suggests that bimanual precision grasps are about as proficient as right-hand precision grasps; for example, bimanual grip apertures are scaled to object size as proficiently as apertures for right-hand grasping (compare Le and Niemeier, 2013a, with Le and Niemeier, 2014). Furthermore, a recent study directly showed that shared computations between left-hand and bimanual grasping did not arise because the actions were awkward or unusual (Le et al., 2019).

Perhaps the LB model did not flag an intermediate processing stage but brain activity that is merely less lateralized than individual effector representations. However, not all aspects of shared left-hand and bimanual activity are bilateral. For example, magnetic stimulation studies have shown that stimulation of right, but not left, parietal regions disrupted visuomotor transformations for bimanual grasping, just as for left-hand grasping (Le et al., 2014, 2017). Furthermore, not all downstream effector-specific representations are necessarily lateralized; at the very least, executing bimanual actions requires activation of primary motor cortex in both hemispheres.

In conclusion, by tracing the time course of the neural processes underlying the visuomotor control of grasping, the present study offers novel insights into the temporal structure of the visual-to-motor transformations that underlie grasp computations. We show that effector-independent grasp representations start as object-based visual processes and are followed by visuomotor and motor processes. However, partially shared grasp representations, and thus intermediate levels of control, emerge after lower level effector-related motor representations. Our results strongly suggest that grasp control does not necessarily evolve in a canonical fashion, highlighting the need for methods with fine temporal resolution, such as EEG or MEG, to attain a comprehensive understanding of human sensorimotor control.

Footnotes

  • This work was supported in part by grants from the Natural Sciences and Engineering Research Council of Canada.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Matthias Niemeier at m.niemeier{at}utoronto.ca

SfN exclusive license.

References

  1. Ariani G, Oosterhof NN, Lingnau A (2018) Time-resolved decoding of planned delayed and immediate prehension movements. Cortex 99:330–345. doi:10.1016/j.cortex.2017.12.007
  2. Astafiev SV, Stanley CM, Shulman GL, Corbetta M (2004) Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nat Neurosci 7:542–548. doi:10.1038/nn1241
  3. Baumann MA, Fluet MC, Scherberger H (2009) Context-specific grasp movement representation in the macaque anterior intraparietal area. J Neurosci 29:6436–6448. doi:10.1523/JNEUROSCI.5479-08.2009
  4. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Series B 57:289–300.
  5. Blake A (1992) Computational modelling of hand-eye coordination. Philos Trans R Soc Lond B 337:351–360.
  6. Brouwer AM, Franz VH, Gegenfurtner KR (2009) Differences in fixations between grasping and viewing objects. J Vis 9:18.1–24.
  7. Cavina-Pratesi C, Monaco S, Fattori P, Galletti C, McAdam TD, Quinlan DJ, Goodale MA, Culham JC (2010) Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. J Neurosci 30:10306–10323. doi:10.1523/JNEUROSCI.2023-10.2010
  8. Cichy RM, Pantazis D (2017) Multivariate pattern analysis of MEG and EEG: a comparison of representational structure in time and space. Neuroimage 158:441–454. doi:10.1016/j.neuroimage.2017.07.023
  9. Cichy RM, Pantazis D, Oliva A (2014) Resolving human object recognition in space and time. Nat Neurosci 17:455–462. doi:10.1038/nn.3635
  10. Davare M, Andres M, Cosnard G, Thonnard JL, Olivier E (2006) Dissociating the role of ventral and dorsal premotor cortex in precision grasping. J Neurosci 26:2260–2268. doi:10.1523/JNEUROSCI.3386-05.2006
  11. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134:9–21. doi:10.1016/j.jneumeth.2003.10.009
  12. Fabbri S, Stubbs KM, Cusack R, Culham JC (2016) Disentangling representations of object and grasp properties in the human brain. J Neurosci 36:7648–7662. doi:10.1523/JNEUROSCI.0313-16.2016
  13. Fattori P, Breveglieri R, Marzocchi N, Filippini D, Bosco A, Galletti C (2009) Hand orientation during reach-to-grasp movements modulates neuronal activity in the medial posterior parietal area V6A. J Neurosci 29:1928–1936. doi:10.1523/JNEUROSCI.4998-08.2009
  14. Fattori P, Breveglieri R, Bosco A, Gamberini M, Galletti C (2017) Vision for prehension in the medial parietal cortex. Cereb Cortex 27:1149–1163. doi:10.1093/cercor/bhv302
  15. Flanagan JR, Bowman MC, Johansson RS (2006) Control strategies in object manipulation tasks. Curr Opin Neurobiol 16:650–659. doi:10.1016/j.conb.2006.10.005
  16. Flash T, Hogan N (1985) The coordination of arm movements: an experimentally confirmed mathematical model. J Neurosci 5:1688–1703. doi:10.1523/JNEUROSCI.05-07-01688.1985
  17. Gallivan JP, Culham JC (2015) Neural coding within human brain areas involved in actions. Curr Opin Neurobiol 33:141–149. doi:10.1016/j.conb.2015.03.012
  18. Gallivan JP, McLean DA, Valyear KF, Pettypiece CE, Culham JC (2011) Decoding action intentions from preparatory brain activity in human parieto-frontal networks. J Neurosci 31:9599–9610. doi:10.1523/JNEUROSCI.0080-11.2011
  19. Gallivan JP, McLean DA, Flanagan JR, Culham JC (2013) Where one hand meets the other: limb-specific and action-dependent movement plans decoded from preparatory signals in single human frontoparietal brain areas. J Neurosci 33:1991–2008. doi:10.1523/JNEUROSCI.0541-12.2013
  20. Ganel T, Goodale MA (2003) Visual control of action but not perception requires analytical processing of object shape. Nature 426:664–667. doi:10.1038/nature02156
  21. Gonzalez CL, Ganel T, Whitwell RL, Morrissey B, Goodale MA (2008) Practice makes perfect, but only with the right hand: sensitivity to perceptual illusions with awkward grasps decreases with practice in the right but not the left hand. Neuropsychologia 46:624–631. doi:10.1016/j.neuropsychologia.2007.09.006
  22. Grootswagers T, Wardle SG, Carlson TA (2017) Decoding dynamic brain patterns from evoked responses: a tutorial on multivariate pattern analysis applied to time-series neuroimaging data. J Cogn Neurosci 29:677–697.
  23. Guggenmos M, Sterzer P, Cichy RM (2018) Multivariate pattern analysis for MEG: a comparison of dissimilarity measures. Neuroimage 173:434–447. doi:10.1016/j.neuroimage.2018.02.044
  24. Guo LL, Nestor A, Nemrodov D, Frost A, Niemeier M (2019) Multivariate analysis of electrophysiological signals reveals the temporal properties of visuomotor computations for precision grips. J Neurosci 39:9585–9597. doi:10.1523/JNEUROSCI.0914-19.2019
  25. Janssen P, Scherberger H (2015) Visual guidance in control of grasping. Annu Rev Neurosci 38:69–86. doi:10.1146/annurev-neuro-071714-034028
  26. Kadmon Harpaz N, Flash T, Dinstein I (2014) Scale-invariant movement encoding in the human motor system. Neuron 81:452–461.
  27. Kaneshiro B, Perreau Guimaraes M, Kim HS, Norcia AM, Suppes P (2015) A representational similarity analysis of the dynamics of object processing using single-trial EEG classification. PLoS One 10:e0135697. doi:10.1371/journal.pone.0135697
  28. Kassraian-Fard P, Matthis C, Balsters JH, Maathuis MH, Wenderoth N (2016) Promises, pitfalls, and basic guidelines for applying machine learning classifiers to psychiatric imaging data, with autism as an example. Front Psychiatry 7:177. doi:10.3389/fpsyt.2016.00177
  29. Kietzmann TC, Spoerer CJ, Sörensen LKA, Cichy RM, Hauk O, Kriegeskorte N (2019) Recurrence is required to capture the representational dynamics of the human visual system. Proc Natl Acad Sci U S A 116:21854–21863. doi:10.1073/pnas.1905544116
  30. Kilbreath SL, Heard RC (2005) Frequency of hand use in healthy older persons. Aust J Physiother 51:119–122. doi:10.1016/S0004-9514(05)70040-4
  31. King JR, Dehaene S (2014) Characterizing the dynamics of mental representations: the temporal generalization method. Trends Cogn Sci 18:203–210. doi:10.1016/j.tics.2014.01.002
  32. Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, Tanaka K, Bandettini PA (2008) Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron 60:1126–1141. doi:10.1016/j.neuron.2008.10.043
  33. Le A, Niemeier M (2013a) A right hemisphere dominance for bimanual grasps. Exp Brain Res 224:263–273. doi:10.1007/s00221-012-3309-z
  34. Le A, Niemeier M (2013b) Left visual field preference for a bimanual grasping task with ecologically valid object sizes. Exp Brain Res 230:187–196. doi:10.1007/s00221-013-3643-9
  35. Le A, Niemeier M (2014) Visual field preferences of object analysis for grasping with one hand. Front Hum Neurosci 8:782.
  36. Le A, Vesia M, Yan X, Niemeier M, Crawford JD (2014) The right anterior intraparietal sulcus is critical for bimanual grasping: a TMS study. Cereb Cortex 24:2591–2603. doi:10.1093/cercor/bht115
  37. Le A, Crawford JD, Niemeier M, Yan X, Vesia M (2017) Parietal area BA7 integrates motor programs for reaching, grasping, and bimanual coordination. J Neurophysiol 117:624–636. doi:10.1152/jn.00299.2016
  38. Le A, Wall FB, Lin G, Arunthavarajah R, Niemeier M (2019) Shared right-hemispheric representations of sensorimotor goals in dynamic task environments. Exp Brain Res 237:977–987. doi:10.1007/s00221-019-05478-2
  39. Lopez-Calderon J, Luck SJ (2014) ERPLAB: an open-source toolbox for the analysis of event-related potentials. Front Hum Neurosci 8:213.
  40. Michaels JA, Scherberger H (2018) Population coding of grasp and laterality-related information in the macaque fronto-parietal network. Sci Rep 8:1710. doi:10.1038/s41598-018-20051-7
  41. Michaels JA, Schaffelhofer S, Agudelo-Toro A, Scherberger H (2020) A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping. Proc Natl Acad Sci U S A 117:32124–32135. doi:10.1073/pnas.2005087117
  42. Mohsenzadeh Y, Mullin C, Lahner B, Cichy RM, Oliva A (2019) Reliability and generalizability of similarity-based fusion of MEG and fMRI data in human ventral and dorsal visual streams. Vision 3:8. doi:10.3390/vision3010008
  43. Monaco S, Cavina-Pratesi C, Sedda A, Fattori P, Galletti C, Culham JC (2011) Functional magnetic resonance adaptation reveals the involvement of the dorsomedial stream in hand orientation for grasping. J Neurophysiol 106:2248–2263. doi:10.1152/jn.01069.2010
  44. Monaco S, Chen Y, Medendorp WP, Crawford JD, Fiehler K, Henriques DYP (2014) Functional magnetic resonance imaging adaptation reveals the cortical networks for processing grasp-relevant object properties. Cereb Cortex 24:1540–1554. doi:10.1093/cercor/bht006
  45. Monaco S, Gallivan JP, Figley TD, Singhal A, Culham JC (2017) Recruitment of foveal retinotopic cortex during haptic exploration of shapes and actions in the dark. J Neurosci 37:11572–11591. doi:10.1523/JNEUROSCI.2428-16.2017
  46. Nemrodov D, Niemeier M, Mok JNY, Nestor A (2016) The time course of individual face recognition: a pattern analysis of ERP signals. Neuroimage 132:469–476. doi:10.1016/j.neuroimage.2016.03.006
  47. Nemrodov D, Niemeier M, Patel A, Nestor A (2018) The neural dynamics of facial identity processing: insights from EEG-based pattern analysis and image reconstruction. eNeuro 5:ENEURO.0358-17.2018. doi:10.1523/ENEURO.0358-17.2018
  48. Nichols TE, Holmes AP (2002) Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum Brain Mapp 15:1–25.
  49. Nili H, Wingfield C, Walther A, Su L, Marslen-Wilson W, Kriegeskorte N (2014) A toolbox for representational similarity analysis. PLoS Comput Biol 10:e1003553. doi:10.1371/journal.pcbi.1003553
  50. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9:97–113. doi:10.1016/0028-3932(71)90067-4
  51. Pion-Tonachini L, Kreutz-Delgado K, Makeig S (2019) ICLabel: an automated electroencephalographic independent component classifier, dataset, and website. Neuroimage 198:181–197. doi:10.1016/j.neuroimage.2019.05.026
  52. Quax SC, Dijkstra N, van Staveren MJ, Bosch SE, van Gerven MAJ (2019) Eye movements explain decodability during perception and cued attention in MEG. Neuroimage 195:444–453.
  53. Rice NJ, Valyear KF, Goodale MA, Milner AD, Culham JC (2007) Orientation sensitivity to graspable objects: an fMRI adaptation study. Neuroimage 36 Suppl 2:T87–T93. doi:10.1016/j.neuroimage.2007.03.032
  54. Schaffelhofer S, Scherberger H (2016) Object vision to hand action in macaque parietal, premotor, and motor cortices. eLife 5:e15278. doi:10.7554/eLife.15278
  55. Schiegg A, Deubel H, Schneider WX (2003) Attentional selection during preparation of prehension movements. Vis Cogn 10:409–431. doi:10.1080/13506280244000140
  56. Singhal A, Monaco S, Kaufman LD, Culham JC (2013) Human fMRI reveals that delayed action re-recruits visual perception. PLoS One 8:e73629. doi:10.1371/journal.pone.0073629
  57. Sburlea AI, Wilding M, Müller-Putz GR (2021) Disentangling human grasping type from the object's intrinsic properties using low-frequency EEG signals. Neuroimage Rep 1:100012.
  58. Tucciarelli R, Turella L, Oosterhof NN, Weisz N, Lingnau A (2015) MEG multivariate analysis reveals early abstract action representations in the lateral occipitotemporal cortex. J Neurosci 35:16034–16045. doi:10.1523/JNEUROSCI.1422-15.2015
  59. Tunik E, Frey SH, Grafton ST (2005) Virtual lesions of the anterior intraparietal area disrupt goal-dependent on-line adjustments of grasp. Nat Neurosci 8:505–511. doi:10.1038/nn1430
  60. Turella L, Tucciarelli R, Oosterhof NN, Weisz N, Rumiati R, Lingnau A (2016) Beta band modulations underlie action representations for movement planning. Neuroimage 136:197–207. doi:10.1016/j.neuroimage.2016.05.027
  61. Turella L, Rumiati R, Lingnau A (2020) Hierarchical action encoding within the human brain. Cereb Cortex 30:2924–2938. doi:10.1093/cercor/bhz284
  62. Walther A, Nili H, Ejaz N, Alink A, Kriegeskorte N, Diedrichsen J (2016) Reliability of dissimilarity measures for multi-voxel pattern analysis. Neuroimage 137:188–200. doi:10.1016/j.neuroimage.2015.12.012
  63. Wardle SG, Kriegeskorte N, Grootswagers T, Khaligh-Razavi SM, Carlson TA (2016) Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG. Neuroimage 132:59–70. doi:10.1016/j.neuroimage.2016.02.019
  64. Wolff MJ, Jochim J, Akyurek EG, Stokes MG (2017) Dynamic hidden states underlying working-memory-guided behavior. Nat Neurosci 20:864–871. doi:10.1038/nn.4546

Keywords

  • EEG
  • grasp program
  • motor
  • MVPA
  • precision grasp
  • visual
