Abstract
Recent data and motor control theory argue that movement planning involves preparing the neural state of primary motor cortex (M1) for forthcoming action execution. Theories related to internal models, feedback control, and predictive coding also emphasize the importance of sensory prediction (and processing) before (and during) the movement itself, explaining why motor-related deficits can arise from damage to primary somatosensory cortex (S1). Motivated by this work, here we examined whether motor planning, in addition to changing the neural state of M1, changes the neural state of S1, preparing it for the sensory feedback that arises during action. We tested this idea in two human functional MRI studies (N = 31, 16 females) involving delayed object manipulation tasks, focusing our analysis on premovement activity patterns in M1 and S1. We found that the motor effector to be used in the upcoming action could be decoded, well before movement, from neural activity in M1 in both studies. Critically, we found that this effector information was also present, well before movement, in S1. In particular, we found that the encoding of effector information in area 3b (S1 proper) was linked to the contralateral hand, similar to that found in M1, whereas in areas 1 and 2 this encoding was present in both the contralateral and ipsilateral hemispheres. Together, these findings suggest that motor planning not only prepares the motor system for movement but also changes the neural state of the somatosensory system, presumably allowing it to anticipate the sensory information received during movement.
SIGNIFICANCE STATEMENT Whereas recent work on motor cortex has emphasized the critical role of movement planning in preparing neural activity for movement generation, it has not investigated the extent to which planning also modulates activity in the adjacent primary somatosensory cortex. This reflects a key gap in knowledge, given that recent motor control theories emphasize the importance of sensory feedback processing in effective movement generation. Here, we find, through a convergence of experiments and analyses, that the planning of object manipulation tasks, in addition to modulating activity in motor cortex, changes the state of neural activity in different subfields of human S1. We suggest that this modulation prepares S1 for the sensory information it will receive during action execution.
Introduction
Motor planning has long been known to improve movement reaction time, speed, and accuracy (Keele, 1968; Klapp and Erwin, 1976; Rosenbaum, 1980; Wong et al., 2015; Haith et al., 2016). Consequently, a major focus of neural investigations in the field of motor control has been on studying the changes in motor cortical activity that precede movement and how these changes relate to various parameters (e.g., direction, extent, speed, curvature, force) of the forthcoming movement to be executed (Tanji and Evarts, 1976; Riehle and Requin, 1989; Hocherman and Wise, 1991; Shen and Alexander, 1997; Messier and Kalaska, 2000; Pesaran et al., 2006; Churchland et al., 2006b; Batista et al., 2007). Recent theories have argued that motor planning involves preparing the neural state of the motor system for upcoming movement execution and have drawn links between changes in neural population activity and subsequent muscle activity (Shenoy et al., 2013). This work has enhanced our understanding at the neural level of how motor cortex generates movement and has highlighted the importance of preparatory activity in setting up the state of the motor system for this to occur (Churchland et al., 2006a; Churchland and Shenoy, 2007; Afshar et al., 2011; Ames et al., 2014).
Separately from the motor-related process of generating movement, a key component of successful motor control is the prediction and processing of the sensory consequences of action (Wolpert and Flanagan, 2001). For example, the sensorimotor control of object manipulation tasks involves predicting sensory signals associated with object contact events (e.g., object lift-off, replacement), which can occur in multiple sensory modalities, including tactile, proprioceptive, and visual (Johansson and Flanagan, 2009). By comparing the expected to the actual sensory events that are experienced, the CNS can monitor task progression, detect performance errors, and quickly launch appropriate task-protective corrective actions as needed (Johansson and Flanagan, 2009). The anticipation of the sensory consequences of action has long been theorized to rely on an efference copy of motor commands being sent from the motor cortex to the relevant sensory cortices (von Holst and Mittelstaedt, 1950; Crapse and Sommer, 2008). Consistent with this idea, work from both rodents and nonhuman primates (NHPs) has demonstrated that the motor cortex sends direct projections to the somatosensory system (Porter and White, 1983; Darian-Smith et al., 1993; Miyashita et al., 1994; Burton and Fabri, 1995; Cauller et al., 1998; Huffman and Krubitzer, 2001; Kinnischtzke et al., 2014) and that these motor inputs can shape neural responses in primary somatosensory cortex (S1; Jiang et al., 1990a; Lee et al., 2008; Zagha et al., 2013; Khateb et al., 2017; Umeda et al., 2019).
Given the behavioral importance of predicting task-specific tactile consequences during object manipulation tasks, here we hypothesized that action planning, in addition to preparing motor areas for execution (Shenoy et al., 2013), involves preparing S1 for the anticipated task-specific somatosensory signals. Because these sensory signals should change depending on the exact action to be executed, we predicted that, as is the case in the motor system (Porter and Lemon, 1995; Gallivan and Culham, 2015), planning-related modulations in S1 should exhibit the hallmarks of contralaterality and effector specificity, and perhaps also differentiation according to hierarchical organization (i.e., changes in informational content from lower- to higher-order S1 subareas).
To explore these ideas in humans, here we examined, using functional magnetic resonance imaging (fMRI) in two separate experiments involving delayed object lifting tasks, whether the upcoming actions to be performed by an individual are represented in delay period activity in S1. In both studies, we find that information related to the motor effector to be used can be decoded from premovement activity patterns in different subareas of human S1. These findings suggest that motor planning changes the neural state of somatosensory cortex based on the movement being prepared, perhaps readying it to extract task-related sensory information during the unfolding movement.
Materials and Methods
Overview
To test whether and how the motor preparation of object manipulation tasks changes the neural state of somatosensory cortex, we performed new analyses on two recently published datasets (Gale et al., 2021). In brief, both experiments used delayed movement tasks, allowing us to separate motor-planning-related modulations in somatosensory cortex from the later sensory feedback-related modulations that occur during movement execution. In the first experiment, each trial had participants first prepare and then execute either a left-hand or right-hand object lift-and-replace action (these movements were cued via two nonsense auditory commands; Fig. 1A,B). In the second experiment, each trial had participants first prepare and then execute either a right-hand object lift-and-replace action or a target-directed eye movement (these movements were cued via a change in the color of a central fixation light; Fig. 1D,E). Both experiments allowed us to examine whether human somatosensory cortex, during the delay period before movement, encodes the upcoming motor effector to be used (left vs right hand in experiment 1, and right hand vs eye in experiment 2). Such a result would provide evidence that the neural state of somatosensory cortex is modulated before movement execution.
Participants
Sixteen healthy right-handed subjects (eight females, 21–25 years of age) participated in experiment 1, and a separate cohort of 15 healthy right-handed subjects (eight females, 20–32 years of age) participated in experiment 2. All experiments were undertaken with the understanding and written consent of each subject, obtained in accordance with ethical standards set out by the Declaration of Helsinki and with procedures cleared by the Queen's University Health Sciences Research Ethics Board. Data from one subject in experiment 1 and from two subjects in experiment 2 were excluded from further analyses because of data collection problems in the experimental testing sessions, resulting in final sample sizes of 15 and 13 subjects, respectively. The complete methods for experiments 1 and 2 have been described in full detail previously (Gale et al., 2021). Here we provide more concise descriptions of the methods relevant to our new analyses.
Experiment 1
Subjects were scanned in a head-tilted configuration (allowing direct viewing of the hand workspace) while they performed a delayed object lift-and-replace task (Fig. 1A,B). During the experiment, the participant's workspace was illuminated by two bright white Light Emitting Diodes (LEDs) attached to flexible plastic stalks. To control for eye movements, a small red fixation LED, attached to a flexible plastic stalk, was positioned above the hand workspace. Experimental timing and lighting were controlled with in-house software created with C++ and MATLAB (MathWorks). Throughout fMRI testing, the subject's hand movements were monitored using an MR-compatible infrared-sensitive camera, optimally positioned on one side of the platform and facing toward the subject. The videos captured during the experiment were analyzed offline to verify that the subjects were performing the task as instructed and to identify error trials (e.g., performing the wrong action, moving the hand before the Execute epoch).
On each trial, subjects were required to perform one of the following two actions on a centrally located target object: (1) grasp, lift, and replace the object with the left hand or (2) grasp, lift, and replace the object with the right hand (Fig. 1A). These actions were cued via two nonsense speech cues, Compty or Midwig. For a given trial, each nonsense speech cue was paired with a corresponding hand action (i.e., subjects were instructed that for a predetermined set of trials, Compty cued a left-hand movement, whereas Midwig cued a right-hand movement). Halfway through the scan (following a Switch auditory cue), this cue-hand mapping was reversed (e.g., Compty would now cue a right-hand movement, whereas Midwig would now cue a left-hand movement). Following the delivery of the auditory command, there was a jittered delay interval of 6–12 s (a Gaussian random jitter with a mean of 9 s), after which the verbal auditory command Go was delivered, prompting subjects to execute the prepared grasp, lift, and replace action. For this execution phase of the trial, subjects were required to precision grasp and then lift the object with their thumb and index finger (∼2 cm above the platform, via a rotation of the wrist), hold it in midair for ∼1 s, and then replace it. Two seconds following the onset of this Go cue, the illuminator lights were extinguished, and subjects then waited 16 s for the next trial to begin (intertrial interval). Throughout the entire time course of the trial, subjects were required to maintain gaze on the fixation LED, and other than the execution of the hand actions, participants were required to keep their hands still and in prespecified home positions to the left and right of the central object.
This experiment resulted in a total of four different auditory-hand mappings (and thus trial types) per experimental run: Compty-left hand, Compty-right hand, Midwig-left hand, and Midwig-right hand (with five repetitions each; 20 trials in total per run). With the exception of the blocked nature of these trials, trial types were pseudorandomized within a run and counterbalanced across all runs so that each trial type was preceded and followed equally often by every other trial type across the entire experiment. For the purposes of the present analysis (i.e., decoding motor effector information from primary somatosensory cortex), we collapsed trials across auditory cues (Compty vs Midwig) and examined decoding only with respect to hand information (left vs right).
During MRI testing, we also tracked subjects' behavior using an MRI-compatible force sensor (Nano 17 F/T sensors, ATI Industrial Automation) located beneath the object and attached to our MRI platform. This allowed us to track both subject reaction time, which we define as the time from the onset of the Go cue to object contact (mean = 1601 ms, SD = 389 ms), and movement time, which we define as the time from object lift to replacement (mean = 2582 ms, SD = 662 ms), as well as to monitor task performance more generally. Each subject participated in eight functional runs (for a total of 160 trials, 80 trials for each hand; Gale et al., 2021). Note that we did not conduct eye tracking in either this experiment or experiment 2, because of difficulties in monitoring gaze in the head-tilted configuration with standard MRI-compatible eye trackers (owing to occlusion from the eyelids). Nevertheless, behavioral control experiments have demonstrated that the same groups of subjects tested with MRI can reliably maintain fixation during behavioral testing (Gale et al., 2021).
Experiment 2
This study was similar to experiment 1, with the following exceptions: (1) participants performed either a right-hand object grasp and lift action on the centrally located object or a target-directed eye movement toward that same object (i.e., two experimental conditions; see Fig. 1D), (2) the Delay epoch was a fixed duration (12 s), and (3) subjects were cued about the upcoming movement to be executed via a 0.5 s change in the fixation LED color (from red to either blue or green, with the color-action mapping being counterbalanced across subjects; i.e., an LED change to blue cued a grasp action in half the subjects and cued an eye movement in the other half of subjects). The eye movement action involved the subjects making a saccadic eye movement from the fixation LED to the target object, holding that position until the illuminator LEDs were extinguished, and then returning their gaze back to the fixation LED. The two trial types, with five repetitions per condition (10 trials total per run), were pseudorandomized as in experiment 1. Each subject participated in at least eight functional runs (thus creating 40 repetitions per condition across the experiment).
Data acquisition and analysis
Subjects were scanned using a 3T Siemens TIM MAGNETOM Trio MRI scanner located at the Center for Neuroscience Studies, Queen's University. An identical imaging protocol was used for both experiments 1 and 2, with the exception of slice thickness (experiment 1 = 4 mm; experiment 2 = 3 mm). In both experiments, MRI volumes were acquired using a T2*-weighted single-shot gradient-echo echo-planar imaging acquisition sequence [time to repetition = 2000 ms, in-plane resolution = 3 mm × 3 mm, time to echo = 30 ms, field of view = 240 mm × 240 mm, matrix size = 80 × 80, flip angle = 90°, and acceleration factor (integrated parallel acquisition technologies) = 2 with generalized autocalibrating partially parallel acquisitions reconstruction]. Each volume comprised 35 contiguous (no gap) oblique slices acquired at a ∼30° caudal tilt with respect to the plane of the anterior and posterior commissure. Subjects were scanned in a head-tilted configuration, allowing direct viewing of the hand workspace. We used a combination of imaging coils to achieve a good signal-to-noise ratio and to enable direct object workspace viewing without mirrors or occlusion. Specifically, we tilted (∼20°) the posterior half of the 12-channel receive-only head coil (six channels) and suspended a four-channel receive-only flex coil over the anterior-superior part of the head. An identical T1-weighted magnetization-prepared rapid acquisition gradient-echo (MPRAGE) anatomic scan was also collected for both experiments 1 and 2 (time to repetition = 1760 ms, time to echo = 2.98 ms, field of view = 192 mm × 240 mm × 256 mm, matrix size = 192 × 240 × 256, flip angle = 9°, 1 mm isotropic voxels).
fMRI data preprocessing
Preprocessing of functional data collected in experiments 1 and 2 was performed using fMRIPrep 1.4.1 (Esteban et al., 2018; https://zenodo.org/record/3588470), which is based on Nipype 1.2.0 (Gorgolewski et al., 2011; https://zenodo.org/record/3588470).
Anatomical data preprocessing
The T1-weighted (T1w) image was corrected for intensity nonuniformity with N4BiasFieldCorrection (Tustison et al., 2010), distributed with Advanced Normalization Tools (ANTs 2.2.0; Avants et al., 2008), and used as the T1w reference throughout the workflow. The T1w reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as the target template. Brain tissue segmentation of cerebrospinal fluid, white matter, and gray matter was performed on the brain-extracted T1w using the Functional Magnetic Resonance Imaging of the Brain's (FMRIB) Automated Segmentation Tool [FMRIB Software Library (FSL) version 5.0.9; Zhang et al., 2001]. Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1; Dale et al., 1999), and the brain mask estimated previously was refined with a custom variation of the Mindboggle method to reconcile ANTs-derived and FreeSurfer-derived segmentations of the cortical gray matter (Klein et al., 2017). Volume-based spatial normalization to standard space, FSL's Montreal Neurological Institute (MNI) International Consortium for Brain Mapping 152 nonlinear sixth-generation Asymmetric Average Brain Stereotaxic Registration Model (Evans et al., 2012; TemplateFlow ID: MNI152NLin6Asym), was performed through nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both the T1w reference and the T1w template.
Functional data preprocessing
For each blood oxygenation level-dependent (BOLD) run per subject (across all tasks and/or sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. The BOLD reference was then coregistered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration (Greve and Fischl, 2009). Coregistration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9; Jenkinson et al., 2002). BOLD runs were slice-time corrected using 3dTshift from Analysis of Functional NeuroImages (AFNI) 20160207 (Cox and Hyde, 1997). The BOLD time series was normalized by resampling into standard space (voxel size = 2 × 2 × 2 mm). All resamplings were performed with a single interpolation step by composing all the pertinent transformations (i.e., head-motion transform matrices and coregistrations to anatomic and output spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels (Lanczos, 1964).
Many internal operations of fMRIPrep use Nilearn 0.5.2 (Abraham et al., 2014), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep's documentation.
Error trials
Error trials involving the hand were identified offline from the videos recorded during the experimental testing session and were excluded from analysis by assigning these trials predictors of no interest. Error trials included those in which the subject performed the incorrect instruction (experiment 1: nine trials, four subjects; experiment 2: one trial, one subject) or contaminated the delay epoch data by slightly moving a limb or moving too early (experiment 1: seven trials, four subjects; experiment 2: one trial, one subject). Note that because of our inability to record gaze during MRI testing (see above, Experiment 1), error trials involving eye movements could not be identified and thus could not be excluded from our analysis.
Statistical analyses
General Linear Models
We used a least-squares separate procedure (Mumford et al., 2012) to extract β coefficient estimates for decoding analyses. This procedure generated separate general linear models (GLMs) for each individual trial's Delay and Execute epochs (e.g., in experiment 1: 20 trials × 2 epochs × 8 runs = 320 GLMs). The regressor of interest in each model consisted of a boxcar regressor aligned to the start of the epoch of interest. The duration of the regressor was set to the duration of the cue that initiated the epoch (0.5 s): the auditory command cue (Compty or Midwig) for the Delay epoch in experiment 1, the visual cue (fixation LED color change) for the Delay epoch in experiment 2, and the auditory Go cue for the Execute epoch in both experiments. For each GLM, we included a second regressor comprising all the remaining trial epochs in the experimental run. Each regressor was then convolved with a double-gamma hemodynamic response function, and temporal derivatives of both regressors were included along with subjects' six motion parameters obtained from motion correction. High-pass filtering was added to each design matrix by the inclusion of regressors from a cosine drift model with a cutoff of 0.01 Hz. Isolating the regressor of interest in this single-trial fashion reduces regressor collinearity and has been shown to be advantageous in estimating single-trial voxel patterns and for multivoxel pattern classification (Mumford et al., 2012). These procedures were implemented using Nistats 0.0.1b1 and Nilearn 0.6.0 (Abraham et al., 2014).
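To make this procedure concrete, the sketch below shows how a single least-squares separate (LSS) model could be set up with the modern nilearn.glm interface (the successor to the Nistats release used here); the function name, event-table format, and parameter strings are illustrative assumptions rather than the study's actual code.

```python
# Minimal LSS sketch using nilearn's GLM interface. Assumes `events` is a
# pandas DataFrame with onset/duration/trial_type columns and `confounds`
# holds the six motion parameters (both hypothetical inputs).
from nilearn.glm.first_level import FirstLevelModel

def lss_beta_map(bold_img, events, target_index, confounds):
    """Fit one GLM in which a single trial epoch is the regressor of interest
    and all remaining epochs in the run share a second regressor."""
    ev = events.copy()
    ev["trial_type"] = "other"                     # all remaining trial epochs
    ev.loc[target_index, "trial_type"] = "target"  # the epoch of interest
    glm = FirstLevelModel(
        t_r=2.0,                          # TR used in both experiments
        hrf_model="glover + derivative",  # double-gamma HRF plus temporal derivative
        high_pass=0.01,                   # cosine drift model cutoff (Hz)
    )
    glm.fit(bold_img, events=ev, confounds=confounds)
    return glm.compute_contrast("target", output_type="effect_size")

# Looping this over every Delay and Execute epoch yields, for experiment 1,
# 20 trials x 2 epochs x 8 runs = 320 single-trial beta maps per subject.
```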
We performed additional GLM contrast analyses to place our searchlight results (see below, Searchlight pattern–information analyses) in the context of univariate activity elicited by each effector (i.e., experiment 1: left hand and right hand; experiment 2: eye and right hand) during movement execution. At the subject level, Delay and Execute epochs for each condition were modeled as separate regressors (experiment 1: Left-Delay, Left-Execute, Right-Delay, Right-Execute; experiment 2: Look-Delay, Look-Execute, Grasp-Delay, Grasp-Execute), with onsets aligned to the start of each epoch and durations of 0.5 s for the Delay and Execute epochs (consistent with our single trial GLMs described above). Group-level Execute > Delay contrasts for each effector in each experiment were performed on smoothed subject-level parameter maps [Gaussian kernel full-width at half-maximum (FWHM) = 6 mm]. Contrasting the Execute versus Delay epoch within each condition rather than Execute epochs between conditions (i.e., experiment 1: Left Execute vs Right Execute; experiment 2: Look Execute vs Grasp Execute) enables us to show activity maps elicited by each effector separately rather than activity maps that directly compare the effectors to each other.
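As a rough illustration of this group-level step, a second-level model in nilearn could take the subject-level Execute-minus-Delay contrast images and test them against zero; the function and variable names are placeholders, and applying the 6 mm smoothing at the group stage (rather than to the subject maps beforehand) is a simplification of this sketch.

```python
# Hypothetical group-level Execute > Delay contrast from subject-level maps.
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

def group_execute_vs_delay(subject_contrast_imgs):
    """One-sample test of subject-level Execute-minus-Delay effect maps."""
    design = pd.DataFrame({"intercept": [1] * len(subject_contrast_imgs)})
    slm = SecondLevelModel(smoothing_fwhm=6.0)  # 6 mm FWHM Gaussian kernel
    slm.fit(subject_contrast_imgs, design_matrix=design)
    return slm.compute_contrast("intercept", output_type="z_score")
```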
Regions of interest selection
Regions of interest (ROIs) for human S1 were defined using region masks for Brodmann's areas (BA) 3a, 3b, 1, and 2 (Brodmann, 1909; Vogt and Vogt, 1919) from the Jülich histologic (cytoarchitectonic and myeloarchitectonic) atlas (Geyer et al., 1999; Grefkes et al., 2001). Each nonoverlapping region mask was based on a 25% probability threshold for each region, as packaged by default in FSL 5.0.10 (Grefkes et al., 2001; Jenkinson et al., 2012). We considered these four subdivisions of S1 separately as they are considered distinct functional areas and are thought to be positioned at different hierarchical stages of somatosensory processing (Kaas, 1983; Geyer et al., 1999). BA3b, which receives dense inputs from the ventroposterior nucleus of the thalamus, is often considered S1 proper as it primarily responds to cutaneous input (Kaas, 1983). BA3a also receives dense inputs from the thalamus but is thought to be primarily concerned with proprioceptive processing because of its deep (subcutaneous) receptor inputs (i.e., from muscle spindle afferents). The status of BA3a as part of primary somatosensory cortex is debatable, as many neuroanatomists regard it as part of BA4 (primary motor cortex; Jones et al., 1978; Vogt and Pandya, 1978; Kaas, 1983). BA1 receives significant projections from BA3b and is thought to be concerned with texture-related processing, whereas BA2 receives significant projections from BA3a and BA1 and is thought to be concerned with size/shape-related processing (Randolph and Semmes, 1974; Jones et al., 1978; Vogt and Pandya, 1978; Kaas, 1983; Pons and Kaas, 1986). Thus, BA1 and BA2 are thought to be positioned at a slightly higher hierarchical level than BA3b. [We recognize that some researchers will take issue with our description of S1 as containing several different subfields (BA3a, BA3b, BA1, and BA2), as many neuroanatomists use S1 nomenclature to denote BA3b specifically (Kaas, 1983). However, in most fMRI studies, the delineation of these different subfields is not typically performed, and "primary somatosensory cortex" or area "S1" terminology is often used interchangeably to distinguish dorsomedial activity located on or near the postcentral gyrus from "secondary somatosensory cortex" or "S2" activity that is located on the same gyrus but more laterally in the parietal operculum (Eickhoff et al., 2006a,b, 2007). Given our use of fMRI in the current study, we have adopted the latter naming conventions but appreciate that the separate BA3a, BA3b, BA1, and BA2 fields have different patterns of cytoarchitectonics, connections, and physiological response properties (Jones et al., 1978; Vogt and Pandya, 1978; Kaas, 1983).]
We also defined ROIs for primary motor cortex (M1) in BA4a (anterior) and BA4p (posterior; Brodmann, 1909; Vogt and Vogt, 1919), also from the Jülich histologic (cytoarchitectonic and myeloarchitectonic) atlas (Geyer et al., 1996), based on the same 25% probability threshold. These areas are known to be involved in motor planning and execution (Kalaska, 2009) and thus serve as a basis for comparing and interpreting the effects observed in the S1 ROIs above. Together, the above sets of S1 and M1 ROIs were used as three-dimensional binary masks to constrain our neural decoding analyses and interpretations of motor-planning-related effects across the postcentral and precentral gyri.
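For illustration, binary masks like these could be extracted from the FSL-packaged Jülich maximum-probability atlas as sketched below; the file path follows FSL's standard atlas layout, but the label indices shown are hypothetical placeholders that would need to be read from FSL's Juelich.xml, not verified values.

```python
# Sketch: build binary ROI masks from FSL's 25%-thresholded Jülich atlas.
import os
import nibabel as nib
import numpy as np

atlas_path = os.path.join(
    os.environ["FSLDIR"], "data", "atlases", "Juelich",
    "Juelich-maxprob-thr25-2mm.nii.gz",
)
atlas = nib.load(atlas_path)
data = atlas.get_fdata()

# Hypothetical label indices; the real values come from FSL's Juelich.xml.
roi_labels = {"BA3a_L": 1, "BA3b_L": 2, "BA1_L": 3, "BA2_L": 4}

masks = {
    name: nib.Nifti1Image((data == idx).astype(np.uint8), atlas.affine)
    for name, idx in roi_labels.items()
}
```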
Multivoxel pattern analysis
Multivoxel pattern analysis (MVPA) was performed with in-house software using Python 3.7.1 with Nilearn 0.6.0 and Scikit-Learn 0.20.1 (Abraham et al., 2014). All analyses used linear support vector machine binary classifiers (Scikit-Learn's LinearSVC, which implements LIBLINEAR; Fan et al., 2008) with a fixed regularization parameter (C = 1) to compute a hyperplane that best separated the trial responses. The pattern of voxel β coefficients from the single-trial GLMs, which provided voxel patterns for each trial's Delay and Execute epochs, was used as input into the binary classifiers.
Decoding accuracies for each subject were computed as the average classification accuracy across train-and-test iterations using a leave-one-run-out cross-validation procedure. During each iteration, each voxel in the training set was standardized to have a mean of zero and an SD of one, and the test set was standardized based on the scaling parameters of the training set. This standardization approach maintains the independence of the training and test sets by ensuring that parameters from the test set do not influence model fitting on the training set (i.e., it prevents data leakage). The cross-validation procedure was performed separately for each ROI, trial epoch (Delay and Execute), and pairwise discrimination (left-hand vs right-hand movements in experiment 1, and hand vs eye movements in experiment 2).
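The following sketch illustrates this cross-validation scheme with scikit-learn, assuming trial-wise beta patterns X (trials × voxels), class labels y, and run labels runs; the function and variable names are illustrative, not the study's actual code.

```python
# Leave-one-run-out decoding sketch: scale within folds, fit a linear SVM.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def roi_decoding_accuracy(X, y, runs):
    accuracies = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=runs):
        scaler = StandardScaler().fit(X[train])  # scaling fit on training data only
        clf = LinearSVC(C=1.0)                   # fixed regularization, as reported
        clf.fit(scaler.transform(X[train]), y[train])
        # Test patterns are scaled with training-set parameters, so the
        # held-out run never influences model fitting.
        accuracies.append(clf.score(scaler.transform(X[test]), y[test]))
    return np.mean(accuracies)  # subject-level decoding accuracy for this ROI
```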
We assessed decoding significance at the group level using a previously published two-step permutation procedure (Gale et al., 2021), which is based on permutation approaches outlined in Stelzer et al. (2013). The first step generates for each subject a chance decoding distribution by repeatedly (100 iterations) computing the average classification accuracy of leave-one-run-out cross-validation on randomly shuffled class labels within each run. The second step computes a distribution of group mean decoding accuracies by repeatedly (10,000 iterations) selecting a random decoding accuracy from each subject's decoding distribution and computing the mean decoding accuracy across subjects. This distribution of group mean decoding accuracies was then used to compute the probability of the actual group mean decoding accuracy. Here, we used a one-tailed significance threshold of p < 0.05 and controlled for the problem of multiple comparisons (number of ROIs examined) by applying a Benjamini–Hochberg false-discovery rate (FDR) correction of q < 0.05.
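Schematically, the second step of this procedure can be expressed as follows, assuming chance_acc holds each subject's 100 shuffled-label decoding accuracies (the array shapes and names are assumptions of this sketch):

```python
# Group-level permutation sketch: build a null distribution of group means
# by drawing one chance accuracy per subject on each of 10,000 iterations.
import numpy as np

def group_permutation_p(observed_group_mean, chance_acc, n_iter=10_000, seed=0):
    """chance_acc: shape (n_subjects, 100), from shuffled-label cross-validation."""
    rng = np.random.default_rng(seed)
    n_subjects, n_perms = chance_acc.shape
    draws = chance_acc[np.arange(n_subjects),
                       rng.integers(0, n_perms, size=(n_iter, n_subjects))]
    null_group_means = draws.mean(axis=1)
    # One-tailed p: how often the null meets or exceeds the observed mean.
    return (np.sum(null_group_means >= observed_group_mean) + 1) / (n_iter + 1)
```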
Searchlight pattern–information analyses
We performed confirmatory searchlight analyses for the Delay and Execute epochs in experiments 1 and 2. To set the scope of the searchlight, we generated a searchlight mask by combining all somatosensory and motor ROIs and then dilating the mask by 1 voxel. Then a searchlight sphere (4 mm radius, 33 voxels) was applied to each voxel in the searchlight mask, and the cross-validation decoding procedure (see above) was performed on the extracted β patterns produced by our GLM procedure. The decoding accuracy for each sphere of voxels was then written to the central voxel to generate a searchlight map. Searchlight maps for each subject were spatially smoothed (6 mm FWHM Gaussian kernel) to facilitate group-level analyses (i.e., to account for individual variability in localization). Because spatial smoothing reduces spatial precision in favor of spatial overlap across subjects, we also performed group-level analyses on unsmoothed searchlight maps, in which spatial overlap across subjects is reduced in favor of spatial precision. For all smoothed and unsmoothed data, subject searchlight maps were combined, a one-tailed t test versus 50% decoding (i.e., chance) was performed on each voxel, and the resulting group map was thresholded at p < 0.001. Together, this resulted in eight total searchlight analyses (i.e., smoothed and unsmoothed versions for the Delay and Execute epochs in experiments 1 and 2).
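A subject-level searchlight of this kind could be run with nilearn's SearchLight estimator, as sketched below; the inputs (trial-wise beta images, labels, run indices, and the dilated ROI mask) are placeholders, and the settings mirror the parameters reported above rather than the study's actual code.

```python
# Searchlight sketch: 4 mm spheres, leave-one-run-out linear SVM decoding.
from nilearn.decoding import SearchLight
from nilearn.image import new_img_like
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

def run_searchlight(beta_imgs, y, runs, mask_img):
    """beta_imgs: 4D image of trial-wise betas; mask_img: dilated S1/M1 mask."""
    sl = SearchLight(
        mask_img=mask_img,
        radius=4.0,                 # 4 mm sphere (~33 voxels at 2 mm isotropic)
        estimator=LinearSVC(C=1.0),
        cv=LeaveOneGroupOut(),      # leave-one-run-out cross-validation
        n_jobs=-1,
    )
    sl.fit(beta_imgs, y, groups=runs)
    # Per-sphere accuracies are written back to each central voxel.
    return new_img_like(mask_img, sl.scores_)
```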
Each searchlight analysis was corrected for multiple comparisons using cluster-extent thresholds from a permutation approach based on Markiewicz and Bohland (2016), which provides a computationally feasible alternative to Stelzer et al. (2013) for searchlight permutation testing (see Gale et al., 2021, for a previous implementation). In this approach, 100 chance decoding maps for each subject were constructed by repeatedly applying our searchlight procedure with randomly shuffled class labels within each run. Then a distribution of cluster sizes was generated by 10,000 iterations of the following: (1) selecting a random chance decoding map from each subject, (2) performing one-tailed t tests versus 50% decoding (chance) on each voxel, and (3) thresholding the map at p < 0.001 and extracting the sizes of all individual clusters. The cluster-extent threshold was then determined by taking the minimum cluster size at which p < 0.05. This procedure was performed separately for all searchlight analyses. For visualization purposes, the corrected thresholded searchlight maps were projected onto an fsaverage surface (Fischl et al., 1999) using an accurate registration-fusion procedure from Wu et al. (2018).
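The cluster-extent step can be summarized as in the sketch below, assuming null_tmaps is an iterable of group t-maps from the shuffled-label iterations; the use of scipy's default voxel connectivity to define clusters is an assumption of this sketch.

```python
# Cluster-extent threshold sketch: pool null cluster sizes, then take the
# smallest extent k whose null probability falls below alpha.
import numpy as np
from scipy.ndimage import label

def cluster_extent_threshold(null_tmaps, t_crit, alpha=0.05):
    sizes = []
    for tmap in null_tmaps:
        labeled, n_clusters = label(tmap > t_crit)  # threshold, then label clusters
        if n_clusters:
            sizes.extend(np.bincount(labeled.ravel())[1:])  # drop background count
    sizes = np.asarray(sizes)
    if sizes.size == 0:
        return 1
    for k in range(1, int(sizes.max()) + 1):
        if np.mean(sizes >= k) < alpha:  # cluster-level p for extent k
            return k
```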
Results
Experiment 1
Motor effector information is encoded in the primary somatosensory cortex before movement
In experiment 1, subjects performed a delayed object manipulation task (Fig. 1A) wherein they first prepared and then executed object grasp and lift-and-replace movements with their left hand or right hand. To determine whether S1 encodes information related to the upcoming movements, we performed neural decoding on the trial-related voxel patterns (β coefficients) associated with the Delay and Execute epochs from each of the four subareas that make up human S1: BA3a, BA3b, BA1, and BA2 (Fig. 2A). BA3b is considered S1 proper (Kaas, 1983), with areas BA1 and BA2 being considered slightly higher order subdivisions (Jones et al., 1978; Vogt and Pandya, 1978; Pons and Kaas, 1986; see above, Materials and Methods). Decoding analyses (Fig. 2B) revealed that information related to the upcoming hand actions to be performed (i.e., during the Delay epoch) was present in all four subareas of S1 in both the left and right hemispheres (left hemisphere: BA3b, mean = 59.52%, p < 0.001; BA3a, mean = 56.56%, p < 0.001; BA1, mean = 60.22%, p < 0.001; BA2, mean = 57.95%, p < 0.001; right hemisphere: BA3b, mean = 60.27%, p < 0.001; BA3a, mean = 53.17%, p = 0.003; BA1, mean = 61.54%, p < 0.001; BA2, mean = 55.41%, p < 0.001; all p values are FDR corrected). In addition, consistent with the influx of tactile and proprioceptive sensory-related information during movement execution, we found that decoding in each of these subareas was substantially higher during the Execute epoch (decoding accuracies between 73 and 94%, all p < 0.001). Critically, our finding that decoding occurred in S1 during the Delay period (and not just the Execute period) indicates that S1 subareas are modulated by the movement being prepared but not yet executed. Consistent with this idea, a separate analysis on the classification accuracies for the decoding of auditory cue information (i.e., relabeling all trials according to the Compty versus Midwig auditory cues that instructed the movements and thus collapsing across left-hand vs right-hand trials) revealed no evidence for decoding across any of the S1 subareas in either hemisphere (left hemisphere: BA3b, mean = 49.25%, p = 0.944; BA3a, mean = 51.30%, p = 0.676; BA1, mean = 47.81%, p = 0.944; BA2, mean = 48.56%, p = 0.944; right hemisphere: BA3b, mean = 52.84%, p = 0.161; BA3a, mean = 51.10%, p = 0.852; BA1, mean = 50.65%, p = 0.916; BA2, mean = 48.10%, p = 0.944; all p values are FDR corrected). Importantly, this indicates that the significant decoding of the hand movements above was not spurious in nature but instead specifically linked to the hand to be used in the upcoming action.
Overview of the two experiments. A, Experiment 1 task. Subject point of view (POV, left) of the experimental workspace during the Delay epoch. Red star indicates the fixation LED placed above the object. Illuminator LEDs, attached to flexible stalks, are shown on the left and right. During the Execute epoch (right), subjects executed either an object grasp and lift-and-replace action with their left hand (red bordering) or right hand (blue bordering). During the study, subjects maintained central fixation on the fixation LED. B, C, Sequence of events in a single trial (B) and corresponding group-averaged (N = 15) single-trial BOLD data (C) from left primary somatosensory cortex area BA3b (S1 proper, cyan region), time locked to events in B. Each trial begins with the hand workspace being illuminated while simultaneously participants receive the auditory cue (Compty or Midwig), which instructs either a left-hand or right-hand action (based on the mapping given to participants, see Materials and Methods). This is then followed by a jittered 6–12 s Delay interval. Next, an auditory Go cue initiates the start of the 2 s Execute epoch, during which the subject grasps, lifts, and replaces the object. Following the 2 s Execute epoch, illumination of the workspace is extinguished, and subjects then wait a fixed 16 s intertrial interval (ITI) before onset of the next trial. D, Experiment 2 task. Subject POV during the Delay (left) and Execute (right) epochs. During the Execute epoch, subjects executed either an object grasp and lift-and-replace action with their right hand (cyan bordering), or an object-directed saccadic eye movement (purple bordering). Other than the saccadic eye movement, subjects maintained central fixation on the fixation LED during all other phases of the trial, as in experiment 1. E, F, Sequence of events in a single trial (E) and corresponding group-averaged (N = 13) single-trial BOLD data (F) from BA3b in the left hemisphere, time locked to events in E. In both C and F, note that left BA3b is primarily modulated by movements of the contralateral (right) hand during the Execute epoch.
Experiment 1 decoding of motor effector information (left vs right hand) from early somatosensory cortex during the delay epoch. A, Lateral surface view of the human brain (left), with the precentral and postcentral gyri demarcated (separated by the central sulcus, white line). Zoomed-in cross-sectional view of the precentral and postcentral gyri (right), demarcating the different cytoarchitectonic subareas of M1 (BA4a and BA4p) and S1 (BA3a, BA3b, BA1, and BA2). Figure adapted from Borich et al. (2015). B, Each individual subarea arranged posterior to anterior, shown on sagittal and transverse brain slices, is associated with a group-averaged percent signal change (%SC) BOLD time course and a decoding accuracy for hand information (point plots), separately for the Delay and Execute (Ex.) epochs. Left and right hemisphere S1 and M1 subareas are shown on the left and right, respectively. The %SC data are time locked to the onset of the Execute epoch (vertical dashed line). In the decoding accuracy plots, black circles indicate mean decoding accuracy, and gray points show individual subject decoding accuracies. Chance level (50%) is demarcated by the horizontal dashed line in each decoding plot. Note that all subareas show significant decoding of hand information during the Delay epoch despite the high degree of overlap among the time courses for the different experimental conditions. The significance of hand decoding accuracies was determined for each epoch using null decoding distributions derived via permutation tests (see above, Materials and Methods). Stars denote FDR-corrected significance levels (*p < 0.05, **p < 0.01, ***p < 0.001). L, Left; R, right.
One possibility is that the jittering of the Delay epoch may have resulted in some subjects producing small anticipatory movements during the Delay epoch in anticipation of the Go cue. Although we cannot definitively rule out such a possibility, we do not think this is a likely explanation of our decoding results, for a few reasons. First, we failed to observe reliable evidence of pre-execute movements based on our video monitoring of subjects' hand movements during the task (and the few trials that were identified, 0.4% of all trials, were removed from analysis). Second, we observed no association across participants between the Delay epoch duration and the movement execution reaction time, as might be expected from the prior literature (Niemi and Näätänen, 1981). This suggests that subjects were not overly anticipating the Go cue on the longer delay duration trials (e.g., trials in which the Delay epoch exceeded 9 s in duration). Consistent with this, we also observed no differences in decoding magnitude between longer (>9 s) and shorter (<9 s) delay duration trials. In summary, although we cannot exclude the possibility that subjects exhibited subthreshold hand/finger movements during the delay epoch, it is unlikely that such movements were linked to our ability to decode hand-related information before movement.
Delay period decoding from the somatosensory cortex is similar to that observed in the motor cortex
To provide a basis for interpreting the S1 decoding results above, we also examined delay period decoding in two regions of M1, BA4a and BA4p (see above, Materials and Methods). These areas served as positive control regions, given that they are well known to differentiate limb-related information during movement planning in both humans and nonhuman primates (Cisek et al., 2003; Gallivan et al., 2013a). As expected, we observed significant decoding during the Delay epoch in both M1 areas in both the left and right hemispheres (left hemisphere: BA4a, mean = 60.78%, p < 0.001; BA4p, mean = 56.17%, p < 0.001; right hemisphere: BA4a, mean = 62.62%, p < 0.001; BA4p, mean = 58.83%, p < 0.001; all p values are FDR corrected; Fig. 2B). Unsurprisingly, decoding in both these areas was even more robust during the Execute epoch (decoding accuracies between 83 and 95%; all p < 0.001). These M1 findings not only offer proof of data quality but also provide initial evidence that similar levels of action-related information can be decoded from S1 as from M1 before movement.
Experiment 2
Our experiment 1 results show that motor effector-related information (left hand vs right hand) can be decoded from neural activity patterns in bilateral S1 before movement onset. What remains unclear from this first study, however, is the extent to which these S1 modulations are contralateral in nature. That is, because both hands were used in experiment 1, and we observed decoding in both left and right S1, we are unable to disentangle whether (1) left somatosensory cortex only encodes upcoming movements of the right limb and vice versa (i.e., a contralateral modulation) or, alternatively, (2) left (and right) somatosensory cortex is differentially modulated by planned movements of either limb (i.e., both a contralateral and ipsilateral modulation). A separate delayed movement task in which only one of the limbs is used would allow us to directly examine whether, during the delay period, both the contralateral and ipsilateral somatosensory cortices (with respect to the limb) are modulated during planning.
To test this, as well as replicate and extend the general findings reported in experiment 1, we performed a second experiment wherein we modified a classic task from primate neurophysiology used to dissociate motor- versus sensory-related representations in the context of motor planning (Snyder et al., 1997; Cui and Andersen, 2007). In our variant of this delayed movement task, we had participants in each trial either grasp, lift, and replace an object with their right hand or make a saccadic eye movement toward the object (Fig. 1D,E). If somatosensory cortex is modulated in a purely contralateral fashion, then we would expect that only left somatosensory cortex will decode planned right-hand versus eye movements; by contrast, if somatosensory cortex is modulated in both a contralateral and ipsilateral fashion, then we would expect that both left and right somatosensory cortex will decode planned right-hand versus eye movements.
Movement preparation modulates BA3b in a contralateral fashion but modulates higher order areas BA1 and BA2 in a bilateral fashion
As in experiment 1, a decoding analysis on Delay epoch voxel patterns revealed that information related to the upcoming effector to be used (hand vs eye) could be decoded from S1 subareas (Fig. 3). Notably, however, we found that the motor effector decoding was not entirely bilateral. In the higher order subareas BA1 and BA2, we found that hand information could be decoded from both the contralateral (left hemisphere: BA1, mean = 63.17%, p < 0.001; BA2, mean = 59.04%, p < 0.001) and ipsilateral (right hemisphere: BA1, mean = 55.58%, p = 0.002; BA2, mean = 56.83%, p < 0.001; all p values are FDR corrected) somatosensory cortex. In S1 proper, by contrast, we found that hand information could only be decoded from the contralateral somatosensory cortex (left BA3b, mean = 57.98%, p < 0.001; right BA3b, mean = 51.25%, p = 0.335; all p values are FDR corrected). We also observed no significant decoding from either left or right BA3a (left hemisphere: BA3a, mean = 51.54%, p = 0.235; right hemisphere: BA3a, mean = 49.52%, p = 0.628; all p values are FDR corrected). However, during the Execute epoch, as in experiment 1, we found that effector-related decoding was robust in all four subareas in both hemispheres (decoding accuracies between 58 and 83%, all p < 0.001).
Experiment 2 decoding of motor effector information (right hand vs eye) from early somatosensory cortex during the delay epoch. Data are plotted and computed in the same way as in Figure 2 but for experiment 2 data.
As in experiment 1, it is useful to interpret these above decoding results in somatosensory cortex with respect to decoding in M1, which is known to differentiate the planning of hand versus eye movements in humans (Gallivan et al., 2011a). Notably, here we observed a similar pattern of effects in motor cortex to that observed in BA3b. Specifically, we found significant decoding in the contralateral left primary motor ROIs that was either absent or weaker in the ipsilateral right primary motor ROIs (left hemisphere: BA4a, mean = 58.46%, p < 0.001; BA4p, mean = 58.27%, p < 0.001; right hemisphere: BA4a, mean = 50.00%, p = 0.521; BA4p, mean = 54.04%, p = 0.033; all p values are FDR corrected; see Fig. 3). Again, decoding in both these areas was also significant during the Execute epoch (decoding accuracies between 61 and 83%; all p < 0.001). In sum, this result suggests that the motor effector information that can be decoded from BA3b (S1 proper) before movement is qualitatively (and topographically) similar to what can be decoded from primary motor cortex.
Taken together, the results from experiment 2 support our main observation from experiment 1 that somatosensory cortex contains motor effector information before movement execution. Moreover, the finding that only left BA3b shows significant decoding of the upcoming movement, with decoding accuracies significantly greater than those in right BA3b (t(12) = 2.28, p = 0.042, paired t test), suggests that movement planning information is represented in S1 proper in a contralateral fashion. Meanwhile, at slightly higher levels of somatosensory cortex, in BA1 and BA2, this information is represented in a bilateral fashion. The fact that motor effector decoding was observed only in contralateral BA3b during planning but bilaterally in BA3b during execution may suggest separate gating mechanisms for planning versus execution at the earliest levels of somatosensory cortex.
Searchlight analyses in experiments 1 and 2 reveal the contralateral nature of planning-related modulations in the somatosensory cortex
To complement our above ROI analyses and bolster our observations from experiments 1 and 2, we also performed separate searchlight analyses in both datasets, with a focus on decoding along the postcentral and precentral gyri (see above, Materials and Methods). The presence or absence of spatial smoothing (see above, Materials and Methods) did not affect the overall pattern of searchlight results in either experiment. As such, we focus on the results of the smoothed searchlight analyses (Fig. 4A,B). During the Delay epoch in experiment 1, the searchlight revealed large clusters in the left and right hemispheres that span all somatosensory and motor areas used in the ROI analyses, as well as the supplementary motor cortex (Table 1, experiment 1). Together, these clusters shared the greatest degree of overlap with BA1, BA3b, and BA4p in both hemispheres (yellow bar plots in Fig. 4C). On movement execution, the majority of the bilateral postcentral and precentral gyri exhibited decoding (Fig. 4D).
Searchlight clusters in experiments 1 and 2
Searchlight analyses show that the encoding of motor effector information during planning in experiment 1 (when both hands are used) occurs in bilateral S1 (and M1), whereas, in experiment 2 (when only one hand is used), it primarily occurs in contralateral S1 (and M1). Searchlight analyses were restricted to a mask encompassing all somatosensory and motor ROIs. Group-level searchlight maps were thresholded at z = 3.09 (one-tailed p < 0.001) and cluster corrected at p < 0.05 based on permutation procedures (see above, Materials and Methods). A, Axial slices of experiment 1 (orange-yellow) and experiment 2 (blue) searchlight maps. Black trace shows the boundaries of the searchlight mask. B, Searchlight results projected onto flat surface maps. White dashed lines denote the central sulcus in each hemisphere. C, Proportion of significant decoding voxels for left and right hemisphere ROIs, computed by taking the number of voxels belonging to searchlight clusters (decoding voxels) within an ROI and dividing by the total number of voxels within that ROI. D, Searchlight results of the Execute epochs in experiment 1 (left) and 2 (right).
During the Delay epoch in experiment 2, we observed a large searchlight cluster in the left hemisphere situated predominantly on the postcentral gyrus but also extending into the precentral gyrus, along with additional clusters in supplementary motor cortex and the inferolateral precentral gyrus (Table 1, experiment 2 clusters 4, 6, 7; Fig. 4A,B). Meanwhile, the right hemisphere showed smaller clusters along the precentral and postcentral gyri, primarily in BA1 (Table 1, experiment 2 clusters 2–5, 8–9). Although the clusters in the left hemisphere had substantial overlap with the left ROIs, clusters in the right hemisphere overlapped with only a marginal number of voxels, if any, in the right ROIs (see blue bar plots in Fig. 4C). Overall, these findings are largely consistent with our observations in the ROI decoding analyses in that the searchlight demonstrates a striking contralateral (left) hemisphere bias, with BA1 showing the largest degree of bilateral decoding. Similar to experiment 1, the bilateral postcentral and precentral gyri exhibited decoding during movement execution (Fig. 4D). The convergence of our ROI findings and searchlight results is also important in terms of identifying constraints on the sensitivity of the MVPA approach; that is, motor effector decoding in experiment 2 is not simply found everywhere throughout S1 and M1 (as in experiment 1) but rather is largely confined to the contralateral hemisphere. For completeness, we also show these Delay epoch decoding results from experiments 1 and 2 in the context of the different effector activations (univariate contrast maps) elicited during movement execution (Fig. 5).
Searchlight results in the context of activity produced by the execution of each effector in experiments 1 and 2. A, Top, Univariate contrast maps of the individual Execute versus Delay contrasts for the left (yellow) and right (green) hand in experiment 1 (see Materials and Methods). Each contrast map was thresholded at p < 0.001 and cluster corrected (p < 0.05, using AFNI's 3dClustSim). Shaded regions in the three panels below show the overlay of each univariate contrast map, as well as their conjunction (bottom), onto the searchlight results displayed in Fig. 4B (orange = experiment 1: left-hand vs right-hand decoding; blue = experiment 2: Look vs Grasp decoding). B, Same as A, but for experiment 2, with Execute > Delay contrasts for each condition (yellow = Look contrast; green = Grasp contrast).
Discussion
Here, we asked whether the motor effector used in an upcoming action can be decoded before movement from neural activity patterns in S1. Experiment 1 showed that the planning of left versus right hand movements could be decoded from premovement activity in both left and right subfields of S1 (BA3a, BA3b, BA1, and BA2). These findings were replicated in experiment 2, wherein participants prepared object-directed movements of the right hand versus eye. However, we additionally showed that effector decoding differed across subareas, with decoding in the higher order somatosensory regions (BA1 and BA2) occurring bilaterally (i.e., in both the left and right hemispheres) and decoding in the lower order S1 proper (BA3b) occurring contralaterally to the hand. Our S1 results in both experiments paralleled the decoding in primary motor cortex, suggesting that motor effector information was present in both somatosensory and motor cortices (see also Ariani et al., 2021). Together, these experiments suggest that motor planning, in addition to preparing the motor system for movement, involves changing the neural state of the somatosensory system, presumably allowing it to anticipate the cutaneous signals that arise during movement execution.
Premovement modulations of neural activity have been widely reported throughout the cortical motor system (Churchland et al., 2010; Gallivan and Culham, 2015). Traditionally, through the single-neuron recording approach, these modulations have been interpreted as reflecting the coding of various movement parameters (e.g., direction), wherein this activity is thought to represent a subthreshold version of the upcoming movement to be executed (Churchland et al., 2010). More recently, work based on neural recordings performed at the population level (via multiunit arrays) has argued that this modulation reflects a state transition that brings population activity to a point at which movement execution unfolds naturally through intrinsic neural processes (Churchland et al., 2010, 2012; Shenoy et al., 2013; Sussillo et al., 2015; Pandarinath et al., 2017; Lara et al., 2018). Despite key differences in these two frameworks, both interpret the premovement modulations as some form of motor preparation and attempt to explain this activity in the context of descending spinal motor commands (Churchland et al., 2010; Russo et al., 2018). By contrast, what might the preparatory activity in S1 represent?
One possibility is that S1 actually has some level of control over the upcoming hand actions. There is evidence from rodents that S1 (barrel cortex) can directly control whisker movements via innervation of motoneurons through the spinal trigeminal nuclei (Matyas et al., 2010). Similarly, there is evidence in NHPs that S1 has descending projections that terminate in the ventral horn of the spinal cord, where motoneurons are located (Rathelot and Strick, 2006). However, it is unlikely that these S1 projections in NHPs are directly involved in movement generation per se (Rathelot and Strick, 2006), given that S1 electrical stimulation has been rarely shown to evoke overt movements (Widener and Cheney, 1997). Rather, these descending S1 projections may exert an indirect contribution to motor control by synapsing on the γ motoneurons that control the gain of muscle spindle afferents (Rathelot and Strick, 2006). In this way, S1 could contribute to the fine motor control of actions by changing the sensitivity of the spindle afferents to optimally process limb and finger position during movement.
A second possibility is that S1 could be encoding proprioceptive information conveying the current state (e.g., position) of the limb. State estimation is a critical component of current theories of motor control (Scott, 2004), with recent work suggesting that limb-related proprioceptive information is encoded in primate somatosensory cortex (Chowdhury et al., 2020). With respect to movement planning, prior work has shown that proprioceptive signals are used directly to specify joint-based motor commands (Sober and Sabes, 2003; Sarlegna and Sainburg, 2009), which could partly explain why the loss of proprioceptive information can be so devastating for motor control (Sainburg et al., 1993; Teasdale et al., 1993). Given that different effectors were compared in our studies (i.e., left vs right hand in experiment 1 and right hand vs eye in experiment 2), the relevant state parameters would presumably change (or require updating) on a per trial basis, which could explain, at least in part, our S1 decoding results.
A third possibility is that the premovement modulation of S1 relates to some form of motor-related imagery. Recent single-unit recording work in a tetraplegic human patient has shown that S1 encodes imagined reaching movements and that this coding is tuned to the imagined limb position (Jafari et al., 2020). This finding extends work in spinal cord injured patients showing that effector movement imagery engages distinct cortical neural populations in the posterior parietal cortex that are also typically recruited during the actual movement of that effector (Aflalo et al., 2015; Zhang et al., 2017). Together, these patient findings suggest a role for S1 in motor imagery and demonstrate that S1 can be engaged in the complete absence of sensation or even expected sensation. Although the extent to which these observations in spinal cord injured patients generalize to neurologically healthy individuals is unclear (and thus whether they can account for the present results), our finding that distinct effector movements are represented in premovement S1 activity bolsters the emerging view from these patient studies that S1 is not simply a passive purveyor of tactile and proprioceptive information to the cortex.
A fourth, and we think likely, possibility is that the premovement modulation of S1 observed here reflects predictive coding of the sensory consequences of the prepared movements. Such prediction is theorized to arise through an internal forward model (Miall and Wolpert, 1996), which provides the brain with a mechanism to disambiguate self-generated from externally generated sensory information (Wolpert and Flanagan, 2001). Studies focused on perceptual and/or sensory processing hold that the forward model has the effect of cancelling, or attenuating, the predictable sensory consequences of action (Bastos et al., 2012; Clark, 2013; Schneider and Mooney, 2018). Such sensory cancellation phenomena have been well studied in the context of tasks involving manual interactions with objects, such as the attenuation of perceived force produced by self-generated movements compared with the same force being delivered externally (Shergill et al., 2003). Importantly, this attenuation is temporally tuned to the timing of the predicted contact event rather than linked to movement per se (Bays et al., 2005, 2006). This is consistent with neural recording work in animals showing that neural activity in S1 is attenuated to a greater extent before and during voluntary movements of the limb compared with passive movements of the same limb (Starr and Cohen, 1985; Jiang et al., 1990b; Seki and Fetz, 2012).
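In schematic internal-model notation (again an illustrative sketch rather than anything estimated in the present study), the forward model maps the current state estimate and an efference copy of the motor command onto predicted sensory feedback, and attenuation is governed by the resulting prediction error:

\[
\hat{s}_{t+1} = g\bigl(\hat{x}_{t},\, u_{t}\bigr), \qquad
e_{t+1} = s_{t+1} - \hat{s}_{t+1},
\]

where $\hat{s}$ is the predicted and $s$ the actual sensory signal. Self-generated feedback that is well predicted ($e \approx 0$) is attenuated, whereas externally generated input yields a large $e$ and is processed in full; on this account, preparing the prediction $\hat{s}$ before movement is precisely what would modulate premovement S1 activity.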
The forward model is also theorized to support accurate real-time motor control, particularly in object manipulation tasks (Flanagan et al., 2006; Johansson and Flanagan, 2009). In such tasks, object-related contact events (e.g., contact of the digits with the object) give rise to discrete sensory signals in multiple modalities (e.g., tactile, visual) that can be used to efficiently monitor task performance (Wolpert et al., 2011) and to launch rapid corrective actions based on mismatches between the predicted and actual sensory signals of these contact events. These corrective actions are intelligent, being updated depending on the nature of the mismatch and the phase of the task (Flanagan et al., 2006). We and others have thus argued that, outside the motor system, the preparation of manipulation tasks could also involve forming a sensory plan, that is, a predicted series of sensory signals, linked to contact events, that can be expected to arise as a function of known object properties and the outgoing motor commands (Johansson and Flanagan, 2009; Gale et al., 2021). If such sensory plans are represented in S1, then how might they arise?
It is possible that motor cortex provides S1 with an efference copy of upcoming movement execution signals through known reciprocal connections between these regions (Nelson, 1987; London and Miller, 2013; Chowdhury et al., 2020). Recently, Umeda et al. (2019) performed simultaneous neural recordings in S1, M1, and an ensemble of afferent neurons in the dorsal root ganglion and found that premovement changes in S1 activity during reaching and grasping are largely accounted for by M1 activity, with S1 encoding information about forthcoming forelimb muscle activity only slightly after M1. During movement execution, by contrast, S1 activity reflected both motor cortex activity and afferent activity in the dorsal root ganglion. This NHP study not only supports our observation here in humans that S1 activity encodes the imminent action to be performed (before the arrival of sensory feedback), but it also suggests that motor cortex is the origin of this premovement modulation.
Experiment 2 of the current study revealed decoding of motor effector information during planning only in the contralateral (left) BA3b, whereas this decoding was bilateral in the adjacent areas BA1 and BA2. Classic work in nonhuman primates has demonstrated callosal connections between the primary somatosensory cortices (Jones et al., 1975, 1979), and the density of these connections varies according to subarea (Killackey et al., 1983). BA3b, or S1 proper, exhibits the lowest density of these interhemispheric connections, particularly in the hand region, whereas BA1 and BA2 show increasingly denser interhemispheric connectivity (Killackey et al., 1983). This pattern of callosal connections resembles that of the early visual system, wherein interhemispheric connection density increases in a stepwise fashion from lower- to higher-order areas (i.e., from V1 to V2 to V3; Newsome and Allman, 1980; Van Essen et al., 1982). To speculate, the rostral-to-caudal increase of callosal connections in S1 (Killackey et al., 1983) may provide the basis for the contralateral-to-bilateral topography of motor effector decoding across BA3b, BA1, and BA2 observed here. That is, for the hand, the bilateral exchange of sensory prediction information related to upcoming movement may occur only in the later stages of the serial processing chain (i.e., in BA1 and BA2).
Finally, we note that in previous investigations using fMRI and similar delayed movement tasks, we and others have not reported any premovement modulations in S1 (Gallivan et al., 2011a,b, 2013b, 2016; Ariani et al., 2015, 2018; Gertz et al., 2017). For our part, this earlier work often used activity in somatosensory cortex as a negative control; that is, the lack of premovement decoding in somatosensory cortex was consistent with the widely held notion that this region responds only to sensory feedback (associated with movement execution). However, the results of the current study suggest that our prior inability to detect premovement modulations in S1 may have resulted from mislocalization of S1, which was based on motor execution activity (Gallivan et al., 2011a, 2013b) rather than on the more precise cytoarchitectonic delineations used here.
In summary, the current study, when combined with our recent studies on planning-related decoding in the early visual (Gallivan et al., 2019) and auditory (Gale et al., 2021) systems, adds to mounting evidence that early sensory cortices have direct access to ongoing sensorimotor processes in the motor system.
Footnotes
This work was supported by operating grants from the Canadian Institutes of Health Research to J.R.F. and J.P.G. (MOP126158). D.J.G. was supported by an RS McLaughlin Fellowship and a Natural Sciences and Engineering Research Council Graduate Fellowship. We thank Martin York, Sean Hickman, and Don O'Brien for technical assistance.
The authors declare no competing financial interests.
Correspondence should be addressed to Jason P. Gallivan at gallivan@queensu.ca