From simple tasks like reaching for your morning cup of coffee to more complex activities like driving or playing sports, reach-to-grasp movements are integral to daily life. Although seemingly simple, reaching to grasp an object is a complex behavior requiring multiple sensory inputs for successful execution. This behavior can be broken down into three phases: planning, reaching, and grasping. During the planning phase, information about the object's location, as well as the features of the object and the task, is necessary for appropriate motor planning (Betti et al., 2018; Guo and Niemeier, 2024). We process visual information about the object's features, such as its shape and size, to identify potential grasp points and estimate its weight. The task features include which effector (right hand, left hand, or both) and which grasp orientation (clockwise or counterclockwise) will be used (Guo and Niemeier, 2024).
In the reaching and grasping phases, the brain integrates the visual information gathered during the planning phase with motor commands, enabling precise hand movements toward the object (Klein et al., 2023). This integration of visual and motor processes is known as visuomotor computation: the visual properties of the object are aligned with the motor actions required to execute the grasp (Klein et al., 2023). Visuomotor integration ensures that the reach follows an appropriate trajectory and that the correct grip and load forces are applied to successfully lift and retrieve the object (Klein et al., 2023; Guo and Niemeier, 2024).
Despite the importance of visuomotor integration of both object and task features, these features are often studied in isolation. For instance, one study focused solely on how grasp orientation and the alignment of handles on objects like beer mugs influence grasping behavior (Bub and Masson, 2010), while another investigated how the geometric features of objects influence grasping behavior (Fattori et al., 2012). Although these and similar studies provide pertinent insight into the effects of task and object features on reach-to-grasp performance, studying these features in isolation may oversimplify the dynamic processes the brain uses to integrate them. An isolated approach risks missing critical information about how task and object features interact and how this interaction affects motor planning and execution.
In a recent article in The Journal of Neuroscience, Guo and Niemeier (2024) fill this gap in reach-to-grasp research by examining the concurrent integration of multiple object and task features underlying sensorimotor control. To investigate this integration, the researchers had 15 participants perform a reach-to-grasp task using a set of four blocks that varied in shape (flower or cross) and size (small or large). Prior to the task, the experimenter informed participants whether grasps should be clockwise or counterclockwise (grasp orientation). Participants then previewed the object for 300 or 500 ms before task onset, which was indicated by an auditory “Go” signal whose pitch informed participants which hand(s) to use to grasp the object.
To map the time course of neural integration processes as participants engaged in the task, the experimenters used electroencephalography (EEG) and representational dissimilarity analysis (RDA). RDA is an analytical method used to quantify the dissimilarity of EEG patterns across experimental conditions (e.g., small cross versus large flower). EEG signals were first preprocessed and aligned to either object preview onset or the “Go” signal. RDA was then applied, producing representational dissimilarity matrices (RDMs) that show how distinct the EEG responses are for each individual and each combination of object and task features. The experimenters then compared these RDMs with theoretical models for single features (e.g., size or orientation) and integrated features (e.g., size and orientation) to test for superadditive integration, that is, neural responses reflecting combined effects beyond individual contributions. The temporal resolution of this analysis allowed visualization of integrated representations of different object and task features during different phases of the reach-to-grasp task.
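To make the logic of this analysis concrete, the sketch below illustrates a generic time-resolved RDA in Python. It is a minimal illustration under stated assumptions, not the authors' pipeline: the condition structure, array shapes, and function names (model_rdm, neural_rdms, model_fit) are hypothetical, and the EEG data are random placeholders.

```python
# A minimal, illustrative sketch of time-resolved RDA; not the authors'
# exact pipeline. Assumes condition-averaged EEG epochs of shape
# (n_conditions, n_channels, n_times); all names here are hypothetical.
import itertools
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# 16 conditions: 2 shapes x 2 sizes x 2 grasp orientations x 2 effectors
conditions = list(itertools.product([0, 1], repeat=4))

def model_rdm(feature_idx):
    """Condensed model RDM: 1 where two conditions differ on the chosen
    feature(s), 0 where they match (same pair order as scipy's pdist)."""
    labels = [tuple(c[i] for i in feature_idx) for c in conditions]
    return np.array([float(a != b)
                     for a, b in itertools.combinations(labels, 2)])

def neural_rdms(epochs):
    """One condensed RDM per time point: correlation distance between
    the channel patterns of every pair of conditions."""
    return np.array([pdist(epochs[:, :, t], metric="correlation")
                     for t in range(epochs.shape[2])])

def model_fit(neural, model):
    """Spearman correlation of each time point's neural RDM with a model RDM."""
    return np.array([spearmanr(rdm_t, model)[0] for rdm_t in neural])

# Toy data standing in for preprocessed EEG aligned to preview onset:
# 16 conditions, 64 channels, 500 time samples.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((16, 64, 500))
neural = neural_rdms(epochs)

size_fit = model_fit(neural, model_rdm([1]))     # single feature: size
conj_fit = model_fit(neural, model_rdm([1, 2]))  # size-by-orientation conjunction
```

Tracking these model fits sample by sample is what yields the millisecond-scale time courses of feature representations described below.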
Prior to exploring the integration of object and task features, the experimenters first examined the temporal emergence of neural representations for individual features. Individual feature representations emerged at slightly different times depending on the length of the object preview. For the 300 ms preview, representations emerged at 90 ms for object shape and at 80 ms for object size, relative to object preview onset. Notably, shape representations were transient, peaking during object preview and diminishing soon after, while size representations persisted after the “Go” signal. These findings suggest that although the brain considers shape and size during planning, it continues processing size after movement begins to adjust grip and load forces. Like size representations, grasp orientation representations emerged early, at 50 ms after object preview onset, and were sustained after the “Go” signal and movement initiation, demonstrating the importance of grasp orientation during all phases of the reach-to-grasp task, from motor planning to execution. Lastly, neural representations for effector specification emerged 190 ms after the “Go” signal and were sustained following movement initiation. This reflects the dependence of effector representations on the specification provided by the “Go” signal and their importance during motor execution, that is, during the reaching and grasping phases of the task.
Next, the experimenters investigated the integration of object features and found that visual information about the object's shape and size was integrated during the planning phase of the reach-to-grasp task. Specifically, shape-by-size integration emerged ∼140 ms after object preview onset, later than the individual representations of shape and size. This suggests a sequential process in which the brain first analyzes the individual features and then integrates them shortly afterward. This early visual integration of geometric object features is important for selecting appropriate grasp points during the planning phase to ensure a stable grasp. The integration does not persist once the action is underway; after the “Go” signal, the brain may rely more on other factors, such as task features.
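One way to formalize the superadditivity criterion is a per-time-point regression in which the neural RDM is modeled by the single-feature RDMs plus an interaction term; a reliably positive interaction weight would indicate integration beyond the sum of the individual features. The sketch below is a hedged illustration reusing the hypothetical names from the previous block; it is not necessarily the authors' statistical procedure, and significance testing (e.g., across participants with cluster correction) is omitted.

```python
# Hedged illustration of a superadditivity test for shape-by-size
# integration, reusing model_rdm() and `neural` from the sketch above.
shape_m = model_rdm([0])        # pairs differing in shape
size_m = model_rdm([1])         # pairs differing in size
interaction = shape_m * size_m  # pairs differing in both features

# Design matrix: single-feature models, their interaction, and an intercept.
X = np.column_stack([shape_m, size_m, interaction, np.ones_like(shape_m)])

# Least-squares fit at every time point; column 2 holds the interaction
# (superadditive) weight, yielding its time course across the trial.
betas = np.array([np.linalg.lstsq(X, rdm_t, rcond=None)[0] for rdm_t in neural])
superadditive_timecourse = betas[:, 2]
```

The same contrast, with the size and orientation models substituted for shape and size, corresponds to the size-by-orientation integration discussed next.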
When investigating the integration of task features with object features, the experimenters found significant integration of grasp orientation with object size at 50–90 ms and 130–180 ms after the “Go” signal, which also provided effector specification. The timing of this visuomotor integration reflects another sequential process: individual neural representations of size and orientation emerge first, during object preview, and once the “Go” signal specifies the effector, the brain combines these individual representations to compute the integrated size-by-orientation representations. This visuomotor integration uses object size to adjust grip and grasp orientation to align the hand's approach, which is critical for planning reach-to-grasp trajectories that lead to successful execution.
Overall, this study distinguishes itself by investigating how both object and task features, often studied in isolation, are integrated across different phases of a reach-to-grasp task. The findings of this study reveal the temporal specificity and sequential process underlying the emergence of individual feature representations and their integration into visual and visuomotor computations. Understanding these integration mechanisms is essential for advancing our knowledge of sensorimotor function, as it provides insights into how the brain coordinates complex movements.
While Guo and Niemeier's findings advance our understanding of object and task feature integration in reach-to-grasp tasks, some questions remain. The study used objects with similar symmetries and a relatively simple reach-to-grasp task; real-world scenarios are often more complex. Future research could explore objects with irregular geometries, vary the timing of effector specification, or incorporate tasks requiring bimanual coordination, such as pulling a string. These approaches could provide further insight into how object and task feature integration adapts to more complex contexts.
This study provided valuable insights into sensorimotor integration mechanisms, which can be useful for advancing neuroprosthetic design and stroke rehabilitation paradigms. Current neuroprosthetic devices struggle to replicate the nuanced control of natural limb movements because of limitations in integrating sensory feedback with motor commands (Wijk et al., 2020), resulting in slow, clumsy movements that make everyday tasks challenging for users. Algorithms mimicking the dynamic integration of visual and motor processes observed in this study could enhance user control and functionality. Similarly, stroke rehabilitation strategies could benefit from incorporating varied object and task features. Stroke often impairs upper-limb function, and rehabilitative interventions typically rely on simple reaching tasks that may not reflect the complexity of real-world activities (Levac et al., 2019). Parry et al. (2019) found that stroke patients adopted compensatory grasp orientations to offset reduced dexterity and muscle strength, but these strategies were not optimal for functional recovery. Drawing on the present findings, rehabilitation tasks incorporating objects of different shapes and sizes and varied grasping strategies could promote functional recovery.
By highlighting the interplay between object and task features, Guo and Niemeier's findings provide a foundation for developing interventions that enhance neuroprosthetic design and improve stroke rehabilitation outcomes, ultimately supporting better quality of life for individuals with motor impairments.
Footnotes
Review of Guo and Niemeier
This journal club was written under the mentorship of Andrew Pruszynski.
The author declares no competing financial interests.
Editor’s Note: These short reviews of recent JNeurosci articles, written exclusively by students or postdoctoral fellows, summarize the important findings of the paper and provide additional insight and commentary. If the authors of the highlighted article have written a response to the Journal Club, the response can be found by viewing the Journal Club at www.jneurosci.org. For more information on the format, review process, and purpose of Journal Club articles, please see http://jneurosci.org/content/jneurosci-journal-club.
Correspondence should be addressed to Rana Abdelhalim at rana.abdelhalim@mail.mcgill.ca.