The ability to reach toward and act upon an object in our physical environment is a crucial skill that develops from an early age. It is, however, paradoxical that reaching toward an object often seems effortless despite how rarely we encounter exactly the same reach environment (e.g., the same object location, weight, size, or distance). Overcoming this complexity to efficiently act upon an object requires generalization from previous successful movements. Recently, such generalization has been investigated using a Bayesian framework that combines accumulated experience of the world (the prior probability) with incoming sensory information (the likelihood function) to guide reaches to novel locations (Krakauer et al., 2000).
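In its simplest form (a generic statement of Bayes' rule, not a formula taken from any one of the studies discussed here; the symbols are illustrative), the estimate guiding a reach is derived from the posterior over the relevant state s (e.g., the current hand–cursor perturbation) given the sensory feedback y:

```latex
% Generic Bayes' rule; s = state of the world (e.g., the perturbation),
% y = incoming sensory feedback. Illustrative notation only.
p(s \mid y) \;\propto\; p(y \mid s)\, p(s)
```

Here the prior p(s) summarizes accumulated experience and the likelihood p(y | s) captures the current sensory evidence.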
In a recent issue of The Journal of Neuroscience, Fernandes et al. (2014) addressed how “prior uncertainty” learned from reaching to one target generalizes to other locations. The experimental paradigm neatly combined the methodology of two previous studies, enabling the investigation of both the generalization of the mean of a visuomotor perturbation (Krakauer et al., 2000) and the generalization of uncertainty surrounding the perturbation (i.e., the variance; Körding and Wolpert, 2004).
To operationalize prior uncertainty, a visuomotor perturbation was applied to center-out reaching movements. A perturbation, in this sense, refers to a mismatch between the actual location of the participant's hidden hand and the location of the cursor representing the hand. The perturbation was created by rotating the cursor at the onset of the reaching movement. During a learning phase, the target location remained fixed and participants had to learn the mean (Experiments 2 and 3) and variance (4° or 12°; all experiments) of the visuomotor perturbation to reach the target successfully. How this prior experience of the mean and variance generalized to novel target locations was investigated in a subsequent generalization phase. To infer the amount of uncertainty surrounding the prior, participants were presented with midpoint feedback about the hidden cursor location (a group of red dots whose accuracy varied by ∼5.1 mm, to prevent sole reliance on the likelihood information). If prior uncertainty was high, reliance on this feedback (which corresponds to the likelihood function) should increase; if prior uncertainty was low, it should decrease.
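For Gaussian priors and likelihoods, this trade-off can be made explicit. The following is a minimal sketch under standard assumptions (the notation is ours, not that of Fernandes et al., 2014): if the learned prior over the perturbation has mean \mu_p and variance \sigma_p^2, and the midpoint feedback yields a likelihood centered on x_f with variance \sigma_f^2, then the optimal (posterior) estimate is a precision-weighted average:

```latex
% Precision-weighted combination of prior and likelihood (Gaussian case).
% Illustrative notation, not that of Fernandes et al. (2014).
\hat{x} = w_f\, x_f + (1 - w_f)\,\mu_p,
\qquad
w_f = \frac{\sigma_p^{2}}{\sigma_p^{2} + \sigma_f^{2}}
```

The weight on the feedback, w_f, increases with the prior variance \sigma_p^2, which is precisely the prediction under test: greater reliance on the midpoint feedback in the high-variance (12°) than in the low-variance (4°) condition.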
Reliance was quantified by regressing the final hand angle (i.e., the Bayesian posterior) on the angle from the movement start position to the center of the midpoint feedback dots: the steeper the regression slope, the greater the reliance on the feedback. As expected, in blocks of trials where the variance of the perturbation was high (12° vs 4°), reliance on the feedback increased. This uncertainty generalized to locations proximal to the location of the target during the learning phase (the learned target direction), with greater reliance on midpoint feedback at adjacent locations. Furthermore, despite the mean of the perturbation being unchanged in Experiment 1, reaches to adjacent targets were still drawn toward the learned target direction, consistent with use-dependent learning (Diedrichsen et al., 2010).
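As a concrete illustration of such a reliance measure (a sketch under our own assumptions; the function and variable names are hypothetical, and this is not the authors' analysis code), the slope can be obtained from an ordinary least-squares fit of final hand angle on feedback angle:

```python
import numpy as np

def feedback_reliance(feedback_angle, final_hand_angle):
    """Reliance on midpoint feedback, estimated as the slope of a
    least-squares regression of final hand angle on the angle to the
    center of the feedback dots. A slope near 1 indicates strong
    reliance on the feedback (likelihood); a slope near 0 indicates
    reliance on the prior. Illustrative only."""
    slope, _intercept = np.polyfit(feedback_angle, final_hand_angle, deg=1)
    return slope

# Hypothetical data: a simulated observer who weights the feedback by 0.7.
rng = np.random.default_rng(0)
feedback = rng.normal(0.0, 12.0, size=200)               # feedback angles (deg)
hand = 0.7 * feedback + rng.normal(0.0, 2.0, size=200)   # final hand angles (deg)
print(feedback_reliance(feedback, hand))                 # approximately 0.7
```

On simulated data such as these, the recovered slope approximates the weight the observer places on the feedback, which is how a steeper slope maps onto greater reliance on the likelihood.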
In Experiments 2 and 3, the mean of the perturbation was modified in combination with the variance. This resulted in an interesting asymmetry in the generalization of uncertainty. Rather than uncertainty spreading equally to all other locations, there was a significant bias toward the direction of the mean of the perturbation, i.e., feedback reliance was greatest when the target was located adjacent to the learned target direction (even more so than when the target appeared at the learned target direction itself). Importantly, this directional shift was also toward the location where midpoint feedback was presented during the initial learning phase. This is in accordance with a model of generalization that uses a visual feedback-centered reference frame for generalizing prior uncertainty (hence the directional bias toward the feedback location). In terms of mean generalization, the same asymmetrical pattern toward the learned target direction was observed as in Experiment 1, consistent with use-dependent learning. Thus, Fernandes et al.'s (2014) results support distinct reference frames for the generalization of the mean of the perturbation—based on target-centered coordinates—and the variance of the perturbation—based on feedback-centered coordinates.
It is worth highlighting another mechanism of generalization that is not explicitly ruled out by Fernandes et al. (2014) but may have contributed to their results. As stated previously, the experimental design of Fernandes et al. (2014) combined two earlier paradigms. The use of midpoint feedback as an index of likelihood reliance was first proposed by Körding and Wolpert (2004). In that study, the quality of the feedback was varied, resulting in concomitant modulation of endpoint error: the poorer the feedback quality, the greater the endpoint error. Because reaching performance varied as a function of feedback quality, it was clear that the subjects were relying on this midpoint feedback to guide their movements. However, Fernandes et al. (2014) did not systematically vary the quality of the midpoint feedback. Therefore, as discussed by Körding and Wolpert (2004), it is conceivable that the subjects simply adapted to the lateral perturbation and recalibrated the mapping of the visual signal (the endpoint feedback available during learning) to the sensorimotor estimate of arm location, then used this coupling in the generalization phase, when endpoint information was no longer available. Although this possibility is left open by Fernandes et al.'s (2014) methodology, the results argue against it. The asymmetry of uncertainty generalization toward the location of the midpoint feedback strongly suggests that subjects took this feedback into account to guide their movements. Nevertheless, future investigations may wish to integrate the role of proprioceptive inputs in visuomotor generalization (shown to be crucial when combined with visual feedback; Shabbott and Sainburg, 2010) with Fernandes et al.'s (2014) findings.
Fernandes et al. (2014) focused on how learning accumulates over the course of multiple trials and influences subsequent movements (i.e., they compared learning blocks in which the variance was large (12°) or small (4°) rather than resampling the variance on individual trials). An extended model of the generalization of prior uncertainty might also account for the trial-by-trial integration of multisensory information. For instance, weights attached to redundant sensory estimates could be combined as a function of their reliability on a particular trial (Ernst and Banks, 2002). Such maximum-likelihood estimation (MLE; with weights corresponding to the inverse of the variance of each estimate's distribution) means that when a visual signal is unreliable, more weight is given to proprioceptive feedback signals that provide more accurate information. Such a model is illustrated in Figure 1. Here, a coupling prior combines sources of sensory feedback during reaching. The coupling prior assumes that there is a cross-modal correspondence between two sources of sensory information, for example, vision and proprioception. Under normal circumstances, the two sources of information should be coupled and perceived as being fused. Figure 1 details the role of a coupling prior that brings together the sensory estimates of visual and proprioceptive information when a discrepancy between them exists (as with a visuomotor perturbation; for a detailed description of this process, see Ernst, 2006). Figure 1a illustrates the MLE process whereby two sensory cues are combined (with equal weights in this example). A perceptual estimate (Fig. 1c) of the hand's position is obtained from the posterior (maximum a posteriori) when the coupling prior (Fig. 1b) is combined with the different information sources. In this way, the coupling prior biases the perceived hand position toward that provided by proprioceptive feedback (Fig. 1c). This low-level integration of distinct sensory inputs could then feed forward to modulate the learned mean and variance of the visuomotor perturbation accumulated over many trials, thereby influencing behavior on a particular trial.
Figure 1. Schematic illustration demonstrating how visual and proprioceptive inputs can be combined with a Bayesian prior. Visual and proprioceptive cues are first combined using maximum-likelihood estimation (MLE) (a); the combined estimate is then multiplied by a moderately spread coupling prior (b), leading to a multimodal percept (c) taken from the posterior. MAP, maximum a posteriori estimate. The perceived position (black circle) is shifted relative to the physical input of the cues (white circle).
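A minimal numerical sketch of the MLE step in Figure 1a (our illustration with hypothetical parameter values; it is not a model fitted by Fernandes et al., 2014, and it omits the coupling prior of Figure 1b, which would only partially fuse the two estimates):

```python
def mle_combine(mu_vis, var_vis, mu_prop, var_prop):
    """Inverse-variance (maximum-likelihood) fusion of a visual and a
    proprioceptive estimate of hand position. Illustrative sketch only;
    parameter values below are hypothetical."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    mu_fused = w_vis * mu_vis + (1.0 - w_vis) * mu_prop
    var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return mu_fused, var_fused

# Hypothetical trial: vision signals a 10 deg offset but is unreliable,
# whereas proprioception indicates the hand is on target and is more precise.
mu_fused, var_fused = mle_combine(mu_vis=10.0, var_vis=4.0,
                                  mu_prop=0.0, var_prop=1.0)
print(mu_fused, var_fused)  # 2.0 0.8 -- the percept is pulled toward proprioception
```

Because the weights are the inverse variances, degrading the visual signal (increasing var_vis) automatically shifts the fused estimate toward proprioception, the trial-by-trial mechanism described above.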
Fernandes et al. (2014) explained the asymmetric generalization of the mean of the prior by suggesting that repeated localization in the learned target direction biased the subsequent movements toward that location (cf. Diedrichsen et al., 2010). To further isolate the effects of generalization of the variance and the mean, future studies could borrow from a recent study by Neva and Henriques (2013). They showed that if, during the learning phase, the location of the learned target direction was varied by 5° (while remaining in the same display quadrant), generalization of the visuomotor perturbation was more complete than when the learning was restricted to a single target location. With respect to Fernandes et al.'s (2014) results, varying the learned target direction would overcome the effects of use-dependent learning, meaning the asymmetry in mean generalization should be eradicated. This would enable insight into the pure effects of mean and variance generalization without associated motor-memory influences.
In conclusion, the study by Fernandes et al. (2014) provides insight into how uncertainty surrounding a particular action can influence related actions toward distinct movement targets. The nature of such generalization is in agreement with the growing literature on Bayesian integration for the guidance of movement and how such integration may be implemented in the human brain (for review, see Orbán and Wolpert, 2011). Specifically, Fernandes et al. (2014) were able to dissociate distinct reference frames used for the generalization of the mean (target-centered) and uncertainty (feedback-centered) of a visuomotor perturbation. Further efforts are required to shed light on how such reference frames are combined with other sources of information (proprioception, somatosensation, etc.).
Footnotes
Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.
We thank Massimiliano Di Luca for his helpful comments.
- Correspondence should be addressed to Darren Rhodes, Centre for Computational Neuroscience and Cognitive Robotics, School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. dxr238@bham.ac.uk