Abstract
A prevailing question in sensorimotor research is how sensory signals are integrated with abstract behavioral rules (contexts), and how this integration leads to decisions about motor actions. We used neural network models to study how context-specific visuomotor remapping may depend on the functional connectivity among multiple layers. Networks were trained to perform different rotational visuomotor associations, depending on the stimulus color (a nonspatial context signal). In network I, the context signal was propagated forward through the network (bottom-up), whereas in network II, it was propagated backwards (top-down). During the presentation of the visual cue stimulus, both networks integrated the context with the sensory information via a mechanism similar to the classic gain field. The recurrence in the networks' hidden layers allowed us to simulate this multimodal integration over time. Network I learned to perform the proper visuomotor transformations based on a context-modulated memory of the visual cue in its hidden layer activity. In network II, a brief visual response, driven by the sensory input, was quickly replaced by a context-modulated motor-goal representation in the hidden layer. This occurred because of a dominant feedback signal from the output layer that first conveyed context information and then, after the disappearance of the visual cue, conveyed motor-goal information. We also show that the origin of the context information is not necessarily closely tied to the top-down feedback. However, we suggest that the predominance of motor-goal representations found in the parietal cortex during context-specific movement planning might be the consequence of strong top-down feedback originating from within the parietal lobe or from the frontal lobe.
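To make the gain-field-like mechanism concrete, the sketch below shows one way a nonspatial context signal could multiplicatively modulate the visual drive to a recurrent hidden layer, with a context-dependent rotation applied at readout. This is a minimal, illustrative sketch under simple rate-based assumptions, not the networks used in the study; the layer size, time constants, gains, and the hand-set rotation readout are all hypothetical choices for illustration.

```python
# Minimal, illustrative sketch of gain-field-like context modulation of a
# recurrent hidden layer -- NOT the authors' implementation. All sizes,
# gains, and the hand-set rotation readout are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_DIR = 36          # hypothetical number of direction-tuned units
N_STEPS = 30        # simulated time steps during the cue period
dirs = np.linspace(0.0, 2.0 * np.pi, N_DIR, endpoint=False)

def population_code(angle, width=0.5):
    """Gaussian population activity peaked at `angle` (circular distance)."""
    d = np.angle(np.exp(1j * (dirs - angle)))
    return np.exp(-d**2 / (2.0 * width**2))

def run_trial(cue_angle, context_gain, rotation):
    """Rate-based recurrent hidden layer whose visual input is scaled
    multiplicatively by the context (gain-field-like); returns the hidden-state
    history and a motor-goal readout rotated by `rotation` (e.g. 0 or pi)."""
    W_rec = 0.05 * rng.standard_normal((N_DIR, N_DIR))   # weak random recurrence
    visual_input = population_code(cue_angle)
    h = np.zeros(N_DIR)
    history = []
    for _ in range(N_STEPS):
        # Gain field: the context scales the feedforward visual drive.
        drive = context_gain * visual_input + W_rec @ h
        h = np.tanh(0.9 * h + 0.1 * drive)                # leaky recurrent update
        history.append(h.copy())
    # Hand-set "readout": shift the population peak by the context-dependent rotation.
    shift = int(round(rotation / (2.0 * np.pi) * N_DIR)) % N_DIR
    motor_goal = np.roll(h, shift)
    return np.array(history), motor_goal

# Example: the same visual cue mapped to opposite motor goals under two contexts.
_, goal_pro  = run_trial(cue_angle=np.pi / 4, context_gain=1.0, rotation=0.0)
_, goal_anti = run_trial(cue_angle=np.pi / 4, context_gain=0.5, rotation=np.pi)
print("pro  goal peak at", np.degrees(dirs[np.argmax(goal_pro)]), "deg")
print("anti goal peak at", np.degrees(dirs[np.argmax(goal_anti)]), "deg")
```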