Pointing to a visual target in 3-dimensional space requires a neural transformation from a visually derived representation of target location to an appropriate pattern of activity in arm muscles. Previous results suggested that 1 step in this process involves a transformation from a representation of target location to a representation of intended arm orientation, and that the neural implementation of this transformation involves a linear approximation to the mathematically exact, nonlinear solution. These results led to the hypothesis that the transformation is parceled into 2 separate channels. In 1 channel, a representation of target azimuth is transformed into a representation of arm yaw angles; in the other channel, representations of target distance and target elevation are transformed into a representation of arm elevation angles. The present experiments tested this hypothesis by measuring the errors made by human subjects as they pointed to various spatial parameters of the remembered location of a target in space. The results show that subjects can use the 2 hypothesized channels separately: for example, subjects can accurately point to the target's azimuth while ignoring its elevation and distance. The results also show that subjects are unable to point to the target's elevation while ignoring its distance, consistent with the hypothesis that information about target elevation and target distance is tied together in the same channel. The parcellation demonstrated in this study is compared with reports of parceled sensorimotor transformations in other vertebrate species.
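The 2-channel geometry described above can be illustrated with a minimal computational sketch. The function name, segment lengths, and the simplifying assumption that both arm segments lie in the vertical plane containing the target are hypothetical choices for illustration, not the authors' model; the sketch computes the mathematically exact, nonlinear solution whose channel structure the hypothesized linear approximation is proposed to mirror. Note how target azimuth alone determines arm yaw (channel 1), whereas target elevation and distance jointly determine the arm elevation angles (channel 2).

```python
import math

def target_to_arm_angles(azimuth, elevation, distance,
                         upper_arm=0.30, forearm=0.35):
    """Illustrative 2-channel decomposition (hypothetical arm geometry).

    Channel 1: target azimuth alone sets the yaw of the arm plane.
    Channel 2: target elevation and distance together set the
    elevation angles of the upper arm and forearm, via planar
    2-link inverse kinematics (law of cosines).
    Angles in radians, distances in meters.
    """
    # Channel 1: yaw of the vertical plane containing the arm
    yaw = azimuth

    # Channel 2: work inside that vertical plane.
    # Planar target coordinates (out, up) from elevation and distance:
    x = distance * math.cos(elevation)   # horizontal reach within the plane
    z = distance * math.sin(elevation)   # height relative to the shoulder

    r2 = x * x + z * z
    if math.sqrt(r2) > upper_arm + forearm:
        raise ValueError("target out of reach")

    # Elbow angle from the law of cosines
    cos_elbow = (r2 - upper_arm**2 - forearm**2) / (2 * upper_arm * forearm)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))

    # Upper-arm elevation angle for that elbow configuration
    shoulder = math.atan2(z, x) - math.atan2(
        forearm * math.sin(elbow), upper_arm + forearm * math.cos(elbow))

    return yaw, shoulder, elbow
```

Separating the computation this way makes the behavioral prediction concrete: an error in judged distance necessarily perturbs both elevation angles (channel 2), while leaving the yaw output (channel 1) untouched.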