RT Journal Article
SR Electronic
T1 Human Reinforcement Learning Subdivides Structured Action Spaces by Learning Effector-Specific Values
JF The Journal of Neuroscience
JO J. Neurosci.
FD Society for Neuroscience
SP 13524
OP 13531
DO 10.1523/JNEUROSCI.2469-09.2009
VO 29
IS 43
A1 Samuel J. Gershman
A1 Bijan Pesaran
A1 Nathaniel D. Daw
YR 2009
UL http://www.jneurosci.org/content/29/43/13524.abstract
AB Humans and animals are endowed with a large number of effectors. Although this enables great behavioral flexibility, it presents an equally formidable reinforcement learning problem of discovering which actions are most valuable because of the high dimensionality of the action space. An unresolved question is how neural systems for reinforcement learning—such as prediction error signals for action valuation associated with dopamine and the striatum—can cope with this “curse of dimensionality.” We propose a reinforcement learning framework that allows for learned action valuations to be decomposed into effector-specific components when appropriate to a task, and test it by studying to what extent human behavior and blood oxygen level-dependent (BOLD) activity can exploit such a decomposition in a multieffector choice task. Subjects made simultaneous decisions with their left and right hands and received separate reward feedback for each hand movement. We found that choice behavior was better described by a learning model that decomposed the values of bimanual movements into separate values for each effector, rather than a traditional model that treated the bimanual actions as unitary with a single value. A decomposition of value into effector-specific components was also observed in value-related BOLD signaling, in the form of lateralized biases in striatal correlates of prediction error and anticipatory value correlates in the intraparietal sulcus. These results suggest that the human brain can use decomposed value representations to “divide and conquer” reinforcement learning over high-dimensional action spaces.
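The contrast the abstract draws between the two learning models can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic delta-rule (Q-learning-style) update under assumed parameter names (ALPHA for learning rate, hypothetical target labels), showing how a unitary model stores one value per joint bimanual action while a factored model stores and updates a separate value for each hand from its own reward feedback.

```python
# Minimal sketch (assumptions, not the paper's code): delta-rule value updates
# for a two-hand choice task with separate reward feedback per hand.

ALPHA = 0.1                 # learning rate (illustrative value)
TARGETS = ["A", "B"]        # hypothetical movement targets available to each hand


def update_unitary(Q, left, right, reward_left, reward_right):
    """Unitary model: one value per joint bimanual action (left, right);
    the combined reward updates that single value."""
    key = (left, right)
    total = reward_left + reward_right
    Q[key] = Q.get(key, 0.0) + ALPHA * (total - Q.get(key, 0.0))


def update_factored(Q_left, Q_right, left, right, reward_left, reward_right):
    """Factored model: effector-specific values; each hand's value is
    updated only from that hand's own reward."""
    Q_left[left] = Q_left.get(left, 0.0) + ALPHA * (reward_left - Q_left.get(left, 0.0))
    Q_right[right] = Q_right.get(right, 0.0) + ALPHA * (reward_right - Q_right.get(right, 0.0))


if __name__ == "__main__":
    # One illustrative trial: both models see the same choices and rewards.
    Q_joint, Q_L, Q_R = {}, {}, {}
    update_unitary(Q_joint, "A", "B", reward_left=1.0, reward_right=0.0)
    update_factored(Q_L, Q_R, "A", "B", reward_left=1.0, reward_right=0.0)
    print(Q_joint)   # {('A', 'B'): 0.1}  -- one entry per left/right combination
    print(Q_L, Q_R)  # {'A': 0.1} {'B': 0.0} -- separate per-hand values
```

The design point the sketch makes concrete: the unitary table grows with the number of left/right combinations, whereas the factored representation learns over each hand's options independently, which is the "divide and conquer" advantage the abstract attributes to effector-specific values.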