In the macaque monkey brain, cells in posterior inferior temporal cortex (PIT) contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted differences in decoding accuracy between the two interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated “PIT” units with different input histories (lacking “V2|3” or “V4” input) supported comparable levels of object-decoding performance, and that removing a large fraction of “PIT” activity produced drops in performance similar to those observed in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing quantitatively more useful information for overall encoding than cells in V2 projecting directly to PIT.
Convolutional neural networks are the best current models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primate “ventral stream” (V1→V2→V4→IT). However, the ventral stream also comprises parallel “bypass” pathways: V1 also connects directly to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings to those from an HMAX model with parallel pathways.
The authors declare no competing financial interests.
We thank Gabriel Kreiman and John Assad for their comments, and also Tim LaFratta and John LeBlanc for machine shop support. This work was supported by US National Institutes of Health grants NEI EY 16187 (MSL), the Burroughs Wellcome Fund (CRP), and the Core Grant for Vision Research NEI EY12196.