Abstract
Humans can label and categorize objects in a visual scene with high accuracy and speed—a capacity well-characterized with studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of “object kinematograms” to isolate motion-defined signals from other sources of visual information. We did this by extracting motion information from videos of 6 object categories and applying this motion to limited-lifetime random dot patterns. Using fMRI (n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest. More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, whereas more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion-defined cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated with primarily image-defined form processing.
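The core of the stimulus construction described above can be illustrated in a minimal sketch: dots are displaced each frame according to a motion field sampled at their locations, and each dot is respawned at a random position after a fixed "lifetime" so that no individual dot carries persistent form information. Note this is a hedged illustration, not the authors' actual pipeline: the function name `step_kinematogram` and all parameters are hypothetical, and the uniform synthetic flow field stands in for the optical flow the study extracted from real object videos.

```python
import numpy as np

def step_kinematogram(xy, age, flow, max_life, rng):
    """Advance a limited-lifetime dot field by one frame (illustrative sketch).

    xy       : (N, 2) float array of dot positions (x, y) in pixels
    age      : (N,) int array counting frames each dot has been alive
    flow     : (H, W, 2) per-pixel displacement field (dx, dy); in the
               study this would come from optical flow of an object video,
               here it can be any array
    max_life : dots respawn at a random location after this many frames
    rng      : numpy Generator, for reproducible respawn positions
    """
    h, w = flow.shape[:2]
    # Sample the displacement at each dot's (rounded, clipped) pixel location.
    ix = np.clip(xy[:, 0].astype(int), 0, w - 1)
    iy = np.clip(xy[:, 1].astype(int), 0, h - 1)
    xy = xy + flow[iy, ix]
    age = age + 1
    # Respawn dots that have expired or drifted out of the frame.
    dead = (age >= max_life) | (xy[:, 0] < 0) | (xy[:, 0] >= w) \
                             | (xy[:, 1] < 0) | (xy[:, 1] >= h)
    n_dead = int(dead.sum())
    xy[dead] = rng.uniform([0, 0], [w, h], size=(n_dead, 2))
    age[dead] = 0
    return xy, age

# Toy demo: a uniform rightward flow field (2 px/frame) in a 64 x 64 frame.
rng = np.random.default_rng(0)
flow = np.zeros((64, 64, 2))
flow[..., 0] = 2.0
xy = rng.uniform(0, 64, size=(100, 2))
age = rng.integers(0, 8, size=100)
for _ in range(10):
    xy, age = step_kinematogram(xy, age, flow, max_life=8, rng=rng)
```

Because every dot is replaced within `max_life` frames, the display conveys category information only through the coherent motion pattern, which is the property the object kinematograms exploit.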
SIGNIFICANCE STATEMENT:
Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results extend prior understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
Footnotes
The authors declare no competing financial interests.
We thank Emalie McMahon for her assistance with the optical flow extraction code, Chris Baker for insightful feedback, and Julian De Freitas for inspiring discussions that helped in forming the initial interest in this research area. This research was supported by the National Institute of Mental Health Intramural Research Program (ZIA-MH-002909).