Research Articles, Behavioral/Cognitive

Disentangling object category representations driven by dynamic and static visual input

Sophia Robert, Leslie G. Ungerleider and Maryam Vaziri-Pashkam
Journal of Neuroscience 6 December 2022, JN-RM-0371-22; DOI: https://doi.org/10.1523/JNEUROSCI.0371-22.2022
Sophia Robert,1,2 Leslie G. Ungerleider,1 and Maryam Vaziri-Pashkam1

1 Lab of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
2 Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA

Abstract

Humans can label and categorize objects in a visual scene with high accuracy and speed—a capacity well-characterized with studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of “object kinematograms” to isolate motion-defined signals from other sources of visual information. We did this by extracting motion information from videos of 6 object categories and applying this motion to limited-lifetime random dot patterns. Using fMRI (n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher order regions of interest. More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition and more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion-defined cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated with primarily image-defined form processing.
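The kinematogram construction described above (motion taken from object videos and carried by short-lived random dots, so that no single frame contains recognizable form) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: `step_dots`, its parameter names, and the rigid translation used as a stand-in flow field are all assumptions; in the actual study the displacements came from optical flow extracted from the source videos.

```python
import numpy as np

def step_dots(pos, age, flow_fn, lifetime, bounds, rng):
    """Advance a limited-lifetime random dot field by one frame.

    pos: (N, 2) dot positions; age: (N,) frames each dot has been alive;
    flow_fn: maps (N, 2) positions to (N, 2) displacements (in the study
    this would be optical flow sampled from a video frame; any callable
    works here); lifetime: max frames before a dot is replotted.
    """
    pos = pos + flow_fn(pos)  # carry each dot along the motion field
    age = age + 1
    # Replot dots that exceeded their lifetime or left the display, so
    # information accumulates only in the motion, not in persistent form.
    out = (age >= lifetime) | np.any((pos < 0) | (pos > bounds), axis=1)
    pos[out] = rng.uniform(0, bounds, size=(out.sum(), 2))
    age[out] = 0
    return pos, age

# Hypothetical example: a rigid rightward translation as the flow field
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(200, 2))
age = rng.integers(0, 10, size=200)
rightward = lambda p: np.zeros_like(p) + np.array([1.0, 0.0])
for _ in range(30):
    pos, age = step_dots(pos, age, rightward, lifetime=10, bounds=100.0, rng=rng)
```

The limited lifetime is the key design choice: because every dot is replotted after a few frames, the dot pattern itself never sketches an object outline, and category information can only be recovered from how the dots move.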

SIGNIFICANCE STATEMENT:

Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that while higher order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
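Cross-format generalization of the kind reported here is typically tested by training a pattern classifier on one stimulus format and testing it on the other. A minimal numpy sketch, assuming a correlation-based nearest-class-mean classifier (a common MVPA choice; the study's actual classifier and data pipeline may differ) and hypothetical synthetic "voxel" data:

```python
import numpy as np

def cross_decode(train_X, train_y, test_X, test_y):
    """Train on one format, test on the other.

    Fits class-mean patterns on the training format (e.g. static trials)
    and labels each test-format pattern (e.g. dynamic trials) with the
    class whose mean it correlates with most. Returns accuracy.
    """
    classes = np.unique(train_y)
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    # z-score each pattern so the dot product equals Pearson correlation
    zm = (means - means.mean(1, keepdims=True)) / means.std(1, keepdims=True)
    zt = (test_X - test_X.mean(1, keepdims=True)) / test_X.std(1, keepdims=True)
    corr = zt @ zm.T / train_X.shape[1]
    pred = classes[corr.argmax(axis=1)]
    return (pred == test_y).mean()

# Hypothetical example: shared category templates plus format-specific noise
rng = np.random.default_rng(1)
templates = rng.normal(size=(6, 200))            # one template per category
def simulate(noise):                             # 20 "trials" per category
    X = np.repeat(templates, 20, axis=0) + noise * rng.normal(size=(120, 200))
    return X, np.repeat(np.arange(6), 20)
static_X, static_y = simulate(0.5)
dynamic_X, dynamic_y = simulate(0.5)
print(cross_decode(static_X, static_y, dynamic_X, dynamic_y))  # well above 1/6 chance
```

Above-chance accuracy in this train-on-one-format, test-on-the-other scheme is what licenses the claim that a region's category representation is abstract rather than tied to static or dynamic cues alone.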

Footnotes

  • The authors declare no competing financial interests.

  • We thank Emalie McMahon for her assistance with the optical flow extraction code, Chris Baker for insightful feedback, and Julian De Freitas for inspiring discussions that helped in forming the initial interest in this research area. This research was supported by the National Institute of Mental Health Intramural Research Program (ZIA-MH-002909).

SfN exclusive license.

Copyright © 2023 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401

The ideas and opinions expressed in JNeurosci do not necessarily reflect those of SfN or the JNeurosci Editorial Board. Publication of an advertisement or other product mention in JNeurosci should not be construed as an endorsement of the manufacturer’s claims. SfN does not assume any responsibility for any injury and/or damage to persons or property arising from or related to any use of any material contained in JNeurosci.