Journal Club

A Single Route to Action? The Common Representation of Perceptual and Saccade Targets

Thérèse Collins
Journal of Neuroscience 11 April 2007, 27 (15) 3935-3936; DOI: https://doi.org/10.1523/JNEUROSCI.0960-07.2007

The hypothesis that two distinct neural pathways mediate visual processing has been very influential in the cognitive sciences of vision. The idea is that each pathway processes different aspects of visual information: the dorsal pathway, leading from the primary visual cortex to the posterior parietal cortex, processes visual information for action, whereas the ventral pathway, leading from the primary visual cortex to the inferior temporal cortex, processes visual information for perception (Ungerleider and Mishkin, 1982). This dissociation is based on both neurophysiological investigations in the monkey and behavioral studies in patients (Goodale et al., 2005). Despite its elegant simplicity, this hypothesis appears incompatible with converging evidence for functional links between perception and action, in particular between visual perception and oculomotor actions. For example, visuospatial attention is directed toward the target position of the eyes (Deubel and Schneider, 1996); the size of the motion aftereffect is correlated with the speed of smooth-pursuit eye movements (Braun et al., 2006); and judgments about an object's position in space are compressed toward the saccadic endpoint just before the saccade (Ross et al., 1997). These experiments suggest that the production of a body movement (and often its mere planning) influences which parts of a visual scene we see best, how we see them, or what position in space objects seem to occupy. This line of experiments leads to the intriguing hypothesis that our perception relies not only on the physical attributes of a stimulus but also on the actions we perform on it. On this view, the visual information used for perception and action would be similar. Although several studies have shown functional similarities between perceptual and oculomotor performance, similar performance can arise from different mechanisms.

In an article published recently in the Journal of Neuroscience, Eckstein et al. (2007) address this question in human subjects with a novel method that avoids some of the problems of equating similar performance with a common mechanism. Rather than comparing performance, the authors propose a classification image (CI) analysis. The basic idea of this analysis is that clues about what information is used to accomplish a task can be obtained from the stimuli and the participant's decisions about those stimuli. CI analysis depends on the use of noisy stimuli and on the analysis of errors, the idea being that when a distracter is mistaken for the target, its noise must have contained some information that fooled the decision mechanism into selecting it. By averaging the noise profiles of all the error trials, one can estimate the information on which the decision was based.
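
To make this averaging step concrete, the following is a minimal sketch in Python of how a classification image could be computed, assuming that the noise field added to the erroneously chosen distracter has been stored for each error trial; the function name and array shapes are illustrative and are not taken from Eckstein et al. (2007).

```python
import numpy as np

def classification_image(error_noise_fields):
    """Estimate a classification image from error trials.

    error_noise_fields : array-like, shape (n_error_trials, height, width);
        the Gaussian noise added to each distracter that was mistakenly
        selected as the target (a hypothetical storage format).
    """
    fields = np.asarray(error_noise_fields, dtype=float)
    # Noise that is uncorrelated with the decision averages toward zero;
    # the structure that remains is an estimate of the spatial information
    # that fooled the observer into choosing the distracter.
    return fields.mean(axis=0)
```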

The task proposed by Eckstein et al. (2007) is a five-alternative forced-choice (5-AFC) visual search task: participants must detect a brighter Gaussian-shaped target among four distracters. Spatially uncorrelated Gaussian noise is added to the stimuli. Two conditions are contrasted: in the eye-movement condition, participants make an oculomotor decision by saccading to the target as quickly as possible; the chosen element is the one closest to the endpoint of the first saccade. In the perception-only condition, participants select the target by a button press. Importantly, processing time is kept equivalent across the two conditions by measuring the average latency of the first saccade in the eye-movement condition and then presenting the visual stimulus for that time minus 80 ms in the perception-only condition. The 80 ms interval corresponds to the approximate neural delay for motor programming once visual processing is complete. The resulting stimulus durations ranged from ∼100 to 250 ms across participants.
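
As a concrete illustration of such a display, here is a minimal sketch of one 5-AFC trial: five Gaussian-shaped luminance patches, one brighter (the target), each corrupted by spatially uncorrelated Gaussian noise. The patch size, amplitudes, and noise level are arbitrary assumptions for illustration, not the values used by Eckstein et al. (2007).

```python
import numpy as np

def gaussian_blob(size, sigma, amplitude):
    """Isotropic 2-D Gaussian luminance profile centered in a square patch."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return amplitude * np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def make_trial(rng, size=64, sigma=6.0, target_amp=1.0,
               distracter_amp=0.7, noise_sd=0.3):
    """Build one 5-AFC display: five noisy patches, one brighter target.

    Returns the patches, the noise fields (kept for later CI analysis),
    and the index of the target. All parameter values are illustrative.
    """
    target_idx = int(rng.integers(5))
    patches, noise_fields = [], []
    for i in range(5):
        amp = target_amp if i == target_idx else distracter_amp
        noise = rng.normal(0.0, noise_sd, size=(size, size))
        patches.append(gaussian_blob(size, sigma, amp) + noise)
        noise_fields.append(noise)
    return patches, noise_fields, target_idx
```

For example, `make_trial(np.random.default_rng(0))` produces one display; on an error trial, the noise field of the chosen distracter would be the input to the averaging sketch above.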

In such a paradigm, CI analysis reveals what spatial information is used to drive the decision, i.e., the shape of the receptive field (RF) of the estimated visual mechanisms driving the oculomotor and perceptual decisions. For each condition (eye-movement vs perception-only), CIs were computed by averaging the noise profiles of the distracter stimuli to which participants erroneously saccaded or which they erroneously chose as the target. Then, the CIs were further averaged over pixels of equal radius from the center. The apparent radial symmetry of the CIs [Eckstein et al. (2007), their Fig. 2 (http://www.jneurosci.org/cgi/content/full/27/6/1266/F2)] suggests that this is a good way to reduce the degrees of freedom in the data without losing important features. The radial profiles are shown in Eckstein et al. (2007), their Fig. 3 (http://www.jneurosci.org/cgi/content/full/27/6/1266/F3). Two important conclusions emerge and were confirmed statistically by fitting the profiles with difference-of-Gaussians functions. First, the CIs for the eye-movement and perception-only conditions were indistinguishable. Indeed, both profiles showed lateral inhibition (they dipped below zero at the edge of the stimulus location), and both were slightly wider than the target profile, meaning that information at the center of the RF is particularly important for driving the response. These characteristics are reminiscent of neural RFs. Second, the eye-movement and perception-only profiles were more similar to each other (t(5) = 0.13) than they were to the target profile (t(5) = 1.08). This last result is probably the most interesting because, as the authors point out in their discussion, the simplicity of the target could have led to behavioral RFs that matched it; therefore, the similarity between the eye-movement RF and the perception-only RF could simply have reflected their correlation with the target. This last analysis shows that this is not the case, because the behavioral RFs differed systematically from the target but not from each other.
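
The two analysis steps just described, radial averaging followed by a difference-of-Gaussians fit, can be sketched as follows; the parameterization and starting values are illustrative assumptions, not those of the original study.

```python
import numpy as np
from scipy.optimize import curve_fit

def radial_profile(ci):
    """Average a 2-D classification image over annuli of equal (integer) radius."""
    h, w = ci.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(xx - (w - 1) / 2.0, yy - (h - 1) / 2.0).astype(int)
    sums = np.bincount(r.ravel(), weights=ci.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts  # mean CI value at each radius from the center

def difference_of_gaussians(r, a_c, s_c, a_s, s_s):
    """Excitatory center minus inhibitory surround."""
    return (a_c * np.exp(-r ** 2 / (2 * s_c ** 2))
            - a_s * np.exp(-r ** 2 / (2 * s_s ** 2)))

def fit_profile(profile):
    """Fit a radial profile with a difference of Gaussians.

    A positive surround amplitude (a_s) produces the below-zero dip,
    i.e., the lateral inhibition noted in the text.
    """
    r = np.arange(profile.size, dtype=float)
    p0 = [profile.max(), 3.0, 0.1 * abs(profile.max()), 9.0]  # rough guesses
    params, _ = curve_fit(difference_of_gaussians, r, profile, p0=p0)
    return params
```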

The RF of the visual mechanism selecting the saccade target and that of the visual mechanism selecting the perceptual target are thus indistinguishable. If eye movements and visual perception were controlled by different brain areas, those areas would likely have different RFs and perform different computations. In contrast, this study shows that similar RFs are used, providing a strong argument that eye movements and visual perception could be controlled by the same brain areas, or that information is shared across areas. This interesting new result complements the previous studies cited above, which provided evidence for similar outputs of the perceptual and motor systems, by showing that the visual information in the input is also shared across the two systems, at least for simple visual tasks. It could be argued, however, that in both the eye-movement and perception-only conditions participants had to provide a motor response based on a visual discrimination. Therefore, an interesting next step might be to confirm these results with a perceptual task that does not require a motor response such as a button press.

Footnotes

  • Editor's Note: These short reviews of a recent paper in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to mimic the journal clubs that exist in your own departments or institutions. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.

  • I thank the participants of the Perception Journal Club of Paris, France, for a lively discussion of a previous draft.

  • Correspondence should be addressed to Thérèse Collins, Cognitive Psychology and Neurosciences Laboratory, Paris Descartes University and National Center for Scientific Research, 71 avenue Edouard Vaillant, 92774 Boulogne-Billancourt cedex, France. therese.collins{at}univ-paris5.fr

References

  1. Braun D, Pracejus L, Gegenfurtner K (2006) Motion aftereffect elicits smooth pursuit eye movements. J Vis 6:671–684.
  2. Deubel H, Schneider W (1996) Saccade target selection and object recognition: evidence for a common attentional mechanism. Vision Res 36:1827–1837.
  3. Eckstein M, Beutter B, Pham B, Shimozaki S, Stone L (2007) Similar neural representations of the target for saccades and perception during search. J Neurosci 27:1266–1270.
  4. Goodale MA, Króliczak G, Westwood D (2005) Dual routes to action: contributions of the dorsal and ventral streams to adaptive behavior. Prog Brain Res 149:269–283.
  5. Ross J, Morrone MC, Burr D (1997) Compression of visual space before saccades. Nature 386:598–601.
  6. Ungerleider LG, Mishkin M (1982) Two cortical systems. In: Analysis of visual behavior (Ingle DJ, Goodale MA, Mansfield RJW, eds), pp 549–586. Cambridge, MA: MIT.