This Week in The Journal

Teresa Esch, Ph.D.
Journal of Neuroscience 2 December 2020, 40 (49) 9343; DOI: https://doi.org/10.1523/JNEUROSCI.twij.40.49.2020

How Nociceptor Morphology Affects Integration of Inputs

Omer Barkai, Rachely Butterman, Ben Katz, Shaya Lev, and Alexander M. Binshtok

(see pages 9346–9363)

Nociceptor neurons in sensory ganglia extend a single axon that bifurcates, sending one branch to the periphery and one to the spinal cord. In the peripheral target, nociceptor axons may or may not arborize to generate complex trees that vary in the number and length of terminal branches. Importantly, each terminal branch can generate a spike when stimulated. These spikes propagate centrally, and, at each branch point, spikes generated in different terminals can converge. Unlike graded potentials, converging spikes do not summate. Moreover, each spike is followed by a refractory period during which the propagation of subsequent spikes is blocked. Consequently, it is difficult to predict how spikes generated in different branches will be integrated and thus how a given stimulus will affect nociceptor output. To investigate this, Barkai et al. turned to computational models.
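The nonlinear constraint described above — converging spikes do not summate, and a spike arriving during the refractory period left by a previous spike is blocked — can be illustrated with a toy merge rule. This is a simplified sketch for intuition only, not the authors' biophysical model, and the function name and refractory value are illustrative assumptions:

```python
def merge_spikes(branch_a, branch_b, refractory=0.004):
    """Toy merge of spike arrival times (in seconds) from two branches
    at a branch point. Unlike graded potentials, spikes do not summate:
    a spike arriving within `refractory` seconds of the last transmitted
    spike is blocked. Illustrative sketch, not the authors' model."""
    transmitted = []
    last = float("-inf")
    for t in sorted(branch_a + branch_b):
        if t - last >= refractory:
            transmitted.append(t)
            last = t
    return transmitted

# Two terminals each fire three spikes; after convergence, fewer spikes
# propagate centrally than were generated in the terminals.
a = [0.010, 0.020, 0.030]
b = [0.012, 0.021, 0.040]
print(merge_spikes(a, b))  # → [0.01, 0.02, 0.03, 0.04]
```

Even this crude rule shows why nociceptor output is hard to predict from terminal activity alone: the number and timing of centrally propagating spikes depend on the relative arrival times at each branch point, which in turn depend on branch lengths and conduction properties.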

The authors created a model nociceptor based on experimentally derived morphological and physiological parameters, then asked how the geometry of the peripheral arbor and the number of stimulated terminals affected the pattern of spiking measured in the central branch. Stimulation with a brief square current pulse elicited single action potentials regardless of which and how many terminals were stimulated. In contrast, a longer stimulus that mimicked currents evoked by capsaicin application produced different numbers and patterns of spikes depending on the length and axial resistance of terminal branches, the distance of stimulated terminals from the primary branch, the total number of terminal branches activated, and the conductance of NaV1.8-type voltage-gated sodium channels. Notably, changing specific parameters to mimic the effects of inflammatory or neuropathic pain significantly altered the temporal pattern of spiking, suggesting that spike timing influences the perception of pain.

These results indicate that the morphology of nociceptor peripheral arbors has a large impact on the input–output function. Given that nociceptors exhibit a range of terminal arbor structures, the results suggest that the same stimulus elicits different responses from different nociceptors. This might help to extend the dynamic range of nociception. Future work may use this nociceptor model to predict how specific perturbations, such as sprouting after injury or redistribution of intracellular organelles (which affects axial resistance), might affect nociception. These predictions could then be tested in vivo.

Figure

A classifier trained on MEG data recorded when a stimulus was in the left visual field was still able to decode the spatial frequency of the stimulus from data recorded after a saccade had shifted the stimulus to the right visual field. See Fabius et al. for details.

Persistence of Spatial Frequency Information across Saccades

Jasper H. Fabius, Alessio Fracasso, David J. Acunzo, Stefan Van der Stigchel, and David Melcher

(see pages 9476–9486)

When we survey our surroundings, our eyes do not typically scan slowly from point to point. Instead, they jump rapidly from object to object. Yet we perceive the world as continuous, rather than as a series of snapshots. How do our brains accomplish this feat? Although much previous work has focused on the role of saccade motor commands and predictive remapping of cortical neurons' receptive fields, Fabius et al. provide evidence that some information about visual images remains available in the brain after the eyes have shifted to a new position. They suggest that this information contributes to visual continuity.

The authors used magnetoencephalography (MEG) to measure brain activity while people viewed centrally presented gratings with high or low spatial frequency. On some trials, subjects fixated on a point to the right or left of the stimulus; on other trials, they began by fixating on the right point and then made a saccade to the left point. Thus, on saccade trials, the stimulus shifted from the left visual field to the right, activating different sets of neurons.

The authors trained classifiers to decode the spatial frequency of the stimulus based on data from fixation trials and used these classifiers to decode spatial frequency from saccade trial data. Not surprisingly, the classifier trained on right-fixation trials was able to decode spatial frequency from data recorded during the first part of saccade trials, when subjects were fixating to the right of the stimulus, whereas the classifier trained on left-fixation trials could decode spatial frequency from data collected shortly after the saccade, when subjects were fixating to the left. More importantly, the classifier trained on data from right-fixation trials was also able to decode spatial frequency from data recorded just after the saccade.
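The logic of this cross-condition decoding analysis can be sketched with synthetic data: a classifier is trained on trials from one condition and tested on trials from another, and above-chance accuracy indicates that the class-specific pattern persists across conditions. This is a minimal illustration using made-up "sensor" data and a generic logistic-regression decoder, not the authors' MEG pipeline; all sizes and offsets are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_trials(n, n_sensors, condition_offset):
    """Synthetic stand-in for MEG sensor patterns (not the authors'
    data). Two 'spatial frequency' classes share a stable difference
    on the first five sensors; `condition_offset` mimics condition-
    specific activity on the remaining sensors (e.g., a different
    fixation position)."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, n_sensors))
    X[y == 1, :5] += 1.0          # class-specific pattern
    X[:, 5:] += condition_offset  # condition-specific shift
    return X, y

# Train a decoder on trials from one condition ("fixation") ...
X_train, y_train = make_trials(400, 20, condition_offset=0.0)
clf = LogisticRegression().fit(X_train, y_train)

# ... then test it on trials from another condition ("post-saccade").
X_test, y_test = make_trials(400, 20, condition_offset=0.5)
acc = clf.score(X_test, y_test)
print(f"cross-condition decoding accuracy: {acc:.2f}")
```

Because the class-specific pattern is shared across the two synthetic conditions, the decoder generalizes well above the 50% chance level — the same inference the authors draw when a classifier trained on right-fixation data decodes spatial frequency from post-saccade data.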

These results indicate that information about the spatial frequency of objects can be read out after the eyes have shifted to a new position. Consequently, there is a brief period when information about stimuli in the presaccadic and postsaccadic visual fields is present simultaneously. Higher visual areas might be able to use this information to construct a continuous representation of the scene.

Footnotes

  • This Week in The Journal was written by Teresa Esch, Ph.D.


Copyright © 2023 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
