Symposium and Mini-Symposium

Advanced Circuit and Cellular Imaging Methods in Nonhuman Primates

Stephen L. Macknik, Robert G. Alexander, Olivya Caballero, Jordi Chanovas, Kristina J. Nielsen, Nozomi Nishimura, Chris B. Schaffer, Hamutal Slovin, Amit Babayoff, Ravid Barak, Shiming Tang, Niansheng Ju, Azadeh Yazdan-Shahmorad, Jose-Manuel Alonso, Eugene Malinskiy and Susana Martinez-Conde
Journal of Neuroscience 16 October 2019, 39 (42) 8267-8274; https://doi.org/10.1523/JNEUROSCI.1168-19.2019
Author affiliations: 1State University of New York Downstate Medical Center, Health Science Center at Brooklyn, New York 11203 (S.L.M., R.G.A., O.C., J.C., S.M.-C.); 2Zanvyl Krieger Mind/Brain Institute, Department of Neuroscience, Johns Hopkins University, Baltimore, Maryland 21218 (K.J.N.); 3Nancy E. and Peter C. Meinig School of Biomedical Engineering, Cornell University, Ithaca, New York 14853 (N.N., C.B.S.); 4The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan 5290002, Israel (H.S., A.B., R.B.); 5Peking-Tsinghua Center for Life Sciences, School of Life Sciences, and Peking University-International Data Group-McGovern Institute for Brain Research, Peking University, Beijing 100871, China (S.T., N.J.); 6Department of Bioengineering, University of Washington, Seattle, Washington 98195 (A.Y.-S.); 7Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195 (A.Y.-S.); 8State University of New York, College of Optometry, New York, New York 10036 (J.-M.A.); 9Indago, STE 206, Cleveland, Ohio 44103 (E.M.)

Abstract

Novel genetically encoded tools and advanced microscopy methods have revolutionized neural circuit analyses in insects and rodents over the last two decades. Whereas numerous technical hurdles originally barred these methodologies from success in nonhuman primates (NHPs), current research has started to overcome those barriers. In some cases, methodological advances developed with NHPs have even surpassed their precursors. One such advance is the new ultra-large imaging window on NHP cortex, which is larger than the entire rodent brain and allows analysis of circuits at an unprecedented ultra-large scale. NHP imaging chambers now remain patent for periods longer than a mouse's lifespan, allowing long-term all-optical interrogation of identified circuits and neurons over timeframes that are relevant to human cognitive development. Here we present some recent imaging advances brought forth by research teams using macaques and marmosets. These include technical developments in optogenetics; voltage-, calcium-, and glutamate-sensitive dye imaging; two-photon and wide-field optical imaging; viral delivery; and genetic expression of indicators and light-activated proteins, which together enable the visualization of tens of thousands of identified cortical neurons in NHPs. We describe a subset of the many recent advances in circuit and cellular imaging tools in NHPs, focusing primarily on the research presented during the corresponding mini-symposium at the 2019 Society for Neuroscience annual meeting.

  • two-photon microscopy
  • voltage-sensitive dye imaging
  • Adeno-Associated virus
  • optogenetics
  • cortical mapping
  • prosthetic vision

Introduction

Nonhuman primates (NHPs) are the most relevant animal models of human brain function available. Rodent and insect models have grown enormously in utility in the last two decades, due in large part to successful advances in genetic neural circuit analysis tools. It is imperative, however, to translate the utility of these advances to human therapeutics, which will often require preclinical testing in NHP brains. NHPs are also likely to be the most suitable model for the study of many human-relevant brain functions, such as those involving foveal vision, fine manual motor control and hand-eye coordination, finger sensation, higher cognition, and many psychiatric and neurological diseases. The same neural circuit analysis tools that were developed in insect and rodent models are now beginning to translate successfully to NHPs, including powerful genetic tools (Kuang et al., 2009; Okano and Mitra, 2015; He and Huang, 2018; Snyder and Chan, 2018), providing novel and exciting paths to both basic discovery and preclinical testing.

Here we describe a fraction of the many recent advances in circuit and cellular imaging tools and findings in NHPs. The field of NHP imaging has expanded in the past few years, and we focus here on the recent work, and future research directions, of participants in the corresponding mini-symposium hosted at the 2019 Annual Meeting of the Society for Neuroscience. Thus, we review work with voltage-sensitive dyes (Slovin laboratory), two-photon imaging using both dyes (Nielsen laboratory) and transfected viral constructs for all-optical interrogation (Tang, Macknik, and Martinez-Conde laboratories), and new hardware allowing ultra-wide field-of-view imaging (Macknik and Martinez-Conde laboratories). In addition to these technical advances, we discuss remaining obstacles to NHP research and approaches to overcoming them.

Long-term monitoring and optogenetic manipulation of target neurons have been especially challenging in NHPs. The application of these methods to in-depth analyses of neural circuits has required the field to overcome multiple technical obstacles, from achieving adequate expression of genetically encoded calcium indicators in NHPs when methods proven in rodents failed to translate easily to primates, to designing imaging implants suitable for maintaining long-term brain health, to minimizing motion artifacts during imaging of brains at a spatial scale that is relevant to humans. Given the large size of functional maps in NHPs, such research has required the development of microscopes with larger fields-of-view, as well as three-photon imaging to achieve greater imaging depths (Horton et al., 2013; T. Wang et al., 2018). Great strides have been made in clearing several of these hurdles; thus, NHP imaging is now possible for longer durations, and with a greater field-of-view, than ever before.

Using voltage-sensitive dye imaging to encode and reconstruct visual stimuli in area V1

Voltage-sensitive dye imaging (VSDI) is the original imaging technique for exploring neuronal population voltage dynamics in behaving monkeys (Slovin et al., 2002; Grinvald and Hildesheim, 2004; Fig. 1A). The main advantages of VSDI over other extant methods remain temporal resolution on the scale of milliseconds (i.e., faster than the calcium signals measured using two-photon imaging) with large fields-of-view (typically 1–4 cm²) at mesoscale resolution (50²–200² μm²/pixel; >10,000 pixels/frame). Thus, the application of VSDI in behaving monkeys enables the study of the spatiotemporal patterns of cortical populations (rather than of single cells, which is enabled by two-photon imaging) by using organic voltage-sensitive dyes that transduce the membrane potential of neurons into fluorescence (Tasaki et al., 1968; Salzberg et al., 1973; Cohen et al., 1974; Shoham et al., 1999). The in vivo VSD signal is highly correlated with the membrane potential, emphasizing subthreshold fluctuations (Sterkin et al., 1998; Petersen et al., 2003) but also reflecting spiking activity (Jancke et al., 2004; Ayzenshtat et al., 2010; Reynaud et al., 2011; Chen et al., 2012). Several groups have used this method to investigate circuits at the scale of entire functional columns during cortical processing of visual stimuli in area V1 of behaving monkeys (Slovin et al., 2002; Reynaud et al., 2012; Michel et al., 2013; Omer et al., 2013, 2019). Importantly, VSDI can be used to explore cortical correlates of higher visual functions in behaving NHPs, including perceptual grouping and figure-ground segregation (Ayzenshtat et al., 2010, 2012; Gilad et al., 2013; Gilad and Slovin, 2015), spatial attention (Chen and Seidemann, 2012), and the influence of saccades and microsaccades on V1 neural responses (Meirovithz et al., 2012; Gilad et al., 2014, 2017).
Recent studies have begun using genetically encoded voltage indicators in vivo (Gong et al., 2015; Yang et al., 2016; Adam et al., 2019), but their application in NHPs remains a challenge. The remainder of this section focuses on discoveries made through the successful use of VSDI in behaving NHPs.
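VSD response maps are conventionally expressed as ΔF/F, the fractional fluorescence change of each pixel relative to its pre-stimulus baseline. A minimal sketch of that normalization (the array shapes and values here are hypothetical, not from the studies above):

```python
import numpy as np

def vsd_dff(frames, n_baseline=10):
    """Convert a VSD movie (time, y, x) to dF/F maps.

    The per-pixel baseline F0 is the mean over the pre-stimulus frames;
    each frame is then expressed as its fractional change from F0.
    """
    f0 = frames[:n_baseline].mean(axis=0)  # per-pixel baseline fluorescence
    return (frames - f0) / f0

# Toy movie: 100-count baseline, then a 1% fluorescence change at "stimulus onset"
movie = np.full((20, 4, 4), 100.0)
movie[10:] = 101.0
dff = vsd_dff(movie)
# pre-stimulus frames -> ~0; post-stimulus frames -> ~0.01 (i.e., 1% dF/F)
```

In practice, dividing by a per-pixel baseline also removes static differences in dye staining and illumination across the imaged window.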

Figure 1.

A, Experimental setup for VSDI in NHPs. B, Reconstruction of shape contours from VSD maps imaged in V1 of fixating monkeys. Left, The original shape stimuli (white contour over gray background) presented to a fixating monkey. Stimulus size: circle, radius = 0.8°; the triangle and square were bounded within the circle. The fixation point appears in red. Middle, VSD response maps (color-coded; ΔF/F) evoked by the visual stimuli, averaged over 100–200 ms after stimulus onset. The dashed line denotes the border between areas V1 and V2. Maps were filtered with a 2D Gaussian (σ = 1 pixel) for visualization purposes only. Right, Stimuli reconstructed from the VSD maps after applying an inverse model (Zurawel et al., 2016); baseline activity was subtracted before reconstruction.

One of the visual system's main tasks is to combine edges and surfaces into perceptual groups, creating representations of single or multiple objects that are segregated from other objects and the background. The Slovin laboratory has applied VSDI to measure cortical responses to visual stimuli and analyze neural circuits in V1. They presented visual objects defined by either luminance or color while recording with VSDI in V1 of NHPs, to test the hypothesis that visual perception of uniform surfaces is mediated by an isomorphic, filled-in representation (Zurawel et al., 2014, 2016; Zweig et al., 2015). Contrary to this hypothesis, the experiments revealed that neural responses to both color and luminance surfaces were similarly edge-enhanced: VSDI pixels at the spatial locations occupied by edges showed greater voltage changes, appearing rapidly after stimulus onset, than pixels responding to the middle regions of the surfaces. Within cortical regions corresponding to achromatic squares of 1° of visual angle and larger, the surface's edges were more strongly activated than its center. Following this early period, responses inside the surface's edges increased slowly, partially “filling-in” the V1 region corresponding to the square's center. Surprisingly, responses to color squares remained edge-enhanced throughout the stimulus, and there was no filling-in of the center. These results imply that chromatic and achromatic surfaces are represented differently in V1 and that uniform filled-in representations of surfaces in V1 are not required for the perception of uniform color surfaces.

A different set of studies investigated the neural representation of an object (i.e., the figure and its background). Gilad et al. (2013, 2017) found that cortical responses mediating figure-ground segregation and contour integration comprise divergent responses, including both figure enhancement and background suppression. Further investigation of a more realistic natural scene with few objects suggested that separate objects are labeled by different response amplitudes (Gilad and Slovin, 2015).

Finally, to test whether V1 can be used for high-resolution readout of visual stimuli, the Slovin group attempted to reconstruct, at sub-degree resolution, visual contours from the VSD signals evoked by simple shapes (Zurawel et al., 2016; Fig. 1B). By applying an inverse version of a simplified bottom-up model to neuronal responses, they were able to reconstruct shape contours that were highly similar to the original stimuli. These results, together with stimulus reconstruction at the single-trial level (Zurawel et al., 2016), suggest that V1 can be an important constituent in the detailed internal representation of visual experiences, and lay the groundwork for future cortical artificial vision.
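The reconstruction logic can be sketched as inverting a linear forward model that maps stimulus pixels to cortical responses. The actual model of Zurawel et al. (2016) differs in its details; the forward matrix and sizes below are purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cortex, n_stim = 64, 16                  # hypothetical map and stimulus sizes
A = rng.random((n_cortex, n_stim))         # forward model: stimulus -> VSD response
stimulus = (rng.random(n_stim) > 0.5).astype(float)  # binary contour stimulus
vsd_map = A @ stimulus                     # simulated (noise-free) cortical response

# Inverse model: least-squares reconstruction of the stimulus from the map
reconstruction = np.linalg.pinv(A) @ vsd_map
```

In this noise-free, overdetermined case the pseudoinverse recovers the stimulus exactly; with real single-trial VSD noise, a regularized inversion would be needed instead.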

Using dye-based two-photon imaging to investigate the fine-scale organization of area V4

Neural circuits in mid-level visual areas such as V4 implement key algorithms in visual processing: they transform the image-level representations of early visual stages into explicit, compact, and stable representations of objects and scenes found in late pathway stages. Previous extracellular recording studies have identified some of the fundamental V4 tuning functions. Crucially, in addition to tuning for orientation and spatial frequency as in earlier areas, V4 neurons are tuned for more complex features of contour fragments, such as their curvature and object-relative position (Roe et al., 2012). Yet, much remains to be learned about processing of shape in V4. Two-photon calcium imaging promises to be an important tool for this purpose, and more generally for the study of higher-level visual areas.

One area of investigation facilitated by two-photon imaging and calcium-sensitive fluorescent dyes is an in-depth exploration of the functional micro-organization of V4 and the stimulus space encoded by V4. In NHPs, orderly feature maps have been observed in a number of visual areas, including V1 (Blasdel and Salama, 1986; Ts'o et al., 1990; Bartfeld and Grinvald, 1992; Obermayer and Blasdel, 1993; Malach et al., 1994; Nauhaus et al., 2012, 2016) and higher-order areas like MT (Albright et al., 1984; DeAngelis and Newsome, 1999). These maps exist for feature dimensions central to processing in an area, such as orientation in V1, or motion direction in MT. In V4, maps for orientation and color have been reported (Tanigawa et al., 2010), but the functional organization of other tuning properties remains unknown. Two-photon imaging allows simultaneous measurement of tuning functions across many neurons with known spatial relationships, and it is therefore ideal for mapping functional organization. Because maps in primate visual cortex appear to exist for the central processing dimensions, V4 will likely contain orderly feature maps for contour shape parameters (Fig. 2). The precise layout of these maps will offer a number of important clues about V4 circuit organization. First, certain contour fragments are more likely to co-occur in natural shapes than others; if the organization of V4 feature maps reflects these statistical dependencies, it will indicate that this is a mechanism by which the visual system rapidly links common contour configurations. Second, feature maps may represent a fundamental substrate for input integration (Nauhaus and Nielsen, 2014): this is best studied in V1, for which a tight relationship between spatial clustering of inputs and emerging tuning properties has been observed (Jin et al., 2011). Feature maps might similarly aid integration of related inputs in V4.
Last, as demonstrated by the precise alignment of V1 feature maps with each other and with the retinotopic map (Nauhaus et al., 2016), feature maps provide mechanisms for efficiently achieving complete coverage of multiple tuning dimensions, which presumably must occur in V4 just as it does in V1.
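A basic clustering analysis enabled by such simultaneously mapped populations is to compare the tuning similarity of spatially neighboring neurons against a shuffled control. The positions and preferences below are synthetic stand-ins for imaged data:

```python
import numpy as np

def circ_diff_deg(a, b, period=180.0):
    """Smallest angular difference between two orientation preferences (deg)."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

# Synthetic "map": orientation preference drifts smoothly along one cortical axis
prefs = (np.linspace(0.0, 1.0, 50) * 180.0) % 180.0

# Mean tuning difference of adjacent cells vs a spatially shuffled control
neighbor_d = circ_diff_deg(prefs[:-1], prefs[1:]).mean()
shuffled = np.random.default_rng(1).permutation(prefs)
control_d = circ_diff_deg(shuffled[:-1], shuffled[1:]).mean()
# neighbor_d << control_d indicates an orderly (clustered) feature map
```

The same comparison applied to contour-shape tuning in V4 would test directly for the hypothesized orderly feature maps.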

Figure 2.

Hypothetical layout of a feature map in V4. A range of factors constrains natural object structure (including material properties, mechanical laws, growth processes, and design and construction procedures) and generally results in smooth changes in contours. The example contour (left) highlights the fact that strongly curved object elements are often flanked by straighter elements oriented 90° away from the curved part. The figure on the right reflects a hypothetical feature map that adheres to this relationship by grouping neurons (circles) first according to their contour preference (line in each circle), and second according to the co-occurrence of these shape features.

More generally, two-photon imaging promises to be a useful tool for studying the encoding of the high-dimensional stimulus spaces encountered in higher-level visual areas. To adequately capture tuning functions in these areas, responses to many stimuli need to be measured. Yet, with traditional recording techniques, recording durations for single neurons have been relatively limited. Because chronic two-photon imaging allows tracking of individual identified neurons across sessions, the recording duration per neuron is drastically increased, allowing adequate sampling of even very large stimulus spaces.

Long-term all-optical interrogation of neural circuits in macaque visual cortex

Recent research has achieved high-quality two-photon imaging in awake behaving macaques, capturing images with many neurons over long periods of time (Macknik and Haglund, 1999; Macknik et al., 2000; Ju et al., 2018, 2019). These studies reveal that various two-photon imaging techniques can be effectively applied in different brain areas, with different molecular tools, to study different cognitive functions. One example is the work by Li et al. (2017), who achieved long-term two-photon imaging in awake behaving macaques and were thus able to monitor the activities of thousands of cortical neurons at single-cell resolution for up to several months. Similar approaches could be used to investigate the neuronal population coding of working memory in prefrontal cortex or the neural mechanisms of object recognition in inferior temporal cortex (IT; i.e., in a face patch revealed by two-photon imaging).

High-resolution dendritic imaging has also been accomplished in awake macaque monkeys: Ju et al. (2019) functionally mapped excitatory synaptic inputs onto V1 neuronal dendrites with high spatial and temporal resolution with an iGluSnFR (green glutamate-indicator) sensor. This study found that synaptic inputs near one another tended to share similar functional properties in macaque V1, consistent with previous findings of spatial clustering in lower-mammal cortex (Kleindienst et al., 2011; Iacaruso et al., 2017; Scholl et al., 2017; Fig. 3A). Critically, excitatory synaptic inputs were highly scattered in multidimensional feature space, providing a potential substrate for local feature integration on dendritic branches.
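The clustering result above, that inputs with similar tuning tend to lie near one another on a dendrite, can be quantified by correlating pairwise distance with tuning dissimilarity. The toy data below (positions in μm, linear preference drift) are purely illustrative:

```python
import numpy as np

positions = np.arange(20.0)            # hypothetical input positions along a dendrite (um)
prefs = positions * 8.0                # clustered case: tuning drifts smoothly with position

i, j = np.triu_indices(len(positions), k=1)   # all unordered input pairs
distance = np.abs(positions[i] - positions[j])
dissimilarity = np.abs(prefs[i] - prefs[j])

r_clustered = np.corrcoef(distance, dissimilarity)[0, 1]

# Shuffling preferences across positions destroys the spatial clustering
shuffled = np.random.default_rng(3).permutation(prefs)
r_shuffled = np.corrcoef(distance, np.abs(shuffled[i] - shuffled[j]))[0, 1]
```

A positive distance-dissimilarity correlation that exceeds the shuffled control is the signature of clustered synaptic inputs.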

Figure 3.

A, Left, Map of orientation-selective regions of interest (ROIs) on the dendrites of an example neuron. Orientation preferences are colored by the vector-sum polarity of each ROI (color bar at lower right). Right, Map of color-selective inputs. Color preferences are labeled with the dominant polarity of each ROI. B, Coexpression of RCaMP and iGluSnFR in an IT neuron. Left, Two-photon image of iGluSnFR fluorescence, acquired under a 1000 nm excitation laser with a 525 ± 35 nm filter. Right, Two-photon image of RCaMP fluorescence of the same neuron, acquired under a 1060 nm excitation laser with a 615 ± 20 nm filter; each image is the average of 1000 frames from a recording session. C, Left, Two-photon image of iGluSnFR fluorescence of one V1 neuron. Middle, Two-photon image of RCaMP fluorescence of the same neuron. Right, Summed synaptic input signals (green line) and somatic responses (red line) as a function of stimulus (below; error bar, SEM), indicating an orientation-sharpening mechanism affecting the soma's response. Bottom, A subset of the entire set of 81 visual stimuli, consisting of either 1 of 9 different color patches, or a drifting grating with 1 of 12 orientations, 1 of 2 drift directions, and 1 of 3 spatial frequencies.

Recently, the Tang laboratory achieved coexpression of iGluSnFR together with RCaMP (a red calcium indicator) in individual neurons, enabling the simultaneous monitoring of the excitatory synaptic inputs and somatic outputs of single neurons (see Fig. 3B for coexpression in an IT neuron; Fig. 3C for a V1 neuron). Interestingly, single V1 neurons seemed to receive a mix of excitatory inputs (with various preferred spatial frequencies and colors; Fig. 3C), whereas the soma's output was not mixed, preferring a specific orientation and spatial frequency. This effect might be explained by GABAergic inhibition from interneurons (Y. Wang et al., 2000). Because the synaptic inputs onto a neuron represent the presynaptic neurons' outputs, and calcium imaging of dendritic activity reveals the local response, these dendritic imaging techniques might reveal the circuit computations performed in dendrites, providing a deeper understanding of dendritic integration.
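The orientation-sharpening observation, broadly tuned summed inputs versus a sharply tuned somatic output, can be illustrated with a toy model in which the somatic response equals the input tuning curve minus a uniform inhibitory term, then rectified. The inhibition model here is an assumption for illustration, not the demonstrated mechanism:

```python
import numpy as np

thetas = np.deg2rad(np.arange(0, 180, 15))   # stimulus orientations
pref = np.deg2rad(90)                        # preferred orientation

summed_inputs = 1.0 + np.cos(2 * (thetas - pref))   # broad dendritic-input tuning
soma = np.clip(summed_inputs - 1.2, 0.0, None)      # uniform inhibition + rectification

def selectivity(r, th):
    """Orientation selectivity as the circular vector-sum magnitude (0..1)."""
    return np.abs(np.sum(r * np.exp(2j * th))) / np.sum(r)

osi_inputs = selectivity(summed_inputs, thetas)
osi_soma = selectivity(soma, thetas)
# osi_soma > osi_inputs: the soma's tuning is sharper than its summed inputs
```

Subtractive inhibition followed by a spiking threshold is one classical way a neuron's output can be more selective than its aggregate excitatory input.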

Ongoing and future directions: toward creating a novel visual prosthetic

Recent and ongoing imaging advances in NHPs, including those described, open new paths for the development of cortical prosthetics that may restore foveal vision in blind patients. The Macknik and Martinez-Conde laboratories previously conducted the first ultra-wide-field optical imaging of a visual illusory response (Macknik and Haglund, 1999) and of a stationary object's edge in V1 of NHPs (Macknik et al., 2000). Using precisely targeted optogenetic activation, a cortical prosthetic might optically stimulate spatially localized lateral geniculate nucleus (LGN) synaptic boutons, transfected with light-sensitive proteins and projecting into V1 layer 4, in a pattern that mimics naturalistic visual input.

The LGN input layer to V1 is the only thalamic connection to the cortex in which the topographical map of connectivity is known with synaptic precision (Kremkow et al., 2016; Lee et al., 2016). In this retinotopic map, four input modules, one ON input and one OFF input originating from each of the two eyes, constitute all of the fundamental projections that give rise to Hubel and Wiesel's hypercolumn in V1 and beyond. Because each hypercolumn conveys all visual information in a given retinotopic region of the retino-geniculo-striate pathway, one may theoretically control vision by controlling the inputs from the LGN. Further, because all long-range connections from the LGN to V1 are glutamatergic, optogenetic targeting of these inputs would be free from unwanted coactivation of inhibitory neurons (a common problem in electrode-based prosthetic devices, which cannot isolate excitatory from inhibitory activation and thus produce diminished contrast perception).

Because prosthetic devices can only succeed in driving naturalistic stimulation when they account for rapidly changing cortical activity and response conditions (Born et al., 2015; Berry et al., 2019; Paraskevoudi and Pezaris, 2019), a successful system might integrate a real-time cortical readout mechanism to continually assess and provide feedback to modify stimulation levels, just as the natural visual system does. Then, one should be able to read out the resultant activity from a multicolored array of bioluminescent V1 calcium responses, with single-cell resolution, using a one-photon camera for real-time feedback control. Oculomotor effects could be accounted for by tracking eye movements and adjusting the correlated inputs from the camera's field-of-view in real time (just as the natural retina does).
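The closed-loop idea, adjusting stimulation so that the readout stays at a target response despite ongoing changes in cortical gain, can be sketched as a simple proportional controller. The gain values and update rule below are invented for illustration:

```python
# Toy closed-loop stimulation controller (proportional feedback).
target = 1.0            # desired readout level
drive = 0.5             # initial optogenetic stimulation strength
final_errors = []

for gain in [2.0, 1.5, 1.0, 0.8]:        # hypothetical drifting cortical gain
    for _ in range(50):                  # controller iterations at this gain
        readout = gain * drive           # simulated bioluminescent calcium readout
        drive += 0.2 * (target - readout)  # nudge drive toward the target
    final_errors.append(abs(gain * drive - target))
# After each drift episode the controller restores the readout to the target
```

A real system would of course face noise, delays, and nonlinear gain, but the structure (measure, compare to target, adjust drive) is the same.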

NHP imaging work in the Macknik and Martinez-Conde laboratories has been directed toward the development of such a device. The planned system, called the Optogenetic Brain System (OBServe), is designed to function by optimally activating visual responses in V1 from a coplanar LED array/video camera (Fig. 4).

Figure 4.

Plan for OBServe. The Macknik and Martinez-Conde laboratories (and their partners) have developed cortical optogenetic and bioluminescence techniques in area V1 circuits of NHPs, with the long-term goal of restoring vision, using naturalistic stereoscopic foveal input from a video camera at the highest attainable visual acuity and contrast, for subsequent translation to the brains of blind human patients. In its envisioned final use in humans, blind patients will wear eye-tracking glasses that monitor the center of gaze on the visual scene and transmit encrypted visual information to OBServe via a built-in encrypted ultra-wide-band transceiver module. A, To get the information from the glasses into the brain, adeno-associated virus will be injected into the LGN, where it will deliver genetically encoded optogenetic light-sensitive proteins. These channelrhodopsins will percolate up through the optic radiations into the boutons in V1, where they can be optically stimulated with precise targeting of purely excitatory glutamatergic inputs, using the same synaptic mechanisms as normal vision. Preliminary studies in the Macknik and Martinez-Conde laboratories show that V1 cells are indeed stimulated by LGN optical stimulation from the surface of the cortex. B, Because the cortex varies its response to stable inputs in an ongoing fashion (through a number of mechanisms), prosthetic activation must be correspondingly adjusted to achieve stable function. To read out the responses of V1 and make these real-time adjustments, in a way that can be enclosed within a human skull, the system will use a hyperspectral photometer to read out the activity from cortical neurons transfected with a multicolor array of bioluminescent calcium indicators. Results from the foveal region of NHP V1 indicate that five different fluorophores provide at least 4752 distinguishable colors in >60,000 uniquely identified and targeted neurons (Chanovas et al., 2019). C, OBServe's coplanar imager/emitter chip will have individual 25 μm pixels in a 400 × 400 array (a 1 cm² square), each made up of an LED to stimulate LGN boutons, surrounded by five monochromatic cameras with colored microfilters to optimize their chromatic sensitivity to the different transfected bioluminescent colors. D, The human implant will contain the emitter/imager chip and its supporting electronics in a 1 cm³ case that is fully implantable with no percutaneous wiring. E, Novel method to mount the implant within the skull. F, Both the left and right foveae will receive an implant.

The OBServe approach follows from the principle that if the LGN input modules are stimulated in the same pattern as in natural vision, the recipient should perceive naturalistic prosthetic vision. Developing the necessary technologies for this approach will require precise maps of visual circuits and long-duration recordings, which may be realized in the short term in light of recent imaging advances such as those described earlier. Yet a significant challenge is that such visual cortical mapping will need to be developed for blind patients, whereas cortical mapping has only been achieved to date in sighted individuals through forward modeling (i.e., mapping responses to visual stimuli). To address this problem, the Macknik and Martinez-Conde laboratories have developed an inverse-modeling approach, inspired by the research in the Slovin laboratory described above (Fig. 1), to map V1 using prosthetic activation. The idea is that, because coactivated ON/OFF inputs within the same hypercolumn will null each other, one may establish which ON/OFF columns are corresponding pairs by activating them prosthetically to determine which inputs cancel each other. To test this hypothesis, precise forward-modeled maps of cortical orientation, ocular dominance, ON/OFF polarity, and retinotopy will be created in NHPs (using the advanced two-photon imaging techniques developed in the Tang laboratory) and compared with the results from the inverse-mapping methods in the same tissue.
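The nulling-based inverse-mapping logic can be sketched as a search: for each ON column, the paired OFF column is the one whose coactivation cancels the net response. The simulated pairing and response function below are stand-ins for real prosthetic measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
n_columns = 6
true_pairing = rng.permutation(n_columns)   # hidden ON -> OFF correspondence

def net_response(on_idx, off_idx):
    """Simulated hypercolumn response to coactivation: nulls (0) only for true pairs."""
    return 0.0 if true_pairing[on_idx] == off_idx else 1.0

# Inverse mapping: for each ON column, find the OFF column that nulls it
inferred = [min(range(n_columns), key=lambda off: net_response(on, off))
            for on in range(n_columns)]
```

With noisy responses, the same search would instead pick the OFF column minimizing the measured response, averaged over repeated coactivations.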

Implant for ultra-large field-of-view for mesoscopic imaging in primates

The first component of OBServe to present an engineering challenge was the cortical imaging chamber technology for NHPs. The Macknik and Martinez-Conde laboratories have designed and produced a printable PEEK imaging chamber that enables long-duration recordings with an ultra-large field-of-view (following from methods developed in the Tang laboratory for the relatively small fields-of-view necessary for microscopic two-photon circuit analysis; Li et al., 2017) required for OBServe testing. The PEEK chamber solves several obstacles endemic to mesoscopic imaging in the brain: (1) difficulty with positioning high-NA objectives near the brain, (2) creating a flat imaging window against the surface of the brain, (3) adjusting the imaging window in the face of changes in swelling and pressure in the brain, (4) preventing growth of dura and biofilms that obscure the imaging window, and (5) follow-on MRI imaging of the animal after implantation (Fig. 5).

Figure 5.

Exploded view of the pressure-regulating implant for OBServe testing. *Indicates part is made of PEEK plastic, chosen for its radiolucent properties, strength, and ability to be sterilized. Component listing with description:

  • Cap and cap screws: removable cap for imaging and cleaning.

  • Silicone O-ring: ∼0.4 mm thick, prevents bacterial movement between chamber and cap.

  • Thin securing ring: secures the rotating ring against the bottom shelf of the chamber and prevents it from moving up.

  • Stabilizing screws: push against the imaging cup and prevent it from moving closer to the chamber; threaded into the bottom securing ring, they penetrate the silicone.

  • Guide cannulas: threaded into the rotating ring at three locations and sealed with silicone glue; they allow 18G CED needles to penetrate the cylinder.

  • CED injection needles: convection-enhanced delivery needles, designed for cortical injections but also usable to deliver drugs or imaging contrast agents into the soft tissue.

  • Rotating ring: has multiple threaded holes for the height-adjusting screws and can rotate to adjust the positions of the screws; sits between the shelf in the chamber and the thin securing ring.

  • Chamber: has holes for bone screws, oriented perpendicular to the surface of the bone to increase the strength of the bond between the chamber and bone; it also includes three threaded holes on the top for attachments and to secure the cap.

  • Resistance member: silicone is preferred for its ease in manufacturing, control of mechanical properties, and ability to be sterilized. Serves as a spring to adjust for pressure changes caused by the variations in swelling in the brain. Although not indicated in the drawing, the silicone connects to the lip of the imaging cup and rotating ring to create a sealed environment.

  • Imaging cup and glass cover slip: the glass is glued to the cup. Together they create a bowl that can hold liquid for a water-immersion objective.

The PEEK chamber has a 2-cm-diameter window for NHPs, regulates pressure to optimize long-term patency, and uses a stable, strong, and thin design. For context, its field-of-view is larger than three entire mouse brains placed side-by-side. The pressure-regulating implant is mechanically modeled and stress-tested to give large objectives access to the brain, with design features that allow manual repositioning of the imaging lens. A thin profile was prioritized to minimize the distance between the objective and the brain, and the radiolucent implant is made of PEEK, a strong, thermoresistant, and biostable plastic. The design also allows manual repositioning of the coverslip to create a flat imaging window.
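As a sanity check on the field-of-view comparison, the area of a 2-cm-diameter circular window is about 3.1 cm². The ~1 cm² mouse-brain dorsal footprint used below is an assumed round number for illustration, not a measured value:

```python
import math

# Circular imaging window of the PEEK chamber.
window_diameter_cm = 2.0
window_area_cm2 = math.pi * (window_diameter_cm / 2) ** 2

# Assumed dorsal footprint of one mouse brain (~1 cm^2; rough
# illustration figure, not a measurement).
mouse_brain_footprint_cm2 = 1.0
ratio = window_area_cm2 / mouse_brain_footprint_cm2

print(f"window area: {window_area_cm2:.2f} cm^2")        # -> 3.14 cm^2
print(f"mouse-brain footprints covered: {ratio:.1f}")    # -> 3.1
```

Under that assumption, the window indeed spans roughly three mouse-brain footprints side by side, consistent with the comparison in the text.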

The chamber implant includes an engineered silicone mount designed to maintain even pressure of the imaging window on the brain's surface, despite changes when the brain swells or moves within the skull. The mechanical properties of the silicone are tuned to closely resemble those of brain tissue, making it more biomimetic than traditional platinum-cured silicone, and it acts as a cushion against motion. The approach prevents increases in pressure that could lead to neurodegeneration and, at the same time, prevents dural and biofilm undergrowth by blocking the migration of migratory cells. This dynamic pressure maintenance on the brain may be an important component of the method's success. Through the ultra-large field-of-view produced by this implant, the Macknik and Martinez-Conde laboratories have achieved two-photon imaging of >60,000 neurons (the largest two-photon images obtained in any model to date; Chanovas et al., 2019). By regulating pressure while allowing a larger field-of-view, the chamber is expected to enhance recording-window longevity and may prove to be a critical advance in NHP and human brain imaging. This implant is designed to facilitate any of the imaging techniques described in this review.
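The pressure-regulation rationale can be sketched with a simple spring model (all numbers below are assumed, for illustration only): treating the resistance member as a Hookean spring of stiffness k acting over the window area A, a swelling displacement x changes the applied pressure by ΔP = kx/A, so a compliant silicone keeps ΔP small:

```python
# Hypothetical Hookean sketch of the silicone resistance member.
# All numbers are assumed illustration values, not device parameters.
window_area_m2 = 3.14e-4   # ~2 cm diameter window, in m^2
swelling_m = 0.5e-3        # hypothetical 0.5 mm brain swelling

stiffness_n_per_m = {"stiff mount": 5000.0, "soft silicone": 50.0}

# Pressure change from swelling: dP = k * x / A
delta_p_pa = {name: k * swelling_m / window_area_m2
              for name, k in stiffness_n_per_m.items()}

for name, dp in delta_p_pa.items():
    print(f"{name}: pressure change ~ {dp:.0f} Pa")
# stiff mount: ~7962 Pa; soft silicone: ~80 Pa
```

The two-orders-of-magnitude difference is the point: a compliant, tissue-like silicone translates the same swelling into a far smaller pressure change at the brain surface.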

In summary, optogenetic, macroscopic, and microscopic all-optical interrogation techniques have proven successful for manipulating neuronal populations with high spatial and temporal fidelity. Recent advances have overcome many obstacles endemic to using these methods in NHPs. Improvements to hardware and methods have increased the duration of recordings to months or years: time periods relevant to the development of human neural circuits. The resulting images include large numbers of neurons across ultra-large fields-of-view, allowing circuits to be understood in much greater depth than before, within precise functional maps of V1 as well as higher-level visual areas. Each of these advances has enabled new studies that provide information critical to human-relevant translational research, including a path toward the development of novel visual prosthetics.

Footnotes

  • This work was supported by NSF Awards 1523614 and 1734887 to S.L.M. and S.M.C., and by an NSF NeuroNex Technology Hub–Nemonic Award and NEI R01EY029420 to K.J.N.

  • The authors declare no competing financial interests.

  • Correspondence should be addressed to Stephen L. Macknik at macknik@neuralcorrelate.com

References

  1. Adam Y, Kim JJ, Lou S, Zhao Y, Xie ME, Brinks D, Wu H, Mostajo-Radji MA, Kheifets S, Parot V, Chettih S, Williams KJ, Gmeiner B, Farhi SL, Madisen L, Buchanan EK, Kinsella I, Zhou D, Paninski L, Harvey CD, Zeng H, et al. (2019) Voltage imaging and optogenetics reveal behaviour-dependent changes in hippocampal dynamics. Nature 569:413–417. doi:10.1038/s41586-019-1166-7 pmid:31043747
  2. Albright TD, Desimone R, Gross CG (1984) Columnar organization of directionally selective cells in visual area MT of the macaque. J Neurophysiol 51:16–31. doi:10.1152/jn.1984.51.1.16 pmid:6693933
  3. Ayzenshtat I, Meirovithz E, Edelman H, Werner-Reiss U, Bienenstock E, Abeles M, Slovin H (2010) Precise spatiotemporal patterns among visual cortical areas and their relation to visual stimulus processing. J Neurosci 30:11232–11245. doi:10.1523/JNEUROSCI.5177-09.2010 pmid:20720131
  4. Ayzenshtat I, Gilad A, Zurawel G, Slovin H (2012) Population response to natural images in the primary visual cortex encodes local stimulus attributes and perceptual processing. J Neurosci 32:13971–13986. doi:10.1523/JNEUROSCI.1596-12.2012 pmid:23035105
  5. Bartfeld E, Grinvald A (1992) Relationships between orientation-preference pinwheels, cytochrome oxidase blobs, and ocular-dominance columns in primate striate cortex. Proc Natl Acad Sci U S A 89:11905–11909. doi:10.1073/pnas.89.24.11905 pmid:1465416
  6. Berry MH, Holt A, Salari A, Veit J, Visel M, Levitz J, Aghi K, Gaub BM, Sivyer B, Flannery JG, Isacoff EY (2019) Restoration of high-sensitivity and adapting vision with a cone opsin. Nat Commun 10:1221. doi:10.1038/s41467-019-09124-x pmid:30874546
  7. Blasdel GG, Salama G (1986) Voltage-sensitive dyes reveal a modular organization in monkey striate cortex. Nature 321:579–585. doi:10.1038/321579a0 pmid:3713842
  8. Born RT, Trott AR, Hartmann TS (2015) Cortical magnification plus cortical plasticity equals vision? Vision Res 111:161–169. doi:10.1016/j.visres.2014.10.002 pmid:25449335
  9. Chanovas J, Caballero O, Ledo M, Yazdan-Shahmorad A, Cheng Y-T, Bizimana LA, Callaway E, Seidemann E, Reynolds J, Avery M, Li P, Nandy A, Tang S, Chen Y, Vaziri A, Nöbauer T, Martinez-Conde S, Nishimura N, Schaffer C, Macknik SL (2019) Ultra-large field-of-view two-photon imaging of primary visual cortex foveal region in non-human primates. Paper presented at the BRAIN Initiative Investigators Meeting, Washington, DC, April.
  10. Chen Y, Seidemann E (2012) Attentional modulations related to spatial gating but not to allocation of limited resources in primate V1. Neuron 74:557–566. doi:10.1016/j.neuron.2012.03.033 pmid:22578506
  11. Chen Y, Palmer CR, Seidemann E (2012) The relationship between voltage-sensitive dye imaging signals and spiking activity of neural populations in primate V1. J Neurophysiol 107:3281–3295. doi:10.1152/jn.00977.2011 pmid:22422999
  12. Cohen LB, Salzberg BM, Davila HV, Ross WN, Landowne D, Waggoner AS, Wang CH (1974) Changes in axon fluorescence during activity: molecular probes of membrane potential. J Membr Biol 19:1–36. doi:10.1007/BF01869968 pmid:4431037
  13. DeAngelis GC, Newsome WT (1999) Organization of disparity-selective neurons in macaque area MT. J Neurosci 19:1398–1415. doi:10.1523/JNEUROSCI.19-04-01398.1999 pmid:9952417
  14. Gilad A, Slovin H (2015) Population responses in V1 encode different figures by response amplitude. J Neurosci 35:6335–6349. doi:10.1523/JNEUROSCI.0971-14.2015 pmid:25904787
  15. Gilad A, Meirovithz E, Slovin H (2013) Population responses to contour integration: early encoding of discrete elements and late perceptual grouping. Neuron 78:389–402. doi:10.1016/j.neuron.2013.02.013 pmid:23622069
  16. Gilad A, Pesoa Y, Ayzenshtat I, Slovin H (2014) Figure-ground processing during fixational saccades in V1: indication for higher-order stability. J Neurosci 34:3247–3252. doi:10.1523/JNEUROSCI.4375-13.2014 pmid:24573283
  17. Gilad A, Oz R, Slovin H (2017) Uncovering the spatial profile of contour integration from fixational saccades: evidence for widespread processing in V1. Cereb Cortex 27:5261–5273. doi:10.1093/cercor/bhw305 pmid:28334181
  18. Gong Y, Huang C, Li JZ, Grewe BF, Zhang Y, Eismann S, Schnitzer MJ (2015) High-speed recording of neural spikes in awake mice and flies with a fluorescent voltage sensor. Science 350:1361–1366. doi:10.1126/science.aab0810 pmid:26586188
  19. Grinvald A, Hildesheim R (2004) VSDI: a new era in functional imaging of cortical dynamics. Nat Rev Neurosci 5:874–875. doi:10.1038/nrn1536 pmid:15496865
  20. He M, Huang ZJ (2018) Genetic approaches to access cell types in mammalian nervous systems. Curr Opin Neurobiol 50:109–118. doi:10.1016/j.conb.2018.02.003 pmid:29471215
  21. Horton NG, Wang K, Kobat D, Clark CG, Wise FW, Schaffer CB, Xu C (2013) In vivo three-photon microscopy of subcortical structures within an intact mouse brain. Nat Photonics 7:205–209. doi:10.1038/nphoton.2012.336 pmid:24353743
  22. Iacaruso MF, Gasler IT, Hofer SB (2017) Synaptic organization of visual space in primary visual cortex. Nature 547:449–452. doi:10.1038/nature23019 pmid:28700575
  23. Jancke D, Chavane F, Naaman S, Grinvald A (2004) Imaging cortical correlates of illusion in early visual cortex. Nature 428:423–426. doi:10.1038/nature02396 pmid:15042090
  24. Jin J, Wang Y, Swadlow HA, Alonso JM (2011) Population receptive fields of ON and OFF thalamic inputs to an orientation column in visual cortex. Nat Neurosci 14:232–238. doi:10.1038/nn.2729 pmid:21217765
  25. Ju N, Jiang R, Macknik SL, Martinez-Conde S, Tang S (2018) Long-term all-optical interrogation of cortical neurons in awake-behaving nonhuman primates. PLoS Biol 16:e2005839. doi:10.1371/journal.pbio.2005839 pmid:30089111
  26. Ju N, Li Y, Liu F, Jiang H, Macknik SL, Martinez-Conde S, Tang S (2019) Spatiotemporal functional organization of excitatory synaptic inputs onto macaque V1 neurons. bioRxiv 558163. doi:10.1101/558163
  27. Kleindienst T, Winnubst J, Roth-Alpermann C, Bonhoeffer T, Lohmann C (2011) Activity-dependent clustering of functional synaptic inputs on developing hippocampal dendrites. Neuron 72:1012–1024. doi:10.1016/j.neuron.2011.10.015 pmid:22196336
  28. Kremkow J, Jin J, Wang Y, Alonso JM (2016) Principles underlying sensory map topography in primary visual cortex. Nature 533:52–57. doi:10.1038/nature17936 pmid:27120164
  29. Kuang H, Wang PL, Tsien JZ (2009) Towards transgenic primates: what can we learn from mouse genetics? Sci China C Life Sci 52:506–514. doi:10.1007/s11427-009-0082-8 pmid:19557327
  30. Lee KS, Huang X, Fitzpatrick D (2016) Topology of ON and OFF inputs in visual cortex enables an invariant columnar architecture. Nature 533:90–94. doi:10.1038/nature17941 pmid:27120162
  31. Li M, Liu F, Jiang H, Lee TS, Tang S (2017) Long-term two-photon imaging in awake macaque monkey. Neuron 93:1049–1057.e3. doi:10.1016/j.neuron.2017.01.027 pmid:28215557
  32. Macknik SL, Haglund MM (1999) Optical images of visible and invisible percepts in the primary visual cortex of primates. Proc Natl Acad Sci U S A 96:15208–15210. doi:10.1073/pnas.96.26.15208 pmid:10611363
  33. Macknik SL, Martinez-Conde S, Haglund MM (2000) The role of spatiotemporal edges in visibility and visual masking. Proc Natl Acad Sci U S A 97:7556–7560. doi:10.1073/pnas.110142097 pmid:10852945
  34. Malach R, Tootell RB, Malonek D (1994) Relationship between orientation domains, cytochrome oxidase stripes, and intrinsic horizontal connections in squirrel monkey area V2. Cereb Cortex 4:151–165. doi:10.1093/cercor/4.2.151 pmid:8038566
  35. Meirovithz E, Ayzenshtat I, Werner-Reiss U, Shamir I, Slovin H (2012) Spatiotemporal effects of microsaccades on population activity in the visual cortex of monkeys during fixation. Cereb Cortex 22:294–307. doi:10.1093/cercor/bhr102 pmid:21653284
  36. Michel MM, Chen Y, Geisler WS, Seidemann E (2013) An illusion predicted by V1 population activity implicates cortical topography in shape perception. Nat Neurosci 16:1477–1483. doi:10.1038/nn.3517 pmid:24036915
  37. Nauhaus I, Nielsen KJ (2014) Building maps from maps in primary visual cortex. Curr Opin Neurobiol 24:1–6. doi:10.1016/j.conb.2013.08.007 pmid:24492071
  38. Nauhaus I, Nielsen KJ, Disney AA, Callaway EM (2012) Orthogonal micro-organization of orientation and spatial frequency in primate primary visual cortex. Nat Neurosci 15:1683–1690. doi:10.1038/nn.3255 pmid:23143516
  39. Nauhaus I, Nielsen KJ, Callaway EM (2016) Efficient receptive field tiling in primate V1. Neuron 91:893–904. doi:10.1016/j.neuron.2016.07.015 pmid:27499086
  40. Obermayer K, Blasdel GG (1993) Geometry of orientation and ocular dominance columns in monkey striate cortex. J Neurosci 13:4114–4129. doi:10.1523/JNEUROSCI.13-10-04114.1993 pmid:8410181
  41. Okano H, Mitra P (2015) Brain-mapping projects using the common marmoset. Neurosci Res 93:3–7. doi:10.1016/j.neures.2014.08.014 pmid:25264372
  42. Omer DB, Hildesheim R, Grinvald A (2013) Temporally-structured acquisition of multidimensional optical imaging data facilitates visualization of elusive cortical representations in the behaving monkey. Neuroimage 82:237–251. doi:10.1016/j.neuroimage.2013.05.045 pmid:23689017
  43. Omer DB, Fekete T, Ulchin Y, Hildesheim R, Grinvald A (2019) Dynamic patterns of spontaneous ongoing activity in the visual cortex of anesthetized and awake monkeys are different. Cereb Cortex 29:1291–1304. doi:10.1093/cercor/bhy099 pmid:29718200
  44. Paraskevoudi N, Pezaris JS (2019) Eye movement compensation and spatial updating in visual prosthetics: mechanisms, limitations and future directions. Front Syst Neurosci 12:73. doi:10.3389/fnsys.2018.00073 pmid:30774585
  45. Petersen CC, Grinvald A, Sakmann B (2003) Spatiotemporal dynamics of sensory responses in layer 2/3 of rat barrel cortex measured in vivo by voltage-sensitive dye imaging combined with whole-cell voltage recordings and neuron reconstructions. J Neurosci 23:1298–1309. doi:10.1523/JNEUROSCI.23-04-01298.2003 pmid:12598618
  46. Reynaud A, Takerkart S, Masson GS, Chavane F (2011) Linear model decomposition for voltage-sensitive dye imaging signals: application in awake behaving monkey. Neuroimage 54:1196–1210. doi:10.1016/j.neuroimage.2010.08.041 pmid:20800686
  47. Reynaud A, Masson GS, Chavane F (2012) Dynamics of local input normalization result from balanced short- and long-range intracortical interactions in area V1. J Neurosci 32:12558–12569. doi:10.1523/JNEUROSCI.1618-12.2012 pmid:22956845
  48. Roe AW, Chelazzi L, Connor CE, Conway BR, Fujita I, Gallant JL, Lu H, Vanduffel W (2012) Toward a unified theory of visual area V4. Neuron 74:12–29. doi:10.1016/j.neuron.2012.03.011 pmid:22500626
  49. Salzberg BM, Davila HV, Cohen LB (1973) Optical recording of impulses in individual neurones of an invertebrate central nervous system. Nature 246:508–509. doi:10.1038/246508a0 pmid:4357630
  50. Scholl B, Wilson DE, Fitzpatrick D (2017) Local order within global disorder: synaptic architecture of visual space. Neuron 96:1127–1138.e4. doi:10.1016/j.neuron.2017.10.017 pmid:29103806
  51. Shoham D, Glaser DE, Arieli A, Kenet T, Wijnbergen C, Toledo Y, Hildesheim R, Grinvald A (1999) Imaging cortical dynamics at high spatial and temporal resolution with novel blue voltage-sensitive dyes. Neuron 24:791–802. doi:10.1016/S0896-6273(00)81027-2 pmid:10624943
  52. Slovin H, Arieli A, Hildesheim R, Grinvald A (2002) Long-term voltage-sensitive dye imaging reveals cortical dynamics in behaving monkeys. J Neurophysiol 88:3421–3438. doi:10.1152/jn.00194.2002 pmid:12466458
  53. Snyder BR, Chan AW (2018) Progress in developing transgenic monkey model for Huntington's disease. J Neural Transm 125:401–417. doi:10.1007/s00702-017-1803-y pmid:29127484
  54. Sterkin A, Lampl I, Ferster D, Grinvald A, Arieli A (1998) Real time optical imaging in cat visual cortex exhibits high similarity to intracellular activity. Neurosci Lett 51:S41.
  55. Tanigawa H, Lu HD, Roe AW (2010) Functional organization for color and orientation in macaque V4. Nat Neurosci 13:1542–1548. doi:10.1038/nn.2676 pmid:21076422
  56. Tasaki I, Watanabe A, Sandlin R, Carnay L (1968) Changes in fluorescence, turbidity, and birefringence associated with nerve excitation. Proc Natl Acad Sci U S A 61:883–888. doi:10.1073/pnas.61.3.883 pmid:4301149
  57. Ts'o DY, Frostig RD, Lieke EE, Grinvald A (1990) Functional organization of primate visual cortex revealed by high resolution optical imaging. Science 249:417–420. doi:10.1126/science.2165630 pmid:2165630
  58. Wang T, Ouzounov DG, Wu C, Horton NG, Zhang B, Wu CH, Zhang Y, Schnitzer MJ, Xu C (2018) Three-photon imaging of mouse brain structure and function through the intact skull. Nat Methods 15:789–792. doi:10.1038/s41592-018-0115-y pmid:30202059
  59. Wang Y, Fujita I, Murayama Y (2000) Neuronal mechanisms of selectivity for object features revealed by blocking inhibition in inferotemporal cortex. Nat Neurosci 3:807–813. doi:10.1038/77712 pmid:10903574
  60. Yang HH, St-Pierre F, Sun X, Ding X, Lin MZ, Clandinin TR (2016) Subcellular imaging of voltage and calcium signals reveals neural processing in vivo. Cell 166:245–257. doi:10.1016/j.cell.2016.05.031 pmid:27264607
  61. Zurawel G, Ayzenshtat I, Zweig S, Shapley R, Slovin H (2014) A contrast and surface code explains complex responses to black and white stimuli in V1. J Neurosci 34:14388–14402. doi:10.1523/JNEUROSCI.0848-14.2014 pmid:25339751
  62. Zurawel G, Shamir I, Slovin H (2016) Reconstruction of shape contours from V1 activity at high resolution. Neuroimage 125:1005–1012. doi:10.1016/j.neuroimage.2015.10.072 pmid:26518630
  63. Zweig S, Zurawel G, Shapley R, Slovin H (2015) Representation of color surfaces in V1: edge enhancement and unfilled holes. J Neurosci 35:12103–12115. doi:10.1523/JNEUROSCI.1334-15.2015 pmid:26338322
Advanced Circuit and Cellular Imaging Methods in Nonhuman Primates
Stephen L. Macknik, Robert G. Alexander, Olivya Caballero, Jordi Chanovas, Kristina J. Nielsen, Nozomi Nishimura, Chris B. Schaffer, Hamutal Slovin, Amit Babayoff, Ravid Barak, Shiming Tang, Niansheng Ju, Azadeh Yazdan-Shahmorad, Jose-Manuel Alonso, Eugene Malinskiy, Susana Martinez-Conde
Journal of Neuroscience 16 October 2019, 39 (42) 8267-8274; DOI: 10.1523/JNEUROSCI.1168-19.2019

Keywords

  • two-photon microscopy
  • voltage-sensitive dye imaging
  • adeno-associated virus
  • optogenetics
  • cortical mapping
  • prosthetic vision

Copyright © 2025 by the Society for Neuroscience.
JNeurosci Online ISSN: 1529-2401
