Abstract
Novel genetically encoded tools and advanced microscopy methods have revolutionized neural circuit analyses in insects and rodents over the last two decades. Whereas numerous technical hurdles originally barred these methodologies from success in nonhuman primates (NHPs), current research has started to overcome those barriers. In some cases, methodological advances developed with NHPs have even surpassed their precursors. One such advance includes new ultra-large imaging windows on NHP cortex, which are larger than the entire rodent brain and allow unprecedented analysis of ultra-large-scale circuits. NHP imaging chambers now remain patent for periods longer than a mouse's lifespan, allowing for long-term all-optical interrogation of identified circuits and neurons over timeframes that are relevant to human cognitive development. Here we present some recent imaging advances brought forth by research teams using macaques and marmosets. These include technical developments in optogenetics; voltage-, calcium- and glutamate-sensitive dye imaging; two-photon and wide-field optical imaging; viral delivery; and genetic expression of indicators and light-activated proteins that result in the visualization of tens of thousands of identified cortical neurons in NHPs. We describe a subset of the many recent advances in circuit and cellular imaging tools in NHPs, focusing here primarily on the research presented during the corresponding mini-symposium at the 2019 Society for Neuroscience annual meeting.
- two-photon microscopy
- voltage-sensitive dye imaging
- adeno-associated virus
- optogenetics
- cortical mapping
- prosthetic vision
Introduction
Nonhuman primates (NHPs) are the most relevant animal models of human brain function available. Rodent and insect models have grown enormously in utility in the last two decades, due in large part to successful advances in genetic neural circuit analysis tools. It is imperative, however, to translate the utility of these advances to human therapeutics, which will often require preclinical testing in NHP brains. NHPs are also likely to be the most suitable model for the study of many human-relevant brain functions, such as those involving foveal vision, fine manual motor control and hand-eye coordination, finger sensation, higher cognition, and many psychiatric and neurological diseases. The same neural circuit analysis tools that were developed in insect and rodent models are now beginning to translate successfully to NHPs, including powerful genetic tools (Kuang et al., 2009; Okano and Mitra, 2015; He and Huang, 2018; Snyder and Chan, 2018), providing novel and exciting paths to both basic discovery and preclinical testing.
Here we describe a fraction of the many recent advances in circuit and cellular imaging tools and findings in NHPs. The field of NHP imaging has expanded in the past few years, and here we focus on the recent research, and on the future research directions of participants in the corresponding mini-symposium hosted at the 2019 Annual Meeting of the Society for Neuroscience. Thus, we review work with voltage-sensitive dyes (Slovin laboratory), all-optical interrogation with two-photon imaging, using both dyes (Nielsen) and transfected viral constructs for all-optical interrogation (Tang, Macknik, and Martinez-Conde laboratories), and new hardware allowing for ultra-wide field-of-view imaging (Macknik and Martinez-Conde laboratories). In addition to these technical advances, we discuss remaining obstacles to NHP research and approaches to overcoming them.
Long-term monitoring and optogenetic manipulation of target neurons have been especially challenging in NHPs. The application of these methods to in-depth analyses of neural circuits has required the field to overcome multiple technical obstacles, from achieving adequate expression of genetic calcium indicators in NHPs when previously tested rodent methods failed to translate easily to primates, to designing imaging implants suitable for maintaining long-term brain health, to minimizing motion artifacts during imaging of brains at a spatial scale that is relevant to humans. Given the large size of functional maps in NHPs, such research has required the development of microscopes with larger fields-of-view, as well as three-photon imaging to achieve greater imaging depths (Horton et al., 2013; T. Wang et al., 2018). Great strides have been made in clearing several of these hurdles; thus, NHP imaging is now possible for longer durations, and with a greater field-of-view than ever before.
Using voltage-sensitive dye imaging to encode and reconstruct visual stimuli in area V1
Voltage-sensitive dye imaging (VSDI) is the original imaging technique for exploring neuronal population voltage dynamics in behaving monkeys (Slovin et al., 2002; Grinvald and Hildesheim, 2004; Fig. 1A). The main advantages of VSDI over other extant methods remain its millisecond-scale temporal resolution (i.e., faster than the calcium signals measured using two-photon imaging) combined with large fields-of-view (typically 1–4 cm²) at mesoscale resolution (50²–200² μm²/pixel; >10,000 pixels/frame). Thus, the application of VSDI in behaving monkeys enables the study of the spatiotemporal patterns of cortical populations (rather than of single cells, which is enabled by two-photon imaging) by using organic voltage-sensitive dyes that transduce the membrane potential of neurons into fluorescence (Tasaki et al., 1968; Salzberg et al., 1973; Cohen et al., 1974; Shoham et al., 1999). The in vivo VSD signal is highly correlated with the membrane potential, emphasizing subthreshold fluctuations (Sterkin et al., 1998; Petersen et al., 2003) but also reflecting spiking activity (Jancke et al., 2004; Ayzenshtat et al., 2010; Reynaud et al., 2011; Chen et al., 2012). Several groups have used this method to investigate circuits at the scale of entire functional columns during cortical processing of visual stimuli in area V1 of behaving monkeys (Slovin et al., 2002; Reynaud et al., 2012; Michel et al., 2013; Omer et al., 2013, 2019). Importantly, VSDI can be used to explore cortical correlates of higher visual functions in behaving NHPs, including perceptual grouping and figure-ground segregation (Ayzenshtat et al., 2010, 2012; Gilad et al., 2013; Gilad and Slovin, 2015), spatial attention (Chen and Seidemann, 2012), and the influence of saccades and microsaccades on V1 neural responses (Meirovithz et al., 2012; Gilad et al., 2014, 2017).
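In practice, VSD responses are typically expressed as the fractional fluorescence change of each pixel relative to a pre-stimulus baseline (ΔF/F). The following is a minimal, generic sketch of that computation using synthetic data; the function name, frame counts, and signal amplitudes are illustrative and not taken from the studies cited above.

```python
import numpy as np

def delta_f_over_f(frames, baseline_frames):
    """Convert an imaging stack (time, y, x) to fractional fluorescence
    change relative to the mean of the pre-stimulus baseline frames."""
    baseline = frames[:baseline_frames].mean(axis=0)  # per-pixel F0
    return (frames - baseline) / baseline

# Synthetic example: 100 frames of a 64x64 window, with a simulated
# evoked response appearing after frame 20.
rng = np.random.default_rng(0)
stack = 1000.0 + rng.normal(0.0, 1.0, (100, 64, 64))
stack[20:] += 5.0  # simulated stimulus-evoked fluorescence increase
dff = delta_f_over_f(stack, baseline_frames=20)
print(dff[50].mean())  # ≈ 0.005, i.e., a ~0.5% ΔF/F response
```

Real VSDI pipelines add further steps (heartbeat and respiration artifact removal, blank-trial subtraction), but the baseline-normalization step above is the common core.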
Recent studies have begun using genetically encoded voltage indicators in vivo (Gong et al., 2015; Yang et al., 2016; Adam et al., 2019), but their application in NHPs remains a challenge. The remainder of this section focuses on discoveries made through the successful use of VSDI in behaving NHPs.
One of the visual system's main tasks is to combine edges and surfaces into perceptual groups, to create representations of single or multiple objects that are segregated from other objects and the background. The Slovin laboratory has applied VSDI to measure cortical responses to visual stimuli and analyze neural circuits in V1. They presented visual objects defined by either luminance or color while recording with VSDI in V1 of NHPs to test the hypothesis that visual perception of uniform surfaces is mediated by an isomorphic, filled-in representation (Zurawel et al., 2014, 2016; Zweig et al., 2015). Contrary to this hypothesis, the experiments revealed that neural responses to both color and luminance surfaces were similarly edge-enhanced: soon after stimulus onset, VSDI pixels at the spatial locations occupied by edges showed greater voltage changes than those responding to the middle regions of the surfaces. Within cortical regions corresponding to achromatic squares of 1° of visual angle and larger, the surface's edges were more strongly activated than its center. Following this early period, responses inside the surface's edges increased slowly, partially "filling-in" the V1 region corresponding to the square's center. Surprisingly, responses to color squares remained edge-enhanced throughout the stimulus, with no filling-in of the center. These results imply that chromatic and achromatic surfaces are represented differently in V1 and that uniform filled-in representations of surfaces in V1 are not required for the perception of uniform color surfaces.
A different set of studies investigated the neural representation of an object (i.e., the figure and its background). Gilad et al. (2013, 2017) found that cortical responses mediating figure-ground segregation and contour integration comprise divergent responses, including both figure enhancement and background suppression. Further investigation of a more realistic natural scene with few objects suggested that separate objects are labeled by different response amplitudes (Gilad and Slovin, 2015).
Finally, to test whether V1 can support high-resolution readout of visual stimuli, the Slovin group attempted to reconstruct, at sub-degree resolution, the visual contours evoked by simple shapes from the VSD signal (Zurawel et al., 2016; Fig. 1B). By applying an inverse version of a simplified bottom-up model to neuronal responses, they were able to reconstruct shape contours that were highly similar to the original stimuli. These results, together with stimulus reconstruction at the single-trial level (Zurawel et al., 2016), suggest that V1 can be an important constituent in the detailed internal representation of visual experiences, and lay the basis for future cortical artificial vision.
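The inverse-model idea can be illustrated with a toy linear version: if cortical spread is approximated as a blur of the stimulus, then a regularized inverse of that forward model (here, Wiener filtering) recovers the stimulus pattern from the response map. This is a simplified stand-in for the published model, with every parameter invented for illustration.

```python
import numpy as np

def wiener_deconvolve(observed, psf, noise_power=0.01):
    """Invert a linear blur model G = H * X via Wiener filtering in the
    Fourier domain: X_hat = conj(H) G / (|H|^2 + noise_power)."""
    H = np.fft.fft2(psf, s=observed.shape)
    G = np.fft.fft2(observed)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + noise_power)))

# Toy forward model: a point stimulus blurred by a 3x3 "cortical spread."
stimulus = np.zeros((32, 32))
stimulus[16, 16] = 1.0
psf = np.ones((3, 3)) / 9.0
observed = np.real(np.fft.ifft2(np.fft.fft2(stimulus)
                                * np.fft.fft2(psf, s=(32, 32))))
recovered = wiener_deconvolve(observed, psf)
peak = np.unravel_index(np.argmax(recovered), recovered.shape)  # peak at (16, 16)
```

The regularization term (`noise_power`) keeps the inverse stable at frequencies where the forward model transmits little energy, which is the essential difficulty any real inverse reconstruction must handle.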
Using dye-based two-photon imaging to investigate the fine-scale organization of area V4
Neural circuits in mid-level visual areas such as V4 implement key algorithms in visual processing: they transform the image-level representations of early visual stages into explicit, compact, and stable representations of objects and scenes found in late pathway stages. Previous extracellular recording studies have identified some of the fundamental V4 tuning functions. Crucially, in addition to tuning for orientation and spatial frequency as in earlier areas, V4 neurons are tuned for more complex features of contour fragments, such as their curvature and object-relative position (Roe et al., 2012). Yet, much remains to be learned about processing of shape in V4. Two-photon calcium imaging promises to be an important tool for this purpose, and more generally for the study of higher-level visual areas.
One area of investigation facilitated by two-photon imaging and calcium-sensitive fluorescent dyes is an in-depth exploration of the functional micro-organization of V4 and the stimulus space it encodes. In NHPs, orderly feature maps have been observed in a number of visual areas, including V1 (Blasdel and Salama, 1986; Ts'o et al., 1990; Bartfeld and Grinvald, 1992; Obermayer and Blasdel, 1993; Malach et al., 1994; Nauhaus et al., 2012, 2016) and higher-order areas like MT (Albright et al., 1984; DeAngelis and Newsome, 1999). These maps exist for feature dimensions central to processing in an area, such as orientation in V1, or motion direction in MT. In V4, maps for orientation and color have been reported (Tanigawa et al., 2010), but the functional organization of other tuning properties remains unknown. Two-photon imaging allows simultaneous measurements of tuning functions across many neurons with known spatial relationships, and it is therefore ideal for mapping functional organization. Because maps in primate visual cortex appear to exist for the central processing dimensions, V4 will likely contain orderly feature maps for contour shape parameters (Fig. 2). The precise layout of these maps will offer a number of important clues about V4 circuit organization: first, certain contour fragments are more likely to co-occur in natural shapes than others. If the organization of V4 feature maps reflects these statistical dependencies, it will indicate a mechanism by which the visual system rapidly links common contour configurations. Second, feature maps may represent a fundamental substrate for input integration (Nauhaus and Nielsen, 2014): this is best studied in V1, for which a tight relationship between spatial clustering of inputs and emerging tuning properties has been observed (Jin et al., 2011). Feature maps might similarly aid integration of related inputs in V4.
Last, as demonstrated by the precise alignment of V1 feature maps with each other and with the retinotopic map (Nauhaus et al., 2016), feature maps provide mechanisms for efficiently achieving complete coverage of multiple tuning dimensions, which presumably must occur in V4 just as it does in V1.
More generally, two-photon imaging promises to be a useful tool for studying the encoding of the high-dimensional stimulus spaces encountered in higher-level visual areas. To adequately capture tuning functions in these areas, responses to many stimuli need to be measured. Yet, with traditional recording techniques, recording durations for single neurons have been relatively limited. Because chronic two-photon imaging allows tracking of individual identified neurons across sessions, the recording duration per neuron is drastically increased, allowing adequate sampling even of very large stimulus spaces.
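As a concrete illustration of extracting tuning from imaged populations, a neuron's preferred orientation can be estimated from its responses to a small set of gratings by a circular vector average. This is a generic analysis sketch, not code from the studies cited; the stimulus set and response values are invented.

```python
import numpy as np

def preferred_orientation(orientations_deg, responses):
    """Estimate preferred orientation (0-180 deg) by the circular vector
    average of responses; orientation is 180-deg periodic, so angles are
    doubled before averaging and halved afterward."""
    theta = 2.0 * np.deg2rad(np.asarray(orientations_deg, dtype=float))
    z = np.sum(np.asarray(responses, dtype=float) * np.exp(1j * theta))
    return (np.rad2deg(np.angle(z)) / 2.0) % 180.0

# Example: a hypothetical cell responding most strongly to 45-deg gratings.
oris = [0, 45, 90, 135]
resp = [1.0, 4.0, 1.0, 0.5]
pref = preferred_orientation(oris, resp)
print(pref)  # ≈ 45.0
```

Applying such an estimator to every cell in a chronically imaged field, with each cell's known spatial coordinates, is what turns single-neuron tuning measurements into the kind of feature map discussed above.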
Long-term all-optical interrogation of neural circuits in macaque visual cortex
Recent research has achieved high-quality two-photon imaging in awake behaving macaques, capturing images with many neurons over long periods of time (Macknik and Haglund, 1999; Macknik et al., 2000; Ju et al., 2018, 2019). These studies reveal that various two-photon imaging techniques can be effectively applied in different brain areas, with different molecular tools, to study different cognitive functions. One example is the work by Li et al. (2017), who achieved long-term two-photon imaging in awake behaving macaques and were thus able to monitor the activities of thousands of cortical neurons at single-cell resolution for up to several months. Similar approaches could be used to investigate the neuronal population coding of working memory in prefrontal cortex or the neural mechanisms of object recognition in inferior temporal cortex (IT; i.e., in a face patch revealed by two-photon imaging).
High-resolution dendritic imaging has also been accomplished in awake macaque monkeys: Ju et al. (2019) functionally mapped excitatory synaptic inputs onto V1 neuronal dendrites with high spatial and temporal resolution with an iGluSnFR (green glutamate-indicator) sensor. This study found that synaptic inputs near one another tended to share similar functional properties in macaque V1, consistent with previous findings of spatial clustering in lower-mammal cortex (Kleindienst et al., 2011; Iacaruso et al., 2017; Scholl et al., 2017; Fig. 3A). Critically, excitatory synaptic inputs were highly scattered in multidimensional feature space, providing a potential substrate for local feature integration on dendritic branches.
Recently, the Tang laboratory achieved coexpression of iGluSnFR together with RCaMP (red Calcium-indicator) in each individual neuron, enabling the simultaneous monitoring of excitatory synaptic inputs and somatic outputs of individual neurons (see Fig. 3B for coexpression in an IT neuron; Fig. 3C for a V1 neuron). Interestingly, single V1 neurons seemed to receive a mix of excitatory inputs (with various preferred spatial frequencies and colors, Fig. 3C), whereas soma output was not mixed, preferring a specific orientation and spatial frequency. This effect might be explained by GABAergic inhibition by interneurons (Y. Wang et al., 2000). Because the synaptic inputs onto a neuron represent the presynaptic neurons' outputs, and calcium-imaging of the dendritic activity reveals the local response, these dendritic imaging techniques might reveal the circuit computations in dendrites, providing a deeper understanding of dendritic integration.
Ongoing and future directions: toward creating a novel visual prosthetic
Recent and ongoing imaging advances in NHPs, including those described, open new paths for the development of cortical prosthetics that may restore foveal vision in blind patients. The Macknik and Martinez-Conde laboratories previously conducted the first ultra-wide-field optical imaging of a visual illusory response (Macknik and Haglund, 1999) and of a stationary object's edge in V1 of NHPs (Macknik et al., 2000). Using precisely targeted optogenetic activation, a cortical prosthetic might optically stimulate spatially localized lateral geniculate nucleus (LGN) synaptic boutons, transfected with light-sensitive proteins and projecting into V1 layer 4, in a pattern that mimics naturalistic visual input.
The LGN input layer to V1 is the only thalamic connection to the cortex in which the topographical map of connectivity is known with synaptic precision (Kremkow et al., 2016; Lee et al., 2016). In this retinotopic map, four input modules, one ON input and one OFF input originating from each of the two eyes, constitute all of the fundamental projections that give rise to Hubel and Wiesel's hypercolumn in V1 and beyond. Because each hypercolumn conveys all visual information in a given retinotopic region of the retino-geniculo-striatal pathway, one may theoretically control vision by controlling the inputs from the LGN. Further, because all long-range connections from the LGN to V1 are glutamatergic, optogenetic targeting of these inputs would be free from unwanted coactivation of inhibitory neurons (a common problem in electrode-based prosthetic devices, which cannot isolate excitatory from inhibitory activation and thus result in diminished contrast perception).
Because prosthetic devices can only succeed in driving naturalistic stimulation when they account for rapidly changing cortical activity and response conditions (Born et al., 2015; Berry et al., 2019; Paraskevoudi and Pezaris, 2019), a successful system might integrate a real-time cortical readout mechanism to continually assess and provide feedback to modify stimulation levels, just as the natural visual system does. Then, one should be able to read out the resultant activity from a multicolored array of bioluminescent V1 calcium responses with single-cell resolution from a one-photon camera, for real-time feedback control. Oculomotor effects could be accounted for by tracking eye movements and adjusting the correlated inputs from the camera's field-of-view in real-time (just as the natural retina does).
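The feedback principle described above can be sketched as a simple proportional controller that nudges stimulation toward a target readout level. All names, units, the gain value, and the linear cortex model below are invented for illustration; a real system would use a calibrated, nonlinear response model and the bioluminescent calcium readout as its measurement.

```python
def update_stimulation(led_power, measured_activity, target_activity, gain=0.1):
    """One step of a proportional feedback controller: adjust LED drive
    toward the level that evokes the target cortical response, clamped
    to the [0, 1] drive range. All quantities are in arbitrary units."""
    error = target_activity - measured_activity
    return max(0.0, min(1.0, led_power + gain * error))

# Toy closed loop: cortical response modeled as linear in LED power.
power, target = 0.2, 0.6
for _ in range(50):
    response = 1.0 * power  # hypothetical linear cortex model
    power = update_stimulation(power, response, target)
print(round(power, 3))  # ≈ 0.6 after convergence
```

The essential point is the loop structure, stimulate, read out, compare to target, adjust, which mirrors how the natural visual system continually regulates its own gain.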
NHP imaging work in the Macknik and Martinez-Conde laboratories has been directed toward the development of such a device. The planned system, called the Optogenetic Brain System (OBServe), is designed to function by optimally activating visual responses in V1 from a coplanar LED array/video camera (Fig. 4).
The OBServe approach follows from the principle that if the LGN input modules are stimulated in the same pattern as natural vision, the recipient should perceive naturalistic prosthetic vision. Developing the necessary technologies for this approach will require precise maps of visual circuits and long-duration recordings, which may be realized in the short term in light of recent imaging advances such as those described earlier. Yet a significant challenge is that such visual cortical mapping will need to be developed for blind patients, whereas cortical mapping has only been achieved to date in sighted individuals through forward modeling (i.e., mapping responses to visual stimuli). To address this problem, the Macknik and Martinez-Conde laboratories have developed an inverse-modeling approach, inspired by research from the Slovin laboratory described above (Fig. 1), to map V1 using prosthetic activation. The idea is that, because coactivated ON/OFF inputs within the same hypercolumn will null each other, one may establish which ON/OFF columns are corresponding pairs by activating them prosthetically and determining which inputs cancel each other. To test this hypothesis, precise forwardly modeled maps of cortical orientation, ocular dominance, ON/OFF polarity, and retinotopy will be created in NHPs (using the advanced two-photon imaging techniques developed in the Tang laboratory) and compared with the results from the inverse-mapping methods in the same tissue.
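The nulling logic can be sketched abstractly: coactivate candidate ON/OFF site pairs and flag those whose joint response cancels, taking cancellation as evidence of a shared hypercolumn. The response model below is a toy stand-in; in the real experiment the response would come from the imaging readout, and site identities would be cortical coordinates rather than integers.

```python
def find_null_pairs(response_fn, on_sites, off_sites, threshold=0.1):
    """Coactivate each ON site with each OFF site and return the pairs
    whose joint response magnitude falls below threshold (i.e., nulls)."""
    pairs = []
    for on in on_sites:
        for off in off_sites:
            if abs(response_fn(on, off)) < threshold:
                pairs.append((on, off))
    return pairs

# Toy model: ON and OFF inputs cancel only within the same hypercolumn.
def toy_response(on, off):
    return 0.0 if on == off else 1.0

print(find_null_pairs(toy_response, [0, 1, 2], [0, 1, 2]))
# [(0, 0), (1, 1), (2, 2)]
```

An exhaustive pairwise search scales quadratically with the number of sites, so a practical mapping procedure would presumably restrict candidate pairs using retinotopic priors from the forward-modeled maps.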
Implant for ultra-large field-of-view for mesoscopic imaging in primates
The first component of OBServe to present an engineering challenge was the cortical imaging chamber technology for NHPs. The Macknik and Martinez-Conde laboratories have designed and produced a printable PEEK imaging chamber that enables long-duration recordings with an ultra-large field-of-view (following from methods developed in the Tang laboratory for the relatively small fields-of-view necessary for microscopic two-photon circuit analysis; Li et al., 2017) required for OBServe testing. The PEEK chamber solves several obstacles endemic to mesoscopic imaging in the brain: (1) difficulty with positioning high-NA objectives near the brain, (2) creating a flat imaging window against the surface of the brain, (3) adjusting the imaging window in the face of changes in swelling and pressure in the brain, (4) preventing growth of dura and biofilms that obscure the imaging window, and (5) follow-on MRI imaging of the animal after implantation (Fig. 5).
The PEEK chamber has a 2 cm diameter window for NHPs that regulates pressure to optimize long-term patency, and uses a stable, strong, and thin design. For context, its field-of-view is larger than three entire mouse brains placed side-by-side. The pressure-regulating implant is mechanically modeled and stress-tested to achieve access to the brain by large objectives, with design features that allow manual repositioning of the imaging lens. A thin implant design was prioritized to optimize the distance between the objective and the brain. A strong radiolucent implant was created using PEEK plastic, a strong, thermoresistant and biostable material. This method also allows manual repositioning of the coverslip to create a flat imaging window.
The chamber implant design includes an engineered silicone mount designed to maintain even pressure of the imaging window on the brain's surface, despite changes as the brain swells or moves within the skull. The mechanical properties of the silicone are tuned to closely resemble those of brain tissue, making it more biomimetic than traditional platinum-cured silicone and allowing it to act as a cushion against motion. The approach prevents increases in pressure that could lead to neurodegeneration, and at the same time prevents dural and biofilm undergrowth by blocking the migration of migratory cells. The dynamic pressure maintenance on the brain may be an important component of this method's success. Through the ultra-large field-of-view produced by this implant, the Macknik and Martinez-Conde laboratories have obtained two-photon images comprising >60,000 neurons (the largest two-photon images obtained in any model to date; Chanovas et al., 2019). By regulating pressure while allowing a larger field-of-view, the chamber is expected to enhance recording window longevity and may prove to be a critical advance in NHP and human brain imaging. This implant is designed to facilitate any of the imaging techniques described in this review.
In summary, optogenetic, macroscopic, and microscopic all-optical interrogation techniques have proven successful for manipulating neuronal populations with high spatial and temporal fidelity. Recent advances have overcome many obstacles endemic to using these methods in NHPs. Improvements to hardware and methods have increased the duration of recordings to months or years, time periods relevant to the development of human neural circuits. The resulting images include large numbers of neurons across ultra-large fields-of-view, allowing circuits to be understood in much greater depth than before, within precise functional maps in V1 as well as in higher-level visual areas. Each of these advances has enabled new studies providing information critical to human-relevant translational research, including a path toward the development of novel visual prosthetics.
Footnotes
This work was supported by the NSF Awards 1523614 and 1734887 to S.L.M. and S.M.C., and an NSF NeuroNex Technology Hub–Nemonic Award and NEI R01EY029420 to K.J.N.
The authors declare no competing financial interests.
- Correspondence should be addressed to Stephen L. Macknik at macknik{at}neuralcorrelate.com