Abstract
The overarching goal of the NIH BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative is to advance the understanding of healthy and diseased brain circuit function through technological innovation. Core principles for this goal include the validation and dissemination of the myriad innovative technologies, tools, methods, and resources emerging from BRAIN-funded research. Innovators, BRAIN funding agencies, and non-Federal partners are working together to develop strategies for making these products usable, available, and accessible to the scientific community. Here, we describe several early strategies for supporting the dissemination of BRAIN technologies. We aim to invigorate a dialogue with the neuroscience research and funding community, interdisciplinary collaborators, and trainees about the existing and future opportunities for cultivating groundbreaking research products into mature, integrated, and adaptable research systems. Along with the accompanying Society for Neuroscience 2019 Mini-Symposium, “BRAIN Initiative: Cutting-Edge Tools and Resources for the Community,” we spotlight the work of several BRAIN investigator teams who are making progress toward providing tools, technologies, and services for the neuroscience community. These tools access neural circuits at multiple levels of analysis, from subcellular composition to brain-wide network connectivity, including the following: integrated systems for EM- and fluorescence-based connectomics, advances in immunolabeling capabilities, and resources for recording and analyzing functional connectivity. Investigators describe how the resources they provide to the community will contribute to achieving the goals of the NIH BRAIN Initiative. Finally, in addition to celebrating the contributions of these BRAIN-funded investigators, the Mini-Symposium will illustrate the broader diversity of BRAIN Initiative investments in cutting-edge technologies and resources.
Introduction
The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, now entering its sixth year of funding, is a collaborative endeavor between federal science funding agencies and private entities aiming to disentangle the complexity of the human brain (Mott et al., 2018). To accomplish this goal, Federal agencies, including the National Institutes of Health (NIH), have made substantial investments in the development and optimization of a broad range of innovative technologies and resources necessary for interrogating the dynamic brain at multiple spatial and temporal scales and resolutions. At this stage, a number of new tools have shown promise, and BRAIN investments are bound to yield many more. However, their scientific impact depends on their successful dissemination and adoption by the greater neuroscience community.
Two Working Groups of the Advisory Committee to the NIH Director (ACD) have provided sage input to guide the BRAIN Initiative: “BRAIN 2025” for the initial phase and “BRAIN 2.0” for the second phase. Both emphasize the need to support the process of moving new tools from the inventors' benchtops into laboratories that can use them in their pursuit of important scientific questions. In addition, the Neuroethics Working Group of the BRAIN Initiative developed a framework (Guiding Principles) for characterizing and disseminating the neuroethical implications of tools and technologies emerging from the BRAIN Initiative (Bianchi et al., 2018; Greely et al., 2018). All Working Groups have emphasized the importance of supporting the dissemination and access, commercialization, user training, translation, and integration of novel technologies for the use of the research community.
Initial investments from the NIH, National Science Foundation (NSF), and other BRAIN Initiative partners brought together engineers, chemists, and material scientists to address barriers to progress in neuroscience, yielding impressive technological advances—some funded by the BRAIN Initiative, others by other Federal or non-Federal sources. In addition to the several nearly user-ready tools highlighted in this review, examples of BRAIN-supported advances include the following: probes for recording and controlling the activity of neurons and neuronal ensembles [designer receptors exclusively activated by designer drugs (DREADDs); Roth, 2016], genetically encoded sensors for dopamine (Patriarchi et al., 2018; Sun et al., 2018) and calcium (Qian et al., 2019), advances in single-cell genomic profiling (Drop-seq; Macosko et al., 2015), swept confocally aligned planar excitation (SCAPE) microscopy (Vaadia et al., 2019), 3-photon microscopy (Yildirim et al., 2019), and human neuroimaging (Barandov et al., 2019; Wu et al., 2019). A neural decoder that converts human cortical activity into speech recently captured the interest of the scientific community and the public alike (Anumanchipalli et al., 2019).
Here and in the eponymous Mini-Symposium at the Society for Neuroscience 2019 meeting in Chicago, we describe several strategies the NIH has pursued to increase researcher access to the tools and technologies emerging from BRAIN-funded projects. We spotlight the work of BRAIN Initiative awardees working to develop and make available tools that allow the neuroscience community unprecedented access to knowledge of the brain.
BRAIN Initiative resource access and tool dissemination: challenges and first steps
The practice of disseminating new research findings is familiar ground to generations of scholars who have shared their work with colleagues and the public formally (through publications, meetings, and presentations) and informally (through collaborations, training, mentorship, and open dialogue). However, tool development and commercialization present much less familiar territory for most academic researchers. The scope of scientific training does not usually include the skills and domain knowledge necessary for optimization, dissemination, translation, and/or commercialization of innovative technologies. Nor does the academic incentive structure reward this work. User training, too, often occurs without formal support. Finally, the best-known funding mechanisms do not usually support the considerable human or financial resources and infrastructure needed to move an early prototype through the demanding stages of development toward a standardized, robust, end-user-friendly product. Indeed, the huge range of resource types that BRAIN investigators are generating (from software to wetware, from basic research tools for use in animal models to first-in-human devices and accompanying ethical frameworks) makes a single solution impossible. The BRAIN Working Group 2.0 considered these challenges in detail, highlighting early successes and outlining a strategy for devising new solutions in their June 2019 presentation to the ACD.
Many programs under the umbrella of the NIH BRAIN Initiative support the initial development and optimization of novel technologies across seven scientific high-priority areas identified in the BRAIN 2025 report (National Institutes of Health, 2014). Recognizing the challenges described above, NIH has initiated several additional collaborative programs to encourage dissemination of research tools beyond laboratories of origin. These include the following.
BRAIN Initiative public–private partnership program for implantable devices in humans
This program facilitates collaboration between manufacturers of cutting-edge neuroscience devices and investigators who can use these in early clinical trials or basic research in human participants. With advice from the Food and Drug Administration, NIH has developed template agreements, which have improved researcher access to information about devices not yet available to the public, and simplified the negotiation process around intellectual property and publication concerns. Consequently, investigators can more easily and readily incorporate the most cutting-edge devices into their research programs, accelerating the translation of tools and therapies to the market. Information about active partnerships can be found on the NIH BRAIN Initiative Public-Private Partnership website.
Small business programs for novel tools and technologies in neuroscience research
By law, a fixed percentage of the NIH extramural budget supports Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grant awards (3.2% and 0.45%, respectively, for a total of 3.65% for Small Business programs). Accordingly, the BRAIN Initiative also allocates SBIR and STTR funding to support all phases of the research tool development process, from early development all the way through to optimization for dissemination and integration into standard research practices. These programs encourage collaboration between innovators in academia and product and business experts in industry to move technologies to commercialization.
Research resource grants for technology integration and dissemination
After piloting a small-scope, short-duration NIH-based program for Technology Sharing and Propagation, NIH has put forth a more expansive and flexible program to support the gamut of activities (from tool customization and production to the building of specialized service facilities and user training) that fall beyond the scope of academic innovation. This research program provides support for strategic planning, prioritization, and project management. Diverse types of activities aimed at improving community access to mature tools and reagents fall under the umbrella of this program, including reagent and tool validation, iteration, distribution and production, instrumentation packages, services and facilities, and end-user training. The associated funding announcement will continue to accept applications for several more years.
These programs at NIH are part of a broader effort among the members of the BRAIN Initiative Alliance (BIA) to build infrastructure for expanding user access to innovative neuroscience research tools. For example, the NSF contributes to this goal through the NeuroNex and Brain Observatory programs. Other efforts at NIH and among the members of the BIA are supporting new infrastructures for centralizing information about accessing research tools, complemented by substantial ongoing investments into resources for data sharing (e.g., https://www.nwb.org/). The NIH and the BIA both maintain websites that serve as broad sources of information for the community, compiling resources, funding opportunities, awarded projects, publications, news, events, and other information relevant to the Initiative. As the BRAIN Initiative moves forward, these sites will expand to become portals to the wealth of resources emerging from BRAIN investments. Less formally, BIA members support the yearly BRAIN Initiative Investigators Meeting and assorted workshops that address resource development and dissemination to facilitate collaborations among BRAIN grantees and the broader research community.
Yet, these efforts are only the beginning. Recently, the BRAIN Working Group 2.0 urged frank, open dialogue with the scientific community and the public about strategies for dissemination of novel neurotechnologies and their integration and adoption throughout the neuroscience research enterprise within a robust ethical framework. This article and the Mini-Symposium present an opportunity to begin just such a dialogue.
The remainder of this review describes the work of several BRAIN Initiative investigator teams contributing to the core value of technology validation and dissemination. The tools these investigators plan to share with the community interrogate neural circuits at different levels of analysis. Some reveal minute details of subcellular structure and synaptic connectivity patterns, while others capture features of brain network dynamics. In addition to the tools themselves, these investigators are, importantly, incorporating user training and support into their plans. In juxtaposing these independent efforts, we take the first step toward integrating disparate research programs and approaches into flexible research systems—critical to discovering “how dynamic patterns of neural activity are transformed into cognition, emotion, perception, and action in health and disease” (BRAIN 2025, Goal 7; National Institutes of Health, 2014).
BRAIN Initiative investments in research tool dissemination
Renewable recombinant miniaturized antibodies for studying neuronal function (project information)
Antibodies and related affinity reagents have been a key tool for revealing and integrating knowledge about the brain at the subcellular, synaptic, cellular, and circuit levels. They are widely used in neuroscience research to label, isolate, and manipulate the function of specific target molecules in complex experimental preparations, including in vivo. Using antibodies in multiplexed immunofluorescence microscopy allows for proteomic-level analyses of brain cells in diverse species. This includes defining the neurons that comprise a circuit, the distribution and molecular characteristics of their dendritic and axonal processes, the physiologically distinct classes of synapses that mediate neurotransmission, and the molecular components of distinct subcellular compartments. Using antibodies to isolate macromolecular signaling complexes for subsequent proteomic-based analyses can provide information crucial to determining the specific molecular events that underlie normal neuronal signaling and its dysregulation in neurodevelopmental and adult neurological and psychiatric disorders. Using intracellularly expressed antibodies (intrabodies) allows the targeting of reporters to specific sites in neurons to more effectively monitor neuronal function, and can permit manipulation of normal function or reversal of dysfunction for experimental or therapeutic purposes (Lo et al., 2008; Greenfield, 2014; Marschall et al., 2015; Miersch and Sidhu, 2016).
J.S.T. and colleagues (Gong et al., 2016) have pursued a multipronged approach to develop, validate, and enhance community access to a diverse toolbox of renewable recombinant antibodies and affinity reagents for neuroscience. First, they have converted a substantial portion of their existing library of mouse monoclonal antibodies (mAbs) into recombinant form (Andrews et al., 2019). This ensures permanent archiving in and streamlined distribution from plasmid banks and allows for more reliable expression than conventionally generated mAbs. They have also engineered mAbs into forms more amenable to multiplex labeling applications (Manning et al., 2012), including miniaturized single-chain variable fragments (ScFvs; one-sixth the mass of the intact parent mAb). Second, the group has generated novel llama nanobodies (nAbs), which are nanoscale antibodies derived from atypical heavy chain-only antibodies (Dong et al., 2019). nAbs represent one of the smallest (∼15 kDa) known autonomous antigen binding domains. Finally, they have isolated small (∼14–18 kDa) designer proteins of high affinity and specificity, called designed ankyrin repeat proteins (DARPins), from highly diverse combinatorial libraries (Plückthun, 2015). Figure 1 illustrates these three classes of affinity reagents and examples of their use in neurons. Overall, the small size, solubility, and stability of ScFvs, nAbs and DARPins facilitate their functional expression in mammalian cells, allowing for their use as intrabodies (Staus et al., 2016; Guillard et al., 2017). Their small size also enhances the resolution of imaging obtained when they are used as immunolabels (Beghein and Gettemans, 2017; Traenkle and Rothbauer, 2017), and enhances their penetration into tissue, cells, and subcellular compartments, allowing for integration with correlative microscopy and whole-brain imaging approaches (Fang et al., 2018; Yamagata and Sanes, 2018). Similarly, miniaturized nanoscale immunolabeling reagents permit higher-resolution imaging of the molecular anatomy of neurons at the ultrastructural (immunogold-EM) and super-resolution scales.
Renewable affinity reagent resources and applications. To the left are schematics of the different classes of renewable affinity reagents developed by J.S.T. and colleagues (adapted from Helma et al., 2015). The right column shows examples of images obtained using nAbs as intrabodies or immunolabels (adapted from Dong et al., 2019). The top shows a cultured rat hippocampal neuron 9 d after infection with lentivirus encoding a YFP-tagged anti-Homer1 nAb (green) and immunolabeled for Homer1 in red and MAP2 (microtubule associated protein 2) in blue. Note the lack of accumulation of nAb in the soma (*) and its robust localization in dendrites. The middle panels show high-magnification images of two different (HC20 and HC87) YFP-tagged anti-Homer1 nAbs (green) and Homer1 immunolabeling (red) from a similar experiment, demonstrating the specificity of both intracellularly expressed nAbs by their specific accumulation at synapses. The bottom panel shows the hippocampal region from an adult rat brain section immunolabeled with an anti-Homer1 nAb (green) and an anti-Homer1 mAb (magenta). HcAb = Heavy chain antibodies.
Working with a consortium of experts in multiple imaging applications, this team will ultimately provide to the community a validated set of recombinant immunolabeling reagents that can be readily deployed with precision for high-resolution mapping of the components and structure of neural circuits. These reagents, in concert with the services of the connectomics facility described next, can enrich the resulting dataset with information about experimentally relevant molecular components of the neural circuit.
A facility to generate connectomic information (project information)
Connectomics is a nascent field of neuroscience aiming to provide a pathway for investigators to obtain complete descriptions of the brain circuits they study at synaptic resolution. Despite widespread interest in dissecting neural circuits, such descriptions have been nearly impossible to come by until recently due to multiple technical obstacles, not least of which is the challenge of bringing together the diverse expertise required to transform a volume of brain tissue into a densely annotated computational representation of the underlying neural circuits.
J.W.L. and colleagues (Morgan and Lichtman, 2017) have been working to overcome these technical challenges to make connectomics a routine, accessible analytical tool for the community. They are developing a partially automated pipeline that begins with a three-dimensional sample of brain tissue and ends with a “digital tissue.” These digital tissues are volumetric datasets that contain all the histological elements in the brain sample, including each nerve cell process and all the interconnecting synapses.
Figure 2 illustrates the overall process. They prepare a nervous system sample for electron microscopy, examine it with micro-computed tomography to evaluate the stain quality, generate ultrathin (typically ∼30 nm) sections with a diamond knife ultramicrotome, pick up the sections on a tape using the ATUM (automatic tape-collecting ultramicrotome) method, mount the tape as strips on silicon wafers, map the position of each section with optical microscopy (Hayworth et al., 2014), and automatically image the sections with scanning electron microscopy. Computational tools help stitch and align the sections. A computer-assisted tracing program (VAST; Berger et al., 2018) is used to annotate and segment the neural circuitry data, from which the wiring diagram can be obtained and the results analyzed. These data are also used as ground truth for machine-learning classifiers of neuronal processes and synapses using convolutional neural nets (Kasthuri et al., 2015; Morgan et al., 2016).
A partially automated electron microscopy connectomics pipeline. This schematic outlines the steps of the partially automated pipeline, from live imaging and sample fixation through to data output and analysis.
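One computational step in this pipeline, stitching and aligning adjacent sections, can be sketched with open-source tools. The example below is a minimal, illustrative translation-only registration of two serial-section images using phase cross-correlation; it is not the group's production alignment code, and the parameter choices are assumptions for demonstration only.

```python
# Illustrative sketch (not the group's actual pipeline): rigid alignment of two
# adjacent EM sections by phase cross-correlation, the kind of translation-only
# pre-alignment that typically precedes finer elastic registration and stitching.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def align_adjacent_sections(fixed: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Estimate the x/y offset between two serial-section images and shift
    the moving section into register with the fixed one."""
    shift, error, _ = phase_cross_correlation(fixed, moving, upsample_factor=10)
    return ndimage.shift(moving, shift, order=1, mode="nearest")

# Toy usage with synthetic data standing in for two ~30-nm serial sections.
rng = np.random.default_rng(0)
fixed = rng.random((512, 512))
moving = ndimage.shift(fixed, (3.0, -5.0), order=1, mode="nearest")
aligned = align_adjacent_sections(fixed, moving)
print(np.abs(aligned[32:-32, 32:-32] - fixed[32:-32, 32:-32]).mean())  # small residual
```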
The group is developing ways to adapt or augment this pipeline for diverse experimental needs. For example, it is possible to implement correlated light and electron microscopy of the same samples using nanobody probes that allow molecular identification of cell types or other details without adversely affecting the quality of the ultrastructure (Fang et al., 2018). Breakthroughs in machine-learning tools for image segmentation, synapse identification, stitching, and alignment have made it possible to consider very large volumes and perhaps whole nervous systems in a range of species. With larger species and larger brains, however, the rate-limiting step becomes the speed of image acquisition. Multibeam electron microscopes that parallelize image acquisition can be incorporated into the pipeline to address this issue (Eberle et al., 2015). Lichtman and colleagues will provide training and guidance along with access to state-of-the-art facilities and technologies required to provide an investigator without expertise in the technical aspects of connectomics with a complete dataset and the means to interpret it.
Understanding synapses in the brain through three-dimensional electron microscopy (project information)
High-throughput electron microscopy can also be used to interrogate the subcellular structures that support synaptic function. Neuroscience has not yet characterized the variability of synaptic dimensions, connectivity, or subcellular content across cell types, brain regions, species, and taxa. Without this information, it is difficult to evaluate the adequacy of model systems for approximating the biology and function of the human brain. Current approaches to characterizing synaptic structure are limited by resolution, inefficient and insufficient data collection, and analysis bottlenecks, which limit the uptake and dissemination of data and its utility for building knowledge.
As part of the NeuroNex Technology Hub, K.M.H. and colleagues are collaborating with scientists and cyberinfrastructure experts at the University of Texas-Austin, the Salk Institute for Biological Studies, and the Texas Advanced Computing Center to address these limitations.
First, they are enhancing three-dimensional electron microscopy (3DEM) capabilities. Automated image collection on the scanning electron microscope operating in the transmission mode (tSEM) provides in-plane image resolution of ≤1 nm for large fields (∼2500 μm2) with minimal distortion and no montaging. When collected across serial sections, the image volumes are large enough to quantify synapses in local circuits while maintaining the nanoscale resolution needed to assess the function of subcellular components, local axons, and perisynaptic astroglia (Fig. 3, example image). Obliquely cut membranes can obscure functionally significant components of synapses, including extracellular space, nascent synaptic zones, smooth endoplasmic reticulum and spine apparatus, perisynaptic astroglia, filopodia, and local axon trajectories. K.M.H. and colleagues are addressing this issue by enhancing depth-dependent discrimination of membrane borders through improving axial resolution using conical tilt tomography combined with tSEM in thick (∼250 nm) serial sections. The resulting tomogram can be used to calculate virtual thin sections (<15 nm) while maintaining in-plane resolution (<2 nm). The enhanced resolution will increase data volume while streamlining image processing by producing more isotropic image volumes.
3DEM provides high-resolution synaptic structure data. High-resolution serial EM reconstruction of a dendrite with its spines (beige), axons (light blue) that form synapses (red) on the dendrite, and the internal mitochondrion (green) and smooth endoplasmic reticulum (yellow).
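As a back-of-the-envelope illustration of what tomographic virtual sectioning buys, the short sketch below plugs in the numbers quoted above (∼250 nm physical sections, <15 nm virtual sections, <2 nm in-plane resolution); the variable names are hypothetical, and the calculation simply shows how virtual sectioning reduces voxel anisotropy.

```python
# Back-of-the-envelope sketch (values taken from the text, names hypothetical):
# how many <15 nm virtual sections a ~250 nm physical section can yield, and
# how the resulting voxels compare to the in-plane resolution.
physical_section_nm = 250.0   # thick serial section imaged by tilt tomography
virtual_section_nm = 15.0     # virtual thin-section thickness from the tomogram
in_plane_nm = 2.0             # approximate in-plane resolution of the tSEM images

n_virtual = int(physical_section_nm // virtual_section_nm)
anisotropy = virtual_section_nm / in_plane_nm
print(f"{n_virtual} virtual sections per physical section")   # ~16
print(f"axial:lateral voxel anisotropy ~{anisotropy:.1f}:1")   # ~7.5:1, vs ~125:1 without tomography
```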
Second, the team is confronting the challenge of increasing data volume by integrating software tools to enhance registration, reconstruction, editing, and analysis of synaptic neuropil. They are building upon publicly available registration, tracing, and segmentation software (e.g., Reconstruct, SWiFT-IR, TrakEM2 and FIJI, IMOD, Flood-Filling Networks, Neuroglancer, and others) along with model building software (e.g., MCell, NeuroMorph, and Neuropil) to enhance analysis of the image data and subcellular modeling.
Finally, the team is building a data depot for 3DEM datasets. An on-line 3DEM Hub will host a knowledge discovery workspace for 3DEM software tools and automated workflows, and a learning center for tutorials and virtual workshops, built upon state-of-the-art high-performance computing resources.
Altogether, this work will allow the community to improve understanding of synapse structure, connectivity, and function across brain regions and species while making vast improvements in quantitative analysis of high-resolution neuronal structure data.
High-throughput, quantitative connectomics using fluorescence microscopy (project information)
High-resolution volumetric reconstructions of EM data described above enable a detailed exploration of synapse organization and connectivity. However, these methods cannot comprehensively label specific cell types, and only reveal connectivity within a specified volume of tissue. Moreover, volumetric EM analysis is not typically accessible to individual investigators without specialized facilities and dedicated image-processing resources. In contrast, fluorescence-based synapse detection is well suited for broad adoption by the neuroscience community in diverse experimental preparations. Although immunohistochemical approaches provide comprehensive coverage of tissue samples (i.e., all synapses of a given type can be detected), dense labeling can make it difficult to correctly assign synapses to specific presynaptic and postsynaptic partners, a key parameter in defining and quantitating circuit-specific alterations in brain function. Genetically encoded, fluorescent synaptic tags with cell type-specific expression provide an alternate method for synapse identification and volumetric analysis. Fluorescence-based molecular genetic methods take advantage of widely used transgenic mouse lines and fluorophores to facilitate accurate and automated analysis of large numbers of cells across many experimental conditions (Chen et al., 2012; Gross et al., 2013; Druckmann et al., 2014; Fortin et al., 2014; Choi et al., 2018). Sparse labeling of neurons allows unambiguous isolation of presynaptic and postsynaptic elements. In addition, multispectral labeling in brain tissue can help visualize the three-dimensional organization of synapses and glia, revealing principles of complex, multisynapse, and glial organization.
To meet the challenge of quantitative synaptic analysis, several teams have developed virally delivered molecular genetic reagents for pansynaptic fluorescent labeling of postsynaptic sites that can be used in conjunction with presynaptic cell fills to detect and quantitate specific synapse types and to align presynaptic and postsynaptic neurites in complex brain structures. These reagents include modular presynaptic and postsynaptic approaches like mGRASP (mammalian/enhanced GFP reconstitution across synaptic partners, Kim et al., 2011; Choi et al., 2018), fluorescently tagged synaptic scaffolding molecules such as PSD-95 and gephyrin (Chen et al., 2012; Fortin et al., 2014), and fibronectin-repeat intrabodies (Gross et al., 2013).
A.B., M.B., and colleagues have adapted a neuroligin-1-based transmembrane and cytoplasmic domain for synapse targeting, incorporating an extracellular fluorescence tag using either the fluorogen-activating protein (FAP; Szent-Gyorgyi et al., 2008) or yellow fluorescent protein (YFP) combined with a cell-filling dTomato fluorophore for alignment with presynaptic neurites (FAPpost or YFPpost; Fig. 4). Virus-mediated expression of FAPpost yields punctate labeling across the somatodendritic arbor of diverse neuronal subtypes, allowing for quantitation of specific inputs across a single postsynaptic neuron using commercially available image analysis software (Imaris). Immunostaining with synaptic markers shows that FAPpost puncta localize to both inhibitory and excitatory synapse types. Importantly, the expression of FAPpost does not alter the frequency or amplitude of miniature excitatory or inhibitory currents in neurons, suggesting that it does not drive abnormal synapse stabilization that may be a problem for mGRASP- or PSD-95-conjugated fluorophores (El-Husseini et al., 2000; Gross et al., 2013). Pyramidal neuron expression of FAPpost, combined with labeling of inhibitory neurons with Cre transgenic reporter lines, is revealing unexpected localization patterns of inhibitory inputs and how they change during experimental manipulations. In addition to developing, validating, and making available this molecular toolkit (catalog #105981, #105982, #105983, Addgene), the team is building a high-throughput analysis pipeline for the quantitative fluorescence-based detection of putative synapses, laying the groundwork for large-scale analysis of learning- and disease-dependent synaptic reorganization. Investigators at Carnegie Mellon and the University of Pittsburgh, in collaboration with the Pittsburgh Supercomputing Center, have established a data-sharing resource for brain-scale confocal microscopy datasets to enable robust and reliable data sharing and integration of the massive datasets generated by these and other tools supported by the BRAIN Initiative (e.g., in the BRAIN Initiative Cell Census Network).
High-throughput fluorescence-based detection and assignment of synaptic inputs using genetically encoded synaptic markers. A, Example dendritic segment from a dTomato-filled CA1 pyramidal neuron postsynaptically labeled with FAPpost (pseudo-colored green). Scale bar, 2 μm. B, As in A but showing immunohistochemically labeled parvalbumin (PV) neurites (pseudo-colored pink). C, High-threshold surface rendering of postsynaptic puncta (green), with PV-associated puncta in red. D, Schematic of individual puncta rendered as spots. E, Example neuron with assigned PV puncta. Scale bar, 10 μm.
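To give a sense of how a fluorescence-based puncta-detection-and-assignment step might look outside a commercial package such as Imaris, the sketch below pairs Laplacian-of-Gaussian blob detection with a nearest-neighbor distance cutoff. This is an assumed, illustrative workflow, not the team's actual analysis pipeline, and all parameter values are placeholders.

```python
# Illustrative sketch only (assumed workflow, not the team's Imaris pipeline):
# detect fluorescent FAPpost-like puncta in a 3D volume and assign each punctum
# to a presynaptic label (e.g., a PV axon mask) if one lies within a distance cutoff.
import numpy as np
from skimage.feature import blob_log
from scipy.spatial import cKDTree

def detect_and_assign(post_channel, presyn_coords, voxel_um=(0.3, 0.1, 0.1), cutoff_um=0.5):
    # Laplacian-of-Gaussian blob detection approximates punctum centers.
    blobs = blob_log(post_channel, min_sigma=1, max_sigma=4, threshold=0.1)
    centers_um = blobs[:, :3] * np.asarray(voxel_um)
    # Nearest-neighbor lookup against presynaptic voxel coordinates (in microns).
    tree = cKDTree(presyn_coords * np.asarray(voxel_um))
    dist, _ = tree.query(centers_um)
    assigned = dist <= cutoff_um
    return centers_um, assigned

# Toy usage: a random volume and a random presynaptic mask stand in for real data.
rng = np.random.default_rng(1)
vol = rng.random((20, 128, 128))
presyn = np.argwhere(rng.random((20, 128, 128)) > 0.999)
centers, assigned = detect_and_assign(vol, presyn)
print(f"{assigned.sum()} of {len(centers)} puncta assigned to presynaptic partners")
```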
New tools for understanding distributed patterns of brain activity (project information)
The methodological advances described so far primarily concern structural analysis of fixed brain tissue. Many BRAIN Initiative investigators, including the remaining two teams to be discussed, are working to advance the study of the dynamics of the living brain.
The brain is a massively interconnected network of specialized circuits. Even primary sensory areas, once thought to support relatively simple, feedforward processing, are now known to participate in complex feedback loops. Two characteristics of a brain network make its study particularly challenging. First, its dynamics have a range of relevant timescales: sensory processing and decision-making can occur in milliseconds, but changes in activity patterns that support adaptation, learning, and related functions occur over the lifetime, on timescales ranging from seconds to years. Second, the network spans spatial scales: circuits consist not only of nearby neurons, but also of neurons distributed across multiple, distant brain areas.
In collaboration with the Neurotechnology Group at Lawrence Livermore National Laboratory, L.M.F. and colleagues designed and fabricated high-density, multichannel polymer recording arrays (LL-FL probes) to address these challenges. These are flexible and biocompatible devices that can be inserted into the brain surgically using a removable silicon shuttle (Felix et al., 2013). For chronic recordings, a multiprobe implant can stabilize and protect the devices to yield long-term, high-density distributed recordings in freely moving rats (Chung et al., 2019b; Fig. 5). More recently, the team has refined the insertion technique, incorporating 3D sharpened shuttles (Joo et al., 2019), which reduce the force required for insertion and can penetrate dura.
Long-lasting, high-quality recordings from polymer probes. Each main panel illustrates the recordings obtained on 1, 45, 154, or 283 d after implantation. Each subpanel shows the waveforms (mean ± SD) of a putative single neuron across the two to four electrode contacts on which that waveform was of highest amplitude. The geometry of the contacts relative to the probe is shown in the center of each main panel, and the colored bars indicate the specific electrode contacts. These recordings were taken from one 18-channel shank of a two-shank probe and illustrate that high-quality recordings can be obtained for many months.
These tools allow access to multiple relevant temporal and spatial scales. First, at millisecond timescale resolution, they yield high-quality single units recorded continuously over many months. Recording continuously for 10–11 d in three animals (Chung et al., 2019a), the team tracked the activity of 2232 units from rat medial prefrontal cortex (mPFC), each for at least 24 h. Of the 1150 units first detected in the initial few days of the 10–11 d period, 21% (247 of 1150) could be followed for >1 week. Second, these probes and the associated hardware and software (Chung et al., 2017) permit high-density sampling from multiple brain regions. To illustrate the potential for high-channel count recordings, the team recorded from eight 64-channel devices distributed across the following three brain areas: mPFC, orbitofrontal cortex, and ventral striatum. This yielded 375 well isolated single neurons across the three brain regions. Two devices optimized for local field potential recording were also implanted in hippocampus, allowing the identification of hippocampal sharp-wave ripple (SWR) events. Although SWRs are known to modulate firing rates in regions including mPFC (Joo and Frank, 2018), it is unknown whether the same SWR events engage coordinated patterns of activity across areas simultaneously. Having electrodes distributed across multiple regions permitted comparing responses in these regions to the same hippocampal events to address this question, which has implications for our understanding of memory processes. This would not have been possible without the combination of high-density recordings in individual regions and the ability to distribute those recordings across brain regions. Thus, these tools make it possible to ask new questions about how the brain changes over time and how brain regions interact to process information. The probes and associated software are being developed for dissemination to the community, and the team has started to collaborate with prospective early adopters, providing refined devices as well as training in device handling and surgical implantation through on-site visits and video conferences.
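As an illustration of what following a putative unit across days involves conceptually, the sketch below matches units between two recording days by waveform correlation. This is a deliberately simplified, hypothetical criterion, not the method used by Chung et al. (2019a).

```python
# Hypothetical sketch of the general idea behind tracking a putative single unit
# across recording days by waveform similarity; the actual criteria used in
# Chung et al. (2019a) differ from this toy rule.
import numpy as np

def track_unit(day1_waveforms, day2_waveforms, min_corr=0.95):
    """Return, for each day-1 unit, the index of the best-matching day-2 unit
    (or -1 if no candidate exceeds the correlation threshold).
    Waveforms: arrays of shape (n_units, n_channels * n_samples)."""
    matches = []
    for w1 in day1_waveforms:
        corrs = [np.corrcoef(w1, w2)[0, 1] for w2 in day2_waveforms]
        best = int(np.argmax(corrs))
        matches.append(best if corrs[best] >= min_corr else -1)
    return np.array(matches)

# Toy usage: 5 day-1 units, 6 day-2 units; waveforms drift slightly but are recovered.
rng = np.random.default_rng(2)
day1 = rng.standard_normal((5, 4 * 40))
day2 = np.vstack([day1 + 0.05 * rng.standard_normal(day1.shape),
                  rng.standard_normal((1, 4 * 40))])
print(track_unit(day1, day2))  # e.g., [0 1 2 3 4]
```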
Group iterative multiple model estimation: a tool for multiscale model selection and unsupervised classification (project information)
Understanding relationships of activity within and across multiple brain regions and neural systems is an even greater challenge in human neuroscience. Network methods for the analysis of functional MRI data have proven fruitful, and the current state of the science indicates that the brain is best understood as coordinated activity across spatially disparate regions. However, a gap in computational methods exists for quantifying these relations. Two primary computational problems exist: (1) quantifying person-specific estimates for how each brain region in a given brain network relates to overall network activity (i.e., how integral to the network is that region for that individual?); and (2) identifying consistent patterns of estimates that may exist—within and between brain networks—amid the heterogeneity in brain processes seen among human individuals. K.M.G. and colleagues are introducing a set of tools that build from group iterative multiple model estimation (GIMME; Gates and Molenaar, 2012), a model search procedure that reliably recovers sparse patterns of relations among brain areas in functional MRI data collected during the resting state and during task performance. The two primary developments are latent variable GIMME (LV-GIMME; Gates et al., 2019) and subgrouping GIMME (S-GIMME; Gates et al., 2017). Both are freely available within the R package gimme (Lane et al., 2019) with documentation available on-line (http://gimme.web.unc.edu/63-2/). The group has validated and freely distributed these statistical tools and software and will make the simulated and empirical validation data available once the initial papers have been accepted for publication. This will enable the community to interrogate the interactions within and between brain networks—across multiple sensory systems, internal states, and cognitive regions—that drive human behavior.
LV-GIMME enables the examination of brain processes as they occur across multiple spatial resolutions, allowing researchers to simultaneously investigate research questions at the voxel, region, and network levels. Numerous options for analyzing network activity are available, including principal component analysis and varied methods of factor analysis estimation. However, few existing methods take into account the known temporal dependencies in fMRI data (i.e., that activity at a prior time point can predict activity at a later time point). LV-GIMME appropriately accounts for these dependencies by using a dynamic factor-analytic framework as well as an estimation procedure that is optimal for use with time series data (Fisher et al., 2019). The estimation approach, called model-implied instrumental variable estimation (Bollen, 1996), is well established and has been shown by K.M.G. and colleagues to provide robust estimates even in the presence of model misspecifications that are likely to occur in fMRI data analysis (Bollen et al., 2018). Figure 6 depicts the dynamic factor model approach with relations among brain networks as well as relations capturing the extent to which specific brain regions (or voxels) relate to others within a given network.
LV-GIMME enables simultaneous quantification and examination of functional activity across multiple layers of spatial resolution. This figure depicts the dynamic relations among brain networks as well as relations among brain regions (or voxels) within a given network. “Contemporaneous” refers to relations between networks at a lag of zero; “lagged” refers to relations between networks at a lag of 1; “parameter” refers to the fact that between-network relations can be interpreted as regression coefficients, whereas for the within-network activity the parameters represent the degree to which a given brain region or voxel relates to other brain regions in a predefined brain network. Population is the simulated value and estimate is the average estimate obtained across a small simulation study.
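To make the "lagged" relations in Figure 6 concrete, the toy example below simulates three network time courses with lag-1 dependencies and recovers the lag-1 coefficient matrix by ordinary least squares. It illustrates only the temporal structure being modeled; it is not the dynamic factor model or the model-implied instrumental variable estimator used by LV-GIMME.

```python
# A minimal sketch of the temporal dependency LV-GIMME accounts for: activity at
# time t is predicted in part by activity at time t-1. This toy ordinary-least-squares
# VAR(1) only illustrates "lagged" relations; it is not the LV-GIMME estimator.
import numpy as np

rng = np.random.default_rng(3)
T, k = 300, 3                        # time points, number of networks
A_true = np.array([[0.5, 0.2, 0.0],  # assumed lag-1 coefficients, for simulation only
                   [0.0, 0.4, 0.3],
                   [0.0, 0.0, 0.6]])
x = np.zeros((T, k))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.standard_normal(k)

# OLS estimate of the lag-1 coefficient matrix: x_t ≈ A x_{t-1}
X_lag, X_now = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_lag, X_now, rcond=None)[0].T
print(np.round(A_hat, 2))  # should roughly recover A_true
```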
During the model-building process, users can apply unsupervised classification with S-GIMME to parse out subsets of individuals, identifying those who have similar brain processes. This method addresses unaccounted-for heterogeneity across individuals, which can decrease the statistical reliability of results from most commonly used methods of analysis, impacting replicability (Molenaar, 2004; Smith, 2012). The unsupervised classification method appropriately classified individuals according to the task they were performing at multiple levels of spatial resolution. It must be stressed that this robust classification occurred using solely the brain time courses (i.e., variation in the BOLD signal across time), with no information provided regarding the tasks or the individuals. This tool has already been used to classify individuals within predefined categories (Price et al., 2017a; McCormick et al., 2019), as well as to identify differences in functional connectivity between typically developing controls and diagnostic categories (Price et al., 2017b; Henry et al., 2019).
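The subgrouping idea can likewise be illustrated in miniature: given person-specific connectivity parameter estimates, individuals with similar patterns are grouped together. The sketch below applies simple k-means to simulated parameter vectors purely as an illustration; S-GIMME itself derives subgroups using a community-detection step embedded in the model search, not k-means.

```python
# Toy sketch of the subgrouping idea only: group individuals by the similarity of
# their estimated connectivity parameters. S-GIMME itself uses community detection
# within the model search, not the k-means shown here.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(4)
# Hypothetical person-specific parameter vectors (e.g., flattened path estimates):
# 20 individuals drawn from two "subgroups" with different mean connectivity patterns.
group_a = rng.normal(loc=0.6, scale=0.1, size=(10, 8))
group_b = rng.normal(loc=0.1, scale=0.1, size=(10, 8))
params = np.vstack([group_a, group_b])

centroids, labels = kmeans2(params, k=2, seed=0, minit="++")
print(labels)  # individuals 0-9 and 10-19 should fall into two clusters
```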
These tools offer researchers an easy-to-use method for quantifying brain functional connectivity in new and robust ways. These novel methods are providing the scientific community with the means to achieve the audacious goals of the BRAIN Initiative, allowing us to peer into the human brain in action and expand our capacity to understand the diversity of brain phenotypes. As we develop new technologies, such as those that enable the classification of individuals based on readouts of their neural circuit activity, the responsibility to consider their medical and social implications becomes paramount. Neuroscientists engaging with neuroethicists can most effectively promote their responsible application and translation (Rommelfanger et al., 2018).
In conclusion, the examples above represent only a small fraction of the constantly growing collection of BRAIN Initiative-supported innovations, with applications that stretch from the most basic science to clinical diagnosis and treatment. A major goal of the next phase of BRAIN must be to enable access to these technologies through creative new mechanisms. The process of transforming a tool used by a single researcher into a turnkey device that can be deployed to any laboratory has been the purview of industry. NIH incentivizes this type of development toward commercialization through its small business grant mechanisms, connects academic users to next-generation devices through the BRAIN Initiative Public-Private Partnership program, and provides resources for alternative dissemination models through the BRAIN Initiative research resource grant program. In addition to encouraging tool development and the establishment of an infrastructure for disseminating them, the BRAIN Initiative supports the training of investigators in the skillful use of these sophisticated tools.
Analysis of the vast arrays of data produced by BRAIN Initiative teams also requires new computational and collaborative tools. Recognizing that expanding the technical toolbox for neuroscience will both require and drive a proliferation of novel datasets, the NIH has developed a new policy to encourage the preservation, sharing, and secondary analysis of data collected as part of BRAIN Initiative-funded projects. The policy requires BRAIN investigators to develop and implement data sharing plans using an infrastructure that is itself built and supported through the BRAIN Initiative. Continued investment into expanding access to novel tools, resources, and datasets, alongside greater support for efforts to integrate existing resources into flexible research systems, will pave the road toward greater fundamental insights into the workings of the human brain and transform the study, treatment, cure, and prevention of brain disorders.
The variety of tools we have described illuminates the key to the success of the BRAIN Initiative and of neuroscience as a field—an interdisciplinary outlook, in the broadest meaning of the concept. Critical to the continued success of this grand project are the many efforts that cultivate this interdisciplinarity: programs and commitments to expand and enhance training for emerging scientists, facilitate collaboration with experts across a wide swath of the sciences, enable data sharing and promote scientific integrity and rigor, support a diversity of voices in neuroscience, and integrate neuroethical considerations to ensure that neurotechnological innovation, development, and dissemination proceed in scientifically and ethically relevant and responsible directions.
Footnotes
This work was supported by NIH Grants U01-NS-103553 and UF1-NS-107667 to L.M.F.; The Kaufman Foundation, and NIH Grants R21-NS-092019 and NIH R01-MH-114103 to A.L.B. and M.P.B.; and NIH Grant R24-MH-114793 to M.P.B. and Simon Watkins (University of Pittsburgh, Pittsburgh, PA); NIH Grants R01-MH-104319 and R56-MH-095980, and National Science Foundation NeuroNex Grant 1707356 to K.M.H.; NIH Grants U01-NS-099714 and U24-NS-109113 to J.S.T.; NIH Grant U24-NS-109113 to J.W.L.; and NIH Grant R01-EB-022904 to K.M.G. We thank the research staff who have supported the research programs and resource development efforts of the contributing authors, as well as BRAIN Initiative program and administrative staff at the NIH, for their contributions to the BRAIN Initiative.
The authors declare no competing financial interests.
Correspondence should be addressed to Walter Koroshetz at koroshetzw@ninds.nih.gov