Abstract
The brain is a complex system comprising a myriad of interacting neurons, posing significant challenges in understanding its structure, function, and dynamics. Network science has emerged as a powerful tool for studying such interconnected systems, offering a framework for integrating multiscale data and complexity. To date, network methods have significantly advanced functional imaging studies of the human brain and have facilitated the development of control theory-based applications for directing brain activity. Here, we discuss emerging frontiers for network neuroscience in the brain atlas era, addressing the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease. We underscore the importance of fostering interdisciplinary opportunities through workshops, conferences, and funding initiatives, such as supporting students and postdoctoral fellows with interests in both disciplines. By bringing together the network science and neuroscience communities, we can develop novel network-based methods tailored to neural circuits, paving the way toward a deeper understanding of the brain and its functions, as well as offering new challenges for network science.
Introduction
During the past two decades, network science has become a vital tool in the study of complex systems, offering a wide range of analytical and algorithmic techniques to explore the structure of interconnected systems (Albert and Barabási, 2002; Newman, 2003). Previous reductionist approaches, built on decades of empirical research, have focused on the functioning of individual elements while neglecting how their interactions give rise to emergent aspects of organization. More recently, network approaches have helped map the interactions between molecules, cells, tissues, individuals, and organizations. It is becoming clear that network theory can aid neuroscience in understanding how distributed patterns of interactions create function, accounting for the complexity of integrated systems. At the same time, neuroscience introduces novel questions for network science, providing the potential for new tools and inquiries.
The brain, with its billions of cells connected by synapses, is the ultimate example of a complex system that cannot be understood through the study of individual components alone. To unveil the neural basis of complex behaviors and functions, such as perception, movement, cognition, memory, and emotion, we must acknowledge and catalog the interactions between neurons, allowing us to integrate multiple levels of observation and apply diverse approaches, including computational and mathematical modeling (Fornito et al., 2015; Bassett et al., 2018; Pulvermüller et al., 2021). The goal of this perspective is to outline how network science provides valuable insights and methods that can greatly enhance our broader understanding of brain function. Toward this goal, The Kavli Foundation convened a workshop in which participants began to outline how recent network science techniques can contextualize the emerging wave of neuroscientific big data, focusing on three topics: neurodevelopment, functional brain data, and health and disease. Below, we summarize these discussion points and outline research frontiers in which the fields of network science and neuroscience can jointly benefit from defining common goals and language (Fig. 1).
Techniques
Neuroscience
In the recent past, technical and experimental advances in neuroscience have enabled scientists to study the brain at increasingly fine scales, ranging from coarse circuit analysis to whole-animal, cellular-level neural recording, connectivity mapping, and genetic profiling. While earlier techniques already called for graph theoretical tools, recent data collection methods have started to offer a consistent stream of multimodal, high-quality connectomic reconstructions that make the use of network science a necessity. For example, while a connectome of Caenorhabditis elegans has been available since 1986 (White et al., 1986), recent advances in electron microscopy (Abbott et al., 2020) have produced whole-animal wiring information in Ciona intestinalis (Ryan et al., 2016) and Platynereis dumerilii (Verasztó et al., 2020), as well as brain-wide connectivity maps for Drosophila at different stages of development (Eichler et al., 2017; Scheffer et al., 2020; Winding et al., 2023), along with partial connectomes for zebrafish (Hildebrand et al., 2017), mice (Bae et al., 2021), and humans (Shapson-Coe et al., 2021). Single-cell transcriptomics has also enabled rapid and diverse profiling of cellular identity across animals, developmental stages, and states of brain health (Zeng, 2022). Additional "bridge" techniques, such as spatial transcriptomics (Chen et al., 2019), allow for rapid acquisition of multimodal datasets, promising both physical and genetic information about cells from a single measurement. These datasets offer detailed connectivity and identity information about thousands, and soon millions, of neurons. To analyze these data, and to extract experimentally testable signals and hypotheses, we need to integrate all data points using network science tools, which will in turn necessitate further advancement of those tools to address emerging challenges.
To study the functional properties of individual neurons and neural networks in the living brain, in vivo techniques such as two-photon microscopy (Grienberger and Konnerth, 2012) and multiunit electrode recording (Steinmetz et al., 2018) provide rapid profiling of local and mesoscale neuronal activity and anatomy in animal models, revealing principles of circuit organization and dynamic coding underlying a variety of neural processes in sensory perception, movement control, decision-making, and behavior generation. At present, however, these approaches are largely restricted to one or a few brain regions at a time. Technological advances are needed to monitor neuronal activity across multiple brain regions at high resolution, a prerequisite for truly understanding the dynamic interplay among the components of brain-wide circuits.
Currently, MRI is the primary technology for noninvasively mapping brain networks in humans, either by reconstructing white matter tracts using diffusion tensor imaging or by inferring axonal connectivity from cytoarchitectural or morphometric similarity between brain regions. The analysis of brain-wide neural activity maps has already broadly showcased the insight network neuroscience can bring (Sporns and Betzel, 2016; Shine and Poldrack, 2018; Lurie et al., 2020; Suárez et al., 2020), and the field continues to have immense potential. For instance, the rapid growth and widespread use of datasets such as the Allen Human Brain Atlas (Hawrylycz et al., 2012) highlight the need for a wider range of human brain atlases that document gene expression and other molecular or cellular phenotypes commensurate with structural phenotypes, such as volume and myelination. We must integrate multiple levels of analysis and apply diverse approaches, including computational and mathematical modeling, to successfully unravel the complexity of brain networks and the roles of their many interacting components. Future functional brain profiling methods must also account for the multiple cell identities and network features that define neuronal systems.
Common concepts in network neuroscience
Studying the brain requires moving across scales and modalities, and a set of shared terms provides a common reference frame. Structural connectivity refers to "ground-truth," physically instantiated networks, such as measured synaptic connections or axon tracts, while functional connectivity represents estimates of statistical dependencies between neural time courses. Similarly, dynamics of networks refers to changes in the connection topology itself, such as synaptic updating or pruning, while dynamics on networks refers to the neural activity patterns instantiated on top of the structural connectivity.
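To make these definitions concrete, the following minimal Python sketch contrasts the two forms of connectivity on simulated data; the node count, coupling strength, and the use of Pearson correlation as the functional-connectivity estimator are illustrative assumptions rather than recommendations from the cited literature.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_timepoints = 20, 500

# Structural connectivity: a binary adjacency matrix (e.g., traced synapses or tracts)
structural = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)
np.fill_diagonal(structural, 0)

# Scale the coupling so the simulated linear dynamics stay stable
radius = np.max(np.abs(np.linalg.eigvals(structural)))
coupling = 0.5 * structural / radius if radius > 0 else structural

# Dynamics on the network: each node is driven by its structural neighbors plus noise
activity = np.zeros((n_nodes, n_timepoints))
activity[:, 0] = rng.standard_normal(n_nodes)
for t in range(1, n_timepoints):
    activity[:, t] = coupling @ activity[:, t - 1] + rng.standard_normal(n_nodes)

# Functional connectivity: statistical dependence between the simulated time courses
functional = np.corrcoef(activity)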
Network science
Traditionally, neural connectivity is modeled as a simple graph, formalizing the brain as a set of nodes (neurons) connected by links (synapses, gap junctions). Network science aims to go beyond the study of such simple graphs: starting from the adjacency matrix of the system, which encodes who is connected to whom, network science offers a suite of tools to characterize local and large-scale structure, ranging from degree distributions to community structure, degree correlations, and even controllability, which probes our ability to guide the dynamics of the circuit. Yet this "time-frozen," graph-based approach greatly oversimplifies the true complexity of the brain, ignoring cell identities, signaling types, dynamics, and the spatial and energetic constraints that shape this complex organ. Emerging approaches in network science are beginning to capture this rich complexity, helping us analyze structural and functional brain data across scales (Betzel and Bassett, 2017).
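As a simple illustration of this workflow, the sketch below computes a handful of standard descriptors with networkx; the random adjacency matrix stands in for a measured connectome, and greedy modularity maximization is only one of many possible community-detection choices.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
adjacency = (rng.random((50, 50)) < 0.08).astype(int)
adjacency = np.maximum(adjacency, adjacency.T)        # symmetrize for an undirected graph
np.fill_diagonal(adjacency, 0)

g = nx.from_numpy_array(adjacency)

degrees = [d for _, d in g.degree()]                                    # local structure
assortativity = nx.degree_assortativity_coefficient(g)                  # degree correlations
communities = nx.algorithms.community.greedy_modularity_communities(g)  # mesoscale modules

print(f"mean degree: {np.mean(degrees):.2f}")
print(f"degree assortativity: {assortativity:.2f}")
print(f"number of communities: {len(communities)}")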
For example, multiplex and multilayer networks provide a framework for understanding and describing cell–cell relationships and hierarchies, capturing the circuit motifs that can significantly impact the dynamical and topological properties of functional networks. Indeed, multiplex networks can represent multiple types of connections, such as synapses, gap junctions, neuromodulators, and circulating gut peptides, within a single formal framework (Bianconi, 2018; Presigny and De Vico Fallani, 2022). Triadic interactions, in which a node affects the interaction between two other nodes, can also be incorporated, capturing, for example, how glia can influence the synaptic signal between neurons (Sun et al., 2023). These triadic interactions can lead to the emergence of higher-order networks, often represented as hypergraphs or simplicial complexes (Battiston et al., 2020, 2021; Bianconi, 2021; Torres et al., 2021). Promise Theory extends network analysis further by modeling nodes as agents with conditional linkages, providing a language for process interconnection and accounting for the functional and structural diversity of cells and their roles (Burgess, 2015, 2021).
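A minimal way to encode such a multiplex structure is to keep one adjacency matrix per interaction type over a shared set of neurons, as in the sketch below; the layer names, densities, and aggregation rule are assumptions chosen for illustration rather than elements of the cited frameworks.

import numpy as np

rng = np.random.default_rng(2)
n_neurons = 30

# One layer per interaction type, all defined over the same set of neurons
layers = {
    "chemical_synapses": (rng.random((n_neurons, n_neurons)) < 0.05).astype(float),
    "gap_junctions": (rng.random((n_neurons, n_neurons)) < 0.02).astype(float),
    "neuromodulation": (rng.random((n_neurons, n_neurons)) < 0.03).astype(float),
}
for layer in layers.values():
    np.fill_diagonal(layer, 0)

# Aggregate view: a pair of neurons is linked if it is connected in any layer
aggregate = (sum(layers.values()) > 0).astype(int)

# Layer participation: in how many layers does each neuron have at least one link?
participation = sum(((layer.sum(axis=0) + layer.sum(axis=1)) > 0).astype(int)
                    for layer in layers.values())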
Traditional network-based analyses of the brain have largely ignored the spatial component of multiscale datasets, such as the geometry and morphology of neurons, treating them as point-like nodes rather than physical objects with length, volume, and a branching tree structure. At larger scales, numerous network models have attempted to incorporate the physical dimension or geometry of extended neural networks through considerations of wiring economy (Markov et al., 2013; Horvát et al., 2016), metabolic cost, and conduction delays (Bullmore and Sporns, 2012). The emerging study of physical networks promises tools to explore how the physicality and spatial organization of individual neurons, together with noncrossing constraints, affect the network structure of the whole brain (Pósfai et al., 2022). These approaches have the potential to address the metabolic cost of building and maintaining wiring and to incorporate the physical length of connections. Both network science and neuroscience therefore need to move beyond simple connectivity information and incorporate the true physical nature of neurons, weighting connections by the properties of the cells they join, thereby enriching our understanding of neuronal circuit operations.
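The sketch below illustrates the simplest step in this direction, embedding nodes in space and scoring a connectivity layout by its total wire length; the soma positions, connection density, and use of straight-line length as a cost proxy are all illustrative assumptions, and real physical-network models additionally handle volume exclusion and branching geometry.

import numpy as np

rng = np.random.default_rng(3)
n_nodes = 40

positions = rng.random((n_nodes, 3))                              # soma positions in arbitrary units
adjacency = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)
np.fill_diagonal(adjacency, 0)

# Pairwise Euclidean distances between node positions
diffs = positions[:, None, :] - positions[None, :, :]
distances = np.sqrt((diffs ** 2).sum(axis=-1))

# Wiring cost: total straight-line length of all realized connections
wiring_cost = (adjacency * distances).sum()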
Application Areas
Neurodevelopment
Systems neuroscience and genomics have relied on a fruitful collaboration between theory and experiment. A similar symbiosis has so far eluded neurodevelopment. Yet neurodevelopment has strong core principles that are ripe for modeling, empowered by the recent availability of rich connectomic, genomic, and imaging datasets from which computational and network-based analyses can extract rich insights. For instance, in addition to whole-body behavior and neural recordings, C. elegans now has developmentally resolved connectomes and transcriptomes (Boeck et al., 2016; Witvliet et al., 2021), allowing for the integrated analysis of connectivity, genetics, activity, and behavior and inspiring the ongoing acquisition of similar datasets for larger organisms. To date, the use of network tools in brain science has largely been limited to the mapping and analysis of static network maps, ignoring the temporal scale of brain connectivity and especially the temporal aspects of brain activity (i.e., network dynamics). However, a key discovery of network science is that we must understand the regularities and rules governing the growth and assembly of networks (i.e., the evolving topology of their connectivity) to understand the origin of empirically observed network characteristics (A. L. Barabási and Albert, 1999). Network science offers important tools to address this gap, providing a comprehensive quantitative framework for studying the temporal unfolding of neurodevelopment across species. It offers the formal language to describe, and then to analyze, how emerging cell identity and its physical instantiation lead to the observed connectivity of the brain. Reciprocally, new insights from biological systems that first establish and then prune structured networks may inspire new network approaches (Woźniak et al., 2020), as in the toy model sketched below.
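The toy grow-then-prune model below conveys the flavor of such approaches; the preferential-attachment growth rule and the magnitude-based pruning criterion are illustrative choices, not models taken from the cited studies.

import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

# Growth phase: new neurons preferentially attach to already well-connected neurons
g = nx.barabasi_albert_graph(n=100, m=3, seed=4)

# Assign stand-in synaptic weights, then prune the weakest 30% of connections
for u, v in g.edges():
    g[u][v]["weight"] = rng.random()
weights = [d["weight"] for _, _, d in g.edges(data=True)]
threshold = np.quantile(weights, 0.3)
g.remove_edges_from([(u, v) for u, v, d in g.edges(data=True) if d["weight"] < threshold])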
One central question in this field is how neuron identity, captured by gene expression profiles, location, and shape, determines the wiring patterns of neurons and leads to stereotyped connectivity and behavior. Network models of neurodevelopmental principles are therefore needed to validate hypotheses and make predictions for future experiments. For instance, Roger Sperry's hypothesis that genetic compatibility drives neuronal connectivity has helped infer the protein interactions that underlie connectivity in C. elegans (D. L. Barabási and Barabási, 2020; Kovács et al., 2020) and in the Drosophila visual system (Kurmangaliyev et al., 2020). These models are most successful when they take into account the affordances of the niche in which organisms operate, including noise from data collection limitations and spatial restrictions, offering more accurate descriptions of the complex landscape of neuronal circuit construction.
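The logic of such compatibility models can be sketched as a bilinear rule in which the likelihood of a connection between two neurons depends on how their gene-expression profiles interact; in the illustration below, the expression matrix and the gene-gene compatibility matrix are random placeholders rather than quantities fitted to data.

import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_genes = 50, 12

expression = rng.random((n_neurons, n_genes))              # rows: neurons, columns: genes
compatibility = rng.standard_normal((n_genes, n_genes))    # stand-in gene-gene interaction rules

# Bilinear compatibility scores, squashed to connection probabilities
scores = expression @ compatibility @ expression.T
prob = 1.0 / (1.0 + np.exp(-scores))

predicted_adjacency = (prob > 0.5).astype(int)
np.fill_diagonal(predicted_adjacency, 0)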
Further work on cell migration, morphogenesis, and axon guidance can help unveil the temporospatial considerations that lead to specific circuit implementations and overall network assembly. For example, the preconfigured dynamics of the hippocampus have been shown to be influenced by factors such as embryonic birthdate and neurogenesis rate (Huszár et al., 2022). Additionally, it is now known that certain network features, including heavy-tailed degree distributions, modularity, and interconnected hubs, are present across species and scales in the brain (Towlson et al., 2013; Buzsáki and Mizuseki, 2014; van den Heuvel et al., 2016). One potential explanation for the conservation of these features is the existence of universal constraints on the brain's physical architecture arising from the trade-off between developmental cost, physical constraints, and coding efficiency. In this context, it is likely that high-cost components, such as long-distance intermodular tracts, are topologically integrative in order to minimize the transmission time of signals between spatially distant brain regions (Bullmore and Sporns, 2012). Further research in this area has the potential to improve our understanding of the development and organization of the brain, with implications for the diagnosis and treatment of neurologic disorders, as we discuss later.
Brains are networks that do
In technological networks, such as the Internet or a computer chip, structure and function are carefully separated: information is encoded into the signal; hence, the role of the network is only to guarantee routing paths between nodes. In the brain, however, action potentials do not encode information in isolation. Instead, the brain relies on population coding, meaning that encoding is implemented by the patterns of signals generated by multiple physical networks of connections. Monitoring and quantifying this network structure are therefore critical for understanding how neuronal coding achieves information processing: the structure of the network is more than a propagation backbone; it is an integral part of the algorithm itself (Molnár et al., 2020). The connectome thus cannot be understood divorced from the context of the actions it performs, and the modules, metrics, and generative processes that support robust representation need to be integrated with the structural representation.
While many recent studies have revealed ways in which task structures are reflected in the networks of neural representations (Chung and Abbott, 2021), little work has been done to elucidate how such representational geometries arise mechanistically and dynamically. Future research should aim to unveil how connectivity patterns at multiple temporal and spatial scales influence the population-level representational geometry, and how this leads to the implementation of behaviorally relevant task dynamics. For instance, hippocampal "cognitive maps" that support reasoning in different encountered task spaces have a natural extension to network formalism: each behavioral state can be a node, and possible transitions between states are edges (Muller, 1996; Eichenbaum and Cohen, 2004; Stachenfeld et al., 2017; George et al., 2021), a representation that can be extended to the challenge of inducing latent networks from sequential inputs (Raju et al., 2022). Further, we must account for the dual dynamics present in the brain: network connectivity defines the functions that can be supported, while in the reverse direction the functional dynamics of the network drives synapses to form and change, so that activity reshapes the connectivity of the underlying network (Papadimitriou et al., 2020). Important insights into brain function can be revealed when dynamics taking place at the node level (single neurons, brain regions) are integrated with dynamics taking place on links (synaptic signals, edge signals) (Faskowitz et al., 2022) or on higher-order motifs (Santoro et al., 2023), which are driven by the network's cyclic structure and its higher-order topology (Millán et al., 2020). For instance, studying symmetry structures, such as automorphisms (Morone and Makse, 2019) and fibrations (Morone et al., 2020), in structural and functional neural connectivity has helped unveil the building blocks of neural synchronization in the brain. Graph neural networks (Battaglia et al., 2018; Bronstein et al., 2021), which combine the benefits of network topology and machine learning, may also help us relate connectomically constrained graphs to the neural dynamics that take place over them.
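As one concrete, assumed instance of this state-graph formalism, the sketch below computes a successor representation, in the spirit of Stachenfeld et al. (2017), for a random walk on a small ring of behavioral states; the ring-shaped task space and discount factor are chosen purely for illustration.

import numpy as np

n_states, gamma = 8, 0.9

# Behavioral states as nodes; transitions of a random walk on a ring as edges
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s - 1) % n_states] = 0.5
    T[s, (s + 1) % n_states] = 0.5

# Successor representation: expected discounted future occupancy of each state
M = np.linalg.inv(np.eye(n_states) - gamma * T)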
To perform the numerous processes it supports, the brain must simultaneously control the activity of individual neurons, the dynamics of individual circuits, and ultimately the full network. This represents an enormously complex control task, as unveiled by recent advances in network controllability that merged the tools of control theory and network science (Liu and Barabási, 2016). These tools help us identify the nodes through which one can control a complex neural circuit, much as a car is controlled through three core mechanisms: the steering wheel, gas pedal, and brake. Recent work used network control to predict the function of individual neurons in the C. elegans connectome, leading to the discovery of new neurons involved in the control of locomotion and offering direct, falsifiable experimental confirmation of control principles (Yan et al., 2017). A control-based description of brain function will require a deeper understanding of the underlying control problems, which in turn demands simultaneous profiling of network structure and dynamics (Tang and Bassett, 2018; Stiso et al., 2019).
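A minimal sketch of this idea, assuming linearized dynamics x' = Ax + Bu on a toy four-neuron cycle and an arbitrary choice of driver nodes, applies the classical Kalman rank test to ask whether the chosen inputs can steer the entire circuit.

import numpy as np

# Toy structural connectivity: a directed four-neuron cycle
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

def controllable(A, driver_nodes):
    """Kalman rank test for inputs applied at the given driver nodes."""
    n = A.shape[0]
    B = np.zeros((n, len(driver_nodes)))
    for k, node in enumerate(driver_nodes):
        B[node, k] = 1.0
    blocks = [np.linalg.matrix_power(A, i) @ B for i in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

print(controllable(A, [0]))      # a single driver node suffices for a directed cycle
print(controllable(A, [0, 2]))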
Finally, machine learning methods have offered a unique approach for linking network structure to task performance (Chami et al., 2020; Veličković, 2023), allowing for rapid profiling of learning and behavior that can later inform how we query biological learning (Marblestone et al., 2016; Vu et al., 2018; Richards et al., 2019). To move forward, we must study the weight statistics of AI architectures that perform well on complex tasks, helping identify powerful subnetworks, or "winning tickets," responsible for most of a system's performance (Frankle and Carbin, 2018). An alternative approach lies in identifying generative processes that produce highly performing networks. This is inspired by innate behaviors: animals arrive into the world with a set of evolution-tested preexisting dynamics, implying an optimized set of developmental processes that yield a fine-tuned functional connectome at birth (Zador, 2019; D. L. Barabási et al., 2023). This process, termed the "genomic bottleneck," has the potential to greatly increase the flexibility and utility of AI systems (Koulakov et al., 2022). Indeed, developmentally inspired encodings of neural network weights have already shown high and stable performance on reinforcement learning, meta-learning, and transfer learning tasks (D. L. Barabási and Czégel, 2021; D. L. Barabási et al., 2022).
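The simplest version of this weight-statistics view can be sketched as magnitude-based pruning of a weight matrix; the random weights and the 20% keep ratio below stand in for a trained layer, and an actual lottery-ticket experiment would further rewind and retrain the masked network.

import numpy as np

rng = np.random.default_rng(6)
weights = rng.standard_normal((256, 128))             # stand-in for one trained weight matrix

keep_fraction = 0.2
threshold = np.quantile(np.abs(weights), 1.0 - keep_fraction)
mask = (np.abs(weights) >= threshold).astype(float)   # candidate "winning ticket" subnetwork

pruned_weights = weights * mask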
Further work in these directions would require streamlined integration of powerful circuits identified in the connectome with machine learning systems. A major barrier lies in the complexity of setting up the tasks that the networks are asked to learn (Seshadhri et al., 2020), which are embedded in complex packages such as the simulated physics environment MuJoCo (Todorov et al., 2012). It is also challenging to supply custom topologies or weights to current machine learning packages, which would be needed to move past the standard feedforward, layered architectures. Addressing these challenges would allow the network science toolkit to define a systematic search of network priors in machine learning, thereby modeling the neuroevolutionary processes and neurodevelopmental solutions responsible for biological intelligence.
Health and disease
Integrating connectivity and genetic data and their dynamic patterns is crucial for understanding the neural transitions from a healthy state to a disease state, particularly in the context of brain disorders, diseases, and mental illnesses, which are often rooted in the early years of life. Large-scale MRI datasets have allowed for the modeling of normative trajectories of brain development (Bethlehem et al., 2022); however, major opportunities remain for network science to reveal the causes and pathophysiology of brain disorders through population analyses.
In addition to the brain's own internal networks, the connections between the brain and other organs robustly affect neural development and function. Complex interactions have been revealed in the gut–brain axis, where microbiota can modulate immune and neural states, as well as in the brain's interaction with the reproductive system, driving intricate fluctuations in levels of sex hormones during puberty, menopause, and pregnancy (Andreano et al., 2018; Pritschet et al., 2020). Overall, the connections between the brain and other organs can have significant effects on neural development and function, highlighting the importance of a holistic exploration of neural networks together with the body (Buzsáki and Tingley, 2023).
Ultimately, to diagnose and treat disease, we must understand the temporally evolving complex interactions between genetic, disease, and drug networks and their impact on the connectome. Toward that goal, network neuroscience must partner with network medicine, which applies network science to subcellular interactions, aiming to diagnose, prevent, and treat diseases (A. L. Barabási et al., 2011). This need is reinforced by studies finding that high-degree hubs, located mainly in dorsolateral prefrontal, lateral and medial temporal, and cingulate areas of the human cortex, are co-located with an enrichment of neurodevelopmental and neurotransmitter-related genes and are implicated in the pathogenesis of schizophrenia (Morgan et al., 2019). Network medicine takes advantage of the structure of subcellular networks, as captured by experimentally mapped protein and noncoding interactions, to identify disease mechanisms, therapeutic targets, drug-repurposing opportunities, and biomarkers. In the case of brain diseases, mutations and other molecular changes that alter the subcellular networks within neurons and non-neuronal cells in turn affect the wiring and rewiring of the connectome and neural dynamics. Hence, effective interventions and treatments for brain disorders must confront the double network problem, accounting for the impact of changes in the subcellular network on connectivity and ultimately brain function.
Discussion
In conclusion, major funding directives, such as the public-private funding alliance of the U.S. BRAIN Initiative, have significantly advanced the development of technologies for studying the brain across temporal and spatial scales and measurement modalities. Yet the massive amount of data produced and expected to emerge from these tools has created a complexity bottleneck. We need guiding frameworks to organize and conceptualize these data, leading to falsifiable hypotheses. Network science offers a natural match for this task, with the potential to integrate complexity across the cell identities, signaling types, dynamics, and spatial and energetic constraints that shape brain development, function, and disease.
We need the joint engagement of network scientists and neuroscientists to develop novel network-based methods that address the unique priorities and challenges posed by brain research. Such approaches will need to account for the dynamic nature of connections in the brain, which are continually changing as a result of factors such as experience, aging, and disease, as well as for incomplete or uncertain reconstructions of brain connectivity. Continued advances in neuroscience have opened exciting possibilities for a deeper understanding of the brain and its function, and they now require input from network science to fully capture the dynamics of this complex system, with the goal of revealing how neural identity, dynamics, behavior, and disease are linked.
These methodological advances can run parallel to ever-increasing efforts toward adoption of open-science practices, such as data and code sharing. Such efforts bring new challenges related to reproducibility and have, in some cases, revealed findings that fail to replicate (Open Science Collaboration, 2015; Errington et al., 2021) or that exhibit substantial variability attributable to the choice of software (Bowring et al., 2019; Botvinik-Nezer et al., 2020) or analysis team (Botvinik-Nezer et al., 2020). As a discipline, neuroimaging has championed open-science initiatives, promoting practices including detailed methodological descriptions, sharing of the data and code used to generate results in a publication (Nichols et al., 2016), and even multiverse analyses that consider all plausible analytical variations (Dafflon et al., 2022).
To achieve these goals, there is a need to facilitate greater interaction between the network science and neuroscience communities. A well-tested approach is for public and private organizations, such as The Kavli Foundation, the National Institutes of Health, and the National Science Foundation, to offer interdisciplinary grants focused on developing network tools for emerging neuroscience technologies and questions, as well as support for students and postdoctoral fellows with interests in both disciplines. These grants could also support workshops and conferences that bring together researchers from both fields, and provide funding for coursework in network neuroscience at the undergraduate and graduate levels. Actively fostering collaboration between these two fields will encourage the adaptation of novel network approaches to understanding biological data, a necessary step toward advancing our understanding of the brain in health and disease.
Footnotes
G.B. was supported by the Turing-Roche partnership and Royal Society IEC\NSFC\191147. H.M. was supported by National Institute of Biomedical Imaging and Bioengineering and National Institute of Mental Health through the National Institutes of Health BRAIN Initiative Grant R01EB028157. E.K.T. was supported by Government of Canada's New Frontiers in Research Fund NFRFE-2021-00420, and the Natural Sciences and Engineering Research Council of Canada reference number RGPIN-2021-02949. H.Z. was supported by National Institutes of Health BRAIN Initiative Grant U19MH114830. We thank The Kavli Foundation for organizing and supporting a convening in October 2022, “Network Science Meets Neuroscience,” which inspired this review; and Daria Koshkina for designing figures.
The authors declare no competing financial interests.
Correspondence should be addressed to Dániel L. Barabási at danielbarabasi@gmail.com or György Buzsáki at Gyorgy.Buzsaki@nyulangone.org