Abstract
With the advent of volumetric EM techniques, large connectomic datasets are being created, providing neuroscience researchers with knowledge about the full connectivity of neural circuits under study. This allows for numerical simulation of detailed, biophysical models of each neuron participating in the circuit. However, these models typically include a large number of parameters, and insight into which of these are essential for circuit function is not readily obtained. Here, we review two mathematical strategies for gaining insight into connectomics data: linear dynamical systems analysis and matrix reordering techniques. Such analytical treatment can allow us to make predictions about time constants of information processing and functional subunits in large networks.
SIGNIFICANCE STATEMENT This viewpoint provides a concise overview of how to extract important insights from connectomics data by mathematical methods. First, it explains how new dynamics and new time constants can evolve simply through connectivity between neurons. These new time constants can be far longer than the intrinsic membrane time constants of the individual neurons. Second, it summarizes how structural motifs in the circuit can be discovered. Specifically, there are tools to decide whether a circuit is strictly feedforward or whether feedback connections exist. Only by reordering connectivity matrices can such motifs be made visible.
Introduction
The quest for understanding the function and dysfunction of organs, including the brain, has been classically subdivided into two disciplines, anatomy and physiology, which examine, respectively, structural and functional properties from separate but complementary perspectives. Of course, this historical dichotomy was never absolute as probably best evidenced by the neurobiological pioneer Santiago Ramon y Cajal (1911), who always studied anatomic structures in the light of current functional insights. Modern developments of functional imaging techniques (Kim and Schnitzer, 2022) even make such disciplinary and methodological distinctions virtually impossible. Nevertheless, our limited success in understanding brain function has often been excused by technological shortcomings that only allow studying reduced systems, preventing us from seeing the whole anatomic and physiological complexity at the same time. With the advent of connectomics (Denk and Horstmann, 2004; Lichtman and Denk, 2011; Abbott et al., 2020), these limitations are being overcome, and the anatomic connections of large brain regions and complete functional subcircuits (White et al., 1986; Briggman et al., 2011; Helmstaedter et al., 2013; Takemura et al., 2017; Shinomiya et al., 2019) have been revealed. At the same time, we witness simultaneous optical (Ghosh et al., 2011; Aimon et al., 2019; Kim and Schnitzer, 2022) and electrical (Buzsaki et al., 2015; Jun et al., 2017; Steinmetz et al., 2019; Allen et al., 2019; Gardner et al., 2022) recordings of many hundreds of neurons. Yet these spectacularly valuable datasets have not readily uncovered the remaining mysteries of neural function, but instead clearly revealed the next boundaries we need to cross: They are theoretical, data analytical, and mathematical.
Although we are far from well equipped to master all challenges imposed on us by new connectomic data and large-scale single-unit recordings, we do not need to start from scratch. Computational neuroscientists have explored tools to study activity in interacting neural networks over many decades (Herz et al., 2006) and have established a set of standard mathematical tools by which such data can be viewed (Dayan and Abbott, 2005). Since the methods encompass large-scale simulation techniques (Markram et al., 2015; Lazar et al., 2021), mean-field approaches (Nakagawa et al., 2013; Gerstner, 2000), and mathematical and graph-theoretical (Sporns et al., 2000; Rajan and Abbott, 2006) techniques, no single review is able to treat all of them with due respect.
In this review, we will place our main focus on dynamical systems approaches for linear networks, since they combine a number of advantages that make them attractive as a standard tool set for systems neuroscientists. First, they can be clearly related to underlying anatomic and physiological elements (see below). Second, they have been demonstrated to successfully explain physiological phenomena (Seung et al., 2000a,b; Usher and McClelland, 2001; Goldman et al., 2002; White et al., 2004). Third, the simplifications made in these models are obvious, and numerical simulations of a nonsimplified (i.e., nonlinear) system can be readily compared with the theoretical predictions. And finally, there exist established mathematical tools from a standard repertoire that is taught to physicists, computer scientists, engineers, chemists, and, to an increasing extent, biologists as well.
Model design
For better illustration, we introduce mathematical modeling for the small, but well-studied neural circuit of fly motion processing (Fig. 1A). At its core, it is composed of 15 genetically, anatomically, and physiologically characterized neurons that are all located within each of the 750 columns of the Drosophila optic lobe. The output neurons of the circuit, T4 and T5, are the primary motion-sensitive neurons of the fly, which compute the direction of moving ON (T4) and OFF (T5) edges along one of the four cardinal directions (Maisak et al., 2013; for review, see Borst et al., 2020a). We also know the type and response characteristics of the retinal input (R) that drives the circuit. All connections between the various circuit elements have been thoroughly quantified by volumetric EM studies (Shinomiya et al., 2019) and are summarized in a 16 × 16 matrix (Fig. 1B). The rows of this matrix collect all inputs to a specific cell, the columns all outputs from a specific cell. Since the retinal input cells do not receive input from the rest of the circuit, the first row is empty. The retinal cells are the only ones that receive external visual input, and thus the only ones that receive a contribution from the input vector I(t).
Network architecture, connectivity matrix, and general layout of a recurrent network. A, Circuit diagram of the fly motion vision system. Retinal input (gray, only one is shown) arrives at laminar neurons (green), which project to medulla (red) and lobula (blue) neurons. From there, activity reaches the T4 (dark red) and T5 (dark blue) cells. In this system, the action of the neurotransmitters is known: arrows indicate excitation; discs indicate inhibition. B, The number of synapses between neuron types is summarized in a connectivity matrix. C, If one interprets the number of synapses between each pair of cells as the total synaptic strength of this connection, the matrix in B defines a recurrent neural network in which inputs from all presynaptic cells are collected at “dendrites” (horizontal bars). The retinal input I(t) provides additional external drive to each neuron. A, B, Reproduced from Borst et al. (2020b), with permission.
The dynamics of the membrane potentials $V_i(t)$ relative to the resting potential of each neuron can then be computed by the following differential equation:

$$\tau \frac{dV_i}{dt} = -V_i + \sum_j M_{ij}\, f(V_j) + I_i(t) \tag{1}$$
Here, we assume passive membrane properties only (i.e., without the participation of any voltage-activated ion channels). If we compare Equation 1 to that of a first-order low-pass filter,

$$\tau \frac{dV}{dt} = -V + I(t), \tag{2}$$

we see that each neuron integrates the sum of its recurrent input $\sum_j M_{ij} f(V_j)$ and its external input $I_i(t)$ like a passive membrane with time constant $\tau$.
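As a minimal illustration of Equation 1, the following sketch (Python/NumPy; all weights and parameters are arbitrary toy values, not fly data) integrates the system with the forward Euler method:

```python
# Toy example: forward-Euler integration of Equation 1,
# tau * dV_i/dt = -V_i + sum_j M_ij f(V_j) + I_i(t).
# Convention as in Fig. 1B: row i of M collects the inputs to neuron i.
import numpy as np

def simulate(M, I_of_t, tau=0.02, dt=1e-4, t_max=1.5, f=np.tanh):
    n_steps = int(t_max / dt)
    V = np.zeros(M.shape[0])                 # start at resting potential
    trace = np.empty((n_steps, M.shape[0]))
    for k in range(n_steps):
        dV = (-V + M @ f(V) + I_of_t(k * dt)) / tau
        V = V + dt * dV
        trace[k] = V
    return trace

# Two neurons: feedforward weight a = 1, feedback weight b = 0.5 (cf. Fig. 3)
M = np.array([[0.0, 0.5],
              [1.0, 0.0]])
pulse = lambda t: np.array([1.0, 0.0]) if t < 1.0 else np.zeros(2)
trace = simulate(M, pulse)
```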
Before going into further mathematical detail, we should consider the general purpose of such connectomics-based network modeling. While in an ideal experimental world the connectivity matrix M would be fully and accurately determined, allowing for quantitative predictions of the system response to any time-dependent stimulus I(t), real connectomic data mostly provide only the number of connections or may only include part of the circuit elements. Even if the respective neurotransmitters are known, the real connection strength in terms of the postsynaptic potential amplitude, and particularly the time course, is mostly a matter of guessing, although synapse sizes are becoming increasingly available from a number of EM datasets, which allows inference about connection strength for different cell types (Motta et al., 2019). Furthermore, synaptic plasticity involving, for example, changes in presynaptic release dynamics or neuromodulation, as well as long-range synaptic inputs from nonimaged brain regions, is entirely out of scope. Thus, the matrix M is never fully experimentally constrained, which is exactly the reason why dynamical systems modeling is so crucial: It allows us to set further constraints on the matrix elements of M (the synaptic connection strengths) as well as on the parameters determining single-cell dynamics, by comparing between model outcomes and measurements. There are two general strategies for making this comparison: biophysical dynamical systems approaches and regression-based approaches from machine learning. In Linear models, we will treat biophysical approaches; in Matrix exegesis, we will provide a brief introduction to nondynamical graph-theoretical approaches; and regression-based approaches are briefly referred to in the Discussion.
Linear models
Linear approximations of dynamical systems allow for in-depth mathematical treatment. Moreover, unbiased information processing requires a linear regime (e.g., Atick, 1992), since only in such regimes are changes in the input always translated into proportional changes in the output; there is no information transfer if the membrane voltage Vj is below threshold or so large that it consistently evokes maximal response. As a note of caution, we would like to stress that linearization is a coarse approximation, which, of course, introduces inaccuracies and errors. One cannot expect quantitative agreement between a linear model and the true biological circuit dynamics. However, in addition to the advantage of analytical tractability we will exploit next, linearization allows for a reduction of parameters, combining multiple physiological properties into one effective weight, which greatly increases the feasibility of parameter fitting. Matrix entries of M thus not only reflect synaptic efficacy, but also the slope of the f-I curve of the input neuron, which depends on multiple morphological parameters as well as cellular conductances and their current modulatory states. Linear models thus always reflect an approximation to the current regime of operation of the circuit. Conversely, the real biophysical circuit may generally correspond to multiple linear models (for threshold linear neurons, see Curto et al., 2019), depending on its mode of operation. Linear models can hence make qualitative predictions about which regimes are possible in a circuit and, together with the connectivity matrix, constrain the contribution of individual neurons for each of those regimes.
Formally, the linear counterpart of the dynamical system from Equation 1 is obtained by setting $f(V) = V$. Since not all neurons possess the same membrane time constant $\tau$, one may straightforwardly generalize the dynamical equation to multiple time constants by replacing $\tau$ in Equation 1 by a diagonal matrix $T$ with the different time constants on the diagonal. This then leads to a system of linear differential equations in matrix form as follows:

$$T \frac{dV}{dt} = -V + MV + I(t) \tag{3}$$
Introducing the dynamical matrix $A = T^{-1}(M - \mathbb{1})$, with $\mathbb{1}$ denoting the identity matrix, Equation 3 can be written compactly as follows:

$$\frac{dV}{dt} = AV + T^{-1}I(t) \tag{4}$$
Analytical solution and eigen decomposition
The benefit of linearity in dynamical system analysis is that closed-form solutions can be found. For stationary systems (i.e., all synaptic weights are constant), these solutions are solely determined by the eigenvalues $\lambda_k$ and eigenvectors $e_k$ of the dynamical matrix $A$ (Fig. 2). In the absence of input, every solution is a superposition of exponentially evolving eigenmodes,

$$V(t) = \sum_k c_k\, e^{\lambda_k t}\, e_k,$$

with coefficients $c_k$ determined by the initial condition $V(0) = \sum_k c_k e_k$. Modes whose eigenvalues have negative real part decay; an eigenvalue with positive real part renders the system unstable.
Brief introduction to linear algebra. A, Matrix-vector multiplication. The result y (green) of multiplying a matrix M with a vector x (red) is a linear combination of the columns of the matrix M. B, The difference vectors y – x (black) provide a graphical interpretation of the action of a matrix, which is illustrated for example matrices, including a rotational matrix and an isotropic diagonal matrix. Vectors (red) for which the result y does not change direction compared with x are called eigenvectors. A 2 × 2 matrix has at most 2 linearly independent eigenvectors. The constant λ, which determines the multiplicative scaling of the output y, is called an eigenvalue. Eigenvalues are calculated by solving the equation $M e = \lambda e$, or equivalently the characteristic equation $\det(M - \lambda \mathbb{1}) = 0$.
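Numerically, the eigen decomposition is a single function call. The following sketch (anticipating the two-neuron feedback circuit of Fig. 3, with parameter values from its caption) computes eigenvalues and eigenvectors of the dynamical matrix A and evaluates the input-free solution:

```python
# Sketch: eigen decomposition of A = T^{-1}(M - 1) and the input-free
# solution V(t) = sum_k c_k exp(lambda_k t) e_k. Parameters from Fig. 3.
import numpy as np

tau1, tau2, a, b = 0.1, 0.2, 1.0, 0.5
M = np.array([[0.0, b],               # row i collects inputs to neuron i
              [a,   0.0]])
A = np.diag([1/tau1, 1/tau2]) @ (M - np.eye(2))

lam, E = np.linalg.eig(A)             # eigenvalues lam[k], eigenvectors E[:, k]
print("network time constants [s]:", -1/lam.real)

V0 = np.array([1.0, 0.3])             # some initial condition
c = np.linalg.solve(E, V0)            # expansion coefficients: V0 = E @ c
V = lambda t: ((E * np.exp(lam * t)) @ c).real
print("V(0.2 s) =", V(0.2))
```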
In the more relevant and interesting case, the network is driven by a nonzero input I(t). In the example of the fly visual system, this would correspond to the situation when the fly is visually stimulated and the photoreceptors thus become active. Then, a closed-form solution can also be found using the previously introduced elements as follows:

$$V(t) = \int_0^{t} e^{A(t - t')}\, T^{-1} I(t')\, dt' \tag{8}$$
This equation reflects a convolution of the input $T^{-1}I(t)$ with the impulse response matrix $e^{At}$, whose entries are weighted sums of the exponentials $e^{\lambda_k t}$ defined by the eigenmodes of $A$.
The solution of Equation 8 is exemplified in Figure 3A for a simple neural circuit that consists of two neurons connected in sequence with a feedback synapse of strength b. The circuit is stimulated via a 1-s-long depolarizing pulse to neuron 1. The voltage propagates from neuron 1 to neuron 2, as indicated by the delay. In the phase plane (V1, V2) (Fig. 3B), the initial stimulus drives the voltage to some state in the top right quadrant. After the stimulus is shut off, the trajectory follows the vector field representation of the matrix and converges back to the origin along the direction of one of the eigenvectors (red). The flow along both eigenvectors (red and blue) points toward the steady state, rendering the circuit stable.
Time course of voltage signals and phase space of a 2-neuron circuit. A, Analytical solutions of the membrane potentials of two neurons (light and dark gray) in a feedback circuit in response to a 1 s constant input to neuron 1. The dynamical matrix A includes both neuronal (τ1, τ2) and connectivity (a, b) parameters. Solutions were obtained for a = 1, b = 0.5, τ1 = 0.1 s, τ2 = 0.2 s. B, Graphical representation of the matrix from A (arrows) and its eigenvectors (red, blue). The trajectory V2 versus V1 from A (colored dots) follows the vectors representing the matrix after the stimulus is switched off (1 s) and converges to the direction of the red eigenvector.
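The closed-form solution of Equation 8 can be evaluated directly with the matrix exponential; the sketch below does this for the circuit of Figure 3, assuming a piecewise-constant input (an illustration, not the authors' original code):

```python
# Sketch of Equation 8 for the circuit of Fig. 3 (a = 1, b = 0.5,
# tau1 = 0.1 s, tau2 = 0.2 s): exact response to a 1 s constant input
# to neuron 1, using the matrix exponential instead of numerical integration.
import numpy as np
from scipy.linalg import expm

tau1, tau2, a, b = 0.1, 0.2, 1.0, 0.5
T_inv = np.diag([1/tau1, 1/tau2])
A = T_inv @ (np.array([[0.0, b], [a, 0.0]]) - np.eye(2))
u = T_inv @ np.array([1.0, 0.0])           # constant drive to neuron 1
A_inv = np.linalg.inv(A)

def V(t):
    if t <= 1.0:                           # during the pulse (V(0) = 0)
        return A_inv @ (expm(A * t) - np.eye(2)) @ u
    return expm(A * (t - 1.0)) @ V(1.0)    # free decay after pulse offset

print(V(0.5), V(1.5))
```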
Eigenvalues as inverse time constants
The analytical solution in Equation 8 not only allows us to make qualitative statements about the global system behavior, but in some simple cases, it also allows us to derive formulas for the eigenvalues (Fig. 4) and thereby discuss the dependence of global circuit behavior on the connectivity parameters. In the model circuit from Figure 3, we vary the strength b of the feedback synapse while keeping a = 1 and find that increased feedback prolongs the decay constant of the pulse response. In networks without feedback, the eigenvalues are always the negative inverse cellular time constants ($\lambda_k = -1/\tau_k$). Feedback generally generates new time constants and thereby allows for signal processing at time scales that go beyond those of the isolated elements. This is reflected by the fact that the eigenvalues now depend not only on the cellular time constants, but also on the connectivity parameters. The time scale of activity decay in the network may substantially exceed the single-neuron time scales, extending over seconds to minutes, and may thereby provide a substrate for working memory (Seung, 1996, 2003). In the extreme case of b = 1, that is, if the feedback connection is as strong as the feedforward connection, one eigenvalue becomes 0; thus, the corresponding time constant goes to infinity (Fig. 4, blue curve). This case corresponds to a perfect integration of the input signal, as argued for the oculomotor system (Seung et al., 2000a; Goldman et al., 2002).
Feedback prolongs time constants. Responses of the same circuit as in Figure 3 to a 1 s constant input to neuron 1 for varying feedback strength b. The eigenvalues $\lambda_\pm = -\tfrac{1}{2}\left(1/\tau_1 + 1/\tau_2\right) \pm \tfrac{1}{2}\sqrt{\left(1/\tau_1 - 1/\tau_2\right)^2 + 4ab/(\tau_1\tau_2)}$ depend on the connectivity parameters; as b approaches 1, the larger eigenvalue approaches 0 and the corresponding time constant diverges (blue curve).
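Because the network time constants are the negative inverse real parts of the eigenvalues, the effect of feedback strength can be read out directly from the spectrum, as in this sketch (same toy parameters as above):

```python
# Sketch: network time constants as a function of feedback strength b
# (circuit of Fig. 3, a = 1). At b = 1, one eigenvalue reaches 0 and the
# corresponding time constant diverges (perfect integrator).
import numpy as np

tau1, tau2, a = 0.1, 0.2, 1.0
for b in [0.0, 0.5, 0.9, 0.99, 1.0]:
    M = np.array([[0.0, b], [a, 0.0]])
    A = np.diag([1/tau1, 1/tau2]) @ (M - np.eye(2))
    slowest = np.max(np.linalg.eigvals(A).real)     # least negative eigenvalue
    tau_eff = np.inf if slowest >= 0 else -1.0 / slowest
    print(f"b = {b:4.2f}: slowest eigenvalue = {slowest:+7.3f}, "
          f"longest time constant = {tau_eff:.3f} s")
```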
Eigenvalues determine the input gain of network modes
Eigenvalues of A not only determine the time constants by which activity patterns rise and decay, they also define an "input resistance" (i.e., gain constant) of the pattern, equivalent to the relation between membrane time constant and input resistance of nerve membranes. For a constant input current (vector) I, the steady-state solution from Equation 4 can be expressed in eigenmodes as follows:

$$V_\infty = -A^{-1} T^{-1} I = -\sum_k \frac{c_k}{\lambda_k}\, e_k, \qquad \text{with} \quad T^{-1} I = \sum_k c_k\, e_k.$$

Each eigenmode is thus amplified with a gain of $-1/\lambda_k$: the slower the mode (i.e., the closer $\lambda_k$ lies to 0), the larger its steady-state amplitude.
Eigenvalues of the dynamical matrix thus allow us to translate intuitions from single-cell electrophysiology to the realm of activity patterns in networks.
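A numerical sanity check of this eigenmode gain for the toy circuit used above (a sketch; the decomposition follows directly from the definitions):

```python
# Sketch: steady-state amplitudes of eigenmodes. For constant input I,
# V_inf = -A^{-1} T^{-1} I; each mode k is amplified by the gain -1/lambda_k.
import numpy as np

tau1, tau2, a, b = 0.1, 0.2, 1.0, 0.5
T_inv = np.diag([1/tau1, 1/tau2])
A = T_inv @ (np.array([[0.0, b], [a, 0.0]]) - np.eye(2))

I = np.array([1.0, 0.0])                # constant drive to neuron 1
lam, E = np.linalg.eig(A)
c = np.linalg.solve(E, T_inv @ I)       # T^{-1} I = sum_k c_k e_k
V_inf = E @ (-c / lam)                  # each mode scaled by -1/lambda_k
assert np.allclose(V_inf, -np.linalg.inv(A) @ T_inv @ I)
print("steady state:", V_inf.real, " mode gains:", (-1/lam).real)
```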
Matrix exegesis
We have argued that feedback has profound effects on circuit computation, even for linear models. It would, therefore, be helpful to readily identify feedback from a circuit diagram (connectome) without simulations or analytical assessment of network dynamics. How would the connectivity matrix of a purely feedforward circuit differ from that of a circuit with feedback? In a purely feedforward network, the first neuron does not receive input from any of the following neurons; that is, the top row contains 0's only (Fig. 5, top right). The second neuron receives input from the first one only, so in row 2, there is only one entry, in column 1. The third neuron receives input from, at most, neurons one and two, so the only possible entries in row 3 are in columns 1 and 2. Thus, in general, in a purely feedforward circuit, the output of any neuron n has no effect on its own activity and only influences neurons m > n. This is reflected in its connectivity matrix M, which has entries only below the diagonal. Such a matrix is said to be "lower triangular." Unfortunately, however, neurons are usually numbered in an arbitrary way, so the connectivity matrix is scrambled. To see whether a circuit is purely feedforward or not, its elements need to be reordered. For systems without feedback, there always exists a reordering such that the matrix M has lower-triangular form (Fig. 5, top row).
Matrix reordering. The numbering of neurons in a circuit is arbitrary. Random ordering (left column) does not readily reveal feedforward and feedback structure in the connectivity matrix. It can, however, be revealed by appropriate reordering (right column).
True feedback connections are immune to such reordering of the connectivity matrix (Fig. 5, bottom row) and cannot be brought into lower-triangular form. A simple criterion to check whether a connectome is feedforward and the connectivity matrix M can be reordered into lower-triangular form is to test for nilpotence of M: whenever there exists a finite integer power $a$ such that $M^a = 0$, the connectome is strictly feedforward. Physiologically, the condition $M^a = 0$ reflects the fact that any activity in a purely feedforward network inevitably dies out after a finite number of time steps. Recurrent loops, however, can sustain network activity indefinitely, if the synapses are strong enough.
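In code, nilpotence can be tested directly; by the Cayley-Hamilton theorem, an N × N matrix is nilpotent exactly if its Nth power vanishes. The sketch below works on the Boolean connection pattern to avoid overflow from large synapse counts:

```python
# Sketch: a connectome is strictly feedforward iff its connectivity
# matrix M is nilpotent, i.e., M^a = 0 for some finite a <= N.
import numpy as np

def is_strictly_feedforward(M):
    Adj = (M != 0).astype(int)            # Boolean connection pattern
    P = Adj.copy()
    for _ in range(M.shape[0] - 1):
        P = np.clip(P @ Adj, 0, 1)        # keep entries 0/1 to avoid overflow
    return not P.any()                    # Adj^N == 0 ?

chain = np.array([[0, 0, 0],              # 1 -> 2 -> 3, purely feedforward
                  [7, 0, 0],              # entries = synapse counts
                  [0, 3, 0]])
loop = chain.copy(); loop[0, 2] = 2       # add a feedback synapse 3 -> 1
print(is_strictly_feedforward(chain), is_strictly_feedforward(loop))  # True False
```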
Evidently, connectomes are never strictly feedforward; however, approximate feedforward organization has been reported for subcircuits (e.g., Borst et al., 2020a), and even for some cell classes in the whole Caenorhabditis elegans connectome (Durbin, 1987). Approximate feedforwardness also seems reasonable given that, on a large scale, an animal needs to select its motor output depending on sensory input. Finding feedforward, or "almost" feedforward, motifs in the connectivity matrix can hence reveal important insights into the local flow of information.
While for small circuits with few neurons (e.g., Fig. 5), reordering into lower-triangular form can be done by visual inspection and trial-and-error, such practices are intractable for larger connectomes: here, numerical tools are clearly required. A classical method from linear algebra that has been successfully applied to neural circuit connectivity (Goldman, 2009) is the Schur decomposition. This method uses the fact that, for any square matrix A, there exists a unitary basis transformation matrix U (where $U^{-1} = U^\dagger$, the conjugate transpose) such that $U^\dagger A U$ is upper triangular, with the eigenvalues of A on its diagonal. In this transformed basis, the strictly triangular part describes purely feedforward interactions.
While Schur decomposition identifies feedforward motifs in networks, it has the downside that it only does so in a transformed neuron space; that is, it comes along with a transformation from activities of single neurons to activities of neuronal populations $\tilde{V} = U^\dagger V$ (linear combinations of neurons), which are generally difficult to interpret biologically.
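In SciPy, the Schur decomposition is available as scipy.linalg.schur; a small sketch with a toy dynamical matrix:

```python
# Sketch: Schur decomposition A = U S U^T (real case: U orthogonal,
# S quasi-upper-triangular with the eigenvalues of A on its diagonal).
# The strictly triangular part of S describes feedforward interactions
# between the transformed "population" coordinates V_tilde = U^T V.
import numpy as np
from scipy.linalg import schur

A = np.diag([10.0, 5.0]) @ (np.array([[0.0, 0.25], [1.0, 0.0]]) - np.eye(2))
S, U = schur(A, output='real')
print(np.round(S, 3))                       # triangular, eigenvalues on diagonal
print(np.allclose(U @ S @ U.T, A))          # True
```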
A more direct interpretation in terms of single neurons, however, can be achieved by classical matrix reordering techniques (Robinson, 1951; for review, see Liiv, 2010; Behrisch et al., 2016), which determine index permutations such that the shape of the matrix optimizes some objective. In general, both row and column indices can be reordered independently, but for connectomic analyses, only algorithms that use the same permutation for columns and rows (presynaptic and postsynaptic neurons) are meaningful. A standard objective used to find optimal permutations is bandwidth minimization (Leung et al., 1984), that is, keeping all matrix elements in as few bands as possible close to the diagonal. Such minimal bandwidth matrices readily allow translation into circuit schemes. Classical solutions are based on the Cuthill-McKee algorithm (Cuthill and McKee, 1969), improved versions of which (George, 1971; Gibbs et al., 1976) are available in standard numerics packages such as SciPy (Virtanen et al., 2020) (Fig. 6A, top middle).
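SciPy's implementation, scipy.sparse.csgraph.reverse_cuthill_mckee, can be applied to connectomes as long as one common permutation is used for rows and columns, as in this sketch (toy random matrix):

```python
# Sketch: bandwidth-minimizing reordering with reverse Cuthill-McKee.
# The same permutation is applied to rows and columns so that pre- and
# postsynaptic indices remain consistent.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(0)
M = (rng.random((20, 20)) < 0.1).astype(int)     # sparse toy "connectome"
perm = reverse_cuthill_mckee(csr_matrix(M), symmetric_mode=False)
M_reordered = M[np.ix_(perm, perm)]              # permute rows and columns alike
```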
Matrix reordering methods. A, Left, Example of a matrix implementing a single feedforward sequence, which is distorted by a few random feedback and out-of-sequence feedforward components (top). Random index shuffling yields a scrambled version of the original matrix (bottom). Middle, Outcome of matrix reordering algorithms applied to the scrambled matrix based on bandwidth minimization (top, reverse Cuthill-McKee) and propagating ranks (bottom). Right, Outcome of matrix reordering using the Python package tarjan (top), and a custom-modified version minimizing the upper-triangular elements in loops (bottom). B, Application of the modified Tarjan procedure to a 66 × 66 random matrix (sparseness 0.15), the full connectome of the fly optic lobe (red represents hyperpolarizing connections), and a thresholded version of the optic lobe connectome considering only connections with >10 synapses. C, Modified Tarjan reordering of the fly subcircuit for ON-motion detection (top) and a reduced version with a threshold at 20 synapses (bottom). Saturation of colors (green represents excitatory; red represents inhibitory) illustrates synapse numbers (black). D, Top, Original connectome derived from chemical synapses of the entire hermaphrodite worm C. elegans (454 nodes) (Cook et al., 2019), reordered with the original (middle) and modified (right) Tarjan algorithm. Bottom, Same as in top, but taking into account only connections with 4 or more synapses (independent of neurotransmitter).
Although the problem is solved in principle and has been addressed in a neuroscience context (Carmel et al., 2004; Seung, 2009; Varshney et al., 2011), the algorithms are numerically costly (Monien and Sudborough, 1985) and hardly feasible for large connectomic datasets. Therefore, several new approaches are actively being developed, including heuristics (Wang et al., 2014), kernel-based methods (Bollen et al., 2018), and deep neural networks (Watanabe and Suzuki, 2021), usually for independent column and row index permutations.
"Feedforwardness," however, not only requires small bandwidth of the connectivity matrix, but also a serial arrangement of connections. The latter point is illustrated by the page-rank algorithm (Page et al., 1999) (Fig. 6A, bottom middle). This algorithm is used in Internet searches to quantify the importance of websites, with the most important one receiving the last index. While page-rank orderings are computationally highly efficient and also tend to reduce upper-triangle entries, they are not good at sorting feedforward dependencies among the less important (earlier) nodes.
A family of algorithms that both detect feedforwardness and reduce upper-triangle entries are flow-graph approaches. Developed in the 1970s for analyzing the command flow of computer programs to prove theorems of program termination, these approaches are based on a "depth-first search" (considered within the class of "topological sorting" algorithms) that decomposes a graph into its loops (strongly connected components) (Tarjan, 1972, 1974) (Fig. 6A, right). A flow graph-based algorithm even finds feedforward structures in random graphs (Fig. 6B, left) and also identifies sequential structure in the full connectome of the fly optic lobe (66 nodes) (Shinomiya et al., 2019) (Fig. 6B, middle). The detected order, however, still provides no immediate recipe for functional interpretation.
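The following sketch emulates such a flow-graph reordering with standard SciPy tools: strongly connected components (computed internally by a Tarjan-type algorithm) collapse the loops, and the loop-free condensation is then sorted topologically. This is a simplified stand-in for the procedure used in Figure 6, not the original implementation; its threshold argument anticipates the synapse-count thresholding discussed below:

```python
# Sketch: feedforward ordering by collapsing loops (strongly connected
# components) and topologically sorting the loop-free condensation.
# Convention: M[i, j] = number of synapses from neuron j onto neuron i.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def feedforward_order(M, threshold=0):
    W = M > threshold                                  # optional thresholding
    n_comp, label = connected_components(csr_matrix(W), directed=True,
                                         connection='strong')
    # condensation: component of presynaptic j must precede that of i
    succ = [set() for _ in range(n_comp)]
    in_deg = np.zeros(n_comp, dtype=int)
    for i, j in zip(*np.nonzero(W)):                   # edge j -> i
        if label[j] != label[i] and label[i] not in succ[label[j]]:
            succ[label[j]].add(label[i])
            in_deg[label[i]] += 1
    # Kahn's algorithm: repeatedly emit components without remaining inputs
    ready = [c for c in range(n_comp) if in_deg[c] == 0]
    rank, r = np.zeros(n_comp, dtype=int), 0
    while ready:
        c = ready.pop()
        rank[c] = r; r += 1
        for d in succ[c]:
            in_deg[d] -= 1
            if in_deg[d] == 0:
                ready.append(d)
    return np.argsort(rank[label], kind='stable')      # neuron permutation

# perm = feedforward_order(M, threshold=10)
# M_ordered = M[np.ix_(perm, perm)]   # lower triangular if loop-free
```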
A simple strategy for extracting physiological insight from matrix reordering is to set a threshold on the synapse numbers. Only considering connections above that threshold effectively disregards weaker routes of information flow. Even a moderately low threshold of 10 synapses gives rise to a mostly feedforward matrix for the full connectome of the fly optic lobe (Fig. 6B, right), which allows identification of circuit modules that can be studied independently of the rest of the network. This idea is illustrated by the ON-motion subcircuit (Takemura et al., 2017), which comprises those cells that provide input to T4 cells. Figure 6C depicts two versions of this subcircuit: one including all connections (top) and one including only connections with >20 synapses (bottom). The reduced matrix identifies a subnetwork of recurrently connected Mi4 and Mi9 cells that drives no neurons other than T4 cells and receives input from L3 and L5 only. The thresholded matrix also identifies one large feedback loop via L1. Application of the Tarjan-derived reordering to the whole worm connectome (Cook et al., 2019) (Fig. 6D) yields consistent conclusions, in that thresholded connectivity is largely feedforward (originally observed in Durbin, 1987). Functional analyses based on such identified submodules can, of course, be checked post hoc with simulations or eigenvalue analysis of the full connectome.
Of course, thresholding also changes the optimal sequence order of the neurons, but we would argue that stronger connections should also be assigned a larger weight in matrix reordering.
The practical feasibility of matrix reordering techniques strongly depends on how the algorithm scales with matrix size and thus on the required computing resources. In the case of the original Tarjan algorithm (Tarjan, 1972, 1974), computing time was shown to scale linearly with the number N of nodes and the number S of synapses (edges), that is, as O(N + S). A brute-force approach (BFA), by contrast, which searches over index permutations to minimize the number of entries in the upper triangle, scales combinatorially with node number and quickly becomes prohibitive (Fig. 7).
Scaling of reordering algorithms. Top, Example of a random binary 60 × 60 matrix M with half of the edges filled in the lower and a quarter of the edges filled in the upper triangle (and no self-connections). After random index permutation (scrambled), we applied three reordering algorithms: the two graph-based methods from Figure 6A and a brute-force approach (BFA) minimizing the entries in the upper triangle as explained in the text. The performance of the reordering is quantified by three measures. The excess ratio in the upper triangle (top) measures the increase of entries in the upper triangle relative to the original matrix. The order index measures the fraction of filled entries in the first subdiagonal of the lower triangle and reveals the extent to which the algorithm has identified a closed path from a first to a last node. Finally, the CPU time used on a standard PC indicates the applicability of the algorithm to large connectomes. Bottom, The three performance measures as a function of node number (colors as on top). Solid lines indicate the median, shaded areas the 90th percentile, obtained from 15 repetitions. Dashed black line on the right indicates the theoretical scaling law.
Apart from CPU time, quantitative comparisons of algorithms require the definition of quality measures for their outcome. In Figure 7, we choose two measures to compare different algorithms: the first measure ("recurrency") is defined by the increase of entries in the upper triangle of the matrix ordered by the respective algorithm relative to the original matrix; the second measure ("feedforwardness") is defined as the (normalized) number of matrix entries in the first subdiagonal. Optimizing for one does not necessarily optimize the other (Behrisch et al., 2020). Large feedforwardness, hence, may reflect a graph of many feedforwardly ordered loops, whereas low recurrency reduces feedback but does not necessarily identify the optimal order of information flow.
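Both measures translate into a few lines of code (a sketch following the definitions above; normalization details are our assumptions):

```python
# Sketch of the two quality measures. "Recurrency" compares the number of
# nonzero upper-triangle entries after reordering to the original matrix;
# "feedforwardness" is the filled fraction of the first subdiagonal.
import numpy as np

def recurrency(M_reordered, M_original):
    n_upper = lambda X: np.count_nonzero(np.triu(X, k=1))
    return n_upper(M_reordered) / max(n_upper(M_original), 1)

def feedforwardness(M_reordered):
    sub = np.diagonal(M_reordered, offset=-1)
    return np.count_nonzero(sub) / sub.size
```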
Discussion and Outlook
The connectivity matrix resulting from volumetric EM analysis contains much information about the functional properties of a neural circuit. Yet, this information is not obvious at first sight. Linear algebra provides tools that are useful in two ways: (1) they allow for detecting functional substructures of a circuit (e.g., by separating feedforward from feedback connections); and (2) they allow for predicting time constants of particular elements within the circuit that arise from the network architecture, rather than from intrinsic membrane properties.
Open problems and success stories
Although linear models provide important tools to dissect specific network regimes, they are unarguably limited as they cannot explain transitions between these regimes as, for example, in multistability, and they ignore nonlinearities of spike generation (e.g., Lehnert et al., 2014) and synaptic transmission (e.g., Wienecke et al., 2018; Mishra et al., 2023). Some progress has been made in the development of analytical tools for threshold-linear networks by dividing up the state space into multiple subspaces with linear dynamics (Curto et al., 2019; Santander et al., 2021). To our knowledge, however, these have not been applied to connectomics data. Tschopp et al. (2018) used a connectome-constrained (rectified linear) network model of the fly visual system to optimize the synaptic weights for specific computer vision tasks, nicely illustrating how EM connectivity data can be used as a seed to facilitate parameter estimation (for review, see Litwin-Kumar and Turaga, 2019). In a similar vein, Klinger et al. (2021) compared the synaptic connections resulting from simple network models to connectomic data of the neocortical column, using approximate Bayesian estimation to judge which of the models best explains the observed connectivity. Qualitative insight into the dynamical effects of connectivity parameters is less straightforward when directly optimizing parameters of nonlinear models. A promising approach is deep density estimators (Goncalves et al., 2020; Bittner et al., 2021), which predict distributions of mechanistic model parameters when trained with activity data.
Large-scale connectomics data do not necessarily rely on single-neuron models but can use population models with smooth nonlinear activation functions. For example, dynamical mean-field models (Deco et al., 2008; Deco and Jirsa, 2012) incorporate nonlinearities f that are microscopically derived from averaging the activity of pools of spiking-neuron models (Brunel and Hakim, 1999), and the objective of fitting is to replicate the functional connectivity matrix (activity covariance) obtained from whole-brain fMRI. Constraining M as much as possible by anatomical connectivity turned out to be essential to achieve good fits (Cabral et al., 2017), indicating the general importance of the underlying large-scale network structure. By design, however, dynamical mean fields are not applicable to microscopic single-cell-based models, let alone to circuits consisting of graded-response neurons, such as in the fly visual pathway.
Generalized linear models (GLMs), which have been successfully applied to small- and medium-scale circuits (Paninski, 2004; Pillow et al., 2008; Pho et al., 2018), could be considered a compromise between brute-force nonlinear fitting and qualitative interpretability of connectivity parameters. In GLMs, the voltage variable V(t) arises from the input I in very much the same way as in the example from Equation 8. Therefore, qualitative analysis of eigenvalues of A can be compared with the best-fit solution obtained from mapping V(t) into a mean firing rate by a so-called inverse link function f(V). The inverse link function models the spiking nonlinearity and is determined by the firing statistics of the model. For Poisson noise, f is exponential and generally interpreted as the hazard function of the Poisson process. For graded neurons, however, there is so far no general agreement on the choice of noise model and inverse link function. Also, the application of GLMs to large connectomes is limited by numerical optimization in high-dimensional nonlinear parameter spaces.
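To make the GLM structure concrete, the following generative sketch (all parameters illustrative) low-pass filters an input into V(t) and converts it into Poisson spike counts via an exponential inverse link:

```python
# Minimal GLM-style generative sketch (illustrative parameters): linear
# filtering of the input into V(t), exponential inverse link, Poisson spikes.
import numpy as np

rng = np.random.default_rng(1)
dt, tau, w, V0 = 1e-3, 0.05, 2.0, -1.0
I = (rng.random(2000) < 0.05).astype(float)     # sparse binary input events
V = np.zeros(I.size)
for k in range(1, I.size):                      # tau dV/dt = -V + w I(t)
    V[k] = V[k-1] + dt / tau * (-V[k-1] + w * I[k])
rate = np.exp(V + V0)                           # inverse link f(V) = exp(V + V0)
spikes = rng.poisson(rate * dt)                 # Poisson counts per time bin
```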
In conclusion, it remains to be determined which single approach to connectome-based nonlinear modeling (if any) will become the gold standard. Meanwhile, techniques from linear algebra can be advantageously applied to a simplified linear circuit derived from a well-constrained connectivity matrix. In combination with numerical simulations of a biophysically more realistic circuit (e.g., taking into account voltage-gated ion channels and further nonlinear characteristics of neural transmission), this approach can reveal deep insight into the particular function of a circuit and the microscopic biological mechanisms on which it is based.
Footnotes
We thank Lukas Groschner and Moritz Helmstaedter for carefully reading the manuscript.
The authors declare no competing financial interests.
Correspondence should be addressed to Christian Leibold at christian.leibold@biologie.uni-freiburg.de