Abstract
Cortical neurons in vivo fire quite irregularly. Previous studies of the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. At the same time, these patterns emerge only when excitation and inhibition in the network are balanced; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form coherent dynamical structures, such as propagating waves, at the population level.
- balanced excitation and inhibition
- cerebral cortex
- computer simulation
- cross-correlation
- propagating waves
- synchrony
Introduction
Cortical neurons in vivo fire highly irregularly. Understanding the origin of this irregularity has been a long-standing topic of interest in neuroscience (Softky and Koch, 1993; Shadlen and Newsome, 1994; van Vreeswijk and Sompolinsky, 1996; Stevens and Zador, 1998). One prevalent view in accounting for this irregularity is that excitatory and inhibitory synaptic currents received by cortical neurons are approximately balanced. In this balanced condition, membrane potential follows a random walk and crosses the spiking threshold at random times (Shadlen and Newsome, 1994, 1998). Importantly, it has been shown that the balanced condition emerges naturally in randomly coupled neural networks and that, in such balanced networks, different neurons asynchronously emit spikes (van Vreeswijk and Sompolinsky, 1996). Along with variability in their spike timing, cortical neurons show firing rate fluctuations (Churchland et al., 2010), which can be captured by a balanced network model with clustered connections (Litwin-Kumar and Doiron, 2012).
Another view proposed to account for the variability of cortical firing activity is that presynaptic population activity is highly synchronized, and such synchronized activity causes large fluctuations in synaptic inputs and the resultant variability of neural spike timing (Stevens and Zador, 1998). In this proposal, synchronized synaptic inputs cause membrane potential to exhibit large, occasional excursions in amplitude. Membrane potential therefore has a non-Gaussian distribution instead of the Gaussian one found in random walk models (Shadlen and Newsome, 1998). Such synchronized synaptic inputs and non-Gaussian membrane potential dynamics have been widely observed in cortical neurons (DeWeese and Zador, 2006; Tan et al., 2014). However, the network mechanisms that give rise to these dynamics remain unknown. Furthermore, whether and how the two leading views (i.e., the view of balanced excitation and inhibition and that of synchronized inputs) can be reconciled to gain a comprehensive understanding of cortical dynamics remain unresolved.
Here, we propose an alternative view on the basis of propagating wave patterns emerging from 2D, spatially extended spiking neural networks. As these wave patterns sweep past neurons, they generate highly synchronized synaptic inputs. Propagating wave patterns with complex dynamics only emerge when excitation and inhibition are balanced. Our model therefore reconciles the two leading views regarding the variability of neural dynamics. In addition, our model is able to account for a range of experimental observations, including the variability of spike timing, slow fluctuations of firing rates (Churchland et al., 2010), synchronized synaptic inputs with inhibition lagging behind excitation by several milliseconds (Okun and Lampl, 2008), and membrane potentials with non-Gaussian dynamics (DeWeese and Zador, 2006) and correlated fluctuations (Gentet et al., 2010).
Evidence of the presence of propagating waves in the brain has been rapidly accumulating (Rubino et al., 2006; Ferezou et al., 2007; Wu et al., 2008; Sato et al., 2012). Our work relates these widely observed population activity patterns to variable neuronal dynamics, indicating that structured activity patterns, such as propagating waves, at the level of neural circuits can emerge from the highly variable firing activity of individual neurons.
Materials and Methods
A spatially extended, conductance-based spiking neural circuit.
We consider a 2D network of N × N coupled, conductance-based integrate-and-fire neurons consisting of 80% excitatory and 20% inhibitory neurons. Both excitatory and inhibitory neurons are evenly spaced, and the spacing between inhibitory neurons is twice the spacing between excitatory neurons. We denote the voltage of a neuron at integer coordinates (i, j) at time t as Vij(t), with dynamics given by the following:
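The membrane dynamics take the standard conductance-based integrate-and-fire form; a reconstruction consistent with the parameters defined below (the notation is ours and may differ from the published equation) is

$$ C \frac{dV_{ij}(t)}{dt} = -g_L\left[V_{ij}(t) - V_L\right] - g^{E}_{ij}(t)\left[V_{ij}(t) - V_E\right] - g^{I}_{ij}(t)\left[V_{ij}(t) - V_I\right], $$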
where the capacitance is C = 1 μF, the leak conductance is gL = 50 μS, and the reversal potentials are VL = −70 mV, VE = 0 mV, and VI = −80 mV for the leak, excitatory, and inhibitory conductance, respectively (Koch, 1999). If the voltage of a neuron reaches the threshold of VT = −55 mV, a spike is generated and the voltage is reset to the reset potential VR = −70 mV for a refractory period τref = 5 ms. The synaptic conductances are as follows:
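A sketch of the synaptic conductance, reconstructed from the quantities referred to below and in Results (the external drive Fλ, the recurrent weights WE and WI, the distance-dependent coupling kernel K of Eq. 4, and the postsynaptic time course ηλ of Eq. 3); Λλ denotes the set of presynaptic neurons of type λ (our notation), and the published equation may arrange these terms differently:

$$ g^{\lambda}_{ij}(t) = F_{\lambda} + W_{\lambda} \sum_{(i',j')\,\in\,\Lambda_{\lambda}} K^{\lambda}_{ij,i'j'} \sum_{l} \eta_{\lambda}\!\left(t - T^{l}_{i'j'}\right), \qquad \lambda \in \{E, I\}, $$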
where λ, λ′ ∈ {E, I} index the excitatory and inhibitory synapse types, and Ti′j′l is the time of the l-th spike emitted by the afferent neuron located at (i′, j′). The constant external inputs are FE = 15 μS and FI = 2 μS, but as shown in Results, our model is not sensitive to these values. The time course of the postsynaptic conductance is given by the following:
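The postsynaptic time course is the usual difference of exponentials; a reconstruction consistent with the rise and decay times given below, with the normalization constant in the denominator left unspecified as in the text, is

$$ \eta_{\lambda}(t) = \frac{\exp\!\left(-t/\tau_{d\lambda}\right) - \exp\!\left(-t/\tau_{r\lambda}\right)}{\mathcal{N}_{\lambda}}, \qquad t \ge 0, $$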
with rise times τrE = 0.5 ms and τrI = 0.5 ms, and decay times τdE = 2.0 ms and τdI = 7.0 ms. The denominator is a normalization factor for the conductance time course.
We use a network size of 300 × 300 but have found that the results are not sensitive to the size of the network. The network model is simulated by using the Euler method with a time step dt = 0.05 ms (Press et al., 2007). The initial membrane potentials are randomly chosen from values between the resting potential (VR = −70 mV) and the threshold potential (VT = −55 mV). Each trial is run for 7.5 s with the first 1.5 s excluded as transient time.
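As a concrete illustration of the integration scheme just described, the following is a minimal sketch of one forward-Euler update; the conductance fields g_E and g_I are assumed to be supplied by a separate routine (not shown), and parameter values follow the text.

```python
import numpy as np

# Parameter values as given in Materials and Methods (units follow the text)
C, g_L = 1.0, 50.0                    # capacitance (uF), leak conductance (uS)
V_L, V_E, V_I = -70.0, 0.0, -80.0     # leak, excitatory, inhibitory reversal (mV)
V_T, V_R = -55.0, -70.0               # spike threshold and reset potential (mV)
tau_ref, dt = 5.0, 0.05               # refractory period and time step (ms)
N = 300                               # lattice size (N x N neurons)

rng = np.random.default_rng(0)
V = rng.uniform(V_R, V_T, size=(N, N))    # random initial membrane potentials
refractory = np.zeros((N, N))             # remaining refractory time per neuron (ms)

def euler_step(V, refractory, g_E, g_I):
    """One forward-Euler update; g_E and g_I are N x N conductance fields (uS)."""
    dV = (-g_L * (V - V_L) - g_E * (V - V_E) - g_I * (V - V_I)) * dt / C
    active = refractory <= 0.0
    V = np.where(active, V + dV, V_R)     # refractory neurons stay at the reset value
    spiked = active & (V >= V_T)
    V[spiked] = V_R
    refractory = np.where(spiked, tau_ref, np.maximum(refractory - dt, 0.0))
    return V, refractory, spiked
```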
To compare our results with previous balanced networks with random coupling topology, each connection is randomly rewired to another neuron in the network. The rewiring is independent of the distance between neurons, and self-connections and multiple connections from the same neuron are excluded. In this randomly rewired network, individual neurons receive a variable number of connections, with the average number the same as in the regular network. To investigate the effects of partial rewiring, each afferent excitatory connection is rewired with a probability P; for the randomly coupled network, P = 100%.
To demonstrate the role of intrinsic network dynamics in generating variable neural activity, we mostly focus on a model with deterministic external input (i.e., Fλ in Eq. 2 is a constant). However, because noise is ubiquitous in the brain (Faisal et al., 2008), we also consider noisy inputs to individual neurons to demonstrate that propagating wave patterns can still emerge from our model. For this case, if a neuron is not in the refractory period, at each time step it fires with a certain probability, resulting in random spontaneous firing activity with a rate of 2 Hz, as found in real cortical neurons (Koch and Fuster, 1989). Noisy inputs are also used for the randomly connected network.
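A minimal sketch of the noisy-input case, assuming each non-refractory neuron fires independently with probability rate × dt per time step (about 10⁻⁴ for a 2 Hz rate and dt = 0.05 ms):

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt_ms, rate_hz = 300, 0.05, 2.0
p_spike = rate_hz * dt_ms * 1e-3           # 2 Hz x 0.05 ms -> 1e-4 per neuron per step

# At one time step, pick which non-refractory neurons fire spontaneously
refractory = np.zeros((N, N))              # remaining refractory time per neuron (ms)
noise_spikes = (rng.random((N, N)) < p_spike) & (refractory <= 0.0)
```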
Data analysis.
Statistics of spike trains, firing rates, synaptic conductances, and membrane potentials of each neuron are calculated to compare our modeling study with experiments. To that end, we measure the coefficient of variation (CV) of interspike intervals, the Fano factor of spike counts, and the cross-correlation and excess kurtosis of synaptic conductance and membrane potential. All results are calculated by using 2400 randomly chosen excitatory neurons in each of 12 trials, unless otherwise specified.
Let 0 < tij1 < tij2 < … < tijL < T denote a train of L spikes emitted by the neuron at position (i, j) during a trial of duration T; then Xij1, Xij2, …, XijL−1 is the sequence of the L − 1 observed interspike intervals (ISIs), with Xijl = tijl+1 − tijl, where l indicates the l-th spike. Irregular spiking is expressed by variable lengths of the ISIs. To quantify this variability, we calculate the CV of the ISIs of the neuron at (i, j),

$$ \mathrm{CV}_{ij} = \frac{\sqrt{\mathrm{var}(X^{l}_{ij})}}{\langle X^{l}_{ij} \rangle}, $$

where <Xijl> and var(Xijl) are the mean and variance of the ISIs, respectively. A periodic process, such as the ticking of a clock, has a CV of 0; a Poisson process, in which the timing of one event is independent of any other, has a CV of 1. Multiple observations of neurons at different positions result in an ensemble of spike trains, each with its own CV. The CVs can be averaged to find the overall variability of ISIs.
The CV of ISIs signifies intratrial variability on a relatively short time scale on the order of the length of a typical ISI. By contrast, the trial-to-trial variability is measured by the Fano factor of the spike count Nij(t), which is the number of spikes emitted by a neuron within a fixed time interval from t to t + Δt. The Fano factor of a neuron at (i, j) is then

$$ F_{ij} = \frac{\mathrm{var}(N_{ij}(t))}{\langle N_{ij}(t) \rangle}, $$

where <Nij(t)> and var(Nij(t)) are the mean and variance of the spike count across repeated trials with random initial conditions. The Fano factor of a homogeneous (with a constant rate of events) Poisson process is 1.
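A compact sketch of the two spike-train statistics defined above; spike times are assumed to be in milliseconds, and the trial and window bookkeeping is simplified relative to the full analysis.

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the interspike intervals of one spike train."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return np.sqrt(np.var(isi)) / np.mean(isi)

def fano_factor(spike_times_per_trial, t_start, t_stop):
    """Fano factor of spike counts in [t_start, t_stop) across repeated trials."""
    counts = [np.sum((np.asarray(st) >= t_start) & (np.asarray(st) < t_stop))
              for st in spike_times_per_trial]
    return np.var(counts) / np.mean(counts)
```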
To measure the correlations between pairs of spiking neurons, we calculate the cross-correlation of their spike counts. The normalized cross-correlation (cross-covariance) of the spike counts, Nij(t) and Ni′j′(t), of two neurons at (i, j) and (i′, j′) is calculated as

$$ \mathrm{CC}_{ij,i'j'}(\tau) = \frac{\sum_{l=1}^{L}\left[N_{ij}(t_l) - \langle N_{ij}(t_l) \rangle\right]\left[N_{i'j'}(t_l + \tau) - \langle N_{i'j'}(t_l) \rangle\right]}{L\,\sqrt{\mathrm{var}(N_{ij}(t_l))\,\mathrm{var}(N_{i'j'}(t_l))}}, $$

where <Nij(tl)> and <Ni′j′(tl)> are the means of the spike counts, var(Nij(tl)) and var(Ni′j′(tl)) are the variances, L is the total number of samples in the time series, and tl = (l − 1)δt with δt = 1 ms. The number of spikes is calculated over Δt = 50 ms time windows. For i = i′, j = j′ (i.e., the auto-correlation), the two identical signals would have a value of CCij,i′j′(τ = 0) = 1 at zero time lag. Nonidentical traces result in values of CCij,i′j′(τ = 0) between −1 and 1, indicating negative and positive correlations, respectively. At zero time lag, the measured value is identical to the Pearson product-moment correlation coefficient, which is a measure of the instantaneous correlation between two variables. The cross-correlation can be calculated for the membrane potential (Vij(t) and Vi′j′(t)) and synaptic conductance (gijE(t) and gi′j′E(t)) of pairs of neurons in the same way as described above.
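A sketch of the normalized cross-covariance at a single integer lag for two equal-length count series; binning into Δt windows is assumed to have been done beforehand.

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Normalized cross-covariance of two equal-length count series at integer lag."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    x, y = x - x.mean(), y - y.mean()
    # At lag = 0 this reduces to the Pearson product-moment correlation coefficient
    return np.sum(x * y) / (len(x) * x.std() * y.std())
```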
To characterize the shape of the distribution of membrane potentials, we compute their excess kurtosis. For a neuron at position (i, j), this is given by

$$ K_{ij} = \frac{\frac{1}{L}\sum_{l=1}^{L}\left[V_{ij}(t_l) - \langle V_{ij}(t_l) \rangle\right]^{4}}{\left(\frac{1}{L}\sum_{l=1}^{L}\left[V_{ij}(t_l) - \langle V_{ij}(t_l) \rangle\right]^{2}\right)^{2}} - 3, $$

where <Vij(tl)> is the mean value of voltage, L is the total number of samples in the time series, and tl = (l − 1)δt with δt = 1 ms. The excess kurtosis (which we shall henceforth refer to as the kurtosis) is a measure of the shape of a distribution compared with a Gaussian; it is >0 for a distribution with more outliers (i.e., it is heavy-tailed) or values close to the mean, and <0 for distributions with more values between the mean and the outliers (i.e., it has heavy shoulders); for a Gaussian distribution, it is 0 (DeCarlo, 1997; DeWeese and Zador, 2006). Large, occasional deviations from the resting membrane potential lead to a heavy-tail distribution, and we use the kurtosis to quantify this, excluding the reset values of membrane potential that occur in the refractory period. The kurtosis of the synaptic conductance gijE(t) of each neuron is calculated in the same manner.
The spatiotemporal activity of our spatially extended network exhibits propagating wave patterns with complex dynamics, as described in Results. One of the salient features of these wave patterns is that they are localized, meaning that neurons that are spiking in a certain interval are adjacent, and individual patterns are clearly separated from each other. Based on this property, we devise an automatic method to identify these patterns. We first choose a time window of duration 5 ms to detect enough spikes that are adjacent to each other, and we then use a flood-fill algorithm to classify groups of adjacent neurons as one pattern (Burger and Burge, 2008).
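A sketch of this detection step, using connected-component labeling in place of an explicit flood fill (the two are equivalent for this purpose); the 8-connectivity choice and the scipy routine are our assumptions, not the published implementation.

```python
import numpy as np
from scipy import ndimage

def detect_patterns(spike_mask):
    """Group adjacent firing neurons (within one 5 ms window) into patterns.

    spike_mask: boolean N x N array, True where a neuron fired in the window.
    Returns a labeled image (0 = background) and the number of patterns found.
    """
    structure = np.ones((3, 3), dtype=int)            # 8-connectivity on the lattice
    labels, n_patterns = ndimage.label(spike_mask, structure=structure)
    return labels, n_patterns
```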
There are two kinds of propagating wave pattern emerging from our network model, which are described in Results: patchy and crescent-shaped waves. A significant difference between them is that the center of the former contains random firing activity, which results in gaps (i.e., holes) between firing neurons, whereas the latter does not. This property can be quantified by the Euler characteristic, which is the difference between the number of connected regions and the number of their holes (Burger and Burge, 2008); thus, for the crescent-shaped waves, the Euler characteristic is 1; and for the patchy patterns, it is <1. The Euler characteristic is calculated by the “bweuler” routine in MATLAB (MathWorks) (Pratt, 1991) and can be used to reliably distinguish the two types of propagating wave patterns.
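For reference, the same quantity can be computed without MATLAB by counting foreground regions and holes directly; this is a sketch, and the connectivity conventions may differ from bweuler's defaults.

```python
import numpy as np
from scipy import ndimage

def euler_characteristic(mask):
    """Number of connected foreground regions minus number of holes."""
    mask = np.asarray(mask, dtype=bool)
    n_regions = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))[1]
    # Holes are background components not connected to the image border;
    # padding with background joins everything outside into a single component.
    padded_background = np.pad(~mask, 1, constant_values=True)
    n_background = ndimage.label(padded_background)[1]   # default 4-connectivity
    return n_regions - (n_background - 1)
```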
After identifying these propagating wave patterns automatically, we characterize their collective dynamics by calculating the center-of-mass position (IM(t), JM(t)) of each pattern,

$$ I_{M}(t) = \frac{1}{M_{f}}\sum_{L=1}^{M_{f}} i_{ML}(t), \qquad J_{M}(t) = \frac{1}{M_{f}}\sum_{L=1}^{M_{f}} j_{ML}(t), $$

where iML(t) and jML(t) are the i and j positions of the Lth neuron that is firing at time t in the Mth pattern, and Mf is the total number of firing neurons within this pattern.
We then use the mean-squared displacement (MSD) to quantify the motion of each pattern,

$$ \mathrm{MSD}(\tau) = \left\langle \left[I_{M}(t+\tau) - I_{M}(t)\right]^{2} + \left[J_{M}(t+\tau) - J_{M}(t)\right]^{2} \right\rangle, $$

where τ is the time increment and <> represents averaging over time. If a pattern travels ballistically, that is, it moves in the same direction at each time step, MSD(τ) ∝ τ^α with α = 2. If the motion of a pattern is normally diffusive (Brownian) (i.e., it is equally likely to move in any direction at each time step), then α = 1.
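A sketch of the MSD computation for one tracked center-of-mass trajectory, averaging over time origins as described above (uniform sampling of positions is assumed).

```python
import numpy as np

def mean_squared_displacement(i_pos, j_pos, max_lag):
    """MSD(tau) of one center-of-mass trajectory, averaged over time origins."""
    r = np.column_stack([i_pos, j_pos]).astype(float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = r[lag:] - r[:-lag]                     # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd
```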
Results
Dynamic wave patterns emerging from balanced, spatially extended networks
The balance of a neuron often refers to the approximately equivalent amounts of inhibitory and excitatory synaptic inputs that it receives (van Vreeswijk and Sompolinsky, 1996, 1998; Salinas and Sejnowski, 2000; Xue et al., 2014). This can be measured by the ratio β = ⟨c̄I⟩ / ⟨c̄E⟩, where the excitatory current is cijE(t) = gijE(t)|Vij(t) − VE|, the inhibitory current is cijI(t) = gijI(t)|Vij(t) − VI|, <> indicates averaging over time excluding refractory periods, and the overbar represents averaging across neurons. When β ≈ 1 (i.e., excitatory and inhibitory inputs are approximately equilibrated), we refer to this as the balanced condition. When β ≈ 1, the randomly coupled network (see Materials and Methods) generates asynchronous dynamics without any structured patterns, as found in previous balanced networks (van Vreeswijk and Sompolinsky, 1996; Renart et al., 2010). In this balanced state, individual neurons fire irregularly (Fig. 1a), and the distribution of membrane potential is Gaussian.
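A sketch of how β can be computed from simulated traces; the pooled averaging below is a simplification of the per-neuron time average followed by the across-neuron average described above.

```python
import numpy as np

def balance_ratio(g_E, g_I, V, valid, V_E=0.0, V_I=-80.0):
    """beta = <inhibitory current> / <excitatory current>, pooled over neurons and time.

    g_E, g_I, V: arrays of shape (n_neurons, n_timesteps); valid is a boolean mask
    of the same shape that excludes refractory periods.
    """
    c_E = np.where(valid, g_E * np.abs(V - V_E), np.nan)   # excitatory current
    c_I = np.where(valid, g_I * np.abs(V - V_I), np.nan)   # inhibitory current
    return np.nanmean(c_I) / np.nanmean(c_E)
```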
In contrast, the balanced, spatially extended network generates spatiotemporal patterns with complex dynamics. As shown in Figure 1b, c, d, these patterns include several localized, crescent-shaped waves that move rapidly, and localized, patchy patterns that slowly wander around (outlined by the green boxes). The initiation sites and subsequent trajectories of both types of pattern are seemingly random, and the patterns interact when they approach each other; these interactions include repulsive collisions, and partial and full annihilations. Because both types of pattern can propagate over the network, we generally refer to them as propagating wave patterns. Notably, propagating wave patterns with random initiation sites and propagation directions have been observed in the spontaneous activity of visual cortex of rats (Han et al., 2008, their Fig. 2b) and in the sensorimotor cortex of behaving mice (Ferezou et al., 2007). Multiple wave patterns that sweep across the cortex have been found in the visual cortex of cats (Arieli et al., 1995, 1996), and the coexistence of waves and localized patchy patterns has been observed in the barrel cortex of rodents (Petersen et al., 2003).
When the network input is unbalanced (β ≤ 0.95) and disinhibited, wave patterns with more regular, global properties emerge. In particular, global ring or target waves occur periodically, with frequency proportional to the external input. Alternatively, depending on the initial conditions, the network sometimes develops plane waves that propagate in a preferred direction, or spiral waves, the centers of which can spread across the entire lattice (Fig. 1e). This is largely consistent with recordings in cortical slices and intact neocortex of rats: when neurons were disinhibited using bicuculline to block GABA receptors, alternative patterns of ring, plane, and spiral waves were observed (Huang et al., 2004, 2010).
To further illustrate that balanced excitation and inhibition are needed for the formation of propagating wave patterns with complex dynamics, we break the balanced condition in the network to study the changes in these patterns. For this purpose, the model is run for the usual transient time and then after 3 s, the inhibitory connection strength WI is altered to obtain either an excitation or an inhibition dominated case. For the excitation dominated case, the typical changes of these patterns after breaking the balance are shown in Figure 1f, g, h: the separation between these localized wave patterns decreases, and then nearby waves merge with each other; this process continues until eventually global plane waves are formed (Fig. 1h). To quantify such changes, we calculate the average width of the wave patterns identified by our algorithm (see Materials and Methods) as a function of time. Figure 1i shows that the size of the waves increases rapidly after breaking the balanced condition and continues to increase until all the patterns are merged into a plane wave. On the other hand, after breaking the balanced condition to obtain an inhibition dominated case, the wave patterns change from ones with relatively long range propagation to ones tending to disappear shortly after they are formed (i.e., they are short-lived due to the strong inhibition) (Fig. 1j). Based on this observation, we calculate the average propagation length (500 ms time windows) of each pattern as a function of time. Figure 1k shows that, as expected, the propagation length of the waves rapidly decreases after the balance is broken until it reaches a certain value that is quite small relative to the size of the network.
In our network model, the balanced state and the resultant propagating wave patterns emerge without fine tuning of parameters. For instance, changing the values of FE and FI still leads to the balanced state as long as FE is high enough relative to FI such that individual neurons can fire even without recurrent synaptic inputs. The population-averaged firing rates increase nearly linearly as a function of external input strength (FE), as has been found for randomly connected, balanced networks (van Vreeswijk and Sompolinsky, 1996). Additionally, when the neurons in our network receive stochastic inputs (see Materials and Methods), they still settle to a balanced state and the network is able to form propagating wave patterns (Fig. 1l), as found in the network with a constant external input. Moreover, the strengths of recurrent excitation (WE) and inhibition (WI) have a relatively broad range over which the balanced condition is preserved; for instance, given WE = 0.23, the network is balanced for 0.23 ≤ WI ≤ 0.35. In a recent study of neural field models (Rosenbaum and Doiron, 2014), it was found that, in the continuum limit, balanced firing rate solutions require that the spatial spread of external inputs be broader than that of recurrent excitation, which in turn should be greater than or equal to that of recurrent inhibition; however, this condition is not satisfied for finite-size networks, and crucially, propagating wave activity cannot be captured by the mean field models, as pointed out by Rosenbaum and Doiron (2014). In our study of the finite-size spiking neural network with emergent propagating wave patterns, the balanced condition with the ratio of average excitatory current and inhibitory current close to 1.0 is quite robust, and these wave patterns underlie variable firing activity of neurons as shown in the following section.
Spikes of neurons exhibit a doubly stochastic process
We next characterize the dynamics of the propagating wave patterns and show that these dynamics cause the doubly stochastic process of neural firing activity, with variability in spike timing and slow fluctuations in firing rates (Churchland et al., 2010, 2011). First, the center of mass of each pattern is calculated and tracked over time (see Materials and Methods). Figure 2a, b shows the typical trajectories of crescent-shaped waves and patchy patterns, respectively; both patterns appear to have random initiation sites and trajectories. They have distinct dynamics, with crescent-shaped waves propagating over long distances and patchy patterns wandering around a particular area. By calculating the displacement of centers of mass for each pattern, we find that the crescent-shaped waves have a mean speed of 2.0 ± 0.6 gridpoints/ms (Fig. 2c). We also change the connectivity density (i.e., the number of connections each neuron receives over a given distance) and find that the average speed of the waves increases monotonically as the density increases.
To further quantify the dynamics of these patterns, we calculate their MSD (see Materials and Methods). For the crescent-shaped waves, MSD appears to be linear on a log-log scale (Fig. 2d, red line), which implies that it is a power function of time increment τ such that MSD(τ) ∝ τα. By using nonlinear least-squares fitting via the Levenberg-Marquardt algorithm (Press et al., 2007), we obtain an exponent of α = 1.9. For Brownian motion, the MSD is a linear function of time (α = 1); when α ≠ 1, a diffusion process has a nonlinear relationship to time and is called anomalous diffusion. In particular, if 1 < α < 2, it is a superdiffusive process; and if 0 < α < 1, it is a subdiffusive process (Metzler and Klafter, 2000). This result therefore indicates that the crescent-shaped waves move in a superdiffusive manner. Anomalous superdiffusion indicates that the motion of these waves is not fully random as is the case for Brownian motion and that, instead, there are long-range correlations in their propagations (Metzler and Klafter, 2000). For the patchy patterns that wander around, MSD(τ) ∝ τα and α = 1.0 (using nonlinear least-squares fitting) for τ > 10 ms, indicating that these patterns move in a normally diffusive manner over long time scales and that they move much slower than crescent-shaped waves (Fig. 2d, blue line).
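A sketch of the exponent fit; scipy.optimize.curve_fit defaults to the Levenberg-Marquardt algorithm for unconstrained problems, matching the fitting procedure named above (the initial guess is an assumption).

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_msd_exponent(tau, msd):
    """Fit MSD(tau) = A * tau**alpha by nonlinear least squares; return alpha."""
    power_law = lambda t, A, alpha: A * t**alpha
    (A, alpha), _ = curve_fit(power_law, tau, msd, p0=(msd[0], 1.0))
    return alpha
```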
Figure 3a shows that neurons within the network exhibit dynamic transitions between low-activity and high-activity states. These transitions appear to coexist with variable spike timing of individual neurons, which is a characteristic feature of a doubly stochastic process (Churchland et al., 2011; Churchland and Abbott, 2012; Litwin-Kumar and Doiron, 2012). To quantify the variability of spike timing, we first calculate the CV of ISIs as outlined in Materials and Methods. The average value of CVs across all trials is 1.1 ± 0.1, which is in agreement with experimental studies (Softky and Koch, 1993; Holt et al., 1996). However, a high CV may reflect fluctuations in firing rates as well as variability in the timing of individual spikes. To quantify such fluctuations in firing rates, we calculate the spike count Fano factor (see Materials and Methods) for each neuron by using a fixed time window of Δt = 100 ms, which yields a mean value of 1.4 ± 0.5. Fano factors with values >1.0 have been observed in many cortical systems (Tolhurst et al., 1983; Britten et al., 1993; Churchland et al., 2011) and are evidence for the fluctuations of neural firing rates. The fluctuations of firing rates over long time scales are further demonstrated by calculating Fano factors as a function of time window. As shown in Figure 3b, Fano factor increases linearly as Δt increases; this is because of the transitions between high and low-activity states that happen over long time scales (Churchland et al., 2011). A recent modeling study of balanced networks with clustered connections has found similar doubly stochastic firing activity and transitions between high and low firing activity states (Litwin-Kumar and Doiron, 2012), in which such transitions are caused by spontaneous switching dynamics between different attractors. However, for our case, the dynamics of the propagating wave patterns cause these transitions; this mechanism is illustrated below.
Propagating wave patterns generate irregular spiking activity
We next consider the relationship between the spiking activity of individual neurons and the collective dynamics of propagating wave patterns. By tracking the motion of individual patterns, we observe that, when a crescent-shaped wave front moves toward a certain area, neurons in the area receive synaptic inputs from neurons that are firing in the front, and their membrane potentials are depolarized such that they may subsequently fire. Because the wave moves in a superdiffusive way, it leaves that location rapidly, which means that neurons around that location fire no more than once. To verify this, we consider a 5 × 5 subgrid with 25 neurons, which is smaller than the area of a typical wave pattern and thus cannot contain more than one wave trajectory at a time. We then calculate over all trials the number of neurons within this grid that fire when a wave passes by. The resultant distribution, shown in Figure 3c, has a mean value of 11.2, meaning that, when a wave passes by, only a fraction of neurons in the grid fire. The complex dynamics of the waves, namely, their random origins and trajectories (Fig. 2a), result in the irregular passage of these wave fronts around that location; this causes the high variability of the spike times indicated by the CV. Because the waves tend to leave each location rapidly and cause only a fraction of neurons in a grid to fire, their passage corresponds to the relatively low-activity state. To illustrate this, we choose a neuron that transitions between the low- and high-activity states and find that, during the time periods when the neuron is not within a patchy pattern and crescent waves occasionally pass its location, its mean firing rate is 16 Hz. On the other hand, when a patchy pattern moves to this location, the pattern wanders around the location due to its normally diffusive dynamics and drifts away slowly; this enables the neuron at that location to fire persistently, corresponding to the high-activity state shown in Figure 3a. For the time periods when the example neuron is within a patchy pattern, we find that the mean firing rate is 137 Hz, and similar values are found for other neurons that are passed by patchy patterns. The low-activity state is therefore associated with crescent-shaped waves, whereas the transition to the high-activity state is caused by the approach of a patchy pattern.
Based on the above result, one would expect that the time scale of these transitions is closely related to the density of the patchy patterns. To test this prediction, we randomly rewire connections in the network with a certain probability P (see Materials and Methods). Despite the random connections, the localized patterns shown in Figure 1b–d can still emerge for P ≤ 10%. Figure 3d shows that, for the network with random connections, the proportion of patchy patterns relative to the number of crescent-shaped waves is greater than that in the original model; indeed, the proportion increases monotonically as the rewiring probability P increases. This increase in the proportion of patchy patterns results in an increase in Fano factors (Fig. 3e). This is because the greater the proportion of the patchy patterns, the greater the frequency of transitions between high and low-activity states, as shown in Figure 3f. The Fano factor asymptotically approaches a particular value, which corresponds to an absence of the crescent-shaped waves. Because the crescent-shaped waves are mostly responsible for the low-activity state, their absence means that neurons exhibit high firing rates or no firing at all; this results in greater fluctuations of firing rates.
To show that the propagating wave patterns emerging from the balanced network are essential for doubly stochastic firing activity, we calculate the Fano factors of the unbalanced network and the random network. The global ring, plane, and spiral waves (Fig. 1e,h), which occur in the unbalanced network, cannot cause variable spike timing. In this case, the Fano factors of firing activity are not only independent of the time window but also very low (Fig. 3b, green line). The randomly coupled network, which cannot generate any structured patterns (Fig. 1a), can produce spike count variability, but it does not exhibit slow fluctuations in firing rates; as shown in Figure 3b, its Fano factor is close to 1 and independent of the time window.
Previous models of balanced networks have shown that averaged spike count correlations of pairs of cortical neurons form a wide distribution (i.e., with a high SD) with a mean value close to zero (Renart et al., 2010; Litwin-Kumar and Doiron, 2012). We calculate the correlation coefficients (see Materials and Methods) of spike counts of pairs of randomly selected neurons in the balanced, spatially extended network. Figure 4a shows a wide distribution of correlations with a mean of 0.0008 ± 0.0002, similar to that found in Litwin-Kumar and Doiron (2012). We also calculate the correlation coefficients as a function of distance (Fig. 4b): at relatively short distance, there are positive correlations that decrease as the distance increases, which is largely consistent with the distance-dependent neuronal correlations as observed in Smith and Kohn (2008) and Ecker et al. (2014). The correlations are negative at intermediate distances and approach zero when distance is large enough; this is consistent with the negative correlations found for the randomly selected neuron pairs (Fig. 4a) (Renart et al., 2010; Ecker et al., 2014). Negative correlations have also been observed by Vaadia et al. (1995).
As illustrated above, there are transitions between low- and high-activity states, with the latter being produced by the slowly propagating patchy patterns. That is, when a patchy pattern moves to a location in the network, it drifts away from the location slowly. This makes the neurons within the location maintain sustained firing states for a relatively long period of time, resembling the “up state” found in the cortex. To show whether the correlations depend on the states of neuron firing activity for nearby neurons, as observed in Ecker et al. (2014), we also calculate the distribution of spike count correlations when patchy patterns are over a neuron pair (high-activity state), and when they are not (low-activity state); these are then compared with those obtained over all trials. To do this, we classify the high firing state as those Δt = 50 ms intervals with spike counts corresponding to a firing rate of >50 Hz. The mean values of the spike count correlations are 0.66, 0.22, and 0.15 for all trials (Fig. 4c), for the case when both neurons are in a high firing state (Fig. 4d), and for the case when they are not (Fig. 4e), respectively. The mean value of the correlation coefficients of the high-activity state is significantly larger than that of the low-activity state (p < 0.001), and both are significantly smaller than that for all trials (p < 0.001). These results therefore indicate that, in our model, the spike count correlations depend on the states of neural firing activity.
In total, the results we have obtained show that the balanced, spatially extended network with emergent propagating wave patterns is able to capture salient characteristics of variable firing activity of cortical neurons, including their variable spike timing, slow fluctuations of firing rates, small spike count correlations of randomly selected neuron pairs, and state-dependent spike count correlations.
Synchronized synaptic inputs to individual neurons
In the input synchrony model proposed for explaining variable neural dynamics, synaptic inputs to individual neurons consist of large, brief, and occasional inputs or “bumpy” inputs, indicating the synchronous arrival of many spikes of presynaptic neurons (Stevens and Zador, 1998; Okun and Lampl, 2008). These kinds of synchronized bumpy synaptic inputs have been found in the cortex (Hasenstaub et al., 2005; DeWeese and Zador, 2006; Okun and Lampl, 2008; Poulet and Petersen, 2008; Tan et al., 2014), but the mechanisms that give rise to such inputs have not been studied. We now show that, in our balanced, spatially extended network, the emergent propagating wave patterns provide a network mechanism to generate such inputs.
Figure 5a shows the dynamics of the excitatory synaptic conductance received by individual neurons in our model, which are characterized by quiescent periods punctuated by large, brief, and occasional excursions in amplitude, with variable magnitudes and shapes. Figure 5b shows the distribution of the excitatory synaptic conductance gijE(t) received by each neuron over the whole time course of one trial; the distribution has a heavy tail. We use a linear-log plot to allow direct comparison with the empirical observations of DeWeese and Zador (2006) and Hromádka et al. (2013), but the heavy tail of the distribution is also evident when both axes are plotted on a linear scale. It is interesting to note that heterogeneous synaptic inputs with heavy-tail distributions have been found in the somatosensory cortex (Lefort et al., 2009). To quantify these non-Gaussian dynamics, we calculate the kurtosis for the distribution of the excitatory synaptic conductance of each neuron within the network (see Materials and Methods). We find that these values span a wide range, with a maximum kurtosis of 17.6 and a minimum of −1.1. However, negative values of kurtosis are rare; most values are positive, with a population mean of 5.9 (SD 1.6). These kurtosis values are comparable with those implied by whole-cell recordings of auditory cortex in awake rats (Hromádka et al., 2013). Because the kurtosis is a measure of the proportion of samples within the peak and tail of a distribution compared with a normal distribution, the kurtosis is positive for heavy-tail distributions (see Materials and Methods). The negative values of kurtosis indicate that distributions of the inputs may occasionally be bimodal; such distributions may result from periods of quiescence punctuated by periods of sustained input.
We next illustrate that the dynamics of propagating wave patterns in our network provide a mechanistic explanation for the bumpy, synchronized synaptic inputs, as shown in Figure 5a. It is apparent that a wave front, which includes multiple spiking neurons, provides a source of synchronized input to any neurons that it approaches. However, the superdiffusive dynamics of the crescent-shaped waves means that such synchronized input is transient, as the wave will pass by and move out of range of the neuron rapidly. We can quantify the number of spikes participating in a bumpy synaptic input by considering a planar wave front approaching a “test” neuron. The maximum distance of excitatory connections is DE = 10; above this distance, the test neuron will not receive any recurrent input (see Materials and Methods). Therefore, we assume the initial displacement between the wave and the test neuron to be d = −DE. If the wave approaches the test neuron head-on, the neuron will continue to receive input until the wave passes out of range at d = DE. Because d is in units of neuron separation (i.e., gridpoints), the test neuron receives excitatory inputs from the wave front at 2DE + 1 locations; this is the range over which the wave provides input to our test neuron as it approaches and then moves away; in other words, it is the length of the wave trajectory that affects the test neuron. A typical wave front has a width of Nw ≈ 19 depolarized neurons (Fig. 1i). The number of neurons that could participate in a bump of synaptic input is the width multiplied by the length of the trajectory (i.e., Nw(2DE + 1) − 1), where the minus one excludes the test neuron from receiving input from itself. Excluding neurons that are separated from the test neuron by a distance greater than DE, we find that 315 neurons can participate in the bump. This is similar to the number of synthetic excitatory postsynaptic potentials used to generate synchronized inputs to cortical neurons in brain slices (Stevens and Zador, 1998), but it is approximately an order of magnitude less than that found in vivo (DeWeese and Zador, 2006). This discrepancy is likely due to the scaling of our network (see Materials and Methods).
The model of a typical wave front approaching a test neuron can also be used to explain the shape of the bumps of synaptic conductance. Consider the wave front moving at the average speed of 2 gridpoints/ms (Fig. 2c) head-on toward a test neuron and then away from it. As excitatory coupling strength has an exponential dependence on distance (Eq. 4, see Materials and Methods), the amplitude of the excitatory inputs first increases and then decreases. The resultant time course of the input produced in this model is similar to the time course (full width at half-maximum is 4.4 ms) and amplitude (∼2000 μS) of the bumps of largest amplitude in the network model (Fig. 5c, solid line). Bumpy synaptic inputs in our network have diverse profiles; such variations can arise from differences in the sizes of firing wave fronts and from the complex trajectories of these waves, which often do not approach a neuron head-on but move within its proximity. In summary, the crescent-shaped waves with superdiffusive dynamics produce brief, bumpy, synchronized inputs to individual neurons.
Bumpy synaptic inputs in the balanced, spatially extended network occasionally have longer durations. For instance, the bumpy synaptic conductance centered around 200 ms in Figure 5a has a relatively longer duration, during which the neuron receives synchronized inputs for a sustained period (i.e., up states), similar to intracellular recordings in the auditory cortex of awake rats (Hromádka et al., 2013). These inputs are caused by patchy patterns because, when these patterns move toward a particular neuron, they wander around the neuron and drift away from it slowly in a normally diffusive manner, providing relatively sustained synchronized inputs to the neuron. To further quantify the bumpy, synchronized synaptic inputs, we study their durations, which we define as the time intervals during which excitatory conductance exceeds some threshold. We choose this threshold to be the leak conductance (i.e., gijE(t) > gL), but as long as the threshold level is not too small, its precise value is not critical; choices near this value yield similar results. Using this definition, we classify bumpy inputs with long duration as those of duration >500 ms, as was used by Hromádka et al. (2013). We find that bumpy inputs of long duration are rare, comprising only 0.05% of all bumps. Furthermore, most time spent away from rest is spent in short bumps; only 1.2% of total simulation time is spent in bumpy inputs with duration >500 ms. Indeed, bumpy inputs of long duration constitute 3.6% of the duration of all bumpy, synchronized inputs, despite, on average, being approximately an order of magnitude longer than bumpy inputs of short duration (Fig. 5a). This is in agreement with experimental results, which suggest that sustained bumps (up states) in inputs are rare in the awake cortex (Hromádka et al., 2013).
Synaptic inputs are correlated and inhibition lags behind excitation
We now demonstrate that our model is also able to capture the cross-correlations of synaptic inputs to pairs of nearby neurons, as measured by using dual-cell recording in Okun and Lampl (2008). Figure 5d shows the typical excitatory (solid lines) and inhibitory (dashed lines) conductances received by two neurons separated by a small distance of 5 gridpoints, as a wave passes by; note that conductances are positive in our model (Eq. 2). We calculate the cross-correlations of these inputs over pairs of neurons and over trials and find large peak cross-correlation values: 0.69 (SD 0.04) for excitatory-excitatory, 0.56 (SD 0.05) for inhibitory-excitatory, and 0.91 (SD 0.02) for inhibitory-inhibitory conductances (Fig. 5e). This is comparable with the cross-correlations of excitatory and inhibitory inputs found in nearby cortical neurons (Okun and Lampl, 2008; Gentet et al., 2010).
As shown in Figure 5d, excitatory inputs to the nearby neurons reach their peak values at almost the same time; this is also the case for the inhibitory inputs. In contrast, inhibition lags excitation significantly (p < 0.001). As in Okun and Lampl (2008), such lags between inhibition and excitation are then measured based on the peak values of their cross-correlations averaged over neuron pairs and trials; these cross-correlations attain their peak value at non-zero lag, which is 2.3 ± 0.4 ms. This lag is comparable to the lag of 3.8 ± 4.9 ms as found in Okun and Lampl (2008). We also calculate the cross-correlation of excitatory and inhibitory conductances arriving at the same neuron and obtain a lag of 2.2 ± 0.5 ms, comparable with the lag of 2.4 ± 3.6 ms reported by Wehr and Zador (2003) for such inputs received by the same neuron. Our model is therefore able to reproduce a characteristic feature observed in cortical neurons, namely, that inhibition lags behind excitation by several milliseconds. This supports the hypothesis that inhibition controls the integration time window of excitation, enabling neurons to act as coincidence detectors.
In our model, the bumpy excitatory synaptic inputs are always ahead of inhibitory inputs, as shown in Figure 5d, regardless of which direction the approaching wave moves; Figure 5f shows such inputs when there is a wave moving in the opposite direction to that in Figure 5d. This effect is a result of the distance-dependent coupling and the distinct synaptic time scales of excitatory and inhibitory dynamics (τdE = 2.0 ms and τdI = 7.0 ms in Eq. 3). To illustrate this, we consider the model mentioned earlier, of a propagating wave front consisting of Nw = 19 firing excitatory neurons, and extend it to include inhibitory neurons. Because inhibitory neurons constitute only 20% of the neurons in the network, there are half as many inhibitory neurons as there are excitatory neurons along both dimensions. Therefore, the firing wave front consists of ∼Nw/2 inhibitory neurons, which occur at d = −DI, −DI + 2, …, −1, 1, …, DI − 2, DI, where DI = 15 is the spatial range of inhibitory connections. Using this extended model, we obtain the time course of inhibitory input received by our test neuron, shown in Figure 5c (dashed line); this is similar to what is observed in our balanced network (Fig. 5d,f). When we calculate the cross-correlation between excitatory and inhibitory bumps, we find that it attains its peak value at 3.9 ms (Fig. 5g, red line), similar to the lag found when a wave passes a neuron in the balanced, spatially extended network (Fig. 5g, black line). Based on the model of a wave front approaching a test neuron, we also find that the time lag remains the same for a wave moving in the opposite direction (i.e., starting at d = DI and ending at d = −DI). However, if we run the model with the same synaptic time scales (τdE = 2.0 ms and τdI = 2.0 ms), no lag exists in the cross-correlation function; if we run the model with the same distance-dependent coupling for excitatory and inhibitory neurons (Kij,i′j′E = Kij,i′j′I in Eq. 4), the lag is only 2.3 ms. These results therefore suggest that the lags between excitation and inhibition are dependent on the interplay of the synaptic time scales and the distance-dependent coupling but are independent of the propagation direction of the wave patterns. We also find that this lag is not evident for the balanced, randomly connected network, in which no wave patterns can form.
Membrane potential dynamics are non-Gaussian
In previous models with balanced excitation and inhibition, membrane potential follows a random walk and its distribution is Gaussian (Shadlen and Newsome, 1998); this is also the case for our randomly connected network, in which the membrane potential has a Gaussian distribution with a kurtosis near zero. However, for the balanced, spatially extended network, the synchronized bumpy inputs cause occasional large excursions of membrane potential from rest; this results in a heavy-tail distribution of membrane potential (Fig. 6a). This non-Gaussian dynamic is confirmed by calculating the kurtosis, which has a mean value over all neurons of 5.9 (SD 1.2), close to the kurtosis found in DeWeese and Zador (2006). The distribution of kurtosis values for the membrane potential of each neuron shows that there is a great diversity in membrane potential dynamics, with a maximum kurtosis of 11.8 and a minimum kurtosis of −0.3 (Fig. 6b). We find similar distributions for the synaptic inputs (Fig. 5b), which indicates that there are bumpy dynamics for both synaptic input and membrane potential. This coincides with the passing of waves, which causes highly synchronized synaptic inputs and a resultant sudden increase in membrane potential.
Finally, we consider the correlation properties of membrane potential. We calculate the auto-correlation of the membrane potential of each neuron and then calculate the average. As shown in Figure 6c, there are oscillations in auto-correlation as a function of time lag with a frequency of 33 Hz. Applying the same procedure to the excitatory synaptic inputs received by each neuron (Fig. 6d) yields a similar auto-correlation, with a characteristic frequency of 33 Hz. We also calculate the cross-correlation of the membrane potentials of nearby neurons. The averaged cross-correlation has the same frequency as the averaged auto-correlation but has a smaller amplitude of 0.5, as shown in Figure 6e; this is due to the dependence of excitatory connection strength upon distance (Eq. 4, see Materials and Methods). These correlation functions of membrane potential produced in our model are similar to those found in previous experiments in awake animals (Poulet and Petersen, 2008; Gentet et al., 2010). The frequency is, however, an order of magnitude greater than that found experimentally (Poulet and Petersen, 2008; Gentet et al., 2010), a discrepancy that is likely due to the scale of our network (see Materials and Methods). Suppose that the speed of the wave patterns is v and the spatial separation between them is l; then on average, the interval between waves approaching a certain area would be l/v. The corresponding frequency of synaptic inputs to the neurons around the area would therefore be v/l, meaning that increasing wave separations can potentially decrease the frequency. Indeed, when the network is scaled up to 600 × 600 with larger synaptic coupling ranges (DE = 20 and DI = 30 in Eq. 4), we find that the average frequency decreases from 33 Hz to 21 Hz.
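As a rough consistency check of this argument (the wave separation l is not reported, so the value below is inferred rather than measured): with v ≈ 2 gridpoints/ms and f ≈ 33 Hz,

$$ l \approx \frac{v}{f} = \frac{2\ \text{gridpoints/ms}}{0.033\ \text{ms}^{-1}} \approx 60\ \text{gridpoints}, $$

so a larger wave spacing implies a lower input frequency, in line with the decrease from 33 Hz to 21 Hz when the coupling ranges are doubled in the 600 × 600 network.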
Discussion
In this study, we have shown that propagating wave patterns with complex dynamics can emerge from spatially extended, spiking neural circuits with balanced excitation and inhibition. These patterns provide a mechanistic explanation for a range of neural dynamics, including the variability of spike timing, slow fluctuations of firing rates, and non-Gaussian dynamics of membrane potential. Additionally, these propagating wave patterns enable individual neurons to have highly synchronized synaptic inputs, as found previously (Stevens and Zador, 1998; DeWeese and Zador, 2006; Okun and Lampl, 2008). Our model therefore reconciles the model of balanced excitation and inhibition, and that of synchronized inputs, to account for irregular neural dynamics.
A unified account of irregular neural dynamics
Neural firing activity in vivo can be approximated as a doubly stochastic process (Churchland et al., 2011) because, in addition to the variability of spike timing, the firing rates of cortical neurons fluctuate over long time scales. A recent study, by extending conventional balanced networks with random uniform connections to those with clustered connections, is able to explain such doubly stochastic processes (Litwin-Kumar and Doiron, 2012). As in Litwin-Kumar and Doiron (2012), our model exhibits slow firing rate fluctuations while retaining the fast spiking variability. In Litwin-Kumar and Doiron (2012), the mechanism underlying the fluctuations of firing rates is spontaneous switching between two attractors, with one attractor representing the low-activity state and the other one representing the high-activity state. However, as we have demonstrated in our model, firing rate fluctuations are caused by the complex collective dynamics of propagating wave patterns, including their normally diffusive and superdiffusive dynamics. The dynamics of membrane potential, as found by whole-cell recordings, impose another set of constraints for theoretical accounts of variable neural activity, including their correlated fluctuations (Poulet and Petersen, 2008; Gentet et al., 2010) and non-Gaussian distributions (DeWeese and Zador, 2006; Hromádka et al., 2013; Tan et al., 2014). These important empirical observations have not been addressed in previous modeling studies but are captured by our model.
Our propagating wave-based model is able to reconcile the view of balanced excitation and inhibition and that of synchronized inputs to account for variable neural dynamics. On the one hand, propagating wave patterns with complex dynamics only emerge when excitatory and inhibitory synaptic currents are balanced. On the other hand, the emergent wave patterns provide individual neurons with large, synchronized inputs. In our model, when a propagating wave pattern moves toward a neuron, this neuron receives synaptic inputs from the firing neurons within the pattern. However, this pattern eventually leaves that neuron, resulting in a bump of large synaptic inputs. Because these patterns move in a seemingly random way, they produce synchronized, bumpy inputs that are randomly distributed in time for individual neurons. Such inputs have been synthesized and injected into cortical neurons in brain slices to account for the variability of spike timing (Stevens and Zador, 1998) and have also been found in the auditory cortex (DeWeese and Zador, 2006; Hromádka et al., 2013) and the somatosensory cortex of rats (Okun and Lampl, 2008). Aside from unraveling the network mechanism of these synchronized inputs to individual neurons, our propagating wave-based model can reproduce the cross-correlations of excitatory and inhibitory synaptic inputs to nearby neurons, and the time lag of several milliseconds between them, as observed in Okun and Lampl (2008).
In our model, the synchronized synaptic inputs cause non-Gaussian membrane potential dynamics, consisting of quiescent periods that are interrupted by short intervals of high-amplitude depolarization. Such non-Gaussian fluctuations of membrane potential have been observed in cortical neurons (DeWeese and Zador, 2006; Tan et al., 2014) but are inconsistent with the existing random walk models (Shadlen and Newsome, 1998) and the existing balanced network models (van Vreeswijk and Sompolinsky, 1998), which instead predict Gaussian dynamics of membrane potential. The main mechanism of the non-Gaussian dynamics of membrane potential in our model is the synchronized synaptic inputs with a heavy-tail distribution, indicating that synaptic inputs have heterogeneous magnitudes instead of homogeneous ones. Although the classical random walk model has served as a simple but powerful framework to understand the variability of neural dynamics, our study suggests a significant extension, in which balanced synaptic inputs with heterogeneous magnitudes should be considered.
Propagating wave patterns in balanced, spatially extended neural networks
The balanced networks studied here consist of conductance-based, integrate-and-fire neurons that are uniformly distributed over a 2D lattice. A key feature considered in these models is that the neurons, particularly excitatory neurons, receive inputs from neurons that are in physical proximity to them and that coupling strengths decrease as the distance between neurons increases. Such a distance-dependent rule has been found in the connections between neurons and in those of interareal networks; this coupling rule was recently hypothesized to be a general design principle applying over multiple scales of neural systems (Ercsey-Ravasz et al., 2013). Inhibitory coupling, however, is homogeneous, in accord with other modeling studies (Litwin-Kumar and Doiron, 2012); such homogeneous inhibitory connectivity, with less tuning specificity than that of excitatory neurons, has been reported in experimental studies (Fino and Yuste, 2011). We have mainly studied the intrinsic dynamics of propagating wave patterns that emerge from the network with regular structure and constant external inputs. However, we have found that such emergent dynamics are still preserved when some connections of the network are randomly rewired and when neurons receive noisy inputs. Another key feature in our network is that it is 2D; for the randomly coupled version of our model, we have shown that the network generates only variable spike trains; it cannot produce the non-Gaussian dynamics of membrane potential or highly synchronized synaptic inputs, mainly because no propagating waves emerge. We have studied similar networks with a ring structure and found no evidence of such wave patterns with complex dynamics, which suggests that 2D spatial extension is important for the emergence of complex propagating wave patterns. This is similar to other complex physical systems where such spatial extension is essential for the emergence of complex spatiotemporal patterns (Bak et al., 1988).
Our work links the propagating wave patterns that have been widely observed in the cortex (Rubino et al., 2006; Benucci et al., 2007; Han et al., 2008; Wu et al., 2008; Sato et al., 2012) with variable neural dynamics. As demonstrated in our study, propagating waves exhibit complex dynamics: they originate from random locations and propagate in a seemingly random way. If the network is out of balance in the excitation dominated state, the network exhibits more regular, global wave patterns, such as spiral waves, as have been found in pharmacologically manipulated, disinhibited neural circuits (Huang et al., 2010); if the network is in an inhibition dominated state, these patterns disappear shortly after they are formed, without any long-range propagation. In our study, we have shown that the collective dynamics of crescent-shaped propagating waves in the balanced network is anomalous superdiffusion, suggesting that there are long-range correlations in wave pattern dynamics. This prediction can be tested by analyzing propagating waves found in experiments in the same way as we have done in our modeling study.
Computational implications of propagating wave patterns
Our model shows that structured activity such as propagating wave patterns at the level of neural circuits can arise from highly variable firing activity of individual neurons, therefore bridging such seemingly contrasting dynamics at different neural levels. This property of spiking neural networks makes the cortex, a paradigmatic example of a complex system, analogous to other complex physical systems, such as turbulent fluids. In turbulence, coherent structures, such as vortices, similarly emerge from molecules that behave irregularly (Hussain, 1986). Our work, therefore, indicates that drawing on analogies between cortical circuits and turbulent fluids may bring novel insights into understanding complex spatiotemporal dynamics of the cortex (Wang, 2010).
What are the computational implications of the existence of propagating wave patterns in the cortex? Spiking waves, when propagating across neurons, would result in concerted synchronized volleys of activity from one subpopulation to the next, so that spike timing sequences can be formed. These sequences are an emergent property of our balanced networks rather than a product of propagation of synchronized volleys along feedforward networks, as in the framework of synfire chains (Abeles, 1991). Contrary to spike sequences propagating along feedforward chains with regular dynamics, the emergent spiking wave patterns in the balanced network have complex dynamics, including the presence of multiple, interacting waves distributed over space simultaneously. These dynamics may be necessary for neural systems to assemble different spiking sequences to achieve “neural syntax” (Buzsáki, 2010). The multiple, spatiotemporally distributed properties of localized propagating patterns also appear to be well suited for distributed parallel information processing; recently, it has been proposed that these patterns and their interactions can perform distributed dynamic computation in neural circuits (Gong and van Leeuwen, 2009). By showing that propagating wave patterns provide a unified account of a range of neural dynamics, our work suggests that it is important to explore their potential computational roles in the brain.
Notes
Supplemental material for this article is available at http://www.physics.usyd.edu.au/∼gong (Video of Propagating Wave Patterns). This material has not been peer reviewed.
Footnotes
We thank Prof. Paul Martin for helpful discussion and comments on the manuscript.
The authors declare no competing financial interests.
- Correspondence should be addressed to Dr. Pulin Gong, School of Physics, University of Sydney, NSW 2006, Australia. pulin.gong{at}sydney.edu.au