## Abstract

The preBötzinger Complex (preBötC) encodes inspiratory time as rhythmic bursts of activity underlying each breath. Spike synchronization throughout a sparsely connected preBötC microcircuit initiates bursts that ultimately drive the inspiratory motor patterns. Using minimal microcircuit models to explore burst initiation dynamics, we examined the variability in probability and latency to burst following exogenous stimulation of a small subset of neurons, mimicking experiments. Among various physiologically plausible graphs of 1000 excitatory neurons constructed using experimentally determined synaptic and connectivity parameters, directed Erdős-Rényi graphs with a broad (lognormal) distribution of synaptic weights best captured the experimentally observed dynamics. preBötC synchronization leading to bursts was regulated by the efferent connectivity of spiking neurons that are optimally tuned to amplify modest preinspiratory activity through input convergence. Using graph-theoretic and machine learning-based analyses, we found that input convergence of efferent connectivity at the next-nearest neighbor order was a strong predictor of incipient synchronization. Our analyses revealed a crucial role of synaptic heterogeneity in imparting exceptionally robust yet flexible preBötC attractor dynamics. Given the pervasiveness of lognormally distributed synaptic strengths throughout the nervous system, we postulate that these mechanisms represent a ubiquitous template for temporal processing and decision-making computational motifs.

**SIGNIFICANCE STATEMENT** Mammalian breathing is robust, virtually continuous throughout life, yet is inherently labile: to adapt to rapid metabolic shifts (e.g., fleeing a predator or chasing prey); for airway reflexes; and to enable nonventilatory behaviors (e.g., vocalization, breathholding, laughing). Canonical theoretical frameworks—based on pacemakers and intrinsic bursting—cannot account for the observed robustness and flexibility of the preBötzinger Complex rhythm. Experiments reveal that network synchronization is the key to initiate inspiratory bursts in each breathing cycle. We investigated preBötC synchronization dynamics using network models constructed with experimentally determined neuronal and synaptic parameters. We discovered that a fat-tailed (non-Gaussian) synaptic weight distribution—a manifestation of synaptic heterogeneity—augments neuronal synchronization and attractor dynamics in this vital rhythmogenic network, contributing to its extraordinary reliability and responsiveness.

- attractor dynamics
- breathing rhythm
- graph neural network
- motor systems
- preBötzinger Complex
- synchronization

## Introduction

The preBötzinger Complex (preBötC), the kernel of the breathing central pattern generator, produces inspiratory rhythm in mammals (Smith et al., 1991; Ashhad et al., 2022), for which emergent network mechanisms play an essential role (Pace et al., 2007a,b; Kam et al., 2013b; Wang et al., 2014; Feldman and Kam, 2015; Cui et al., 2016; Ashhad and Feldman, 2020; Ashhad et al., 2022). Inspiratory motor output emerges only when putatively rhythmogenic type I preBötC neurons (Rekling et al., 1996; Gray et al., 1999; Kam et al., 2013b; Feldman and Kam, 2015; Song et al., 2016; Del Negro et al., 2018; Ashhad and Feldman, 2020), initially firing at low frequency in the preinspiratory (preI) period, progressively synchronize to produce bursts of action potentials (APs) in type II preBötC output neurons (Gray et al., 1999; Tan et al., 2008; Picardo et al., 2013; Revill et al., 2015; Cui et al., 2016; Ashhad and Feldman, 2020) that ultimately drive the motor activity in each breathing cycle (Fig. 1*A–D*). Strikingly, the preI period, which marks the period for initial synchronization of type I neurons, varies considerably in duration from cycle to cycle and across experiments in mice as follows: (1) 240 ± 10 ms, mean ± SEM; range, 63–588 ms, *in vitro* (Fig. 1*B*,*C*; Gray et al., 1999; Carroll and Ramirez, 2013; Baertsch et al., 2018; Ashhad and Feldman, 2020); and (2) 160 ± 32 ms (mean ± SEM; *in vivo* anesthetized rats; Guyenet and Wang, 2001; Cui et al., 2016). We postulate that this variability in preI duration is the consequence of the variability in preBötC synchronization dynamics (Ashhad and Feldman, 2020). Congruent with this variability, simultaneous activation of a small subset of preBötC inspiratory-modulated neurons (4–9 of ∼1000 neurons) in rhythmic slices can initiate a burst with high (>80%) reliability, albeit at significant delays (170–370 ms; Kam et al., 2013a; Fig. 1*E–H*).
In these experiments, both the bursting probability and the latency time depend on the number of activated neurons; as the number of initially activated neurons increases, the mean latency to burst decreases. However, the latency varies in the following ways: (1) from trial to trial when the same neurons are stimulated, and (2) when different randomly selected subsets of the same number of neurons are stimulated. The combination of an exceptionally high probability of preBötC synchronization, leading to a burst, with the significant variability in the time to synchronize represents the twin traits of robustness and lability in this network. The robust yet labile output represents a critical feature of breathing, raising the challenge to decipher the underlying mechanisms. Furthermore, since the putative rhythmogenic type I neurons appear sparsely (13%) connected (Rekling et al., 2000; Ashhad and Feldman, 2020), we sought to determine how synchrony can reliably emerge from low levels of initial activity at the beginning of each cycle.

A powerful probe of network dynamics is determining its behavior in response to external perturbation (Forster and Pines, 2018). Thus, we sought to identify network features required to produce bursts consistent with experimentally observed preBötC connectivity and synchronization dynamics.

To investigate network properties, we constructed a minimalist model consisting of 1000 leaky integrate-and-fire (LIF) neurons with connectivity patterns and synaptic strengths consistent with experimental data. The dynamical properties of the LIF neurons were set to be consistent with the dynamics of (presumptively rhythmogenic) preBötC type I neurons (Rekling et al., 1996, 2000; Table 1). Next, we simulated the holographic uncaging experiments of Kam et al. (2013a), where targeted stimulation of ≤10 preBötC neurons could induce a burst, by stimulating a randomly selected set of ≤10 neurons (see Materials and Methods; Table 1).

By identifying plausible connectivity schemes with optimal fits, we found that preBötC synchronization was very sensitive to details of network connectivity. Only a small subset of biologically reasonable connectivity schemes [i.e., Erdős-Rényi (ER) graphs with lognormal distributions of synaptic weights] produced synchronization dynamics consistent with experiments. Importantly, we found experimentally determined lognormal distributions of synaptic weights (see Materials and Methods; Rekling et al., 2000), also present in other neuronal networks (Manabe et al., 1992; Lefort et al., 2009; Loewenstein et al., 2011; Buzsáki and Mizuseki, 2014), were crucial for network synchronization.

## Materials and Methods

#### Model

In each breathing cycle [starting at the end of the previous inspiratory burst (I-burst)], there emerges, after a brief delay, low-frequency synchronized firing of preBötC neurons (i.e., a burstlet; Kam et al., 2013b; Sun et al., 2019; Ashhad and Feldman, 2020; Kallurkar et al., 2020) that precedes and ultimately triggers, with high probability, each inspiratory burst (Fig. 1*B*,*C*). This is followed by a postburst refractoriness, and the cycle repeats. The dynamics of network synchronization underlying burstlets and subsequent burst formation is the focus here. We do not consider the termination of network bursts or the transfer of bursts to the output targets of the preBötC. To explore preBötC synchronization dynamics leading to bursts, we constructed models consisting of 1000 excitatory neurons (within an order of magnitude of the number of inspiratory neurons in rodent preBötC) to simulate experiments where simultaneous stimulation of four to nine preBötC inspiratory neurons can induce I-bursts (Figs. 1, 2). Notably, these experiments (Kam et al., 2013a) were performed in rhythmic *in vitro* slices under high-excitability conditions (9 mM K^{+} in the bath solution), with targeted neurons excited by holographic uncaging of caged MNI-glutamate. Furthermore, preBötC rhythmicity is mediated by excitatory neurons, as rhythmicity persists on pharmacological blockade of local inhibition, both *in vivo* and *in vitro* (Janczewski et al., 2013; Ashhad and Feldman, 2020). Thus, to best approximate these experimental conditions, we did not incorporate inhibitory neurons in our models.

We modeled the preBötC by treating its constituent neurons as low-dimensional, nonlinear dynamical systems interacting on a quenched (and frequently random) directed graph, an approach pioneered in other neural networks (Gerstner, 1995; Ermentrout, 1998; Perin et al., 2011; Gal et al., 2019). In such studies, neuronal models fall into the following two distinct classes: (1) firing rate models that treat neuronal output in terms of a single firing rate variable but ignore the temporal structure of the underlying spike trains; or (2) spiking models that consider the timing of each spike in the network, where neuronal interactions depend on the temporal coincidence of discrete EPSPs rather than on their rate-based, highly smoothed temporal summation (Bernander et al., 1994; Wang and Buzsáki, 1996; Diesmann et al., 1999; Lindsey et al., 2000; Kumar et al., 2010). Since firing-rate models cannot readily account for the observation that the simultaneous stimulation of a small subset of neurons (four to nine) can induce a global response (i.e., a network burst, at considerable delay; range, ∼60–400 ms) *in vitro* (Kam et al., 2013a), we focus solely on spiking models by incorporating the LIF models of preBötC type I neurons (see Neuronal dynamics, below).

#### Neuronal dynamics

In the LIF model for point neurons, the change of somatic potential *V*_{i} of the *i*th neuron at time *t* is determined by the following (Gerstner, 1995):

τ_{m} d*V*_{i}/d*t* = −(*V*_{i} − *V*_{rest}) + τ_{m} Σ_{j} *W*_{ij} *I*_{j}(*t* − Δ*t*_{ij}), (1)

where τ_{m} is the membrane time constant, Δ*t*_{ij} is the synaptic latency from the *j*th to the *i*th neuron, and the sum is over all neurons that synapse to the *i*th neuron. The synaptic drive from the *j*th neuron is as follows:

*I*_{j}(*t*) = Σ_{k} exp[−(*t* − *t*_{j}^{k})/τ_{s}] Θ(*t* − *t*_{j}^{k}), (2)

where τ_{s} is the synaptic time constant, *t*_{j}^{k} is the time of the *k*th spike of the *j*th neuron, and Θ is the Heaviside step function. The synaptic weight *W*_{ij} sets the strength of the connection from the *j*th to the *i*th neuron, while *W*_{ij} = 0 for unconnected pairs.

For single spikes, these equations can be solved analytically. Assuming neuron *i* to be at its resting potential before the arrival of a spike, the solution for its potential (which is the waveform of the resulting EPSP) is the following:

*V*_{i}(*t*) = *V*_{rest} + *W*_{ij} [τ_{m}τ_{s}/(τ_{m} − τ_{s})] (e^{−*t*′/τ_{m}} − e^{−*t*′/τ_{s}}), where *t*′ = *t* − *t*_{j}^{k} − Δ*t*_{ij}. (3)
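For concreteness, the single-spike dynamics can be integrated numerically and checked against the analytic EPSP waveform. The following is an illustrative sketch, not the published code; the values of τ_{s} and *W* are placeholder assumptions:

```python
import numpy as np

# Illustrative reimplementation (not the published code) of the LIF EPSP:
# Euler integration of dV/dt = -V/tau_m + W*exp(-t/tau_s) for a single
# presynaptic spike arriving at t = 0, with V measured relative to V_rest.
# tau_s and W values here are placeholder assumptions.
tau_m, tau_s = 20.0, 1.0   # membrane and synaptic time constants (ms)
W = 1.0                    # synaptic weight (mV/ms)
dt = 0.05                  # integration time step (ms), 50 us as in the simulations

def epsp_numeric(t_max=100.0):
    """Forward-Euler integration of the single-spike LIF equation."""
    n = int(t_max / dt)
    t = np.arange(n) * dt
    V = np.zeros(n)
    for k in range(1, n):
        dV = -V[k - 1] / tau_m + W * np.exp(-t[k - 1] / tau_s)
        V[k] = V[k - 1] + dt * dV
    return t, V

def epsp_analytic(t):
    """Closed-form EPSP: W*tau_m*tau_s/(tau_m - tau_s)*(e^(-t/tau_m) - e^(-t/tau_s))."""
    return W * tau_m * tau_s / (tau_m - tau_s) * (np.exp(-t / tau_m) - np.exp(-t / tau_s))

t, V_num = epsp_numeric()
err = np.max(np.abs(V_num - epsp_analytic(t)))   # Euler vs analytic agreement
```

With these placeholder values, the numerically integrated waveform matches the closed-form EPSP to within the expected first-order Euler error.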

#### Choice of the physiological parameters

While the synaptic latency, Δ*t*_{ij}, was taken directly from the experimental data (Rekling et al., 2000), we fit parameters τ_{m}, τ_{s}, and *W*_{ij} to obtain approximately the same EPSP profile as observed experimentally. The EPSP decay time constant of 20 ms is approximately τ_{m} (since τ_{m} ≫ τ_{s}). We then chose τ_{s} to fit the experimentally determined EPSP rise time (Rekling et al., 2000). The lognormal distribution for EPSP amplitudes was determined from published parameters (Rekling et al., 2000). Specifically, the unitary EPSP amplitude of inputs from preBötC type I neurons was distributed as 2.8 ± 1.5 mV (mean ± SD). The high variance of EPSP amplitude requires a heavy-tailed distribution to capture the entire range of EPSPs without incurring negative values (i.e., assuming a symmetric normal distribution, the 99.7% (mean ± 3SD) range of EPSPs would be −1.7 to 7.3 mV). Furthermore, this EPSP distribution is consistent with the biologically plausible, lognormal distributions of unitary EPSP amplitudes present in several brain regions (Buzsáki and Mizuseki, 2014). Thus, we modeled EPSP amplitudes by fitting the parameters of a lognormal distribution for synaptic weights *W*_{ij} to reflect the experimental range of unitary EPSP amplitudes. In terms of the model parameters, an EPSP amplitude of 2.8 mV required a *W*_{ij} of 300 mV/ms.
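As a sketch, the lognormal parameters can be obtained by moment matching to the reported 2.8 ± 1.5 mV (mean ± SD); the sample size and seed below are arbitrary:

```python
import numpy as np

# Illustrative moment matching: lognormal parameters mu, sigma chosen so the
# distribution reproduces the reported unitary EPSP amplitudes of
# 2.8 +/- 1.5 mV (mean +/- SD; Rekling et al., 2000). Sample size is arbitrary.
mean_epsp, sd_epsp = 2.8, 1.5                        # mV

sigma2 = np.log(1.0 + (sd_epsp / mean_epsp) ** 2)    # log-space variance
mu = np.log(mean_epsp) - sigma2 / 2.0                # log-space mean

rng = np.random.default_rng(0)
amplitudes = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=200_000)
```

Unlike a Gaussian with the same moments, every sampled amplitude is positive, while the sample mean and SD recover the experimental values.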

The deterministic generation of spikes by model neurons was controlled as follows. When the neuronal potential exceeded a threshold, *V**, it generated an action potential, and the potential dropped instantaneously back to *V*_{rest}, with the boundary condition that the neuron cannot fire during the refractory period after a previous spike (τ_{refractory} = 3 ms; Gerstner, 1995; Ermentrout, 1998; Ashhad and Feldman, 2020); the potential then continued to obey Equation 1. In simulations where we allowed spontaneous stochastic firing (see Fig. 10 only), a neuron could also fire regardless of its potential, reflecting inherent neuronal excitability. This baseline firing was modeled as a Poisson process with the frequency *f*_{noise} (0.5–2.0 Hz), creating a background uncorrelated network activity (Gray et al., 1999). We demonstrate the dynamics of the model in a network with three neurons (Fig. 2*C*, left).
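The uncorrelated Poisson background can be sketched as follows (illustrative only; population size and duration are reduced, and the refractory constraint is omitted for brevity):

```python
import numpy as np

# Illustrative sketch of the uncorrelated Poisson background (f_noise in
# 0.5-2.0 Hz). In each 50-us step, a neuron spikes with probability
# f_noise * dt (Bernoulli approximation of a Poisson process). Population
# size and duration are reduced, and refractoriness is omitted, for brevity.
rng = np.random.default_rng(1)
n_neurons, f_noise = 200, 1.0        # neurons, background rate (Hz)
dt = 50e-6                           # integration time step (s)
t_total = 1.0                        # simulated duration (s)
n_steps = int(t_total / dt)

spikes = rng.random((n_steps, n_neurons)) < f_noise * dt
rate_per_neuron = spikes.sum(axis=0) / t_total   # empirical rate (Hz)
```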

#### Network structure

To complete our model definition, we specified the fixed network over which the neurons interact. The preBötC network topology does not appear to exhibit any anatomic and cytoarchitectural regularity, as distinguished from such structures as cortical columns or hippocampus. Consequently, we investigated a few classes of physiologically plausible networks. Specifically, we considered the following four distinct ensembles of random directed graphs: (1) ER (Erdos, 1959; Gilbert, 1959) directed graphs; (2) directed graphs with an increased number of directed 3-simplices (small world network; see definition below); (3) localized; and (4) hierarchical graphs.

##### ER directed graphs.

The motivation to study ER graphs (Erdos, 1959; Gilbert, 1959) was to model random connectivity without any a priori assumption about preBötC network topology. In the ensemble of directed ER graphs, the probability for any two preBötC neurons to be unidirectionally connected was the same (i.e., *p* = 0.065 based on experimental data; Rekling et al., 2000). Bidirectional connections, which were not observed experimentally, were not forbidden, although their probability of occurrence *p*^{2} was negligible. Representative diagrams of these networks are shown in Figure 3.
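Constructing one realization of such a directed ER connectome is straightforward; the sketch below is illustrative, with the adjacency convention A[i, j] = True for a synapse from neuron *j* to neuron *i* matching the weight-matrix convention used here:

```python
import numpy as np

# Illustrative sketch of one directed ER realization: each ordered pair
# (j -> i) is connected independently with p = 0.065 (Rekling et al., 2000);
# A[i, j] = True means neuron j synapses onto neuron i; autapses excluded.
rng = np.random.default_rng(2)
N, p = 1000, 0.065

A = rng.random((N, N)) < p
np.fill_diagonal(A, False)

out_degree = A.sum(axis=0)     # number of efferent targets of each neuron
```

The mean out-degree of a realization is close to *p*(*N* − 1) ≈ 65 efferent targets per neuron.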

##### Directed graphs with an increased number of directed 3-simplices (small world network).

We also considered the preponderance of 3-simplices, where three neurons, A, B, and C, interconnect such that neuron A synapses onto both neurons B and C, while neuron B synapses onto neuron C (Fig. 2*C*), in constructing our models. Notably, in small-world networks (Watts and Strogatz, 1998) and for some neuronal networks, including neocortex, the number of directed 3-simplices is significantly higher (Perin et al., 2011; Gal et al., 2019) than that expected in a typical ER network [*N*_{s} = (*pN*)^{3}]. Thus, in some models, we added 3-simplices to ER networks to exceed the average number of simplices in the ER network by ∼11%.
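Directed 3-simplices in a given realization can be counted directly from the adjacency matrix and compared against the ER expectation (*pN*)^{3}; the sketch below uses the same j → i adjacency convention:

```python
import numpy as np

# Illustrative count of directed 3-simplices (A -> B, A -> C, B -> C) in a
# directed ER realization, for comparison with the ER expectation ~ (pN)^3.
# With A[i, j] = 1 meaning j synapses onto i, each transitive triangle is a
# two-step path a -> b -> c that closes with a direct edge a -> c.
rng = np.random.default_rng(6)
N, p = 1000, 0.065

A = (rng.random((N, N)) < p).astype(np.int64)
np.fill_diagonal(A, 0)

n_simplices = int(((A @ A) * A).sum())   # closed two-step paths
expected = (p * N) ** 3                  # ER expectation
```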

##### Localized network.

The localized networks were constructed as follows: we placed 1000 neurons in a planar, square array with neighboring neurons separated by unit distance. We then added directed edges to form a network, such that the probability for two nodes to be connected decayed exponentially with distance (motivated by the fact that synaptic connections in some neural microcircuits decrease with interneuronal distance; Perin et al., 2011). The probability of a direct connection between neurons *i* and *j* was given by the following:

*P*_{ij} = *C*e^{−*d*_{ij}/λ}, (4)

where *d*_{ij} is the distance between them, λ is the mean connection length, and *C* is a normalization constant that sets the overall connection probability. In the limit λ → ∞, this ensemble approaches the uniform connectivity of the ER graphs.
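One way to realize such a network is to normalize the exponential kernel so that the overall connection probability matches the measured *p* = 0.065; the value of λ and the normalization scheme below are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of the localized network: ~1000 neurons on a square
# grid, with directed-edge probability decaying exponentially with distance,
# P_ij = C * exp(-d_ij / lambda). lambda and the normalization C (chosen to
# match the overall connection probability p = 0.065) are assumptions.
rng = np.random.default_rng(3)
side = 32                                     # 32 x 32 = 1024 neurons
coords = np.indices((side, side)).reshape(2, -1).T.astype(float)
N = len(coords)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
lam = 4.0                                     # mean connection length (grid units)

kernel = np.exp(-d / lam)
np.fill_diagonal(kernel, 0.0)                 # no autapses
C = 0.065 * N * (N - 1) / kernel.sum()        # set the mean edge probability
P = np.clip(C * kernel, 0.0, 1.0)

A = rng.random((N, N)) < P                    # A[i, j]: directed edge j -> i
mean_out_degree = A.sum() / N
```

The realized connectivity retains the overall density of the ER case while concentrating edges between nearby neurons.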

##### Hierarchical networks.

Finally, we considered the ensemble of (*c*,*q*)-hierarchical networks of *N* neurons with *c* central groups of at least *q* neurons each, and one large peripheral group containing almost all neurons of the network (*N* ≫ *cq*); *q* was chosen such that *q* simultaneous EPSPs incident on a neuron are required to produce a spike. For example, each neuron in the peripheral group synapsed onto each neuron in the first central group; each neuron in the first central group synapsed onto each neuron in the second central group. Neurons in central groups were all-to-all connected, while neurons in the peripheral group were not connected to each other. Finally, each neuron from the last central group synapsed onto each neuron in the peripheral group. Figures 2*C* and 3*E* are examples with *c* = 2. We introduced more than one central group to avoid bidirectional connections between pairs of neurons, as these have not been observed experimentally (Rekling et al., 2000). We considered such networks as they are most likely to produce a burst.
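A (2, 10)-hierarchical adjacency matrix can be sketched as follows; the group sizes are illustrative:

```python
import numpy as np

# Illustrative sketch of a (c, q)-hierarchical network with c = 2: one large
# peripheral group and two central groups of q neurons each. Periphery ->
# group 1 -> group 2 -> periphery; central groups are internally all-to-all;
# peripheral neurons are not interconnected. N, c, q values are assumptions.
N, c, q = 1000, 2, 10
A = np.zeros((N, N), dtype=bool)                    # A[i, j]: edge j -> i

periphery = np.arange(N - c * q)
groups = [np.arange(N - c * q + g * q, N - c * q + (g + 1) * q) for g in range(c)]

A[np.ix_(groups[0], periphery)] = True              # periphery -> first central group
for g in range(c):
    A[np.ix_(groups[g], groups[g])] = True          # all-to-all within each central group
    if g + 1 < c:
        A[np.ix_(groups[g + 1], groups[g])] = True  # chain: group g -> group g+1
np.fill_diagonal(A, False)                          # no autapses
A[np.ix_(periphery, groups[-1])] = True             # last central group -> periphery
```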

#### Simulations for stimulation of 1–10 preBötC neurons leading to network bursts

Stimulation (by holographic photolysis of caged glutamate) of four to nine preBötC neurons induces I-bursts through synchronization of preBötC rhythmogenic neurons with a success rate of >80% (Kam et al., 2013a). To model the experimental conditions in a connected population of 1000 LIF point neurons, we mimicked the effect of 9 mM [K^{+}] in the extracellular bathing solution on neuronal excitability by incorporating a constant depolarizing potential that put *V*_{rest} at approximately −60 mV while keeping the threshold for initiation of spikes at −48 mV; for the purposes of the model, this was represented by a +12 mV depolarization from baseline potential, redefined as 0 mV (Fig. 2*C*). We simulated the effects of experimental photostimulation by “depolarizing” chosen neurons to produce a similar pattern of spikes (Kam et al., 2013a). Specifically, to simulate the holographic photolysis experiments (Kam et al., 2013a), we initiated spiking in one to nine neurons in a connected population (Rekling et al., 2000) of 1000 LIF neurons (see Model subsection), at 25 ± 3 Hz per their experimentally characterized firing behavior (Kam et al., 2013a; Fig. 1*F*). We characterized the spiking pattern by the time between activation and the first spike τ_{delay}, the average period of spiking of the stimulated neurons, *T*_{laser}, the SD, Δ*t*, of the spiking period distribution, and the number of spikes produced *n*_{spikes}, all based on experimental measurements (Kam et al., 2013a). The parameters used for the model are listed in Table 1. Figure 4 illustrates a typical process of burst generation in the model.

##### Seeding preBötC neurons with low-frequency uncorrelated firing to initiate network bursts

preBötC type I inspiratory neurons, which collectively are putatively rhythmogenic (Rekling et al., 1996; Gray et al., 1999; Picardo et al., 2013; Ashhad and Feldman, 2020), start firing *in vitro* at very low frequencies (∼0.5–1 Hz) in the late interburst interval, referred to as the preI period. Their activity leads to progressive network synchronization in the preI period that culminates in an inspiratory burst. We modeled this preI activity by initializing the neurons in the network to fire randomly at Poisson-distributed frequencies (mean range, 0.5–2.0 Hz), thereby seeding the network with low levels of uncorrelated activity, as observed preceding each burst in experiments (Rekling et al., 1996; Gray et al., 1999; Picardo et al., 2013; Ashhad and Feldman, 2020). After this initialization of neuronal activity, we computed the network firing rate in 5 ms bins as a measure of instantaneous synchrony in the network (Riehle et al., 1997). We chose this temporal window to capture correlated firing (i.e., synchrony) among neurons as this window (of coincidence) was too short for any neuron to spike more than once (their refractory period was set to 3 ms in all the models, a reasonable boundary condition based on experimental data; Table 1). Next, we computed average network activity from the firing rate computed in 5 ms bins, as a two-pass moving average, with a 25 ms moving window and 5 ms step size.
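The binning and smoothing steps can be sketched as follows; the spike times below are synthetic stand-ins for simulated network activity:

```python
import numpy as np

# Illustrative sketch of the synchrony measure: population spike counts in
# 5-ms bins give a network firing rate, which is then smoothed by a two-pass
# moving average (25-ms window). Spike times here are synthetic stand-ins.
def network_rate(spike_times_ms, t_max_ms, bin_ms=5.0):
    """Population spike count per 5-ms bin, converted to spikes/s."""
    edges = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts / (bin_ms / 1000.0)

def moving_average(x, window_bins=5):
    """One pass of a 25-ms (5-bin) moving average; call twice for two-pass."""
    kernel = np.ones(window_bins) / window_bins
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(4)
spikes = rng.uniform(0.0, 500.0, size=2000)      # uncorrelated spikes, 0-500 ms
rate = network_rate(spikes, 500.0)
smooth = moving_average(moving_average(rate))    # two-pass smoothing
```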

#### Generalized senderness

To introduce more general network quantities related to the efferent synaptic connectivity and neuronal activity, we first denote the initial state of the network by the vector **X**_{(0)}, where its *i*th component is 1 if the *i*th neuron was initially stimulated and 0 otherwise; the number of these stimulated neurons is the sum of the components of **X**_{(0)}. The quenched random matrix of synaptic weights **W**, with matrix elements *W*_{ij}, specifies the weight of the synapse from neuron *j* to neuron *i*. With these definitions, matrix multiplication generates a new vector, as follows:

**X**_{(n)} = **W** **X**_{(n−1)} = **W**^{n} **X**_{(0)}. (5)

While this formally takes into account next-nearest and higher-order neighbors, using Equation 5 to investigate how the initial stimulus propagates through the network implicitly assumes a linear weighting of neuronal activity propagating from one order to the next via a simple synaptic weight multiplication. In reality, the activation function of the neuron is nonlinear, having a threshold.

To account for the nonlinearity, we propose a nonlinear generalization of the above quantity, as follows:

**X**_{(k)} = (**W** **X**_{(k−1)})^{i_{k}}, (6)

where the *i*_{k}th power of the vector **X** is taken componentwise (i.e., each component is independently raised to that power). Finally, to obtain a single scalar quantity, we sum over the elements of the vector **X** to obtain our definition of generalized senderness **S** weighted by synaptic strength, as follows:

**S** = Σ_{m} [**X**_{(n)}]_{m}. (7)

Similarly, we define an unweighted generalized senderness by setting all nonzero elements of **W** to 1. Thus, *n* counts the number of generations of neighbors that contribute to **X**_{(n)} at the *n*th generation. Higher exponents increase the effect of nonlinearity of the neuronal response function. In the following, we consider both low-order (small *n*) and weakly nonlinear (small integer *i*_{k} for all 1 ≤ *k* ≤ *n*) versions of generalized senderness. We address the question of which of these quantities is predictive for burst initiation using a machine learning-based filter.
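An illustrative implementation of the weighted and unweighted generalized senderness described above (the random weights and the choice of generation count and exponents are assumptions for demonstration):

```python
import numpy as np

# Illustrative implementation of generalized senderness: propagate the binary
# stimulation vector X_(0) through the weight matrix for n generations,
# raising each component to a small integer power i_k at every step, then
# sum. The random weights and the choices n = 2, i_k = 1 are assumptions.
def generalized_senderness(W, stimulated, powers):
    """Sum of components of X_(n), with X_(k) = (W @ X_(k-1)) ** i_k."""
    x = np.zeros(W.shape[0])
    x[stimulated] = 1.0                     # X_(0): 1 for stimulated neurons
    for i_k in powers:
        x = (W @ x) ** i_k
    return x.sum()

rng = np.random.default_rng(5)
N, p = 200, 0.065
A = (rng.random((N, N)) < p).astype(float)  # unweighted (0/1) connectome
np.fill_diagonal(A, 0.0)
W = A * rng.lognormal(mean=0.0, sigma=0.5, size=(N, N))  # weighted version

stim = rng.choice(N, size=7, replace=False)
s_weighted = generalized_senderness(W, stim, powers=[1, 1])
s_unweighted = generalized_senderness(A, stim, powers=[1, 1])
```

In the linear case (all exponents 1), the quantity reduces to the summed synaptic weight reachable from the stimulated set in *n* steps; exponents >1 emphasize input convergence.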

#### Simulations and graph analysis

Network models were implemented in Python with NumPy. Simulations were performed on the Jupyter Notebook platform with an integration time step of 50 µs. Graph statistics (Fig. 2) were analyzed in the Gephi (version 0.9.2) graph visualization platform (https://gephi.org). Data analysis was performed with custom-written software in IgorPro (version 7.08; WaveMetrics).

#### Experimental design and statistical analyses

Statistical analyses were performed using IgorPro (version 7.08; WaveMetrics). For statistical significance testing, normality was neither tested nor assumed; hence, nonparametric tests were used. For box-and-whisker plots, the center line represents the median; box limits represent the upper and lower quartiles, and whiskers represent the 90th to 10th percentile range.

#### Data availability

The code for the simulations generated in this study is available from the corresponding author on reasonable request. Further information and requests for resources and reagents should be directed to and will be fulfilled by the corresponding author J.L.F. (feldman{at}g.ucla.edu).

## Results

### Randomly connected networks of excitatory neurons replicated preBötC synchronization

We explored models to simulate the experimental result that stimulation of four to nine preBötC neurons can induce I-bursts (Figs. 1, 2; see also Materials and Methods). The principal measurable quantities following the exogenous stimulation of the preBötC (Kam et al., 2013a) are the number of stimulated neurons, the burst probability after stimulation, and, when bursts occur, the latency (i.e., delay between stimulation onset and bursting). In experiments, a burst corresponds approximately to the concurrent firing of the entire network (Fig. 1*B*, typical burst data). In simulations, a burst occurred when the potential averaged over all neurons was above the individual neuron firing threshold. Our results depended only weakly on this threshold because typically the entire network was active in a burst within a short temporal window of 2–3 ms after its onset (Figs. 4*A*,*B*, 5*A*). This criterion for bursting allowed us to distinguish bursts from burstlets, which are transient maxima (smaller than bursts) in the network population activity that subside rather than progressing to a burst (see Fig. 10). These burstlets resemble those seen in experiments (Fig. 1*B*, *).

We simulated a typical process of preBötC synchronization and burst generation on ER networks by activating seven randomly selected neurons (the experimental sweet spot in the range of four to nine neurons for burst activation) over five trials (Fig. 5, R1–R5). In five separate runs, the same set of seven randomly chosen neurons in the same network was activated. A key observation was that the burst activation time series replicated the experimental data: in this network, slight variations in the order and precise individual activation times (±5 ms, SD) of the same subset of neurons resulted in considerable variability in the evolution of activity. Bursts were not observed in every run (one failure in five runs), and when bursts were produced (four of five runs), there was considerable dispersion of delay times (99–195 ms; 145 ± 21 ms, mean ± SEM); these values were in the same range as experimental data (57–160 ms; mean ± SEM, 125 ± 23 ms; Kam et al., 2013a; compare Figs. 5*A*,*C*, 1*I*). Notably, no tuning of experimentally measured parameters was necessary.

In these representative trials, network activity growth because of spiking synchronization could be divided into two epochs. During the first epoch, activity growth was insignificant (Fig. 5*C*), insufficient to initiate a burst. In some cycles, spikes from activated neurons did not induce any substantial spiking in their downstream (postsynaptic) neurons (e.g., the first set of spikes for R1–R5 in Fig. 5*A*). Consequently, network activity decayed to baseline once these neurons stopped firing in response to their initial stimulus (Fig. 5*B*,*C*, first set of spikes between 0 and 50 ms). With repeated activation of stimulated neurons, their spikes had higher coincidence in certain intervals because of the inherent jitter in activation times. This led to a suprathreshold summation of synaptic potentials in their postsynaptic neurons, activating a sufficiently large group of neurons to drive synchronization, which marked the second epoch. During this epoch, the transient network activity continued and spread synchronous activity that fueled a rapid and sudden amplification of network activity (i.e., a burst). At this point, the simulations were terminated (Fig. 4; when a simulation was not terminated, the network entered a continuous bursting regime). For a run that did not end with network synchronization (Fig. 5*A–C*, R2), the simulation trial ended with the last spike of the initially activated neurons (Fig. 5*A*) and silencing of the entire network (Fig. 5*B*,*C*, red trial).

In experiments *in vitro*, one does not know the connections of the stimulated neurons. Accordingly, in simulations, we chose the stimulated neurons randomly with equal probabilities, using our measures of senderness ex post facto to understand why the stimulation of the same set of neurons did not always lead to the same bursting outcome (see below). To address the probabilistic nature of eliciting a burst, each trial with a given subset of stimulated neurons was performed five times (Fig. 6, each curve). The synchronization probability was defined as the ratio of the number of successful burst-inducing trials to the total number of trials.

Almost all network topologies with physiologically constrained parameters we tested (see Materials and Methods) synchronized (i.e., showed significant spike-to-spike correlated activity between and among a large fraction of neurons; Kumar et al., 2010; Wang, 2010; Ratté et al., 2013) at various latencies (Fig. 6), a hallmark of preBötC inspiratory activity (Ashhad and Feldman, 2020; Fig. 1*C*,*E–I*). However, the ensembles of networks with differing connectomes demonstrated different patterns of behavior. For both localized (Fig. 6*A–C*) and hierarchical (Fig. 6*D–F*) networks, which had significant clustering (a measure of how strongly neurons are interconnected into groups; see Materials and Methods), the threshold number of stimulated neurons to synchronize the network with an 80% success rate ranged from two to four for the majority of the networks tested over multiple realizations (each realization representing a different network but with similar topological properties; Watts and Strogatz, 1998). We considered these networks to be too excitable, since the minimum observed experimentally for >80% success was four stimulated neurons. Only randomly connected ER graphs produced preBötC bursts, with synchronization at a >80% success rate, in response to activating four to nine neurons (Fig. 6*G–I*), at latencies comparable to those seen experimentally (Fig. 6*I*, gray box). We recall that in these networks, the outward connection from any neuron to other neurons was based on the experimentally measured probability of 0.065 and included an additional random factor in connection strength to reflect the experimentally determined variance (Rekling et al., 2000). Consequently, the threshold number of stimulated neurons required to reliably (>80%) synchronize a network varied among networks with the same overall parameters (Fig. 6*H*, the probabilities to synchronize; each curve is a different network with a different choice of activated neurons).
Illustrative of the effects of spike timing, Figure 6, *C*, *F*, *I*, and *L*, shows the variability in time to synchronize (as error bars) within a given network when the same neurons were activated in random order across various trials. The range of mean latencies to synchronize different ER networks was 39–235 ms, inversely related to the number of stimulated neurons (Fig. 6*I*), similar to experiments (compare Figs. 6I, 1*H*,*I*; mean latency for the threshold number of stimulated neurons: model, 73 ± 3 to 184 ± 15 ms, mean ± SEM, across different network iterations; experiments, 255 ± 43 ms, mean ± SEM; Kam et al., 2013a; range, 170–370 ms). Interestingly, when we tweaked the connectivity in these ER networks to increase the number of directed triangular edges (i.e., 3-simplices; see Materials and Methods) by 11% (shifting network topology to small world like; Watts and Strogatz, 1998; Figs. 3, 6*J–L*), the threshold number of neurons required to reliably synchronize the network decreased to 2–4 in 5 of 10 different realizations (Fig. 6*K*) with decreased latency to synchronize (27 ± 0.6 to 116 ± 16 ms, mean ± SEM; Fig. 6*L*).

Thus, increasing the clustering coefficients decreased the number of neurons required for global synchronization (Fig. 6, compare *A*, *D*, *G*, *J*). Indeed, if the network was clustered, then it was more likely that neuronal activation confined to one cluster led to a burst, while activation of the same number of neurons in different clusters did not. The synchronization between activated neurons plays a marginal role in this picture. As a result, the bursting probability in these other networks rapidly jumps between 0 and 1. While this establishes that ER networks behave in ways consistent with experimental data, we cannot exclude the possibility that other network topologies not evaluated here could produce synchronization dynamics consistent with experimental data.

Similar reasoning rationalizes the observed latency time as a function of the number of activated neurons. When small numbers of neurons are stimulated, the number of spiking neurons initially grows slowly in the ER network, resulting in relatively long latency times. Moreover, the latency time decreases monotonically as the number of initially stimulated neurons is increased. In contrast, in clustered networks neuronal stimulation is more likely to lead to subsequent activation of clustered groups of neurons, which significantly reduces the latency time. Of course, outliers are possible [Fig. 6*C*, the red trace for the localized matrix (where the long delay time reflects many synchronization attempts between the initially stimulated neurons, with only the last attempt meeting with success)]. Since the experimental data suggest that the delay time varies smoothly with the number of activated neurons, this observation further supports our inference that cluster-free networks (e.g., the ER) are necessary to reproduce consistently all of the dynamic data on synchronization and bursting.

Can these results illuminate the mechanistic underpinnings of robustness and lability of breathing? As noted above, in experiments, the latency to synchronize the preBötC network toward inspiratory bursts is highly variable (Carroll and Ramirez, 2013; Kam et al., 2013a,b; Cui et al., 2016; Del Negro et al., 2018; Ashhad and Feldman, 2020; Fig. 1*C*,*H*). Additionally, when the same neurons were stimulated across multiple trials, there was an occasional failure to induce bursts (Kam et al., 2013a). Similarly, across multiple simulation trials, activated neurons could induce or fail to induce a burst (Figs. 5*A–C*, 6). Moreover, when bursts occurred, multiple intervals of spiking from the stimulated neurons were often required, as spikes in the initial intervals were unable to induce a burst. For instance, in Figure 5*A*, in trial R1 (black), the third set of spikes from the stimulated neurons at 100 ms recruited quiescent neurons, but activity failed to propagate (Fig. 5*A–C*, black box). However, in trial R4 (Fig. 5*A–C*, blue), spikes from the stimulated neurons induced a burst even when they were less synchronous than the ones in R1 at 100 ms (Fig. 5*B*,*C*, black boxes). Similarly, in trial R5 (Fig. 5*B*,*C*, purple) at 150 ms, the stimulated neurons recruited several of their postsynaptic neurons, but the network failed to synchronize fully (Fig. 5*B*,*C*, purple box). In the same trial at 200 ms, weaker synchrony among the stimulated neurons (compared with those at 150 ms) induced a burst. Based on the properties of synchronous propagation of pulse packets (Diesmann et al., 1999), we expected that, with the same set of input neurons, activity propagation should depend only on the initial synchrony of the active neurons.
In contrast, we observed a failure to produce bursts even when the synchrony among the stimulated neurons was higher than in some successful intervals, revealing that the local connectivity of spiking neurons with their postsynaptic partners convolves with the instantaneous network state to shape the flow of activity.

To better understand how activity fluctuations ultimately trigger a network burst, we computed a statistical topological parameter, *S*, associated with the efferent synapses of each neuron (Fig. 7; Materials and Methods). This parameter represents either the sum of all efferent synaptic weights with which a neuron connects to its postsynaptic neighbors (*S*_{weight}) or its total number of efferent projections (*S*_{synout}; Fig. 7*A–D*). We reasoned that a set of recruited neurons with more efferent synapses would have a higher probability for their outputs to converge onto their postsynaptic neurons, especially in a sparsely connected network like the preBötC. Thus, we hypothesized that a high *S* of recruited neurons was critical to induce a network synchronization trajectory following the initial activation of a few neurons.
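In matrix form, both senderness variants reduce to row sums of the synaptic weight matrix. A minimal numpy sketch, assuming the convention that w[i, j] holds the weight of the synapse from neuron i to neuron j, with illustrative lognormal parameters:

```python
import numpy as np

def senderness(w):
    """First-order senderness from a weight matrix.

    w[i, j] is the synaptic weight of the connection i -> j (0 if absent).
    Returns S_weight (summed efferent weights) and S_synout (efferent counts).
    """
    s_weight = w.sum(axis=1)          # total outgoing synaptic weight per neuron
    s_synout = (w > 0).sum(axis=1)    # number of efferent projections per neuron
    return s_weight, s_synout

rng = np.random.default_rng(1)
mask = rng.random((1000, 1000)) < 0.01          # sparse directed ER connectivity
np.fill_diagonal(mask, False)
w = np.where(mask, rng.lognormal(mean=0.8, sigma=0.7, size=(1000, 1000)), 0.0)
s_weight, s_synout = senderness(w)
```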

To test this hypothesis, we plotted the generalized senderness, *S* (i.e., *S*_{weight} and *S*_{synout}), and a measure of synchrony (Diesmann et al., 1999), σ (*I*), the SD of the spike times of active neurons in a given interval, defined as follows: the first interval started with the first spike within the set of stimulated neurons and ended with their last spike or that of any recruited neurons [the spiking interval for each neuron was their experimentally measured interspike interval (ISI) ± jitter, 40 ± 5 ms; Figs. 1*E*,*F*, 8*A*]. Specifically, in Figure 8*A*, the first interval, *I*_{1}, spanned the first set of stimulated spikes of seven randomly selected neurons (indicated by horizontal dashed lines). The next interval, *I*_{2}, began with a second stimulation of the same seven neurons (but in a different order) that recruited three postsynaptic neurons and ended with the last spike of one of these recruited neurons. Still, in this case, the collective activity of stimulated and recruited neurons at the end of *I*_{2} was insufficient to synchronize the network, as it was at the end of *I*_{3}. The dynamics changed in *I*_{4}, where the ongoing percolation of activity rapidly synchronized the network. For trials where the population did not synchronize, the number of intervals was capped at seven. Next, we plotted *S* versus σ (*I*_{n}); *n* = 1–7, to determine whether there was a threshold *S* for the network to synchronize (i.e., a minimum level of network activity where the emergent spiking induced by the stimulated neurons was sufficient to synchronize the network to burst). In Figure 5, *A* and *B*, synchronization to bursting occurred when the summed population activity exceeded 10 spikes in 5 ms bins (i.e., 2 kHz). To determine this threshold activity level more precisely, we computed the summed population activity in 1 ms nonoverlapping bins. Next, we divided the last interval (where synchronization and bursting occur; e.g., *I*_{4} in Fig. 8*A*) into two subintervals. The first was the preburst interval, which started with the first stimulated spike and encompassed the initial recruitment of quiescent neurons up to the 1 ms bin with two spikes and no decrease in the activity thereafter (Fig. 8*A*, *I*_{4A}). We reasoned that the preburst intervals contained a threshold number of neurons, which varied from trial to trial (see below), that committed the network toward synchronization. Notably, for all but one trial (i.e., R5), the first instance of instantaneous network activity with 2 spikes/ms in the preburst interval was always followed by a progressive increase in the activity leading to a burst. The second was the burst interval, in which the network fully synchronized to burst within 3–8 ms following the preburst interval (Fig. 8*A*, *I*_{4B}). A plot of *S* versus σ (*I*) revealed that the likelihood of induction of network synchronization was critically dependent on both *S* and σ (*I*). Specifically, when the synchrony among the stimulated (and recruited, if any) neurons was high [i.e., when σ (*I*) was low], but their *S*_{weight} and *S*_{synout} values were low, the network did not synchronize (Fig. 8*B*,*C*). Furthermore, across different trials, the preburst intervals (which mark the induction point of bursting) with lower σ (*I*) (i.e., higher synchrony) had lower *S* and vice versa, revealing the synergistic interactions between *S*, which determines input convergence, and σ (*I*), which determines input coincidence, that propel the network toward a synchronizing trajectory. For successful trials, the network trajectory converged at the minimum σ (*I*) and maximum *S* in the burst interval, which corresponds to the synchronized state of the network. These results revealed attractor dynamics in the preBötC model network (in the *S*−σ parameter space) that resemble the dynamics of synchrony propagation in a feedforward network (Diesmann et al., 1999).
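Both quantities that organize this analysis, σ (*I*) (the SD of spike times within an interval) and the summed population activity in 1 ms bins, follow directly from a list of spike times. A minimal sketch; the spike times below are invented for illustration:

```python
import numpy as np

def sigma_interval(spike_times_ms):
    """Synchrony measure sigma(I): SD of the spike times of neurons active
    in a given interval (lower sigma = higher synchrony)."""
    return float(np.std(spike_times_ms))

def population_activity(spike_times_ms, t_max_ms, bin_ms=1.0):
    """Summed population activity in nonoverlapping bins (spikes per bin)."""
    edges = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts

tight = np.array([100.2, 100.9, 101.4, 101.8, 102.3])   # near-coincident spikes
loose = np.array([95.0, 99.0, 103.0, 108.0, 112.0])     # dispersed spikes
activity = population_activity(tight, t_max_ms=120.0)
```

Scanning `activity` for the first 1 ms bin holding two spikes, with no subsequent decrease, identifies the preburst onset described above.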

Notably, the dependence of network synchronization on *S* and σ was similar for both *S*_{weight} and *S*_{synout}, where the *S*_{synout}–σ parameter space (Fig. 8*C*) captured the salient network attractor dynamics. This revealed that, for preBötC model networks with lognormal weights, successful bursting depended on the number and strength of efferent projections of the initially active neurons, which determined the propagation of network activity via input convergence and coincidence detection. As coincidence detection in the network was only possible with sufficient convergence of presynaptic inputs, the trajectory of *S*_{weight} (a determinant of network coincidence detection) followed the trajectory of input convergence determined by *S*_{synout} (Fig. 8, compare *B*, *C*). We further explored the dependence of network synchronization on *S* by using as a case study the failure of a network to burst even when the stimulated neurons produced relatively high synchronous activity compared with trials that induced bursts. As noted above (Fig. 8*B*,*C*), R5 recruited neurons at 150 ms and, in turn, increased synchrony (decreased σ; corresponding to the fourth circle from the bottom, marked as *I*_{4}; Fig. 8*B*,*C*, purple); however, these neurons could not sustain and amplify the activity to generate a burst. The total number of efferent synapses of these neurons was below the network mean (Fig. 8*D*,*E*). We classified these neurons as bad senders, insofar as their failure to induce a burst was because of insufficient convergence onto their downstream neurons. Notably, when sufficiently good senders (i.e., neurons with greater than the mean number of efferent synapses) were recruited in the penultimate intervals, synchronous recruitment of only 16–20 neurons in addition to the 7 stimulated neurons could propel the network on a synchronizing trajectory (Fig. 8*D*,*E*).

Occasionally, good senders recruited in certain intervals (e.g., R2 at 150 ms, R1 at 100 ms; Fig. 8*D*,*E*) failed to sustain/amplify activity further because of a lack of sufficient synchrony (i.e., high σ in these intervals). Concordantly, a combined measure of synchrony and senderness in a single parameter (i.e., *S*/σ) fully recapitulated the dependence of network bursting on *S* and σ (Fig. 8*F*,*G*). Specifically, *S*/σ was low for intervals where neurons with high *S* were active but did not sufficiently amplify network activity toward bursting. For intervals that did induce bursting, *S*/σ was always higher than in unsuccessful intervals (Fig. 8*F*,*G*). Given the critical dependence of network synchronization on these two parameters, we sought to determine the range of the *S*–σ parameter space (Fig. 8*B*,*C*) that favors network synchronization. Thus, we performed additional simulations with stimulation of up to 10 neurons over a range of spike time jitter [40–100 ms mean spiking period with 5–8 ms jitter (SD)] that resulted in a wide range of *S* and σ, then analyzed the network trajectories corresponding to these parametric ranges (Fig. 8*H*,*I*). This mapped the *S*–σ parameter space of this network close to its tipping points of synchronization (i.e., the bifurcation points), where the network trajectories commit to synchronized bursting. Notably, one such trial revealed that a minute shift in the *S*–σ parameter space, resulting from recruitment of only one additional neuron and a 0.5 ms (10%) decrease in σ between two successive intervals, marked one of these bifurcation points (Fig. 8*H*,*I*, arrowheads), which further illuminated the network attractor dynamics (Diesmann et al., 1999; Vogels et al., 2005; Kumar et al., 2010).

### Interactions between synaptic heterogeneity and neuronal nonlinearity regulate network synchronization

The senderness analysis above revealed a prominent role of heterogeneity in synaptic connectivity in sculpting network dynamics. As this heterogeneity is implemented through ER graphs, we asked whether the reliability of an ER network in replicating experiments is solely an intrinsic property of the ER graph or whether the experimentally determined synaptic weights play an essential role. Since the nonlinearity of neuronal spiking and coincidence detection in the network appear to augment the impact of synaptic heterogeneity, we explored the implications of this essential nonlinearity and its interactions with synaptic heterogeneity on the collective microcircuit dynamics in two ways.

First, we assessed the impact of heterogeneity in synaptic strengths by comparing the dynamics of the network with two different distributions of synaptic weights: lognormal and uniform. If synaptic heterogeneity is relevant for determining bursting, a change in the synaptic weight distribution should have a significant impact. ER networks (Fig. 9) with a uniform distribution of synaptic weights—fixed as the mean of the lognormal distribution—were significantly less likely to produce a burst: eliciting one required stimulating more than double the number of neurons compared with the lognormal case. Strikingly, this was true even when most of the weights in each lognormal network (61% in the network in Fig. 5*A*) were less than the mean of the distribution. Thus, the main drive for network synchronization came from those neurons with anomalously strong synaptic connections in the “fat tail” of the lognormal distribution and was not related to the mean synaptic weight. We reason that the impact of the fat tail of the lognormal distribution is that a substantial fraction of synaptic weights is significantly higher than average (i.e., with 5–10 mV EPSP amplitude where the average is 2.8 mV; Fig. 9*G*), making their recipient neurons more likely to reach spike threshold for (near) coincident EPSPs (Fig. 2*C*, left). Thus, fat-tailed (e.g., lognormally distributed) networks are better coincidence detectors (Fig. 9*C*,*D*). Here, coincidence detection is implied phenomenologically (Lisman, 1997), not accounting for various other neuronal and synaptic mechanisms contributing to intrinsic neuronal coincidence detection properties (Egger et al., 1999; Abbott and Nelson, 2000; Song et al., 2000).
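The fat-tail arithmetic can be checked directly: for a lognormal distribution the median lies below the mean, so most weights are weaker than average while a minority are anomalously strong. A sketch using the 2.8 mV mean EPSP quoted in the text and an assumed shape parameter (σ = 0.56, chosen so that roughly 61% of weights fall below the mean):

```python
import numpy as np

rng = np.random.default_rng(2)

mean_epsp = 2.8                          # mV, network-average EPSP (from the text)
sigma = 0.56                             # lognormal shape parameter (assumed)
mu = np.log(mean_epsp) - sigma**2 / 2    # makes the lognormal mean equal 2.8 mV

w = rng.lognormal(mu, sigma, size=100_000)
frac_below_mean = np.mean(w < w.mean())  # majority of weights below the average
frac_strong = np.mean(w > 5.0)           # the "fat tail": 5-10 mV EPSPs
```

Analytically, P(w < mean) = Φ(σ/2) > 1/2 for any lognormal, so the asymmetry between the many weak and few strong synapses holds regardless of the particular shape parameter chosen here.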

Given the profound effect of synaptic heterogeneity on network dynamics, we further explored the emergent signal-processing properties imparted by lognormally distributed synaptic strengths by simulating the neuronal response to 50 inputs whose synaptic weights were either lognormally or uniformly distributed. We analyzed 10 such all-to-one networks where 20% (10 of 50 randomly selected) of the synapses were activated at various Poisson-distributed frequencies; this ensured a broad range of synaptic weight activation within the lognormal distribution (Fig. 9*A–G*). For mean synaptic stimulation frequencies ranging from 10 to 200 Hz, neurons with lognormal inputs had a greater dynamic range of output frequencies (Fig. 9*E*) and spike probability (i.e., total output spikes/total input spikes; Fig. 9*F*) compared with neurons with uniform inputs. This larger dynamic range implies that a neuron with lognormal inputs could better differentiate among its presynaptic partners for the same input statistics (i.e., synaptic input frequency), compared with ones with uniformly distributed inputs. Notably, the weight asymmetry ensures high response reliability for all neurons; this is reflected in the upper tail of their response distributions (Fig. 9*E*,*F*). Together, these experiments revealed that heterogeneity in synaptic strength plays a critical role in shaping network dynamics.
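This all-to-one comparison can be sketched with a leaky integrate-and-fire neuron driven by independent Poisson inputs. All neuron parameters here (threshold, membrane time constant, drive rates) are illustrative assumptions, not the model parameters used in the paper:

```python
import numpy as np

def lif_response(weights, rates_hz, t_sim=1.0, dt=1e-4, tau=0.02,
                 v_th=15.0, v_reset=0.0, rng=None):
    """Output spike count of a leaky integrate-and-fire neuron driven by
    independent Poisson inputs (Bernoulli approximation per time step)."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    p_spike = np.asarray(rates_hz, dtype=float) * dt  # per-step spike probability
    v, n_out = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v -= v * dt / tau                             # leak toward rest (0 mV)
        v += weights[rng.random(weights.size) < p_spike].sum()
        if v >= v_th:
            n_out += 1
            v = v_reset
    return n_out

rng = np.random.default_rng(3)
sigma = 0.56
mu = np.log(2.8) - sigma**2 / 2          # lognormal with 2.8 mV mean EPSP
w_logn = rng.lognormal(mu, sigma, 10)    # 10 of 50 synapses activated (as in text)
w_unif = np.full(10, 2.8)                # uniform: every activated weight = mean
rates = np.full(10, 50.0)                # 50 Hz Poisson drive (illustrative)
out_logn = lif_response(w_logn, rates, rng=rng)
out_unif = lif_response(w_unif, rates, rng=rng)
```

Repeating this over many random draws of the 10 activated synapses and a range of drive rates would reproduce the dynamic-range comparison qualitatively, since lognormal draws vary widely in total weight while uniform draws do not.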

Second, we compared the predictive efficacy of linear (*i*_{k} = 1 for all *k* in Eq. 7) and nonlinear senderness variables (Materials and Methods, Generalized senderness subsection). If the nonlinear measures of senderness (*i*_{k} > 1 for some *k* in Eq. 7) are uniformly more predictive of global network synchronization leading to a burst, then senderness incorporates information about the choice of initially activated neurons and the network connectivity into a single scalar quantity. If nonlinear senderness possesses more information about the dynamics of the system, we see another confirmation of the crucial role of neuronal nonlinearities for synchronization and the initiation of bursting.

To assess and compare the predictive power of the various linear and nonlinear measures of senderness, we used the CatBoost gradient boosting machine learning algorithm (MLA; Prokhorenkova et al., 2017) as follows: we trained the model to predict whether a burst would occur based solely on the value of a particular subset of generalized senderness quantities.

All linear measures of senderness, or senderness where information enters into the next-order neighbors linearly (i.e., *i*_{1} = 1), were uniformly poor predictors. Specifically, the worst predictor among the nonlinear senderness measures generated a 66% accuracy for the classification algorithm, while the best predictor among the linear senderness measures provided a less accurate 59% success rate (Table 2). Among nonlinear quantities, the one that had information about next-nearest order neighbors had slightly better prediction accuracy, but the difference was not as pronounced as that between all the linear and all the nonlinear senderness quantities. We infer that the nonlinear properties of the neuron activation function are indeed a crucial determinant of system dynamics.
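The classifier used here was CatBoost (Prokhorenkova et al., 2017). As a dependency-free sketch of the same setup, predicting burst versus no burst from a single scalar senderness value and scoring accuracy, the toy below trains a one-threshold classifier on synthetic data; the class separations assigned to the "nonlinear" and "linear" features are assumed purely for illustration and carry no empirical content:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in data: one scalar senderness value per stimulation trial,
# plus a binary burst / no-burst label (ensemble near 50% burst probability).
n_trials = 2000
burst = rng.random(n_trials) < 0.5
# Assumed for illustration: the "nonlinear" feature separates the classes
# more strongly than the "linear" one.
s_nonlinear = rng.normal(np.where(burst, 1.0, 0.0), 1.0)
s_linear = rng.normal(np.where(burst, 0.3, 0.0), 1.0)

def best_threshold_accuracy(feature, label):
    """Accuracy of the best single-threshold classifier on a scalar feature."""
    return max(max(np.mean((feature > t) == label),
                   np.mean((feature <= t) == label))
               for t in np.sort(feature))

acc_nonlinear = best_threshold_accuracy(s_nonlinear, burst)
acc_linear = best_threshold_accuracy(s_linear, burst)
```

A gradient-boosted classifier plays the same role on the real senderness features, with accuracy on held-out trials as the comparison metric.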

These results gave us an additional tool to set conditions on effective network architectures. We note that, with the exception of *S*_{weight(1)}^{4}, all of the more predictive senderness measures involve next-nearest neighbor couplings. The inclusion of higher-order neighbors, at least up to third order, did not improve the predictive power of senderness. We hypothesize that network motifs involving high senderness (i.e., many and high synaptic weight efferent connections of next-nearest neighbors) greatly enhanced the efficacy of the exogenously stimulated neurons to produce a global synchronization event (i.e., a burst). In particular, ER networks naturally generate the requisite number of such motifs to reproduce the sensitivity seen in experiments.

### ER network with lognormal weights reliably reproduced preBötC bursts and burstlets

We have considered a purely deterministic model in which the only source of initial spiking activity was external stimulation. Yet, the *in vitro* slice containing the preBötC produces bursts in the absence of such external activation. Can the model reproduce network dynamics under similar conditions that produce endogenous preBötC burstlets and bursts (Fig. 1*B*)? To explore self-driven bursting, we observed that, even during the late interburst intervals, neurons are not silent but fire at rates of ∼0.5–2 Hz. Thus, we tested the ability of the ER network to synchronize by initializing model neurons to fire randomly at 0.5–2.0 Hz, thus seeding the network with low levels of activity, as observed preceding each burst in experiments (Rekling et al., 1996; Gray et al., 1999; Picardo et al., 2013; Ashhad and Feldman, 2020; compare Fig. 1*D*; see Materials and Methods). We modeled this spontaneous firing as a random Poisson process, fixing only the mean firing rate (see Materials and Methods). This process assures that there are no temporal correlations in spikes between neurons.
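The seeding procedure can be sketched as independent Bernoulli draws per time step, a standard discrete-time approximation to a Poisson process; the time step and duration below are illustrative:

```python
import numpy as np

def poisson_spike_trains(n_neurons, rate_hz, t_sim_s, dt_s, rng):
    """Temporally uncorrelated seeding activity: each neuron spikes as an
    independent Poisson process (Bernoulli per time step) at a fixed mean rate."""
    n_steps = int(t_sim_s / dt_s)
    return rng.random((n_neurons, n_steps)) < rate_hz * dt_s

rng = np.random.default_rng(5)
spikes = poisson_spike_trains(1000, 0.5, t_sim_s=10.0, dt_s=1e-3, rng=rng)

# Summed network rate; expected 0.5 Hz/neuron x 1000 neurons = 500 Hz
summed_rate_hz = spikes.sum() / 10.0
```

Because every neuron is drawn independently with only the mean rate fixed, no spike-time correlations are built in; any synchronization that emerges must come from the synaptic coupling.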

When all 1000 neurons were isolated (i.e., their synaptic weights were set to zero) and initialized to fire at a Poisson distributed mean frequency of 0.5 Hz, network activity (i.e., the summed rate of all neuronal spikes) fluctuated at ∼488 ± 96 Hz (mean ± SD; Fig. 10*A*, black trace). This was close to the expected 500 Hz (i.e., 0.5 Hz/neuron × 1000 neurons). When these neurons were then connected with uniform synaptic weights as in Figure 9, network activity (504 ± 115 Hz, mean ± SD) was not significantly different from that with the synapses silent [*p* = 0.16; Kolmogorov–Smirnov (KS) test; Fig. 10*A*, red traces]. Changing the synaptic weights to a lognormal distribution resulted in two typical network behaviors. In the first, network activity increased to 594 ± 181 Hz (mean ± SD), significantly higher than in networks with silent synapses (*p* = 5.1 × 10^{−7}, KS test) or uniform weights (*p* = 4.4 × 10^{−4}, KS test; Fig. 10*A*, blue trace). More importantly, there were fluctuations of network activity that crossed the mean activity with silenced synapses + 3 SDs (Fig. 10*A*, top dashed line) and were qualitatively similar in shape to experimentally observed burstlets (Kam et al., 2013b; Sun et al., 2019; Ashhad and Feldman, 2020; Kallurkar et al., 2020). In the second, the network fully synchronized to produce a burst (Fig. 10*A*, compare blue traces, green traces). The ability of the network to self-organize into burstlets and bursts is also reflected in the leftward shift in its ISI histogram (Fig. 10*B*). Specifically, the ISI distribution for the burstlet-only trial (1.6 ± 1.6 ms, mean ± SD; Fig. 10*A*, blue trace) was significantly different from that of the network with silent synapses at 2.0 ± 1.9 ms (mean ± SD; Fig. 10*B*, compare blue traces, black traces; *p* = 0.002, KS test). Notably, ER networks with uniform weights did not synchronize (Fig. 10*A*, red trace).
Consequently, their ISI histograms were not significantly different from those with silent synapses (with ISI at 1.93 ± 1.93 ms, mean ± SD; Fig. 10*B*, red trace; *p* = 0.2, KS test). The ISI distribution with uniform synaptic weights was also significantly different from the network with the lognormal weights (*p* = 0.02, KS test).

To incorporate the possibility that not all neurons in preBötC generate spikes without synaptic stimulation, we allowed only a fraction of them (randomly chosen on each network) to fire stochastically, while the rest could fire only as a result of synaptic stimulation. At a higher Poisson distributed firing rate of 2 Hz, seeding activity in only 20% (i.e., 200 of 1000 neurons) in an ER network with lognormal weights was sufficient to induce synchronization leading to a burst (Fig. 10*C*); these networks did not burst when the activity was initialized in 19% of neurons. In contrast, at a Poisson-distributed firing rate of 0.5 Hz, the network required activity in >90% of neurons to generate a burst but failed to generate a burst when initializing ≤90% of neurons (Fig. 10*C*).

We now return to our basic question of how the network synaptic heterogeneity influences both the reliability of the network to burst in response to endogenous stochastic activity and the dependence of bursting on the mean firing rate. Comparing the dynamics of networks with lognormal synaptic weight distributions (Fig. 10*D*, red traces) versus uniform ones (Fig. 10*D*, black traces) showed that the former were clearly more sensitive to endogenous spiking. This property held for networks in which all neurons (Fig. 10*D*) or only a fraction (60%; Fig. 10*F*) fired stochastically. Since there was no exogenous stimulation to set an initial time, we could not compute the latency to burst, but we could compute the time to burst for a quiescent network. The results are shown in Figure 10, *E* and *G*, for all stochastically firing neuron networks (Fig. 10*E*) and networks containing only a subset of stochastically firing neurons (Fig. 10*G*). Networks with lognormal synaptic weights generated bursts at lower frequencies of initial neuronal firing (0.50–0.75 Hz) compared with those with a uniform weight distribution (0.75–1.00 Hz; Fig. 10*D*). Furthermore, the range of time to burst for the minimum frequency at which the networks synchronized, with either uniform or lognormal synaptic weights, overlapped (lognormal weights: range = 485–1096 ms at 0.5 Hz; uniform weights: range = 4–9 to 1376 ms at 0.75 Hz; *p* = 0.75, Wilcoxon rank-sum test; Fig. 10*E*). As the fraction of randomly firing neurons was decreased to 60% (i.e., 600 of 1000 neurons), the frequency of neuronal firing required to reliably induce bursts increased by 0.25 Hz for ER networks with lognormal weights (i.e., from 0.75 Hz for 1000 neurons to 1.00 Hz for 600 neurons). In contrast, for networks with uniform weights, this increase was higher, at 0.5 Hz (from 1.0 Hz for 1000 neurons to 1.5 Hz for 600 neurons; Fig. 10*F*,*G*).
This shows, once again, that such lognormal ER networks are significantly more sensitive to endogenous spiking. These networks are both easier to entrain to an external signal and more sensitive to internal spiking, making them highly robust bursting circuits.

Comparing the network dynamics of systems in which differing fractions of the network were capable of spontaneous stochastic activity, we found that, as the number of spontaneously firing neurons was reduced, the network generated more burstlets before producing a true burst (Fig. 10*H*). Eventually, when that fraction fell below 40%, neither burstlets nor bursts were observed. If the lognormal synaptic weight distribution was replaced with a uniform one (Fig. 10*I*), the same trends persisted, but generically more burstlets occurred before a burst. Furthermore, when the fraction of spontaneously firing neurons fell below 80%, the network did not burst. This shows, once again, that the network is less amenable to global synchronization when the synaptic weight distribution does not have a fat tail.

The synchronization dynamics of spontaneously firing networks followed a similar trajectory in the *S*–σ parameter space as the ones with exogenous stimulation (compare Figs. 8*H*,*I*, 10*J–M*). During successive burstlets, the *S*–σ trajectories oscillated at the lower boundary of the *S*–σ parameter space as the network alternated between partial synchronization and desynchronization. Notably, when the fraction of randomly firing neurons decreased, the threshold *S* (*S*_{weight} and *S*_{synout}) to synchronize the network increased (Fig. 10*J*,*K*, compare red traces, yellow traces; 50% and 70% randomly firing neurons, respectively, black trace for 100% randomly firing neurons). This increase in threshold *S* with a decrease in the fraction of randomly firing neurons—even when σ was similar to or smaller than the condition with 100% randomly firing neurons—revealed modulation of the activity bifurcation trajectories by the state of “excitability” in the network (Kumar et al., 2010; Hahn et al., 2019). Together, these results highlight the attractor dynamics of network synchronization where low-frequency fluctuating network activity exhibits a tipping point, defined in the *S*–σ parameter space, for the global network synchronization.

## Discussion

The rhythmogenic dynamics of the preBötC exhibits the following two essential components: (1) emergence of synchrony-driven inspiratory bursts (Feldman and Kam, 2015; Ashhad and Feldman, 2020), followed by (2) an activity-dependent refractory period reflecting a global decrease in the network excitability (Del Negro et al., 2009, 2018; Kam et al., 2013a; Zoccal et al., 2014). Here, we focused on synchronization dynamics by constructing a simplified quantitative model that illuminates the effect of connectivity of the underlying network. We looked for the following three emergent features: (1) global spiking synchronization (i.e., bursting) in response to exogenous stimulation of <1% of the network; (2) a reproducible distribution of latency between stimulus onset and induced burst; and (3) stochastic spiking initiated in a few neurons in a quiescent network, capable of inducing bursts and burstlets, replicating experiments *in vitro*.

Akin to most neural microcircuits, a key element of preBötC functionality lies in its connectome, whose detailed structure, as for most microcircuits, remains largely unknown. This lack of detail presents an intriguing challenge for modeling: if one can identify statistical properties of ensembles of connectomes whose dynamic properties match those seen in relevant biological networks, one can predict measurable features (e.g., voltage dynamics of individual neurons, collective output of the circuit itself) that are the consequence of hard-to-measure network structure. Indeed, with a foundation of reliable data on the mean number of synaptic connections with significant variability (Rekling et al., 2000; Ashhad and Feldman, 2020), system dynamics were extremely sensitive to both the particulars of the connectome and its distribution of synaptic weights.

To better identify preBötC connectomes consistent with experimental data, we considered a variety of circuits based on previous work on brain microcircuits (Perin et al., 2011; Gal et al., 2019). To reduce the number of possible models, we used neuronal parameters consistent with the spiking behavior of type I preBötC neurons (Rekling et al., 1996; Gray et al., 1999). In such physiologically plausible networks, we found that both the burst probability and the latency to bursting as a function of the number of stimulated neurons were quantitatively consistent with experimental data only for ER networks. For the chosen synaptic parameters (taken from the physiological range), ER networks with uniform synaptic weight distributions were significantly less sensitive to external stimulation than observed experimentally. ER networks with lognormal weight distributions, however, had an increased sensitivity consistent with experiment. In *in vitro* experiments, endogenous rhythmic bursts interspersed with burstlets are observed (Kam et al., 2013b; Ashhad and Feldman, 2020; Kallurkar et al., 2020). In our ER models, when we included temporally uncorrelated spiking of some or all of the constituent neurons, we could evoke bursts as well as burstlets (i.e., periods of incipient spiking synchronization across the network that reached low levels of network activity above noise before decaying). By comparing ER and small-world graphs containing only a slightly higher density of 3-simplices, we found that network dynamics were a sensitive measure of the statistical properties of the underlying network. Given this sensitivity, we conclude that, among the plausible network ensembles explored, only networks with a low degree of clustering and a fat-tailed synaptic weight distribution can effectively reproduce the broad range of dynamical properties of synchronized bursting in the preBötC.

Burstlets represent a salient feature of preBötC rhythmogenesis (Kam et al., 2013b; Feldman and Kam, 2015; Del Negro et al., 2018; Sun et al., 2019). In our models, burstlets result from incomplete synchronization and subsequent desynchronization of the network, suggesting that for burstlet termination, inhibition—feedback or otherwise—is not obligatory, though inhibition could play a role in modulating synchronization dynamics (Kumar et al., 2010). In contrast to inhibition-dependent burstlet termination, lognormally distributed synaptic weights control both the amplification and dampening of network activity, thereby contributing to oscillatory dynamics. Specifically, lognormally distributed synaptic weights resulted in a broader fluctuating range for neuronal responses, underlying dynamic robustness and lability (Fig. 9*E*,*F*). Lognormal connectivity promotes synchronization by enhancing the synaptic efficacy to coincident input while allowing for the dampening of network activity by suboptimal neuronal firing when impinging synaptic inputs are weaker than the network average. This imparts a dynamical property to the local network motifs constituted by the spiking neurons. Furthermore, the ability of the network to desynchronize to baseline activity levels suggests that the preBötC burstlet may have different termination mechanisms compared with those of bursts, which may also or instead rely on other synaptic and intrinsic neuronal mechanisms (Del Negro et al., 2005, 2018; Pace et al., 2007a; Rubin et al., 2009; Krey et al., 2010; Janczewski et al., 2013; Kam et al., 2013b; Kottick and Del Negro, 2015; Baertsch et al., 2018; Ashhad and Feldman, 2020).

We investigated the mechanisms by which certain exogenous stimulation events were more likely to elicit a burst by exploring features of the network structure and of the different sets of stimulated neurons. For this analysis, we defined a scalar measure, senderness, which incorporates both the particulars of the overall network structure and the connectivity of stimulated neurons. To evaluate the predictive power of senderness, we used an MLA to classify simulations into those that led to a burst and those that did not. To make the classification problem as difficult as possible, we chose an ensemble of networks for which the bursting probability was ∼50%. Passing different values of senderness as input parameters to the MLA, we found that nonlinear measures of senderness using the connectivity of both the initially stimulated neurons and their nearest neighbors were uniquely the best predictors. Thus, we infer the following: (1) the nonlinearity of senderness assures that the fat tail of the synaptic weight distribution dominates the network response; and (2) input convergence at second-order neighbors plays an important role in determining whether or not a burst results. These findings suggest that the most predictive measure of senderness incorporates the most information about the subsequent network dynamics after stimulation. Furthermore, an MLA-enabled search for predictive measures of senderness was a useful tool to extract important features of network topology and interactions associated with emergent collective phenomena.

Why consider a network-based synchronization model of preBötC rhythmogenesis when much simpler mechanisms (e.g., pacemakers and/or limit cycles) can be parameterized to fit limited subsets of experimental data? Our justification is that a network-based mechanism captures the physiological boundary conditions and experimental data in ways that previous models overlooked or could not account for, as follows. (1) A network-based burstlet/burst rhythmogenic mechanism imbues the system with lability that other models do not. Mammalian breathing responds rapidly to real-world perturbations: for example, ventilation increases rapidly in anticipation of increased metabolism at the onset of exercise (exercise hyperpnea); the breathing pattern changes for essential reflexes (e.g., sneezing and coughing); and breathing coordinates the onset and maintenance of oral–facial behaviors, such as phonation, swallowing, laughing, crying, and vocalization. Our modeling revealed that a fat-tailed synaptic weight distribution in a sparsely connected network renders preBötC dynamics robust yet labile, with the necessary sensitivity to external inputs. (2) Most preBötC models function by elevating extracellular [K^{+}] to depolarize neurons, thereby activating pacemaker/regenerative conductances to induce intrinsic bursting (Bacak et al., 2016; Phillips et al., 2019, 2022; Phillips and Rubin, 2022); the ensemble activity of interconnected intrinsic bursters represents the network output. Yet, intrinsic bursting is not essential for preBötC rhythm (Kam et al., 2013b; Sun et al., 2019; Ashhad and Feldman, 2020). preBötC rhythm persists on pharmacological blockade of regenerative and burst-inducing currents such as the persistent Na^{+} currents, and the Ca^{2+}-activated cation nonspecific currents *in vitro* (Del Negro et al., 2002b, 2005).
Furthermore, *in vivo* extracellular [K^{+}] remains low and preBötC neurons are not depolarized to the same extent as in *in vitro* experiments; thus, insights gained from these models fail to explain the rhythmogenic mechanism in intact animals. Contrary to the intrinsic bursting models, the excitation–inhibition balance in the network—rather than neuronal depolarization—regulates preBötC synchronization and rhythm (Carroll and Ramirez, 2013; Ashhad and Feldman, 2020) by modulating the efficacy of spike transmission in the network (Ashhad and Feldman, 2020). A reliable model must be extendable to explain network dynamics under *in vivo* conditions, and across various operational modes and experimental preparations. Significantly, our model can recapitulate network bursting from initially stochastic neuronal firing at low rates—even in the absence of depolarization-induced intrinsic bursting—a scenario closely reflecting preinspiratory activity *in vivo* (Guyenet and Wang, 2001; Cui et al., 2016) and *in vitro* (Sun et al., 2019; Ashhad and Feldman, 2020).

An important consideration in evaluating the utility of our spike timing-based model is that the majority of preBötC models are firing rate based (Del Negro et al., 2002a; Schwab et al., 2010; Bacak et al., 2016; Ausborn et al., 2018; Phillips et al., 2019; Bibireata et al., 2020; Hao et al., 2021; Phillips et al., 2022; Phillips and Rubin, 2022). Rate-based models appear incompatible with recent findings suggesting that inspiratory burst initiation is a consequence of progressive short-latency synchronization of neuronal spiking within the rhythmogenic microcircuit (Kam et al., 2013b; Ashhad and Feldman, 2020). In contrast to the intrinsic bursting-dependent rate-based models, in which membrane potential (*V*_{m}) changes are significantly low-pass filtered (Phillips et al., 2019, 2022; Phillips and Rubin, 2022), synaptic input-driven higher-frequency *V*_{m} fluctuations induce firing in preBötC output neurons (Ashhad and Feldman, 2020) and phrenic motor neurons (Parkis et al., 2003). Such fluctuation-based firing regimes manifest rhythmogenesis as an emergent network property (Petersen and Berg, 2016). Our spike timing-based model captures the salient features of a fluctuation-driven firing pattern and inspiratory burst/burstlet generation (Figs. 9, 10). Important for understanding the mechanism, we show that network synchronization is a consequence of efficient coincidence detection within the network.
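The coincidence-detection argument can be illustrated with a toy leaky integrator. The parameters below (membrane time constant, synaptic kick size, threshold) are illustrative, not fitted to preBötC neurons: the same total synaptic input drives the cell past threshold when inputs arrive synchronously, but not when they are dispersed in time, which is why spike timing, rather than mean rate alone, matters for burst initiation.

```python
import numpy as np

def lif_peak(spike_times, tau=20.0, w=1.2, dt=0.1, T=200.0):
    """Peak membrane deflection of a leaky integrator (time constant tau, ms)
    receiving synaptic kicks of size w at the given spike times (ms)."""
    t = np.arange(0.0, T, dt)
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i - 1] * (1.0 - dt / tau)  # passive leak (Euler step)
        # add any synaptic kicks arriving in this time step
        v[i] += w * np.sum((spike_times >= t[i - 1]) & (spike_times < t[i]))
    return v.max()

threshold = 5.0
coincident = np.full(5, 50.0)            # five inputs arriving together
dispersed = np.linspace(10.0, 190.0, 5)  # the same five inputs spread out

print(lif_peak(coincident) > threshold)  # coincident inputs cross threshold
print(lif_peak(dispersed) > threshold)   # dispersed inputs decay between kicks
```

Running this prints `True` then `False`: five coincident kicks summate to 6.0 (above threshold), whereas the same kicks spaced 45 ms apart decay between arrivals and peak well below it.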

Our model is limited in scope to understanding the onset of preBötC synchronization and inspiratory burstlet/burst generation. We do not incorporate synaptic inhibition because of a lack of connectivity data between preBötC excitatory and inhibitory neurons. Furthermore, although our model lacks biophysical implementation of preBötC regenerative conductances (e.g., persistent Na conductance), we incorporate their contribution to baseline network activity, phenomenologically, by modeling stochastic firing of model neurons at low frequencies (Fig. 10). Thus, our study neither rejects nor undermines the role of regenerative conductances in shaping network dynamics. Rather, we posit that preBötC dynamics is an emergent network property where intrinsic and synaptic properties interact in a nontrivial manner. Our model provides a dynamical framework—of network synchronization—over which interactive roles of neuronal intrinsic, synaptic, and topological properties can now be studied and tested. Moreover, our results show that the observed fluctuations of the isolated system and its response to small perturbations (i.e., stimulation of <1% of the network) provide important constraints on the connectivity of the neuronal network. A further experimental test of the model, in particular the concept of senderness, would determine the presence of network motifs of first-order and second-order nearest neighbors as a necessary prelude to global network synchronization.

Robust activity propagation and oscillatory dynamics are present in several brain networks with lognormally distributed synaptic strengths (Song et al., 2005; Ikegaya et al., 2013; Buzsáki and Mizuseki, 2014; Omura et al., 2015; Uzan et al., 2018). Furthermore, biological heterogeneities in neuronal networks are ubiquitous and play critical roles in sculpting their dynamics and physiologically relevant computations (Renart et al., 2003; Grashow et al., 2010; Gjorgjieva et al., 2016; Mishra and Narayanan, 2019, 2020, 2021; Rathour and Narayanan, 2019; Goaillard and Marder, 2021; Mittal and Narayanan, 2021; Roy and Narayanan, 2021). Based on this study, we postulate that the pervasiveness of synaptic heterogeneity throughout the mammalian nervous system, which manifests as a heavy-tailed synaptic strength distribution (Buzsáki and Mizuseki, 2014), provides a universal mechanism for balancing the requirements of response reliability and input sensitivity underlying vital microcircuit operations.

## Footnotes

^{‡}In the course of final preparation of the manuscript, our friend, mentor, and colleague Alex Levine passed away. This work was supported by National Institutes of Health/National Heart, Lung, and Blood Institute Grant 1R35-HL-135779 to J.L.F.; a Bhaumik Institute Fellowship, a Holmes-Peccei Fellowship, and a Dissertation Year Fellowship to V.M.S.; and National Science Foundation Grant NSF-DMR1709785 to A.J.L. and V.M.S. We thank Dr. Kaiwen Kam for sharing previously published experimental data, and Professors D.V. Buonomano and Larry Abbott, and Dr. Divyansh Mittal for critical reading and insightful comments on earlier versions of this manuscript.

The authors declare no competing financial interests.

- Correspondence should be addressed to Jack L. Feldman at feldman@g.ucla.edu