Abstract
Cortical neurons are typically driven by thousands of synaptic inputs. The arrival of a spike from one input may or may not be correlated with the arrival of other spikes from different inputs. How does this interdependence alter the probability that the postsynaptic neuron will fire? We constructed a simple random walk model in which the membrane potential of a target neuron fluctuates stochastically, driven by excitatory and inhibitory spikes arriving at random times. An analytic expression was derived for the mean output firing rate as a function of the firing rates and pairwise correlations of the inputs. This stochastic model made three quantitative predictions. (1) Correlations between pairs of excitatory or inhibitory inputs increase the fluctuations in synaptic drive, whereas correlations between excitatory–inhibitory pairs decrease them. (2) When excitation and inhibition are fully balanced (the mean net synaptic drive is zero), firing is caused by the fluctuations only. (3) In the balanced case, firing is irregular. These theoretical predictions were in excellent agreement with simulations of an integrate-and-fire neuron that included multiple conductances and received hundreds of synaptic inputs. The results show that, in the balanced regime, weak correlations caused by signals shared among inputs may have a multiplicative effect on the input–output rate curve of a postsynaptic neuron, i.e., they may regulate its gain; in the unbalanced regime, correlations may increase firing probability mainly around threshold, when output rate is low; and in all cases correlations are expected to increase the variability of the output spike train.
Keywords: random walk; integrate-and-fire; computer simulation; spike synchrony; oscillations; cross-correlation; balanced inhibition; cerebral cortex
The output of a typical cortical neuron depends on the activity of a large number of synaptic inputs—several thousands of them, as estimated by anatomical techniques (White, 1989; Braitenberg and Schüz, 1997). What kind of response should be expected from a postsynaptic neuron driven by so many inputs? Answering this question in detail requires a deep understanding of dendritic integration, synaptic function, and spike generation mechanisms; but, given the large numbers commonly involved, as a first approximation it is natural to cast the problem in statistical terms. The strategy then is to compute the output responses of a model neuron (or their statistics), given a set of driving inputs with known statistical properties. These inputs may be either independent or temporally correlated. In the latter case, spikes from different input neurons arrive close together in time more often or less often than expected by chance.
In general, the situation with independent inputs is easier to analyze, and for many applications it is probably a good approximation. However, there are at least three reasons why the effects of correlations on single cells should be fully characterized. First, correlations in spike counts have indeed been observed (Gawne and Richmond, 1993; Zohary et al., 1994; Salinas et al., 2000) and, based on the convergent connectivity of the cortex (White, 1989; Braitenberg and Schüz, 1997), they must be ubiquitous (Shadlen and Newsome, 1998; Bair et al., 1999). Second, such correlations may alter the coding capacity of a neuronal population (Gawne and Richmond, 1993; Zohary et al., 1994; Abbott and Dayan, 1999). Third, synchrony and oscillations, two forms of correlated activity that have been intensely studied, may also be important for information encoding (DeCharms and Merzenich, 1995; Riehle et al., 1997; Dan et al., 1998) or for other aspects of cortical function (Engel et al., 1992; Singer and Gray, 1995). This paper, however, does not focus on the possible higherlevel functional roles of coordinated spike firing; instead, it addresses a more elementary problem: how does a typical cortical neuron react to synaptic inputs that are correlated, compared to synaptic inputs that are uncorrelated?
This problem has been investigated in the past (Bernander et al., 1994; Murthy and Fetz, 1994; Shadlen and Newsome, 1998), but the model neurons used earlier have often been examined with limited sets of parameters, and sometimes in regimes outside the normal operating range of cortical neurons; for instance, some studies have ignored the effects of inhibition. This study attempts to provide a broad framework within which the impact of input correlations on a single postsynaptic neuron can be better understood. Using a simple theoretical model, the mean firing rate of a postsynaptic neuron is solved as a function of the firing rates and pairwise correlations of its excitatory and inhibitory inputs. This model also provides qualitative insight into how correlations affect output variability. The analytic expressions are then compared to computer simulations of a conductance-based model neuron with more realistic dynamics. We find that correlations affect both the firing rate and variability of the output and that the strength and details of these effects depend strongly on the balance between excitation and inhibition.
MATERIALS AND METHODS
A theoretical model with random walk dynamics. Consider a simple stochastic model neuron in which each incoming excitatory spike increases the membrane potential by an amount Δ_{E}, and each incoming inhibitory spike decreases the membrane potential by an amount Δ_{I}. These voltage steps are fixed and independent of the input statistics. In the absence of synaptic input, the voltage of the model neuron, termed V, decreases by a fixed amount d in each time step, but there is a fixed minimum V_{rest} below which the voltage cannot be driven, even if inhibition is strong. The d term makes the voltage decrease linearly with time toward V_{rest}. Because of leak currents, membrane potentials of real neurons actually relax exponentially to their rest values, but approximating this with a linear term may be reasonable when V remains relatively far from rest. In addition, whenever the voltage exceeds a threshold V_{θ}, an action potential is fired, and the voltage is instantaneously reset to the value V_{reset}. Given specific values for these six parameters, the output of the model neuron is entirely determined by the statistics of the inputs. The advantage of these simple dynamics is that, if the input statistics are known and certain simplifying assumptions are made, then the output firing rate may be computed analytically, revealing the explicit dependence on the input statistics. This is shown in the following sections.
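As a concrete illustration, the six-parameter update rule just described can be sketched in a few lines. All numerical values below (step sizes, decay, input rates) are illustrative placeholders, not the parameters used in the paper:

```python
import numpy as np

def run_random_walk_neuron(exc_times, inh_times, T, dt=1.0,
                           d_E=0.5, d_I=0.5, decay=0.3,
                           V_rest=-74.0, V_theta=-54.0, V_reset=-64.0):
    """Six-parameter stochastic neuron: fixed voltage jumps per input spike,
    linear decay d toward rest, reflecting floor at V_rest, threshold V_theta,
    and instantaneous reset to V_reset (here halfway between rest and threshold).

    exc_times, inh_times: arrays of input spike times (msec).
    Returns the list of output spike times (msec)."""
    n_steps = int(T / dt)
    # Bin the inputs so n_E and n_I count spike arrivals per time step.
    bins = np.arange(0.0, T + dt, dt)
    n_E, _ = np.histogram(exc_times, bins)
    n_I, _ = np.histogram(inh_times, bins)

    V = V_reset
    out_spikes = []
    for t in range(n_steps):
        V += d_E * n_E[t] - d_I * n_I[t] - decay  # fixed jumps + linear decay
        V = max(V, V_rest)                        # voltage cannot go below rest
        if V >= V_theta:                          # threshold crossed: fire, reset
            out_spikes.append(t * dt)
            V = V_reset
    return out_spikes

# Example with pooled Poisson inputs (times in msec); rates are arbitrary.
rng = np.random.default_rng(0)
T = 1000.0
exc = np.cumsum(rng.exponential(1000.0 / 8000.0, 10000))  # ~8000 pooled spikes/sec
inh = np.cumsum(rng.exponential(1000.0 / 3000.0, 5000))   # ~3000 pooled spikes/sec
spikes = run_random_walk_neuron(exc[exc < T], inh[inh < T], T)
print(f"{len(spikes)} output spikes in 1 sec")
```

With these placeholder values the mean drive is positive, so the neuron fires in the drift-dominated regime discussed below.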
The analysis follows in the tradition of classic results from the theory of stochastic processes (Ricciardi, 1977; Tuckwell, 1989; Risken, 1996). Many of the previous studies that applied these random walk methods to the problem of synaptic integration were aimed at understanding, in terms of a simple mechanistic explanation, how spike firing in a neuron is triggered by the stochastic fluctuations of its membrane potential (Tuckwell, 1988; Smith, 1992). In other studies the goal was to develop models that could account in detail for the measured firing statistics of real neurons (Gerstein and Mandelbrot, 1964; Shinomoto et al., 1999). As shown below, this framework is also of heuristic value to the problem of input correlations and their impact on firing probability (Feng and Brown, 2000).
RESULTS
Changes in voltage modeled as random walk steps
According to the above description, at each time step Δt the voltage jumps by an amount:
Given that in each time step N changes by a random amount, N (and therefore V ) is equivalent to the net displacement of a one-dimensional random walk process with drift in which there is a reflecting barrier at one end and an absorbing barrier at the other. What we want to know is the average number of steps ν that it takes for N to go from reset to threshold. This is the same as asking how much time it typically takes for V to go from V_{reset} to V_{θ}. The total amount of time will be:
Equations 6 and 7 are useful for three reasons. First, they are valid for both small and large ς (small or large relative to the distance from rest to threshold); second, they combine μ and ς seamlessly, in the sense that cases with and without drift fall under the same formulation; and third, the approximations are best when the underlying distribution P(n) is Gaussian but remain quite good even when the distribution is very different. Other theoretical models are usually restricted in one or more of these ways (Gerstein and Mandelbrot, 1964; Tuckwell, 1988; Smith, 1992). The rest of the paper examines the behavior of these expressions: first, as functions of μ and ς; second, as functions of the mean firing rate and variability of the input spike trains, which determine μ and ς; and finally, in comparison to simulations of a more realistic, conductance-based model.
Robustness of the random walk approximations
A crucial assumption underlying the above results was that the full probability distribution of n could be represented by its mean and SD. How good is this approximation? We explored this through computer simulations in which, at each time step, n was drawn from a specified distribution, using a random number generator (Press et al., 1992). Each simulation cycle started with N = N_{reset}. Then, in each step, the update rule N → N + n was applied until N reached the threshold value, in which case the total number of steps elapsed was saved, and a new cycle was started. This was repeated 5000 times, after which the average number of steps ν was obtained. For the results shown in Figure 1c–h, N_{θ} = 40, N_{reset} = 20, and ς varies along the x axes. The different curves in Figure 1c–h correspond to different values of μ. The insets depict the type of distribution function for n used in the corresponding panels. The dots indicate the simulation results, and the continuous lines in Figure 1c are the analytic approximations given by Equations 6 and 7; these are the same regardless of the distribution. The analytic results are most accurate when n is distributed in a Gaussian fashion, but the random walk approximation is qualitatively accurate when the distribution of n is uniform (Fig. 1d), and even when it is sharply skewed (Fig. 1e). The approximations are good even when ς is almost as large as N_{θ}.
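The first-passage procedure just described can be sketched as follows. This version uses Gaussian steps only, places the reflecting barrier at N = 0 (an assumption; the paper does not state its location in this excerpt), and runs all 5000 trials in parallel for speed:

```python
import numpy as np

def mean_steps_to_threshold(mu, sigma, N_theta=40, N_reset=20,
                            n_trials=5000, seed=0):
    """Average number of steps nu for N to reach threshold from reset,
    with Gaussian increments of mean mu and SD sigma and a reflecting
    barrier at N = 0."""
    rng = np.random.default_rng(seed)
    N = np.full(n_trials, float(N_reset))
    steps = np.zeros(n_trials, dtype=int)
    active = np.ones(n_trials, dtype=bool)
    while active.any():
        n = rng.normal(mu, sigma, n_trials)
        N[active] = np.maximum(N[active] + n[active], 0.0)  # reflecting barrier
        steps[active] += 1
        active &= N < N_theta          # trials that cross threshold stop counting
    return steps.mean()

# Zero drift: the walker still reaches threshold, and larger fluctuations
# get it there sooner.
slow = mean_steps_to_threshold(mu=0.0, sigma=1.0)
fast = mean_steps_to_threshold(mu=0.0, sigma=4.0)
print(slow, fast)
```

Repeating the same loop with uniform or skewed distributions for n reproduces the robustness checks of Figure 1d–e.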
Through these simulations we also investigated what happens when N relaxes exponentially to its rest value. In this case the simulations proceeded exactly as described above, except that the update rule for N was N → hN + n, where h is a constant less than 1 (h = 1 is the original case without exponential decay). This is equivalent to having a leak term proportional to −V in Equation 1 instead of the constant d. We found that the shapes of the resulting curves were very similar to those obtained using the linear decay term that contributes to μ (Eq. 2), except that they corresponded to more negative values of μ. For instance, the results of a simulation with h = 0.95 and μ = 0 were almost identical to the results obtained with h = 1 and μ = −1. Therefore, the exact shape of the distribution of n and the precise way in which V relaxes to rest do not affect the results qualitatively.
Two output modes: mean excitatory drive versus fluctuations
The dynamics of the output neuron may be understood intuitively in the two limits mentioned before, when the drift is positive and much larger than the fluctuations, and when the drift is zero (Troyer and Miller, 1997). If the net drive is positive and ς is close to 0 (Gerstein and Mandelbrot, 1964; Tuckwell, 1988; Usher et al., 1994; Koch, 1999), Equation 6 reduces to
Previous stochastic models arrived at the above expression regarding μ as the sole contributor to the mean firing rate (Gerstein and Mandelbrot, 1964; Tuckwell, 1988; Usher et al., 1994). In these models the fluctuations were considered so small relative to the distance from reset to threshold that, in the absence of drift, it took an infinite amount of time for V to reach threshold. In the present model, however, fluctuations are not infinitesimal (Feynman et al., 1963) so, when μ = 0,
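Equations 8 and 9 themselves are not reproduced in this excerpt, but both limits can be recovered from the textbook first-passage problem for a drifted diffusion (this is a standard reconstruction and may differ in detail from the paper's expressions). With a reflecting barrier at N = 0 and an absorbing barrier at N_{θ}, the mean number of steps ν(x) to reach threshold from a starting point x obeys

```latex
\frac{\varsigma^{2}}{2}\,\nu''(x) + \mu\,\nu'(x) = -1,
\qquad \nu'(0) = 0, \qquad \nu(N_{\theta}) = 0 .
```

In the drift-dominated limit (ς → 0, μ > 0) this gives ν(N_{reset}) = (N_{θ} − N_{reset})/μ, so the output rate grows linearly with the mean drive; at zero drift (μ = 0) it gives ν(N_{reset}) = (N_{θ}² − N_{reset}²)/ς², so the rate is set entirely by the size of the fluctuations.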
When the postsynaptic neuron is driven by fluctuations, the interspike interval distribution of the evoked spike trains is expected to be wide, because firing follows an entirely stochastic process. As shown in Figure 1b, individual trajectories of N are widely different—they are also independent, and this produces highly variable interspike intervals. The two dynamical modes described by Equations 8 and 9 are thus distinct.
Figure 1f–h quantifies the variability of the interspike intervals produced by the simulations. The y axes indicate the coefficient of variation of the interspike interval distribution, or CV_{ISI}. This is equal to the SD of the interspike intervals divided by their mean and is shown as a function of ς using the same results used in Figure 1c–e. The plots confirm the intuitive picture discussed in the previous paragraphs: when ς is large in relation to μ, the coefficient of variation is close to 1, as expected from a Poisson process. On the other hand, as ς approaches 0, μ becomes relatively large, and the variability in the interspike intervals decreases sharply (Fig. 1f–h, red dots). This drop in variability has been viewed as support for a large ς in real cortical neurons, that is, as evidence of a balance between excitation and inhibition (Shadlen and Newsome, 1994; Troyer and Miller, 1997).
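The CV_{ISI} plotted on the y axes is straightforward to compute from a spike train, following the definition given above:

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the interspike-interval distribution:
    SD of the intervals divided by their mean."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

# A Poisson train (exponential intervals) gives CV close to 1;
# a perfectly regular train gives CV of 0.
rng = np.random.default_rng(1)
poisson_train = np.cumsum(rng.exponential(10.0, 2000))
regular_train = np.arange(0.0, 2000.0, 10.0)
print(cv_isi(poisson_train), cv_isi(regular_train))
```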
Impact of input correlations
Now we quantify how the relative magnitudes of the fluctuations and the mean of the total synaptic drive may change according to the synaptic input statistics.
Assume that the model neuron receives M_{E} and M_{I} excitatory and inhibitory inputs, respectively. We denote the number of spikes fired by excitatory input j in a time step Δt as n_{E}^{j}; analogously, n_{I}^{k} corresponds to the number of spikes fired by inhibitory neuron k. Recalling that n_{E} and n_{I} are the total numbers of excitatory and inhibitory spikes, Equation 2 can be written as:
To compute the variance of n, we need to specify the variance of the individual inputs as well as their pairwise correlations. The variances in the spike counts of single excitatory and inhibitory inputs are represented by s_{E}^{2} and s_{I}^{2}, such that:
The second key element of Equation 20 is that correlations between inhibitory inputs have the same effect as correlations between excitatory inputs, whereas correlations between excitatory and inhibitory inputs have the opposite effect. Synchronous inhibition is an effective way to increase variance, but an inhibitory spike that comes close in time to an excitatory one counteracts it, reducing variance. Thus, the three individual correlation terms could have relatively large values but still cancel out to produce practically no effect. This is what happened in simulation studies by Shadlen and Newsome (1998). They did not detect any changes in output rate when inputs were highly correlated because their choice of parameters was such that the three terms cancelled out exactly. Of course, in this situation any change in the balance between positive and negative correlation terms will produce a large change in ς^{2}.
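The sign structure described here follows from the ordinary variance-of-a-sum identity. A sketch, using uniform pairwise correlations within and across the two input types (the exact weighting of the paper's Equations 19 and 20 is not reproduced here, so the coefficients below are the generic ones):

```python
import numpy as np

def drive_variance(rho_EE, rho_II, rho_EI, M_E=200, M_I=50,
                   dE=1.0, dI=1.0, s=1.0):
    """Variance of the net drive n = dE*sum(n_E) - dI*sum(n_I) when every
    input has spike-count variance s**2 and every pair of the stated type
    has the given correlation coefficient."""
    var_E = M_E * s**2 + M_E * (M_E - 1) * rho_EE * s**2   # E-E pairs add
    var_I = M_I * s**2 + M_I * (M_I - 1) * rho_II * s**2   # I-I pairs add
    cov_EI = M_E * M_I * rho_EI * s**2                     # E-I pairs subtract
    return dE**2 * var_E + dI**2 * var_I - 2 * dE * dI * cov_EI

base = drive_variance(0.0, 0.0, 0.0)
ee   = drive_variance(0.01, 0.0, 0.0)   # E-E correlations raise the variance
ei   = drive_variance(0.0, 0.0, 0.01)   # E-I correlations lower it
print(base, ee, ei)
```

Even a correlation coefficient of 0.01 moves the variance far from the independent case, because the pairwise terms are multiplied by the numbers of pairs, which grow roughly as the square of the numbers of inputs.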
At some point of the input–output rate curve, even an unbalanced neuron with β much smaller than 1 will be affected by correlations, as described by Equations 9 and 18. The negative term caused by leakage in Equation 13 is independent of input rate and of β. Therefore, whatever the balance of the neuron, there will be a positive value of r_{E} for which μ = 0 and ς^{2} > 0. Around this value, the membrane voltage will have zero drift, but the neuron will be able to fire, driven exclusively by input fluctuations. Thus, there will always be a range of values of r_{E} such that the target neuron fires according to the zero-drift classic random walk dynamics. In this range, correlations are expected to have the effects just described.
Input–output rate relationships predicted by the theory
Here and in the rest of the paper we explore the relative effects of the three correlation terms. For the sake of simplicity, we illustrate three cases: (1) ρ_{EE} > 0, ρ_{II} = ρ_{EI} = 0 , (2) ρ_{II} > 0, ρ_{EE} = ρ_{EI} = 0 , and (3) ρ_{EE} = ρ_{II} = ρ_{EI} > 0 . However, the reader should keep in mind that it is the final weighted sum of the three terms that determines ς^{2}, and that the first two cases are also representative of the situation in which all the terms are greater than zero but the final sum is also greater than zero. For instance, suppose that Equation 20 applies, that ρ_{EE} and ρ_{II} are positive and equal, and that ρ_{EI} is also positive but smaller than the other terms. In this case what counts is ρ_{EE} + ρ_{II} − 2ρ_{EI}, so this situation would be indistinguishable from cases 1 or 2 above. Notice also that, in general, case 3, in which all correlations are identical, does not automatically lead to an exact cancellation, because the three terms have different coefficients in Equation 19. As with the balance between excitation and inhibition, it is hard to assess what the real biological situation is; the selected cases are meant to illustrate a range of possibilities.
Figure 2 illustrates the results derived in the previous section for two cases with different relative contributions of μ and ς to the output rate. In this figure the expressions for μ and ς derived above using the binomial approximation (Equations 13 and 19) were used to compute the firing rate of the output neuron, as given by Equations 6 and 7. A total of 1000 active inputs were considered, 20% of which were inhibitory. The percentage of inhibitory neurons alters the input–output rate curve that results with uncorrelated inputs, whereas the total number of neurons modifies the weight of the correlation terms. Inhibitory neurons fired at 1.7 times the rate of excitatory ones. The voltage decay was set to d = 0.3 mV; this corresponds to a decrease in voltage of 0.3 mV/msec, because Δt = 1 msec. This value is comparable to the 0.49 mV decrease that occurs in 1 msec when the voltage starts 10 mV above rest and relaxes exponentially with a 20 msec time constant. The difference between resting potential and threshold was 20 mV, with the reset voltage falling halfway in between. Finally, the remaining parameters were chosen in two ways, to obtain results for balanced and unbalanced neurons, but in all cases the size of the individual excitatory depolarization Δ_{E} was chosen to produce an output firing rate of ∼75 spikes/sec at an input rate r_{E} of 100 spikes/sec (Shadlen and Newsome, 1995, 1998).
Figure 2a shows the output firing rate as a function of input rate r_{E} for a balanced neuron, for which β_{RW} = 1 (Eq. 13). Here the neuron is driven exclusively by the fluctuations of its inputs, and a fixed amount of net correlation has a multiplicative effect on the firing rate curve, as expected from Equation 19. Figure 2b shows similar curves for a case in which β_{RW} = 0.34; for these curves the ratio Δ_{I}/Δ_{E} was modified to obtain an unbalanced neuron that on average received more excitation than inhibition. The input–output rate curve obtained with independent inputs recovers the threshold-linear function typically used in modeling work (Hertz et al., 1991; Abbott, 1994; Koch, 1999). In this case correlations no longer have a multiplicative effect on the r_{out} versus r_{E} curve, and the fractional change in output rate caused by a given amount of correlation is much smaller than for a balanced neuron. However, a net excess of excitatory correlations still increases the output rate significantly, especially around threshold (Kenyon et al., 1990; Bernander et al., 1991). This is the point around which the neuron is driven almost exclusively by fluctuations (Bell et al., 1995).
According to this simple stochastic model, the synaptic input that drives a postsynaptic neuron may be thought of as having two components, a mean component, which depends on the net balance between excitation and inhibition, plus another component that represents the fluctuations around the mean, and both components may drive the recipient neuron to fire. The fluctuations depend strongly on the correlations between input spike trains, so it is through their effect on the fluctuations that input correlations may greatly enhance the resulting output firing rate. Such fluctuations may be the main driving force around threshold. The next section explores the validity of these conclusions using more realistic model neurons and computer simulations.
Simulations of a conductance-based integrate-and-fire neuron
Results in this section are based on simulations of an integrateandfire neuron model receiving 160 excitatory and 40 inhibitory inputs with Poisson statistics at given mean rates. The amplitudes of the synaptic conductances were varied so that balanced and unbalanced situations could be studied and compared to the predictions from the stochastic model and to Figure 2.
In the random walk model discussed above, input correlations were synonymous with synchrony, because they referred exclusively to the chances of two input spikes arriving in the same time slice Δt. We will show that the results apply to correlations in a wider sense, that is, to situations in which the probability of firing of one input is not independent of the probabilities of the rest of the inputs. We will consider two ways of generating correlated activity, through the equivalent of shared connections and through oscillations in the instantaneous firing rate of the inputs.
Description of the model and parameters
The conductance-based integrate-and-fire model we use is similar to the one described by Troyer and Miller (1997) (Knight, 1972; Tuckwell, 1988; Shadlen and Newsome, 1998; Koch, 1999). The main difference is that we included a mechanism that reproduces the spike rate adaptation typical of most excitatory cortical neurons (McCormick et al., 1985). Subthreshold currents are included, but currents that generate spikes are not. The membrane voltage V(t) changes in time according to the differential equation:
The conductance changes underlying spike rate adaptation are implemented as follows. Whenever V exceeds V_{θ} and a postsynaptic spike is elicited, the potassium conductance increases instantaneously by an amount Δg_{SRA}. The flow of potassium ions tends to hyperpolarize the cell and slows down the firing. The change in conductance decays exponentially toward zero with a time constant τ_{SRA},
The intrinsic model parameters, those independent of synaptic input, were tuned to approximate the neurophysiological measurements of McCormick et al. (1985, their Fig. 1C,D; see also Troyer and Miller, 1997). The following values are used: E_{L} = −74 mV, E_{K} = −80 mV, V_{θ} = −54 mV, V_{reset} = −60 mV, τ_{m} = 20 msec, τ_{refrac} = 1.72 msec, τ_{SRA} = 100 msec, and
Expressions similar to Equation 24 are used to model the conductance changes caused by excitatory synaptic inputs. When an excitatory spike arrives, g_{AMPA} increases by Δg_{AMPA}. This increase is fast, so a single exponential describing the subsequent decay is sufficient in this case too,
This model neuron does not include any intrinsic sources of noise. In fact, the synapses themselves do not contribute any noise either, because all excitatory or inhibitory spikes cause the same conductance change (the effect of synaptic and intrinsic variability is explored in a separate section below). This allows us to study the impact of input variability in isolation from other noise sources. The model neuron is driven by M_{E} excitatory and M_{I} inhibitory inputs, and each input provides an individual spike train. The mean spike rates are the same for all excitatory and inhibitory inputs; these are r_{E} and r_{I}, respectively. In all simulations, we assume that these rates are proportional, so that r_{I} = αr_{E}, with α being the constant of proportionality. These rates are constant, except when an explicit time dependence is indicated.
The balance of the neuron, β, refers to the ratio between the mean amount of inhibition and excitation that it receives. We measure it as Troyer and Miller (1997) did. This quantity depends on the relative numbers of excitatory and inhibitory inputs, the relative magnitudes of their firing rates, and the relative impacts of excitatory and inhibitory spikes on the postsynaptic voltage. To compute the latter, one should take into account the total changes in conductance integrated over time and the driving forces, so that:
The results shown below are based on simulations that included M_{E} = 160 excitatory and M_{I} = 40 inhibitory inputs. In addition, separate simulations confirmed that the results still hold when the numbers of inputs are increased (data not shown). In these runs the ratio M_{I}/M_{E} was kept constant, and maximal conductance changes were modified accordingly, so that the balance and gain of the neuron remained approximately the same as with the standard numbers of neurons. For the rest of the parameters, the following values are used: α = 1.7, E_{AMPA} = 0 mV, E_{Cl} = −61 mV, τ_{AMPA} = 5 msec, τ_{GABA}^{(1)} = 5.6 msec, and τ_{GABA}^{(2)} = 0.285 msec. These numbers are the same in all simulations. For the amplitudes of the conductance changes, two sets of values are considered. In the balanced condition we use
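The components described in this section can be assembled into a minimal simulation. The time constants and reversal potentials below are the values quoted in the text; the three conductance-step amplitudes are placeholders chosen only to make the example fire, since the paper's amplitude values are not reproduced in this excerpt:

```python
import numpy as np

def run_cond_lif(exc_trains, inh_trains, T=1000.0, dt=0.1,
                 g_ampa_step=0.02, g_gaba_step=0.06, g_sra_step=0.02):
    """Conductance-based integrate-and-fire neuron with spike rate
    adaptation; conductances are expressed in units of the leak.
    exc_trains / inh_trains: lists of spike-time arrays (msec)."""
    E_L, E_K, E_AMPA, E_Cl = -74.0, -80.0, 0.0, -61.0
    V_theta, V_reset = -54.0, -60.0
    tau_m, tau_refrac, tau_sra = 20.0, 1.72, 100.0
    tau_ampa, tau_g1, tau_g2 = 5.0, 5.6, 0.285

    steps = int(T / dt)
    bins = np.arange(0.0, T + dt, dt)
    exc = sum(np.histogram(s, bins)[0] for s in exc_trains)
    inh = sum(np.histogram(s, bins)[0] for s in inh_trains)

    V, g_ampa, g1, g2, g_sra = E_L, 0.0, 0.0, 0.0, 0.0
    refrac_left, out = 0.0, []
    for t in range(steps):
        # AMPA: single exponential; GABA: difference of two exponentials.
        g_ampa += g_ampa_step * exc[t]
        g1 += g_gaba_step * inh[t]
        g2 += g_gaba_step * inh[t]
        g_ampa -= dt * g_ampa / tau_ampa
        g1 -= dt * g1 / tau_g1
        g2 -= dt * g2 / tau_g2
        g_sra -= dt * g_sra / tau_sra    # adaptation conductance decays
        g_gaba = g1 - g2

        if refrac_left > 0.0:
            refrac_left -= dt
            continue
        dV = (-(V - E_L) - g_ampa * (V - E_AMPA)
              - g_gaba * (V - E_Cl) - g_sra * (V - E_K)) * dt / tau_m
        V += dV
        if V >= V_theta:
            out.append(t * dt)
            V = V_reset
            g_sra += g_sra_step          # spike rate adaptation step
            refrac_left = tau_refrac
    return out

# 160 excitatory and 40 inhibitory Poisson inputs with r_I = 1.7 * r_E.
rng = np.random.default_rng(2)
rE, rI, T = 40.0, 68.0, 1000.0
exc = [np.cumsum(rng.exponential(1000.0 / rE, 80)) for _ in range(160)]
inh = [np.cumsum(rng.exponential(1000.0 / rI, 120)) for _ in range(40)]
spikes = run_cond_lif(exc, inh, T)
print(f"{len(spikes)} spikes -> {len(spikes) / (T / 1000.0):.0f} spikes/sec")
```

Because the placeholder amplitudes leave a net excitatory drive, this example operates in the unbalanced regime; scaling the GABA step up moves it toward balance.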
Calculation of cross-correlation histograms
In the simulations, a number of excitatory and inhibitory inputs drive a conductance-based integrate-and-fire neuron. We used two methods to generate temporal dependencies between inputs, common drive and temporal comodulations in rate (Brody, 1999). In both cases we computed cross-correlation histograms (Perkel et al., 1967; Fetz et al., 1991; Nelson et al., 1992; Brody, 1999) to visualize the resulting dependencies between pairs of input spike trains. For this, we recorded the firing times of two of the inputs, a and b, for a period of time of length T_{max}. Simulation parameters did not change during this time, so the firing statistics of a and b were the same throughout. We use the notation S_{a}(t) = Σ_{j} δ(t − t_{j}^{a}) to indicate the spike train of input a, where δ is the Dirac delta function, and t_{j}^{a} is the time at which input a fired spike j. The cross-correlation histogram evaluated at time lag τ can be written as:
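The histogram equation itself is not reproduced in this excerpt, but numerically the estimate amounts to counting spike pairs at each lag and dividing by the count expected from independent Poisson trains. A sketch (the jittered-coincidence example at the bottom is illustrative, not one of the paper's simulations):

```python
import numpy as np

def cross_correlogram(times_a, times_b, T_max, lag_max=50.0, bin_w=1.0):
    """Cross-correlation histogram between two spike trains, normalized so
    that 1 corresponds to independent Poisson trains with the same rates."""
    edges = np.arange(-lag_max, lag_max + bin_w, bin_w)
    counts = np.zeros(len(edges) - 1)
    for t in times_a:
        lags = times_b - t
        lags = lags[(lags >= -lag_max) & (lags < lag_max)]
        counts += np.histogram(lags, edges)[0]
    # Expected pair count per bin for independent Poisson trains.
    expected = len(times_a) * len(times_b) * bin_w / T_max
    return 0.5 * (edges[:-1] + edges[1:]), counts / expected

# Two trains that share jittered spikes show a central peak; the flanks
# stay near the independence baseline of 1.
rng = np.random.default_rng(3)
T = 100_000.0
shared = np.sort(rng.uniform(0, T, 2000))
a = np.sort(np.concatenate([shared, rng.uniform(0, T, 3000)]))
b = np.sort(np.concatenate([shared + rng.normal(0, 2.0, len(shared)),
                            rng.uniform(0, T, 3000)]))
lags, cc = cross_correlogram(a, b, T)
print(cc[np.abs(lags) < 3].mean(), cc[np.abs(lags) > 30].mean())
```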
Generating correlations via common drive
To generate correlated spike trains to be used as inputs to the integrate-and-fire neuron, we used a procedure similar to the one used by Shadlen and Newsome (1998) but somewhat simpler, because it takes advantage of the results in Figure 1. For each of the input lines that impinge on the model postsynaptic neuron, there is a random walk variable that generates the spikes of that line. Variable N_{i} corresponds to input i and behaves exactly as the model in Figure 1c (blue points), having no net drift and producing a spike whenever N_{i} exceeds a threshold. All random walk variables have the same parameters. At each time step, a set of M_{pool} independent Gaussian random samples with zero mean and unit variance are drawn, and each N_{i} is updated by adding exactly M_{in} of those samples, weighted by an overall gain factor g_{in}. The sum of Gaussian samples acts as the net input to the random walk units. This is implemented through a matrix multiplication:
With this method we generated the 160 excitatory and 40 inhibitory inputs that arrived at the integrate-and-fire model neuron. In the simulations, φ_{E} refers to the fraction of samples that each excitatory input draws from the common pool, according to Equation 34 and, similarly, φ_{I} refers to the fraction of samples that each inhibitory neuron draws from the pool. A value of φ_{I} = 0 means that inhibitory spike trains are independent of each other and of all excitatory spike trains and, analogously, φ_{E} = 0 means that excitatory spike trains are all independent. Simultaneous nonzero values for φ_{E} and φ_{I} correspond to excitatory and inhibitory neurons sharing the same common pool of Gaussian samples, thus being correlated between themselves and across each other's type as well. Notice that these correlated spike trains have a CV_{ISI} of approximately 1, independently of the fraction of shared samples. This is because these neurons are always random walkers drawing their steps from Gaussian distributions with zero mean, as illustrated in Figure 1.
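A sketch of this common-drive construction follows. The walker parameters (M_in, gain, threshold, the reflecting barrier at 0) are illustrative assumptions, since the paper's values are not reproduced here:

```python
import numpy as np

def correlated_random_walk_trains(M=160, phi=0.1, M_in=100,
                                  gain=0.3, N_theta=10.0, steps=2000, seed=4):
    """Generate M spike trains from zero-drift random walkers that share a
    fraction phi of their Gaussian input samples (common drive)."""
    rng = np.random.default_rng(seed)
    n_shared = int(round(phi * M_in))
    N = np.zeros(M)
    trains = [[] for _ in range(M)]
    for t in range(steps):
        # The first n_shared samples are common to every walker; the rest
        # are drawn privately for each one.
        shared = rng.normal(0.0, 1.0, n_shared).sum()
        private = rng.normal(0.0, 1.0, (M, M_in - n_shared)).sum(axis=1)
        N = np.maximum(N + gain * (shared + private), 0.0)  # reflecting barrier
        for i in np.flatnonzero(N >= N_theta):
            trains[i].append(t)                             # walker i spikes
        N[N >= N_theta] = 0.0                               # reset to zero
    return trains

def avg_pairwise_count_corr(trains, steps, bin_w=50):
    """Mean spike-count correlation over all pairs of trains."""
    edges = np.arange(0, steps + bin_w, bin_w)
    counts = np.array([np.histogram(tr, edges)[0] for tr in trains])
    c = np.corrcoef(counts)
    return c[np.triu_indices_from(c, k=1)].mean()

indep = correlated_random_walk_trains(M=20, phi=0.0)
common = correlated_random_walk_trains(M=20, phi=0.3)
print(avg_pairwise_count_corr(indep, 2000),
      avg_pairwise_count_corr(common, 2000))
```

Sharing even a modest fraction of samples raises the pairwise spike-count correlations well above the independent case, while each train individually remains an irregular, zero-drift random walker.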
Impact of correlations generated by common drive
We tested how the output firing rate changed as a function of the fractions of common drive φ_{E} and φ_{I}. Figure 4 shows examples of the output spike trains produced by the balanced integrate-and-fire model. In each panel, the plots below the spike rasters are examples of cross-correlation histograms (Perkel et al., 1967; Brody, 1999) between pairs of input spike trains. The height of the cross-correlation at time lag τ indicates the probability of recording a spike from one neuron between τ − Δτ and τ + Δτ milliseconds after (or before, for negative τ) a spike from another neuron. These histograms have been normalized so that 1 represents the probability expected based on the mean firing rates when the two neurons are independent and follow Poisson statistics. When pairs of input neurons share some of the Gaussian samples, they become correlated—the chances of recording a spike from one neuron are higher around the times when the other neuron has fired. The peaked cross-correlograms shown in the figure were obtained with φ = 0.1, which is roughly comparable with the measured probability that two nearby pyramidal neurons are connected (Braitenberg and Schüz, 1997). Larger peaks, reflecting stronger synchronization, are common in correlograms constructed from experimental data (Fetz et al., 1991; Nelson et al., 1992).
In Figure 4, three histograms are shown below each spike raster. They show the mean correlation between excitatory–excitatory (EE), inhibitory–inhibitory (II), and excitatory–inhibitory (EI) input pairs that drove the output neuron under each condition. These statistical dependencies did not change during the periods in which the shown output spikes were generated. Figure 4a shows a spike train evoked by uncorrelated inputs whose cross-correlations are, consequently, flat. In Figure 4b, only excitatory neurons are correlated, which is evidenced by the peak in the corresponding EE cross-correlation. The small φ_{E} of 0.1 that was used makes the output rate increase by ∼60%. As shown in Figure 4c, correlations between inhibitory neurons also increase the rate, but the effect is less strong. Correlations in excitatory inputs make a bigger difference because of the parameters chosen. As can be seen from Equations 19 and 20, excitatory or inhibitory inputs may have the largest weight in determining ς^{2}, depending on these choices. With the parameters used in Figure 4, when all neurons are equally correlated the increase in rate practically disappears, as illustrated in Figure 4d. However, Figure 4e shows that a large increase in output rate is again seen when all neurons are correlated but excitatory–excitatory pairs are most strongly correlated. Thus, as expected from Equation 19 or 20, it is the balance between the three correlation terms that determines the final contribution of the fluctuations to the output firing rate.
The examples in Figure 4 also show that input correlations can increase the variability of the output neuron. This is most obvious in Figure 4, b and e, which have CV_{ISI} values of 1.5, much larger than the 1.1 obtained with uncorrelated inputs. In these cases correlations tend to produce more bursts of spikes. With correlations present only between inhibitory neurons (Fig. 4c) or when all neurons are equally correlated (Fig. 4d), the CV_{ISI} still increases slightly, to 1.3, although in the latter case the output rate does not change. Thus, correlations in the input generated through common drive may lead to an increase in output firing rate but, in general, also produce more irregular firing.
Figure 5 expands these results by systematically varying either the input rates or the correlations. Figure 5a shows the output firing rate as a function of input rate for uncorrelated inputs, excitatory–excitatory correlations only, inhibitory–inhibitory correlations only, and all possible pairs equally correlated. Figure 5a should be compared with Figure 2a, which depicts the analogous curves expected from the random walk model. To compare the two sets of results, the parameter Δ_{E} was scaled so that the analytic curves with uncorrelated inputs would have approximately the same gain as in the simulation, and the value of 0.004 used in Figure 2a for the correlation coefficients was chosen to match the effects seen in Figure 5a. The curves obtained with the conductance-based model are in excellent agreement with the analytical predictions from the random walk equations, in spite of the numerous details that distinguish the two models. Figure 5c shows how the output rate changes as a function of correlation strength for a fixed input rate, r_{E} = 40 spikes/sec. As expected, higher correlations have stronger effects. For a balanced neuron, even a weak correlation between its excitatory inputs may have a large impact on the output; a fraction of shared samples of ∼0.15 (φ_{E} = 0.15) is enough to double the output firing rate.
Figure 5, b and d, shows that correlations also increase the variability of the output neuron, as measured by the CV_{ISI}. For the balanced condition, the increase is larger at low output rates. Notice that, even in the absence of correlations, the CV_{ISI} increases with increasing input rate, as shown by the continuous line in Figure 5b. With high input rates or with modest correlations, output variability may easily increase to a CV_{ISI} of 1 or more. This constitutes a major difference between the random walk description and the conductance-based model. When the changes in membrane potential are described by a random walk, the neuron is memoryless: what happens in one Δt has no influence on what happens in the next and, similarly, one interspike interval has no relation to the next. This leads to values of CV_{ISI} ≤ 1 if there is some net drift. Conductance changes, however, are not instantaneous, so, for instance, several synchronous excitatory spikes may produce a burst of output spikes instead of the single spike expected from the random walk model. Thus, conductances allow much greater variability in the output spike train than instantaneous “adding” of spikes.
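The contrast between drift-driven (regular) and fluctuation-driven (irregular) firing can be illustrated with a minimal, memoryless random-walk integrator; all parameter values below are hypothetical and chosen only to separate the two regimes.

```python
import numpy as np

rng = np.random.default_rng(1)

def cv_isi(spike_steps):
    """Coefficient of variation of the interspike intervals."""
    isi = np.diff(np.asarray(spike_steps, dtype=float))
    return isi.std() / isi.mean()

def walk_spikes(mu, sigma, n_theta=20.0, n_steps=200_000):
    """Memoryless integrator: each step adds mu + sigma*noise; the walk
    reflects at 0, spikes on reaching n_theta, and resets to 0."""
    v, spikes = 0.0, []
    for t in range(n_steps):
        v = abs(v + mu + sigma * rng.standard_normal())
        if v >= n_theta:
            spikes.append(t)
            v = 0.0
    return spikes

cv_drift = cv_isi(walk_spikes(mu=0.5, sigma=0.5))  # drift-dominated: regular
cv_fluct = cv_isi(walk_spikes(mu=0.0, sigma=2.0))  # fluctuation-driven: irregular
```

The drift-dominated walk fires almost clocklike (CV_{ISI} well below 1), whereas the driftless walk fires irregularly with a CV_{ISI} near, but still below, 1 — consistent with the point that a memoryless process cannot reproduce the CV_{ISI} > 1 seen with conductances.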
It is well known that refractoriness also affects spike train variability (Koch, 1999). The simulations in Figures 4 and 5 were obtained with synaptic time constants of ∼5 msec and a refractory period of 1.72 msec. We also ran simulations in which all synaptic time constants were divided by a factor of 2.5, and the maximal conductances were increased to produce the same gain and balance. This made the refractory period much larger relative to the timescale of unitary changes in postsynaptic conductance. The only difference observed in these simulations was that all CV_{ISI} values were smaller than those of Figures 4 and 5 by ∼0.2; otherwise, correlations had the same effects on rate and caused the same relative increases in CV_{ISI}.
The simulation results presented so far were obtained with a balanced neuron. However, it is not certain whether values of β as large as 1 fall within the physiologically plausible range (Berman et al., 1991; Douglas and Martin, 1998; Stevens and Zador, 1998). Therefore, it was important to investigate whether the results also held for an unbalanced neuron with low β. The corresponding simulation results are shown in Figures 6 and 7, in the same format as Figures 4 and 5 but for an integrate-and-fire model with β = 0.45. In viewing Figure 6 it is important to recall that the plots below the rasters are cross-correlation histograms between pairs of input spike trains. These histograms reveal the statistical dependencies of excitatory and inhibitory inputs, which were constant in each one of the panels.
The spike rasters in Figure 6 show that the same amount of correlation now produces, overall, a small enhancement in rate that is not multiplicative. Nevertheless, a fraction φ_{E} = 0.1 still raises the output rate by ∼10 spikes/sec when the input rate is <80 spikes/sec or so. On the other hand, correlations still cause a large increase in the CV_{ISI}. Note, in particular, that the increase is seen even when the three correlation terms are identical; this is because the relative values of their coefficients in Equation 19 do not produce a full cancellation. Here, however, the neuron is much less variable to begin with (that is, with uncorrelated inputs), and the CV_{ISI} is almost flat as a function of input rate. Thus, as shown in Figure 7b, in the presence of weak correlations the CV_{ISI} stays around or below 1 at all input rates considered. As in the balanced condition, additional simulations were also run using faster synaptic timescales, but this made practically no difference to the results; refractoriness played a minor role in this case.
Figure 7a shows the impact of input correlations on the input–output firing rate curve of an unbalanced neuron. Note, in particular, that correlations effectively decrease the threshold (Kenyon et al., 1990; Bernander et al., 1991; Bell et al., 1995). These curves should be compared with those in Figure 2b, which correspond to the random walk model in the presence of a large drift, or net excitatory drive. The match between the two sets of curves is not perfect, but the shape of the rate curve and the changes caused by the presence of input correlations are well described by the theoretical expressions. Thus, although the parameter values used in Figure 2, a and b, were adjusted to optimize overall agreement between the two models, the predictions of the stochastic model, particularly the differences between balanced and unbalanced regimes, are remarkably accurate, considering its simplicity.
Impact of correlations generated by oscillations
The firing probabilities of two neurons may fluctuate in time around some fixed average. If such temporal fluctuations tend to occur together, the two neurons will be correlated; the probability of one of them firing will be higher when the other one also fires, because this will reflect an increase in the underlying firing rates. We investigated whether inputs that become correlated in this way produce effects similar to the ones observed through common drive. For this, we modeled input firing rates as periodic functions of time, such that:
r_{E}(t) = r_{E}[1 + ε_{E} sin(2πft)], (36)
with an analogous expression for r_{I}(t) in terms of ε_{I}.
Figure 8 shows sample spike trains from the balanced integrate-and-fire neuron when the input rates vary periodically at a frequency f = 40 Hz. As before, the plots below the voltage traces are cross-correlation histograms, which show the statistical dependencies between inputs in each condition. The different panels correspond to different values of ε_{E} and ε_{I}. For comparison, Figure 8a shows a response to uncorrelated inputs; this case corresponds to all cross-correlation histograms being flat. When only r_{E} fluctuates in time, as shown in Figure 8b, excitatory inputs become correlated, their cross-correlation histogram reflects the periodicity of the underlying rates, and the output neuron fires more strongly. Based on the results of previous sections, this is precisely what is expected when net excitatory correlations are present. In addition, the output neuron fires in bursts, reflecting the underlying input oscillations. Other possible correlation patterns also modify the output firing rate as predicted: cofluctuations between inhibitory neurons also enhance the output rate, and when all possible pairs are equally correlated the enhancement practically disappears. Figure 8e shows a variant of this latter case in which a phase difference exists between excitatory and inhibitory rates (for a similar situation in real neural circuits, see Skaggs et al., 1996; Tsodyks et al., 1997). For this plot, ε_{E} and ε_{I} were both equal to 0.6, but we used a cosine instead of a sine in Equation 36. This introduced a time delay of 6.25 msec between the peaks of r_{E}(t) and r_{I}(t), which is reflected in the cross-correlogram between excitatory and inhibitory neurons (Fig. 8e, EI). In this situation the output rate is greatly enhanced; it is almost twice that obtained with uncorrelated inputs. Here all input pairs are correlated, but at slightly different times. This produces the greatest excitation at a time when inhibition is not at its peak.
This result demonstrates that the timing of correlations also plays a crucial role in determining the output firing rate.
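The oscillation-induced correlations described above can be sketched numerically: each input is an inhomogeneous Poisson train whose rate follows the sinusoidal modulation, approximated per time bin by a Bernoulli draw. The numbers f = 40 Hz and ε = 0.6 follow the text, but the bin size and duration are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def oscillating_train(r_mean, eps, f, dt=0.0005, duration=50.0):
    """Spike times of an inhomogeneous Poisson train with rate
    r(t) = r_mean*(1 + eps*sin(2*pi*f*t)), per-bin Bernoulli approximation."""
    t = np.arange(0.0, duration, dt)
    rate = r_mean * (1.0 + eps * np.sin(2.0 * np.pi * f * t))
    return t[rng.random(t.size) < rate * dt]

spikes = oscillating_train(r_mean=40.0, eps=0.6, f=40.0)
mean_rate = spikes.size / 50.0   # time-averaged rate stays near r_mean
```

Two trains generated this way are conditionally independent yet positively correlated because they share the rate fluctuations; a phase offset between the excitatory and inhibitory populations (as in Fig. 8e) would correspond to replacing the sine with a cosine for one of them.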
Figure 9 expands these results by varying the input rates and correlations throughout a range. This figure has the same format as Figure 5, except that ε_{E} and ε_{I} vary along the x axes in Figure 9, c and d, because they determine the correlation strengths in this case. The symbols also have the same meanings as in Figure 5, except that nonzero values of φ_{E} and φ_{I} now correspond to nonzero values of ε_{E} and ε_{I}. The curves in Figure 9a obtained with the conductance-based model are again in good agreement with the analytical results from the random walk model. The major difference is the greater effect that inhibitory oscillations have on output rate. This occurs because, with oscillatory rates, the variance in the number of spikes produced by each input per Δt depends more strongly on the firing rate than was assumed before (the difference is the step between Equations 18 and 19: with oscillatory rates, different expressions for s_{E}^{2} and s_{I}^{2} should be used, and this results in different coefficients for the ρ terms). The effects of inhibitory correlations are stronger in this case because inhibitory inputs fire faster than excitatory ones.
As shown in Figure 9c, larger correlations still produce larger increases in rate. Overall, the effects on output rate of correlations induced by temporal cofluctuations in firing probability are similar to those produced by common drive to the inputs. On the other hand, as shown in Figure 9, b and d, the variability of the output spike trains tends to decrease when input rates oscillate, as indicated by the CV_{ISI}. This is not surprising because, in this case, input firing is more regular, and the output spikes tend to follow the periodic increases in excitation.
Figures 10 and 11 show the corresponding results for the unbalanced postsynaptic neuron with β = 0.45. As was shown above, the unbalanced neuron is less sensitive to correlations than the balanced neuron. However, as seen in Figures 10 and 11a, the temporal modulation of excitatory input rates using ε_{E} = 0.6 still raises the output rate by ∼10 spikes/sec when the mean input rate is below ∼80 spikes/sec, and the effect still increases monotonically as correlations become stronger, as indicated in Figure 11c. Oscillations, however, seem to be less effective in driving the target neuron when it is below threshold, as can be observed by comparing the insets of Figures 11a and 7a. It should also be borne in mind that higher rates will be evoked whenever there is a phase difference between excitatory and inhibitory inputs, as illustrated in Figure 8e.
In summary, as far as output firing rate goes, the presence of input correlations has similar effects regardless of the mechanism by which those correlations are generated, and such effects are well described by the theoretical model developed above. A comparison between Figures 2a, 5a, and 9a shows that this is true for a balanced model neuron, and a comparison between Figures 2b, 7a, and 11a shows that it is also true for an unbalanced neuron. In contrast to the rate, the variability of the interspike intervals is sensitive to the dynamics that give rise to the correlations. Correlations arising from common drive tend to increase the CV_{ISI} of the output, whether the postsynaptic neuron is balanced (Fig. 5b,d) or not (Fig. 7b,d), whereas correlations arising from periodic, temporal cofluctuations in rate tend, more weakly, to decrease the CV_{ISI} when the postsynaptic neuron is balanced (Fig. 9b,d), and their effect on an unbalanced neuron may be either upward or downward (Fig. 11b).
Impact of other factors contributing to input variance
A general implication of the analytic results presented here is that a balanced neuron responds to variance in synaptic input: variance is its driving force. In the stochastic model, variability in the statistics of input spikes provided all the variance, but synapses themselves also behave stochastically, and so do the channels and receptors on the postsynaptic membrane (Calvin and Stevens, 1968). Including synaptic variability corresponds to considering a Δ_{E} that is not constant, but rather comes from a distribution. If the theoretical analysis is correct, unreliable, stochastic synapses that on average produce a change in voltage of a given size should give rise to larger firing rates than perfectly reliable synapses that always produce a depolarization of the same mean size. In this case, failures in synaptic transmission may actually boost the output firing rate, because large but infrequent depolarizations have a better chance of causing a spike than small and frequent ones.
To study this, we allowed synaptic failures in the conductance-based model. In this case, whenever an input spike arrives, the synaptic conductance can do one of two things: either increase by an amount Δg(t)/P_{T} (as in Eqs. 25 and 27) or remain the same, as if no spike had arrived. The first option corresponds to successful transmission and occurs with probability P_{T}; the second corresponds to a failure and occurs with probability 1 − P_{T}. With this scheme, the mean conductance change averaged over many input spikes is always the same (and equal to Δg(t)) regardless of the probability of transmission P_{T}. The case P_{T} = 1 corresponds to zero failures, as was considered in all previous results.
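The failure scheme just described can be sketched directly; the increment size Δg/P_{T} and P_{T} = 0.15 follow the text, while the number of trials is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def conductance_increments(n_spikes, dg, p_t):
    """Per-spike conductance increments under the failure scheme: with
    probability p_t the synapse adds dg/p_t, otherwise it adds nothing."""
    success = rng.random(n_spikes) < p_t
    return np.where(success, dg / p_t, 0.0)

reliable = conductance_increments(100_000, dg=1.0, p_t=1.0)
failing = conductance_increments(100_000, dg=1.0, p_t=0.15)
# Both have the same mean drive (dg), but failures add variance:
# var = dg**2 * (1/p_t - 1), which vanishes for p_t = 1.
```

This makes the text's point concrete: lowering P_{T} leaves the mean synaptic drive unchanged while inflating its variance, which is precisely what drives a balanced neuron.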
Figure 12 shows how the output rate and CV_{ISI} change when synapses are allowed to fail. For this figure, the dynamics described in the previous paragraph were applied to AMPA and GABA conductances, with separate P_{T} values for each. Based on experimental reports (Murthy et al., 1997), we chose P_{T} = 0.15. Figure 12, a and b, corresponds to a balanced postsynaptic neuron, and Figure 12, c and d, corresponds to an unbalanced one, with the same sets of parameters used before. For these curves, all inputs were uncorrelated. The curves are in good agreement with the theory: failures increase input variance and, when the neuron is balanced, this produces an increase in gain. In contrast to spike train correlations, inhibitory failures cause larger effects than excitatory ones. This can be understood by calculating the variance of ΔV across time steps. Starting from Equations 2 and 3, we proceed as in the calculation of Equation 18, but assume that the unitary voltage changes Δ_{E} and Δ_{I} have binomial distributions with means 〈Δ_{E}〉 and 〈Δ_{I}〉. In the absence of correlations, the variance of ΔV is then:
In contrast to correlations, synaptic failures always add more variance. This is because excitatory and inhibitory synapses release their neurotransmitters independently of each other, so there is no negative term equivalent to ρ_{EI}. The small dots in Figure 12a show that a larger increase in mean output rate is obtained when both excitatory and inhibitory synaptic failures are included, as expected from the two equations above. As shown in Figure 12c, for the unbalanced neuron the increase in rate is much smaller, but still appreciable, especially near threshold. Lowering of the firing threshold attributable to increased variance in membrane potential fluctuations has also been observed in more detailed neuronal models (Bernander et al., 1991; Destexhe and Hô, 1999; Hô et al., 2000).
According to this result, an excitatory presynaptic neuron may be more effective, on average, when it makes a synaptic contact of strength 2S that fails half the time than when it makes a contact of strength S that never fails. Thus, synaptic failures and synchronous spikes give rise to similar effects on the postsynaptic membrane. This may seem counterintuitive, but the key is that both increase the variance in synaptic drive, and variance drives the neuron when the mean drive is small in comparison.
The intrinsic variability of the postsynaptic membrane also contributes to the variance of the membrane potential. This means that, in a balanced neuron driven exclusively by fluctuations, intrinsic membrane noise should produce an upward shift in the output versus input firing rate curve. We confirmed this through simulations in which, on each integration time step, independent Gaussian noise with zero mean was added directly to the voltage of the postsynaptic neuron. The output rate of the target neuron indeed increased with added Gaussian noise, as expected, for balanced and unbalanced conditions and, interestingly, Gaussian noise could also decrease the CV_{ISI} in both cases (data not shown; Tiesinga and José, 1999). To observe these effects it was necessary to inject relatively large amounts of noise (we used an SD of 0.17 mV). Experimental findings (Calvin and Stevens, 1968; Mainen and Sejnowski, 1995; Holt et al., 1996) and theoretical arguments (Softky and Koch, 1993) indicate that statistical fluctuations caused by spike-generating mechanisms should be much smaller than those produced by variations in spike arrival times. In agreement with these observations, we found that intrinsic membrane noise had much less impact on firing rate and interspike interval variability than input correlations. The reason for this, from Equations 19 and 20, is that the correlation terms are multiplied by the square of the corresponding numbers of inputs, which are at least in the hundreds. Therefore, even weak correlations may contribute much more than other noise sources to fluctuations in membrane potential.
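The claim that independent noise sources add their variances, and thereby raise the rate of a balanced (zero-mean-drive) neuron, can be checked with a minimal fluctuation-driven integrator; the threshold and noise magnitudes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def balanced_rate(sig_syn, sig_mem, n_steps=200_000, v_th=5.0):
    """Balanced integrator: zero mean drive, pushed to threshold only by
    synaptic fluctuations (sig_syn) plus independent intrinsic noise (sig_mem)."""
    v, count = 0.0, 0
    for _ in range(n_steps):
        v = abs(v + sig_syn * rng.standard_normal()
                  + sig_mem * rng.standard_normal())
        if v >= v_th:
            count += 1
            v = 0.0
    return count / n_steps

r_base = balanced_rate(0.5, 0.0)   # synaptic fluctuations only
r_noisy = balanced_rate(0.5, 0.3)  # added intrinsic noise -> higher rate
```

Because the two noise terms are independent, the effective variance per step is sig_syn² + sig_mem², so the rate with added noise exceeds the baseline by roughly the ratio of the two variances.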
DISCUSSION
The impact of input correlations on the mean firing rate of a postsynaptic neuron was calculated using a simple stochastic model. The analytical results (Fig. 2) were confirmed through simulations of a conductance-based, integrate-and-fire neuron driven by hundreds of synaptic inputs. In this framework, the process of synaptic integration can be cast as follows. Both the drift, or mean net drive μ (average excitation minus inhibition minus decay caused by leakage), and the fluctuations around this mean may cause a postsynaptic neuron to fire (Troyer and Miller, 1997). Correlations between excitatory neurons or between inhibitory neurons increase the variance ς^{2} of the fluctuations, whereas correlations between excitatory–inhibitory pairs decrease it. Correlations have their largest impact on fully balanced neurons (μ ≤ 0), because these are driven exclusively by fluctuations. However, even in unbalanced neurons weak correlations may have a significant impact around firing threshold, because at that point it is usually the case that μ ≈ 0. By weak we mean shared input with φ = 0.1, which corresponds roughly to the probability that two nearby excitatory cortical neurons are connected (Braitenberg and Schüz, 1997; Shadlen and Newsome, 1998).
The theoretical model did not provide an expression for the variability of the interspike interval distribution of the postsynaptic neuron, but it did bound its coefficient of variation CV_{ISI} between 0 and 1, depending on which contributed more to the firing of the neuron, μ or ς. When ς = 0 and μ > 0, a neuron fires regularly, like a clock; when ς > 0 and μ ≤ 0, it fires irregularly, like a Geiger counter. These facts are useful in reviewing previous studies regarding neuronal variability.
The spike trains of neurons recorded in awake animals are extremely variable (Burns and Webb, 1976; Dean, 1981; Softky and Koch, 1993; Holt et al., 1996; Stevens and Zador, 1998; Shinomoto et al., 1999), but spike generation mechanisms themselves seem to be highly reliable (Calvin and Stevens, 1968; Mainen and Sejnowski, 1995; Holt et al., 1996). Therefore, although intrinsic properties may still be important (Bell et al., 1995; Troyer and Miller, 1997), the variability of a neuron in an intact microcircuit should come mostly from the variability of its inputs. Softky and Koch (1992, 1993) pointed out that, although the CV_{ISI} of typical cortical neurons is close to 1, this number should be much lower for an integrator that adds up many small contributions to fire, especially at high output rates. They found that, in the absence of inhibition, high CV_{ISI} values could be obtained either with an unrealistically small (i.e. submillisecond) membrane time constant or with active dendrites that worked as isolated coincidence detectors (Abeles, 1982, 1991; Softky, 1993; Shadlen and Newsome, 1994). In our integrate-and-fire model, a smaller time constant τ_{m} also produced higher CV_{ISI} values, but otherwise it did not alter the effects of correlations (time constants down to 1 msec were tested; results not shown). Shadlen and Newsome (1994) later showed that, as in earlier stochastic models (Gerstein and Mandelbrot, 1964; Tuckwell, 1988; Smith, 1992), including incoming inhibitory spikes produces high CV_{ISI} values. Their model was nearly balanced. Troyer and Miller (1997) extended these results by carefully tuning an integrate-and-fire neuron to produce a relatively high CV_{ISI} while using biologically plausible amounts of inhibition (Bell et al., 1995). Their simulation results fell in the lower half of the range 0.5 to 1 reported by Softky and Koch (1993).
More recently, Stevens and Zador (1998) (see also Destexhe and Paré, 2000) have also suggested, based on experiments in a slice preparation, that input synchrony is required to produce the high variability observed in vivo. This is consistent with the suggestion that stochastic eye movements, which provide a common, correlating signal, are responsible for a large fraction of the variability observed in primary visual neurons (Gur et al., 1997). It also agrees with recent theoretical results (Feng and Brown, 2000), and with simulation studies in which network interactions produce synchronized recurrent input, which also leads to high variability (Usher et al., 1994; Tsodyks and Sejnowski, 1995; Van Vreeswijk and Sompolinsky, 1996).
In the unbalanced condition, we found that weak correlations generated by common drive to the inputs could raise the CV_{ISI} from ∼0.6 to 1 (Stevens and Zador, 1998). This could result from correlations between excitatory neurons only or between all inputs (Fig. 7b; Feng and Brown, 2000). We also observed that correlations could raise the CV_{ISI} of the conductance-based model well beyond 1 (Fig. 5b,d), even in the unbalanced condition (Fig. 7d). This is in marked contrast to our random walk model and to other models based on memoryless processes where, it should be noted, the refractory period (or dead time) may play an important role in limiting the CV_{ISI} (Smith, 1992; see also Tuckwell, 1988; Shadlen and Newsome, 1998; Koch, 1999; Shinomoto et al., 1999).
Earlier studies also explored the relationship between input correlation and output rate. Bernander et al. (1994) and Murthy and Fetz (1994) found that synchrony could increase the firing rate of the target cell, but only up to a point, after which the rate tended to decrease. This happened because, on one hand, leakage tends to erase the effect of previous inputs, so these should arrive close together in time to avoid wasting depolarization; on the other hand, synaptic inputs that arrive during the refractory period have little or no effect, and to avoid this situation inputs should be somewhat spread out in time. The tradeoff between these two conditions, as well as the degree of correlation between inputs, gives rise to an optimal time window for synaptic integration. However, this scenario is valid when excitation is overwhelmingly larger than inhibition and when the output neuron is firing intensely, such that refractory effects become important. In this unbalanced regime the postsynaptic cell operates at a low gain (in these studies, at least 100 net excitatory input spikes were required to produce a single output spike), and firing is much more regular than observed experimentally. Because neurons fire irregularly and appear to operate at high gain (Softky and Koch, 1992, 1993; Shadlen and Newsome, 1994, 1998; Troyer and Miller, 1997), the effects in low gain models may be small within the normal dynamic range of real neurons. In our simulations, the effects of correlations did not vary appreciably with the refractory period.
Softky and Koch (1993) (see also Ritz and Sejnowski, 1997) also noticed that synchronized inputs produced higher output firing rates than independent inputs. They observed this for positive correlations between excitatory pairs. Other studies (Sejnowski, 1976; Kenyon et al., 1990; Bernander et al., 1991; Bell et al., 1995) are also consistent with our finding that an unbalanced neuron should be considerably more sensitive to synchronous excitatory spikes than to uncorrelated ones. In contrast, Shadlen and Newsome (1998) studied the effect of input synchrony and found that even fractions of shared connections among inputs as large as 0.4 did not elicit any change in the statistics of a target neuron. Their model was fully balanced, so the output rate should have been maximally sensitive to the presence of correlations. However, they chose parameters for which ς^{2} is given by Equation 20 and, because they used identical correlations among all possible pairs, the three correlation terms cancelled out exactly.
This brings us to the two experimental questions that are crucial for evaluating the impact of correlations. First, what is the actual balance between excitation and inhibition? In our models, inhibition countered excitation through changes in conductance and firing rate, but other factors, like synaptic location, are also important. Whatever the biophysical implementation, balance is crucial because, according to our models, it determines whether input correlations affect the gain of a neuron either throughout its full dynamic range or mostly at low firing rates. Second, what are the relative magnitudes of the three correlation terms in typical cortical circuits? At the moment, experimental data seem insufficient to determine this. Based on anatomical considerations (White, 1989; Braitenberg and Schüz, 1997) and neurophysiological measurements (Fetz et al., 1991; Nelson et al., 1992; Zohary et al., 1994; Salinas et al., 2000), it seems likely that all terms are different from zero, at least for local microcircuits; but what really needs to be known is the final weighted sum. This final sum might not be constant, either in time or across cortical areas. Here we illustrated various possible combinations of the three terms and found that even a small net deviation from zero may have a large impact on output gain and variability. Given such high sensitivity, perhaps the correlation structure between neurons is dynamically modulated. Recent experiments suggest that synchrony may covary with attention (Steinmetz et al., 2000). Furthermore, because the timing of correlations may also enhance their effects (Fig. 8e), correlations could also interact with normal plasticity mechanisms that depend on spike timing (Linden, 1999; Sejnowski, 1999; Paulsen and Sejnowski, 2000). Therefore, dynamic changes in correlations could have profound functional implications (Singer and Gray, 1995; Shadlen and Newsome, 1998).
It is interesting that our model neurons may encode the statistical properties of their inputs in different ways, depending on their balance. In a balanced neuron, output firing rate and variability are both sensitive to input rate and to correlations, and the input–output rate curve is nonlinear (Fig. 5a,b). On the other hand, in an unbalanced neuron the output rate is only modestly sensitive to input correlations, the input–output rate curve is practically linear, except near threshold, and variability is sensitive to correlations but much less so to input rate (Fig. 7a,b). The next step is to explore the statistics of spike trains in feedback models and to relate these to the functional properties of cortical circuits (Singer and Gray, 1995; Tsodyks and Sejnowski, 1995; Van Vreeswijk and Sompolinsky, 1996; Shadlen and Newsome, 1998).
Appendix
Here we derive Equations 6 and 7, which give the firing rate of the stochastic neuron as a function of μ and ς, the mean and SD, respectively, of the net number of excitatory inputs that arrive at the neuron in one Δt. We make use of Equations 1–5.
The methods used to develop the stochastic model are standard in the mathematics and physics literature. The present model is closely related to the Ornstein–Uhlenbeck process, for which a closed-form solution is not known, although asymptotic expansions have been found (Ricciardi, 1977; Smith, 1992; Shinomoto et al., 1999). The major difference is that the Ornstein–Uhlenbeck process includes a term proportional to −V in Equation 1, a true leak, which drives the membrane potential toward rest. The constant decay assumed here and represented by the term d is thus a key simplification. As general references on stochastic processes and on the computation of first passage times the reader may consult, for example, the books by Ricciardi (1977), Tuckwell (1989), or Risken (1996); see also Berg (1993) for applications to other problems in biology.
First consider a situation analogous to a purely random walk without drift, for which μ = 0. Assume that the distribution function P is symmetric, so that P(n) = P(−n) for any n. In this case, the mean number of steps needed to reach threshold, as a function of N, obeys the relationship:
Recalling that the integral over all probabilities must be equal to 1, the above expression can be rearranged as:
This solution satisfies Equation 40 exactly. The caveat here is that Equation 39 itself is an approximation, because at the boundaries it is not true that increases and decreases in N are equally probable. Nevertheless, the solution does not require that individual steps be infinitesimally small, as in the limit where the diffusion equation holds (Gerstein and Mandelbrot, 1964; Tuckwell, 1988; Smith, 1992). Instead, Equation 45 should break down only when ς is comparable to N_{θ} and boundary effects become significant.
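Equations 39–45 are not reproduced in this excerpt, but the driftless case can still be checked numerically. For a symmetric walk with a reflecting boundary at 0 and an absorbing threshold at N_{θ}, the classical mean first-passage result is ν = (N_{θ}² − N_{reset}²)/ς², which is presumably the form of Equation 45; the Monte Carlo sketch below, with hypothetical parameter values, agrees with it to within a few percent.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_steps_to_threshold(n_reset, n_theta, sigma, n_trials=2000):
    """Monte Carlo mean first-passage time of a driftless Gaussian walk
    reflected at 0 and absorbed at n_theta."""
    total = 0
    for _ in range(n_trials):
        n = float(n_reset)
        while n < n_theta:
            n = abs(n + sigma * rng.standard_normal())  # reflect at 0
            total += 1
    return total / n_trials

nu_mc = mean_steps_to_threshold(n_reset=5.0, n_theta=20.0, sigma=1.0)
nu_th = (20.0**2 - 5.0**2) / 1.0**2   # classical result: 375 steps
```

Consistent with the caveat in the text, the agreement degrades as ς approaches N_{θ}, because individual steps can then overshoot the boundaries by a non-negligible amount.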
The case in which there is a constant drift and μ is different from zero can be approximated by substituting N_{reset} + μν for N on the right-hand side of Equation 45. This can be seen as follows. After ν steps, the drift must have contributed an amount μν to the current value of N, but this is equivalent to starting from the initial value N = N_{reset} + μν with zero drift. Equivalently, one may think that it is the lower limit and the threshold that move at a constant speed equal to −μ, so that after ν steps N_{reset} and N_{θ} have both changed by an amount −μν. Either way, the result is a quadratic equation for ν,
Including μ as we have done in the last two equations is valid when μ ≥ 0, or when μ is negative but small in absolute value relative to ς. Otherwise, N hits the lower boundary frequently, and N_{reset} + μν falls below zero. Through simulations, we found that the case in which μ is negative can be approximated accurately by reducing ς by an amount proportional to μ and eliminating the μ terms in the above equations; the result is:
Footnotes

This work was supported by the Howard Hughes Medical Institute. We thank Larry Abbott and Paul Tiesinga for helpful comments.
Correspondence should be addressed to Emilio Salinas, Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, 10010 North Torrey Pines Road, La Jolla, CA 92037. Email: emilio{at}salk.edu.