Abstract
Functional interactions between neurons in vivo are often quantified by cross-correlation functions (CCFs) between their spike trains. It is therefore essential to understand quantitatively how CCFs are shaped by different factors, such as connectivity, synaptic parameters, and background activity. Here, we study the CCF between two neurons using analytical calculations and numerical simulations. We quantify the role of synaptic parameters, such as peak conductance, decay time, and reversal potential, and analyze how various patterns of connectivity influence CCF shapes. In particular, we find that the symmetry of the CCF distinguishes in general, but not always, the case of shared inputs between two neurons from the case in which they are directly synaptically connected. We systematically examine the influence of background synaptic inputs from the surrounding network that set the baseline firing statistics of the neurons and modulate their response properties. We find that variations in the background noise modify the amplitude of the cross-correlation function as strongly as variations of synaptic strength. In particular, we show that the regularity of the postsynaptic neuron's spiking has a pronounced influence on the CCF amplitude. This suggests an efficient and flexible mechanism for modulating functional interactions.
Introduction
Recordings of multineuron spike trains have revealed significant interdependencies between the firing of different neurons in a population (Zohary et al., 1994; Meister et al., 1995; Alonso et al., 1996; deCharms and Merzenich, 1996; Bair et al., 2001; Kohn and Smith, 2005). Although it is important to identify the role such functional interactions play in neural coding (Abbott and Dayan, 1999; Nirenberg et al., 2001; Nirenberg and Latham, 2003; Averbeck et al., 2006; Schneidman et al., 2006; Pillow et al., 2008), it is also important to understand how they depend on biophysical parameters and network activity (Poliakov et al., 1996, 1997; de la Rocha et al., 2007). For pairs of neurons, functional interactions are quantified by the cross-correlation function (CCF) between their spike trains, which measures how much the firing of one of the two neurons influences the firing of the other at different time lags. Statistically significant cross-correlations arise from the presence of a direct synaptic connection (Snider et al., 1998; Csicsvari et al., 1998; Barthó et al., 2004; Fujisawa et al., 2008) and/or from common or correlated inputs to the two neurons (Sears and Stagg, 1976; Binder and Powers, 2001; Constantinidis et al., 2001; Türker and Powers, 2001, 2002). The amplitude of the CCF therefore directly depends on the properties of the synapses mediating the interactions, but it is also modulated by the activity of the surrounding network (Aertsen et al., 1989; Poliakov et al., 1996; Constantinidis et al., 2001). The shape of the CCF also carries information on the underlying connectivity, yet inferring the circuitry from the CCF is a notoriously difficult problem (Melssen and Epping, 1987; Alonso and Martinez, 1998; Trong and Rieke, 2008). A detailed, quantitative understanding of the influences of synaptic parameters, network activity, and local circuitry on the CCF is therefore necessary for a correct interpretation of CCFs.
The basic influence of the underlying circuit on the shape of the CCF has long been considered at a qualitative level (Moore et al., 1970; Palm et al., 1988). More quantitative studies have been devoted to CCFs induced by a direct synaptic connection (Knox, 1974; Ashby and Zilm, 1982; Fetz and Gustafsson, 1983; Herrmann and Gerstner, 2002; Veredas et al., 2005). Early theoretical studies did not take into account the activity of the surrounding network, and it is only more recently that the effects of background inputs have been assessed using a phenomenological noise model (Herrmann and Gerstner, 2001). For the case of common inputs to the neurons, previous theoretical studies have concentrated on spike-count correlations (de la Rocha et al., 2007; Shea-Brown et al., 2008), and results for the full CCF are scarce (Kirkwood and Sears, 1978; Tchumatchenko et al., 2008).
In the present work, we systematically examine how the amplitude and time course of the CCF depend on the synaptic parameters, surrounding network activity, and local connectivity. To this end, we use pairs of integrate-and-fire neurons and represent the activity of the surrounding network by a compound, fluctuating background input that sets the baseline firing statistics of the neurons. We first show that the average firing response of a neuron to a given synaptic input strongly depends on the regularity of its firing. We next determine analytically the CCF in various simple microcircuits within a linear approximation. We first consider the two basic situations of a direct synaptic connection and common synaptic inputs to the two neurons. We then show how the results obtained for these two microcircuits can be used to study more complex ones, such as two mutually connected neurons and feedforward inhibition.
Materials and Methods
Integrate-and-fire models.
To study the influence of various biophysical parameters on the shape and amplitude of the cross-correlation function between the spike trains of two neurons, we used integrate-and-fire models in which action potentials are generated from the underlying dynamics of the membrane potential (Gerstner and Kistler, 2002). These dynamics are given by the following: c_{m} dV/dt = −g_{m}V + g_{m}ψ(V) − I_{syn}(t), where the membrane potential V is determined with respect to the resting potential of the cell, c_{m} is the membrane capacitance of the neuron, g_{m} is the membrane conductance, g_{m}ψ(V) is a spike-generating current, and I_{syn} is the total current elicited by synaptic inputs to the neuron. We used c_{m} = 250 pF and g_{m} = 25 nS. These parameters are taken from in vitro recordings from layer V neocortical pyramidal neurons (Badel et al., 2008b), with g_{m} increased by a factor of 2–3 compared with in vitro values to account for in vivo-like synaptic background inputs. We studied two different versions of the integrate-and-fire model.
In the leaky integrate-and-fire (LIF) model, ψ(V) = 0, there is no spike-generation current, and an action potential (AP) is emitted when the membrane potential crosses a fixed threshold value V_{T}. The membrane potential is subsequently reset to a value V_{R}. We did not introduce any refractory period after the emission of an AP. The values used for the threshold and reset were V_{T} = 20 mV and V_{R} = 10 mV.
In the exponential integrate-and-fire (EIF) model, the spike-generation current is exponential: ψ(V) = Δ_{T} exp((V − V_{T})/Δ_{T}). Once the membrane potential crosses the threshold V_{T}, it diverges to infinity in finite time. This divergence represents the firing of an action potential. After the divergence, the membrane potential is reset to a value V_{R}. The values used in this study were V_{T} = 10 mV and V_{R} = 3 mV. The parameter Δ_{T} quantifies the sharpness of the AP initiation. We took here Δ_{T} = 1 mV, a typical value for pyramidal cells (Badel et al., 2008b).
The LIF model (Lapicque, 1907) presents the advantage of being analytically tractable. However, the absence of spike-generating currents and the fixed threshold for spike emission may induce some differences with the behavior of conductance-based models. The EIF model, in contrast, reproduces in the simplest possible way the spike initiation of Hodgkin–Huxley-type models (Fourcaud-Trocmé et al., 2003), the spike sharpness being described in the EIF model by the single parameter Δ_{T}. Moreover, it has been shown recently that the EIF model provides an excellent fit to the in vitro dynamics of the membrane potential in cortical pyramidal neurons (Badel et al., 2008b), with a fitted value for the spike sharpness of Δ_{T} ≈ 1 mV.
If the neuron receives a constant input current I_{syn}(t) = −I_{0}, we find it useful to define the effective rest potential μ = I_{0}/g_{m} as the value the membrane potential of the neuron would reach in the absence of threshold and spikegenerating mechanisms.
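As a concrete illustration of the two models, the sketch below integrates the membrane equation with the Euler–Maruyama method. The function name, time step, spike-cutoff rule for the EIF divergence, and noise convention are our own choices, not taken from the original study; the noise term is scaled so that the free membrane potential would have SD σ/√2, matching the convention used below.

```python
import numpy as np

def simulate_if(T=1.0, dt=1e-5, model="lif", c_m=250e-12, g_m=25e-9,
                V_T=0.020, V_R=0.010, Delta_T=0.001, I0=600e-12,
                sigma=0.0, seed=0):
    """Euler-Maruyama integration of the LIF/EIF membrane equation (sketch).

    V is measured from rest (volts). I0 is the mean input current, so the
    effective rest potential is mu = I0/g_m; sigma (volts) scales the
    white-noise term so the free membrane potential has SD sigma/sqrt(2).
    Returns the array of spike times.
    """
    rng = np.random.default_rng(seed)
    tau_m = c_m / g_m
    n_steps = int(T / dt)
    V = 0.0
    spikes = []
    for i in range(n_steps):
        # spike-generating current: zero for the LIF, exponential for the EIF
        psi = Delta_T * np.exp((V - V_T) / Delta_T) if model == "eif" else 0.0
        V += (-V + psi + I0 / g_m) * dt / tau_m \
             + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
        # LIF: fixed threshold; EIF: cut off the divergence a few Delta_T above V_T
        cutoff = V_T if model == "lif" else V_T + 10.0 * Delta_T
        if V >= cutoff:
            spikes.append((i + 1) * dt)
            V = V_R
    return np.array(spikes)
```

With σ = 0 and μ = I_{0}/g_{m} = 24 mV, the LIF parameters above are suprathreshold and the model fires regularly near 80 Hz (ISI = τ_{m} ln[(μ − V_{R})/(μ − V_{T})] ≈ 12.5 ms).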
Spike-train statistics and the cross-correlation function.
A spike train is represented as the following time series: n(t) = Σ_{j=1}^{p} δ(t − t_{j}), where t_{j} for j = 1,…, p is the series of spike times ordered in time on the interval [0,T]. It is often useful to work with Fourier transforms ñ(ω) = 𝓕[n] of spike trains, the Fourier transform of a function f being defined as follows: 𝓕[f](ω) = ∫ f(t)e^{−iωt} dt, so that ñ(ω) = Σ_{j=1}^{p} e^{−iωt_{j}}. The instantaneous firing rate ν(t) is defined as follows: ν(t) = ⟨n(t)⟩, where the brackets denote averaging over trials. If the firing is stationary, ν(t) = ν_{0} for all t.
For stationary firing, the regularity of the spike train is quantified using the coefficient of variation (CV), defined as the SD of the interspike intervals divided by their mean: CV = √(\overline{Δt_{j}^{2}} − \overline{Δt_{j}}^{2})/\overline{Δt_{j}}, where Δt_{j} = t_{j} − t_{j−1} is the jth interspike interval, and the bar denotes averaging over all interspike intervals in the spike train.
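In code, the CV of a stationary spike train can be estimated directly from the interspike intervals (a minimal sketch; the function name is ours):

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the interspike intervals of a spike train."""
    isi = np.diff(np.asarray(spike_times, float))
    return isi.std() / isi.mean()
```

A Poisson spike train has CV ≈ 1, whereas a perfectly regular train has CV = 0, which provides a quick sanity check of the estimator.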
The autocorrelation function of a spike train is defined as follows: A(t) = (1/ν_{0})⟨n(t′)n(t′ + t)⟩ − ν_{0}.
The Fourier transform of A(t), Ã(ω), is equal to the power spectrum of the spike train. It is given by the following: Ã(ω) = lim_{T→∞} (1/(ν_{0}T))⟨ñ(ω)ñ*(ω)⟩ − 2πν_{0}δ(ω).
The cross-correlation function between the spike trains n^{(1)}(t) and n^{(2)}(t) of two neurons is as follows: C(t) = (1/(ν_{0}^{(1)}ν_{0}^{(2)}))⟨n^{(1)}(t′)n^{(2)}(t′ + t)⟩ − 1, where ν_{0}^{(1)} and ν_{0}^{(2)} are the stationary firing rates of the two neurons.
The value of C(t) represents the variation of the firing rate of neuron 2, conditioned on the fact that neuron 1 fires t milliseconds earlier. With the normalization adopted here, this variation is expressed as a fraction of the baseline firing rate of neuron 2. Note that this normalization is different from the normalization by the geometric mean of the two firing rates commonly used in experimental studies.
The Fourier transform C̃(ω) of C(t), also called the cross-spectrum between neurons 1 and 2, is given by the following: C̃(ω) = lim_{T→∞} (1/(Tν_{0}^{(1)}ν_{0}^{(2)}))⟨ñ^{(1)}(ω)*ñ^{(2)}(ω)⟩ − 2πδ(ω). In this study, we determine C̃(ω) analytically and then recover C(t) using the inverse Fourier transform.
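For finite recordings, C(t) is typically estimated from binned spike counts. The following sketch implements the rate-normalized estimator described above; the bin width, function name, and explicit loop over lags are our choices, not part of the original methods.

```python
import numpy as np

def ccf_estimate(n1, n2, dt, max_lag_bins):
    """Rate-normalized cross-correlation C(t) from binned spike counts.

    n1, n2: spike counts per bin of width dt (seconds). Positive lags mean
    that neuron 1 fires earlier. Returns (lags in seconds, C).
    """
    n1 = np.asarray(n1, float)
    n2 = np.asarray(n2, float)
    T = len(n1) * dt
    r1, r2 = n1.sum() / T, n2.sum() / T
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    C = np.empty(lags.size)
    for k, lag in enumerate(lags):
        if lag >= 0:
            prod = n1[:len(n1) - lag] * n2[lag:]
        else:
            prod = n1[-lag:] * n2[:len(n2) + lag]
        # <n1(t') n2(t'+t)> / (nu1 nu2) - 1, with counts converted to rates
        C[k] = prod.mean() / (dt * dt * r1 * r2) - 1.0
    return lags * dt, C
```

For two independent Poisson trains the estimate fluctuates around zero, whereas feeding the same train twice produces a large peak at zero lag, as expected from the normalization.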
Synaptic inputs.
Synaptic inputs are modeled as transient conductance increases that cause a voltage-dependent inward or outward current flow in the postsynaptic neuron. More precisely, the postsynaptic current (PSC) elicited by a single presynaptic AP at time t_{0} is given by the following: I_{syn}(t) = g_{syn}(t)(V − E_{syn}), where E_{syn} is the synaptic reversal potential, and g_{syn} is the conductance of the synapse. The time evolution of g_{syn} after a presynaptic spike is given by a delayed exponential: g_{syn}(t) = g_{0} exp(−(t − t_{0} − δ_{s})/τ_{s}) for t ≥ t_{0} + δ_{s}, and g_{syn}(t) = 0 otherwise. Here g_{0} is the peak conductance, and δ_{s} and τ_{s} are the latency and decay time. Unless otherwise indicated, the values of the synaptic times are δ_{s} = 1.5 ms (Markram et al., 1997) and τ_{s} = 3 ms (Hestrin, 1993).
The maximum amplitude of the postsynaptic current fluctuates from presynaptic spike to presynaptic spike because of the voltage dependence in Equation 12. Its mean is given by (μ − E_{syn})g_{0}, where μ is the effective rest potential of the neuron. E_{syn} − μ represents the effective driving force of a synapse with reversal potential E_{syn} in the presence of fluctuating inputs.
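The delayed-exponential conductance time course can be written as a short function (a sketch; the peak conductance g_{0} = 1 nS is an arbitrary illustrative value, and the function name is ours):

```python
import numpy as np

def syn_conductance(t, t0=0.0, g0=1e-9, delta_s=1.5e-3, tau_s=3e-3):
    """Delayed-exponential conductance after a presynaptic spike at t0.

    Zero before t0 + delta_s, then g0 * exp(-(t - t0 - delta_s)/tau_s).
    """
    s = np.asarray(t, float) - t0 - delta_s
    return np.where(s >= 0.0, g0 * np.exp(-s / tau_s), 0.0)
```

The total conductance charge transferred is g_{0}τ_{s}, which is why, at fixed g_{0}, slower synapses have a stronger integrated effect (a point taken up in Results).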
Background synaptic activity.
Each cortical neuron receives a large number of synapses from other neurons, the typical estimate of this number being of the order of tens of thousands (Braitenberg and Schüz, 1991). In vivo, neurons are spontaneously active so that any neuron receives persistent background inputs as a result of the firing of its afferents. To study the effect of this background activity on crosscorrelation functions, we represent it as a compound background input to the neurons, as precisely described below.
If the amplitude of each postsynaptic current is small with respect to the threshold for spike generation, a large number of synaptic events is needed to cause the firing of an action potential. In such a situation, the cumulative conductance of many co-occurring synaptic inputs generated from random background activity can be described as a Gaussian random process, the so-called diffusion approximation (Tuckwell, 1988).
More precisely, in the case in which the neuron receives two types of synaptic inputs, excitatory and inhibitory, the total conductance attributable to background synaptic activity can be expressed, for each type of input, as the sum of a tonic and a fluctuating part as follows: g^{E}(t) = g_{0}^{E} + σ_{g}^{E}η^{E}(t) and g^{I}(t) = g_{0}^{I} + σ_{g}^{I}η^{I}(t), where g_{0}^{E} and g_{0}^{I} are the mean background synaptic conductances, respectively, for the excitatory (E) and the inhibitory (I) synapses, σ_{g}^{E} and σ_{g}^{I} are the SDs of the background synaptic conductances, and η^{E}(t) and η^{I}(t) are Gaussian stochastic processes of zero mean and unit SD. For the sake of analytical tractability, we assume that η^{E} and η^{I} are uncorrelated in time, i.e., that they are white-noise processes, although this is generally not the case because of the nonvanishing synaptic decay times τ_{s}. Including a finite correlation time in the background inputs modifies qualitatively the behavior of the LIF model (Brunel et al., 2001) but not the behavior of the EIF model (Fourcaud-Trocmé et al., 2003) or of cortical neurons in vitro (Köndgen et al., 2008).
The total postsynaptic current attributable to the background synaptic inputs is given by the following: I_{syn}(t) = g^{E}(t)(V − E_{E}) + g^{I}(t)(V − E_{I}), where E_{E} and E_{I} are the reversal potentials of excitatory and inhibitory synapses. This current can be decomposed into a sum of a voltage-independent term and a voltage-dependent term. Within the diffusion approximation, the voltage-independent term is described by a tonic and a fluctuating part as follows: I(t) = −I_{0} − g_{m}σ√(τ_{m})η(t), with η(t) a Gaussian white noise of zero mean and unit SD and τ_{m} = c_{m}/g_{m} the membrane time constant,
where the mean I_{0} and the SD σ can be expressed in terms of g_{0}^{E}, g_{0}^{I}, σ_{g}^{E}, and σ_{g}^{I} and the synaptic reversal potentials E_{E} and E_{I} (Richardson, 2004). This is the background synaptic current term in Equation 1. Throughout this study, we examine only the stationary situation in which I_{0} and σ are constant. Note that σ is expressed in millivolts. In the absence of a firing threshold, the SD of the membrane potential is equal to σ/√2.
The voltage-dependent part is simply given by (g_{0}^{E} + g_{0}^{I})V, i.e., it has no fluctuating part within the diffusion approximation (Richardson, 2004; Richardson and Gerstner, 2005). The only effect of the voltage-dependent part is thus to increase the membrane conductance of the neuron tonically, in a time-independent manner. We therefore incorporate this effect in the model by setting g_{m} to a value larger than typically measured in vitro in the absence of background inputs.
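The statement that the free (threshold-less) membrane potential has SD σ/√2 under this noise convention can be checked numerically. The sketch below is our own (time step, duration, and seed are arbitrary choices); it integrates the passive membrane driven by the white-noise term and compares the measured SD with the prediction.

```python
import numpy as np

def free_membrane_sd(sigma=0.008, tau_m=0.01, dt=1e-4, T=50.0, seed=1):
    """SD of the threshold-free membrane potential driven by white noise.

    Uses the voltage-scaled convention of the text:
    tau_m dV/dt = -V + sigma*sqrt(tau_m)*eta(t); prediction: sigma/sqrt(2).
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xi = rng.standard_normal(n)
    a = dt / tau_m
    noise_scale = sigma * np.sqrt(a)  # Euler-Maruyama step of the noise term
    V = np.empty(n)
    v = 0.0
    for i in range(n):
        v += -v * a + noise_scale * xi[i]
        V[i] = v
    return V[n // 10:].std()  # discard the initial transient
```

With σ = 8 mV the simulation returns a value close to 8/√2 ≈ 5.7 mV, consistent with the convention stated above.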
Linear response to synaptic inputs.
To evaluate the cross-correlation function between the spike trains of two neurons, we need to quantify how much a conductance change attributable to a single synapse modifies the instantaneous firing rate of a postsynaptic neuron that is in a stationary state. The current elicited by a synaptic conductance change g(t) is as follows: I(t) = g(t)(V − E_{syn}), where E_{syn} is the synaptic reversal potential.
We assume that the amplitude of this postsynaptic current is small, so that the resulting variation of the firing rate is small too and can be described as a linear variation around the stationary firing rate: ν(t) = ν_{0} + ∫_{0}^{∞} R_{n}(s)g(t − s) ds. Here we have averaged over trials, the conductance variation being identical in all trials; ν_{0} is the baseline firing rate attributable to tonic background inputs, and R_{n} is the linear response kernel to a synaptic conductance variation. A similar technique was used previously (Lindner et al., 2005). The linear response approximation can be written in frequency as follows: ν̃(ω) = 2πν_{0}δ(ω) + R̃_{n}(ω)g̃(ω), where R̃_{n} = 𝓕[R_{n}] is the linear response in frequency, and g̃ = 𝓕[g].
The postsynaptic current in Equation 18 can be decomposed into a sum of two components, a component −g(t)E_{syn} independent of the membrane potential of the postsynaptic cell, and a component g(t)V proportional to the membrane potential and equivalent to a variation of the membrane conductance. Correspondingly, the linear response to a synaptic input can be expressed as follows: R_{n}(t) = R_{E}(t)(E_{syn} − μ) + R_{shunt}(t), where R_{E} is the linear response kernel to a variation of the input current to the postsynaptic cell, μ is the effective rest potential of the neuron, and R_{shunt} is the linear response kernel to a variation of its membrane conductance. Equivalently, for the response in frequency, we write R̃_{n} = R̃_{E}(E_{syn} − μ) + R̃_{shunt}.
If E_{syn} is significantly different from μ, the total linear response is dominated by R_{E}: R_{n}(t) ≈ R_{E}(t)(E_{syn} − μ). The linear response function R_{E} is closely related to the Wiener kernel of the neuron (Poliakov et al., 1997) and the spike-triggered average (STA) input current (Paninski, 2006), as described in Appendix B. The full linear response kernel R_{n} is calculated in Appendix A.
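Once kernel and conductance are sampled on a common time grid, the linear-response relation ν(t) = ν_{0} + (R_{n} ∗ g)(t) reduces to a discrete convolution. The sketch below is generic: the kernel passed in is a stand-in for the R_{n} computed analytically in the Appendices, not that kernel itself.

```python
import numpy as np

def rate_response(g, R_kernel, dt, nu0):
    """Linear-response prediction nu(t) = nu0 + (R * g)(t) on a grid of step dt.

    g: conductance samples; R_kernel: causal firing-rate kernel sampled at
    the same dt (a hypothetical stand-in for the kernels of the text).
    """
    # discrete causal convolution, truncated to the length of the input
    return nu0 + np.convolve(g, R_kernel)[:len(g)] * dt
```

With a delta-like kernel the output simply tracks the conductance, and with an exponential kernel a constant input saturates at ν_{0} plus the kernel's integral times the input amplitude, as expected from the convolution.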
Dominant timescale approximation for R_{n}.
The linear kernel R_{n} can be written, for t > 0, as a sum of exponential modes: R_{n}(t) = Σ_{j} a_{j}e^{z_{j}t}, where {a_{j}} and {z_{j}} are complex numbers, with 0 > Re(z_{1}) > Re(z_{2}) > … (for more details, see Appendix C).
The dominant timescale approximation consists in keeping only the long-time asymptotic behavior of R_{n}(t), i.e., the exponential of the pole z_{1} with the least negative real part. If the SD σ of the background noise term is small, the dominant pole has a nonzero imaginary part, and we write it as z_{1} = −1/τ_{1} + iω_{0}. In the dominant pole approximation, R_{n} then becomes the following: R_{n}(t) ≈ 2|a_{1}|e^{−t/τ_{1}} cos(ω_{0}t + φ_{1}), where φ_{1} is the phase of a_{1}. If σ is larger than a critical value, z_{1} becomes real, and we write it as z_{1} = −1/τ_{1}, so that in the dominant pole approximation, the following holds true: R_{n}(t) ≈ a_{1}e^{−t/τ_{1}}.
Results
Firing-rate responses to current and conductance variations
The cross-correlation function between the spike trains of two neurons quantifies the average temporal variation of the firing rate of one neuron relative to the firing time of the other neuron (Perkel et al., 1967). This variation of the firing rate around its baseline value is ultimately attributable to synaptic inputs, arising for example from a direct synapse between the two neurons or from common or correlated inputs to the two neurons. To describe the cross-correlation function in any circuit, one thus first needs to quantify the effect of a change of its input synaptic conductance on the firing rate of a neuron. It is important to note that the average response to a given synaptic input depends on the baseline state of the neuron, i.e., its baseline firing rate ν_{0} but also the regularity of this firing as quantified by its CV. These properties of the baseline state are set by the background synaptic inputs to the neuron. We therefore systematically examine the influence of stationary background inputs (see Materials and Methods) on the response properties of the neuron.
If the amplitude of the input conductance variation is small compared with the firing threshold of the neuron—as is often the case when only a subset of presynaptic neurons fires—the corresponding variation of the firing rate is proportional to the amplitude of the conductance variation and can be described by the linear response of the firing rate of the neuron to synaptic inputs (see Materials and Methods), i.e., the temporal filter mapping the synaptic conductance variation to a variation of the firing rate. This linear filter can be specified either in time, by the firing-rate response function R_{n}, or in frequency, by the Fourier transform of R_{n}, denoted R̃_{n}. Before studying cross-correlation functions, we describe here the properties of R_{n} and R̃_{n} for two models of neurons, leaky integrate-and-fire and exponential integrate-and-fire, in the presence of background synaptic noise that induces a baseline firing specified by its rate ν_{0} and CV. To disentangle the influences of the baseline firing rate ν_{0} and the baseline firing regularity on the response, we systematically adjust the mean input to keep ν_{0} constant while varying the SD σ of the background noise. Increasing σ then results in increasing the CV at fixed firing rate (Fig. 1D).
The response function R_{n} can be written as follows: R_{n}(t) = R_{E}(t)(E_{syn} − μ) + R_{shunt}(t), where μ is the effective rest potential, R_{E} is the firing-rate response to a varying input current, and R_{shunt} is the response to a variation of the membrane conductance (for additional details, see Materials and Methods). If the synaptic reversal potential E_{syn} is sufficiently different from the effective rest potential μ, i.e., if |E_{syn} − μ| is large, the response function R_{n} to a synaptic input is essentially equivalent to the response to a varying input current, R_{E}. Conversely, if E_{syn} is close to μ, then R_{n} depends strongly on the response R_{shunt} to a variation of the membrane conductance. We therefore describe R_{E} and R_{shunt} separately in the following.
Response to current modulations
The linear response function R̃_{E} to input current modulations of different frequencies has been studied theoretically (Brunel et al., 2001; Fourcaud-Trocmé et al., 2003; Richardson, 2007) and experimentally (Köndgen et al., 2008; Boucsein et al., 2009), whereas its equivalent in time, R_{E}, is closely related to the Wiener kernel (Poliakov et al., 1997) and the STA current of the neuron (Paninski, 2006; Badel et al., 2008a) (see Appendix).
For both the LIF and EIF models, the response R̃_{E} to modulated current is essentially a low-pass filter, the cutoff frequency of which decreases with increasing background synaptic noise. For low background synaptic noise (CV ≲ 0.5), R̃_{E}(ω) displays resonances at frequencies that are multiples of the baseline firing rate ν_{0} (Fig. 1A), and the response in time R_{E}(t) displays oscillations at a frequency equal to the baseline firing rate ν_{0} (Fig. 1C). Qualitatively, in this regime in which the neuron spikes regularly with a period T, the effect of a given input at time t_{0} is to shift the times of the following spike emissions. Specifically, for the LIF and EIF neurons, an excitatory input advances the next spike, especially when this spike follows closely the input arrival. The effect is to increase the probability of spike emission at the input time t_{0} and, because spikes are emitted regularly, also at times that follow it by an integer number of periods (t_{0} + T, t_{0} + 2T,…). Conversely, between these regularly spaced times of increased spiking probability, the differential shift of spike times by the input decreases the probability of spike emission. For high background noise (CV ≳ 0.5), R̃_{E}(ω) is instead a pure low-pass filter with a cutoff frequency of the order of ν_{0}, and R_{E}(t) decays monotonically (Fig. 1C). Qualitatively, for high background noise, an excitatory input at time t_{0} simply increases the probability of spike emission in a time range after t_{0}.
The time course of R_{E} at short times is determined by the asymptotic behavior of R̃_{E} at high frequencies. For the leaky integrate-and-fire model, the amplitude of R̃_{E} decays asymptotically as 1/√ω, so that R_{E}(t) diverges at short times; for the exponential integrate-and-fire model, the amplitude of R̃_{E} decays as 1/ω, and R_{E}(t) remains finite at short times (Fourcaud-Trocmé et al., 2003).
The asymptotic behavior of R_{E}(t) at long times can be described using the dominant mode approximation (see Materials and Methods and Appendix C). For high background noise, this approximation predicts that R_{E}(t) decays exponentially with a time constant τ_{1} that can be calculated from Equation 49 (Appendix B). This time constant determines the longest timescale in the dynamics of the firing rate and sets a limit on the rate of variation of the firing frequency in response to a varying input current. Interestingly, the value of τ_{1} depends on the input statistics, i.e., on the SD of the background noise σ and the baseline firing frequency ν_{0} of the neuron. In Figure 1F, the time constant τ_{1} is displayed as a function of the SD of background noise σ, the firing rate ν_{0} being held constant by adjusting the mean background input while σ is varied. For the LIF model, τ_{1} increases with σ, whereas for the EIF model, it reaches a maximum at intermediate values of σ and decreases for larger values of σ. In both models, τ_{1} decreases with increasing ν_{0}. In particular, it appears to be always smaller than the membrane time constant τ_{m}.
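When the long-time decay is dominated by a single real mode, τ_{1} can be read off numerically from the exponential tail of a kernel by a log-linear fit. The sketch below is our own illustration of that procedure; it assumes the tail is positive and effectively single-exponential over the fitted window.

```python
import numpy as np

def dominant_timescale(t, R):
    """Estimate the slowest decay time tau_1 from the tail of a kernel R(t).

    Fits log R(t) linearly over the second half of the time window, where
    faster modes have died out; assumes R is positive there.
    """
    t = np.asarray(t, float)
    R = np.asarray(R, float)
    tail = t > t[-1] / 2.0
    slope, _ = np.polyfit(t[tail], np.log(R[tail]), 1)
    return -1.0 / slope
```

Applied to a synthetic two-exponential kernel, the fit recovers the slower of the two time constants, since the faster mode is negligible in the fitted window.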
The amplitude σ of the background noise strongly modulates the amplitude of the response and affects the response differently at low and high frequencies. The response R̃_{E} at low frequencies is given by the gain of the neuron, i.e., the variation of its background firing rate when its mean input current is changed. The gain of a neuron is known to depend on the amplitude of background noise (Chance et al., 2002). Figure 1E shows that, for both models studied, the gain decreases with increasing σ, the magnitude of this variation depending on the baseline firing rate ν_{0}. The effect of noise on the amplitude of the response at high frequencies is different from its effect on the gain and depends on the model: for the LIF model, R̃_{E}(ω) scales as 1/σ for large ω, whereas for the EIF model, it is independent of σ (Fourcaud-Trocmé et al., 2003).
Response to membrane conductance modulations
If the synaptic reversal potential E_{syn} is close to the effective rest potential μ of the neuron, the membrane conductance variation after a synaptic input plays an important role in the firing rate response. In particular, if E_{syn} is exactly equal to the effective rest potential μ, then the firing rate response is entirely attributable to shunting, i.e., R̃_{n} = R̃_{shunt}. The influence of the variation of membrane conductance on the firing response can be understood in the LIF model as a superposition of two effects, a variation of the membrane timescale of the neuron and a variation of the amplitude of the fluctuating input (for details, see Appendix A). These two effects compete against each other: increasing the conductance decreases the effective time constant, leading to an instantaneous increase of the firing rate, but it also decreases the effective level of noise, the effect of which depends on whether the mean input is subthreshold (μ < V_{T}) or suprathreshold (μ > V_{T}). In the case of subthreshold input, the shunting part R_{shunt} of the linear response in time R_{n} is biphasic, with a fast negative part and a slower positive part. In contrast, for suprathreshold inputs, R_{shunt} is positive at all times. These two cases are illustrated in Figure 2, where ν_{0} = 30 Hz corresponds to a subthreshold input and ν_{0} = 65 Hz corresponds to a suprathreshold input.
In summary, in this section, we have examined the firing-rate response of a neuron to a time-varying conductance input. We have shown that both the amplitude and the timescales of this response strongly depend on the baseline firing statistics of the neuron, set by the background synaptic input.
Correlations arising from a direct synaptic connection
Having characterized the response of a neuron to synaptic inputs, we now turn to the cross-correlation function between the spike trains of two neurons arranged in different circuits. We first examine the cross-correlations attributable to a direct synaptic connection between two neurons (Fig. 3). Assuming that the presynaptic and the postsynaptic neurons receive background synaptic inputs that are uncorrelated with each other, the cross-spectrum can be expressed as follows: C̃(ω) = R̃_{n}(ω)g̃_{syn}(ω)Ã_{pre}(ω)/ν_{0}^{(post)}, where R̃_{n} is the linear response function of the postsynaptic neuron (see previous section) (Figs. 1, 2), g̃_{syn} is the Fourier transform of the synaptic conductance time course after a single presynaptic spike (see Materials and Methods), Ã_{pre}(ω) is the power spectrum of the spike train of the presynaptic neuron, and ν_{0}^{(pre)} (resp. ν_{0}^{(post)}) is the stationary firing rate of the presynaptic (resp. postsynaptic) neuron. Details of the derivation are provided in Appendix D.
Because R̃_{n} = R̃_{E}(E_{syn} − μ) + R̃_{shunt}, the cross-correlation function can be written as C(t) = C_{E}(t) + C_{shunt}(t), with C_{E}(t) purely induced by current variations and proportional to the amplitude g_{0}(E_{syn} − μ) of the postsynaptic current, and C_{shunt}(t) purely induced by membrane conductance variations and proportional to the peak synaptic conductance g_{0}. If E_{syn} − μ is large enough, C_{shunt}(t) is negligible with respect to C_{E}(t). We therefore first examine current-induced cross-correlations and later examine the influence of conductance variations for E_{syn} close to μ. Note that C_{E}(t) is proportional to (E_{syn} − μ), the sign of which determines whether the synapse is excitatory or inhibitory.
Poisson presynaptic firing
It is instructive to consider first the situation in which the firing of the presynaptic neuron is a Poisson process, in which case the power spectrum Ã_{pre}(ω) is constant. The cross-correlation function C_{E}(t) is then simply given by the synaptic conductance time course filtered through the linear response function R_{n}. In particular, C_{E}(t) at negative times is necessarily zero, and a nonzero synaptic delay δ_{s} simply shifts C_{E}(t) by δ_{s} toward positive times.
Because R_{E} filters out high frequencies (Fig. 1A,B), the time course of current-induced cross-correlations is always slower than the time course of the postsynaptic current (Fig. 3A,B). For low background noise (Fig. 3A), the resonances in R̃_{E} (Fig. 1A) lead to secondary peaks at positive times in the cross-correlation function C_{E}(t), located at multiples of the baseline interspike interval 1/ν_{0}. For high background noise, C_{E}(t) displays a single peak, as shown in Figure 3B. In that case, within the dominant timescale approximation, the cross-correlation function can be expressed as a difference of two exponentials: C_{E}(t) = C_{0}(e^{−(t−δ_{s})/τ_{1}} − e^{−(t−δ_{s})/τ_{s}}) for t > δ_{s}, and C_{E}(t) = 0 otherwise, where δ_{s} and τ_{s} are the latency and the decay time of the synaptic conductance, τ_{1} is the dominant timescale in R_{n}(t) (Fig. 1C), and C_{0} is a constant given in Appendix C. Figure 3B shows that the dominant mode approximation describes well the asymptotic decay of C_{E}(t) at large times, the timescale of that decay being given by the maximum between the neuronal timescale τ_{1} and the synaptic timescale τ_{s}, as seen from Equation 28. The dominant mode approximation, however, fails to describe the short-time behavior of C_{E}(t), which is determined by the fast components of R_{E}(t) at small t. The main difference between the LIF and the EIF model is that, for the LIF model, R_{E} diverges at small t, which leads to a fast rise of C_{E}(t) at small t, whereas in the EIF model, R_{E} remains finite and correspondingly the rise of C_{E}(t) at small t is slower. Despite this qualitative difference at short times, the CCFs for the two models end up looking very similar (Fig. 3, compare B panels). This is in part attributable to the fact that, for the values of the noise used in Figure 3B, the dominant timescale τ_{1} is very similar in both models.
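The difference-of-two-exponentials shape and its peak time can be evaluated directly. In the sketch below, C_{0} is set to 1 rather than to the Appendix C value, and τ_{1} = 5 ms is a hypothetical dominant timescale; with δ_{s} = 1.5 ms and τ_{s} = 3 ms, the peak sits at δ_{s} + ln(τ_{1}/τ_{s})/(1/τ_{s} − 1/τ_{1}).

```python
import numpy as np

def ccf_dominant(t, C0=1.0, delta_s=1.5e-3, tau_s=3e-3, tau_1=5e-3):
    """Dominant-timescale CCF: difference of two exponentials starting at the
    synaptic latency delta_s (C0 and tau_1 are hypothetical values here)."""
    s = np.maximum(np.asarray(t, float) - delta_s, 0.0)
    return C0 * (np.exp(-s / tau_1) - np.exp(-s / tau_s))
```

Setting the derivative of the bracket to zero gives the peak-time formula above; the tail decays with the slower of the two timescales, here τ_{1}.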
The peak time and amplitude of the cross-correlation function C_{E}(t) naturally depend on the biophysical parameters of the synaptic connection, its peak conductance g_{0} and decay timescale τ_{s}. Because we are using a linear approximation, the amplitude of C_{E}(t) is proportional to g_{0}. In Figure 4, we examine the accuracy of the linear approximation by comparing our theoretical predictions with numerical simulations for different values of the PSC amplitude g_{0}(μ − E_{syn}). The level of accuracy of the linear approximation depends on the ratio between the PSC amplitude and the noise amplitude: for high noise, the approximation is excellent over a wide range of physiological PSC amplitudes; in contrast, for low noise, deviations are seen for large PSCs. Note that the linear approximation does not capture the asymmetry between inhibitory and excitatory CCFs, as pointed out previously (Herrmann and Gerstner, 2001).
Because the cross-correlation function is a filtered version of the synaptic conductance time course, its peak time and amplitude increase with τ_{s}, as seen in Figure 3, C and D. For fixed g_{0}, longer-lasting postsynaptic currents thus have a stronger effect on the firing of the postsynaptic neuron. Such a dependence is qualitatively described by the dominant timescale approximation (see Appendix D), although this approximation is not very accurate quantitatively.
The peak time and amplitude of the cross-correlation function are not set by the synaptic properties alone but depend strongly on the statistics of the baseline firing of the postsynaptic neuron. The effect of the background synaptic noise amplitude is shown in Figure 3E. The firing rate of the neuron is kept fixed by adjusting the mean input, so that only its CV varies with the noise amplitude. Remarkably, one sees that, for the LIF model, the cross-correlation amplitude decreases inversely proportionally to the SD σ of the background noise. In other words, the regularity of the baseline firing of the postsynaptic neuron has a strong influence on the amplitude of the cross-correlations: for a given synaptic peak conductance and decay time, the more regularly the postsynaptic neuron fires, the larger the cross-correlation amplitude. This is also true for the EIF model, although in that model the modulation of the cross-correlation amplitude with noise strength is somewhat weaker, as shown in Figure 3E. For both models, the cross-correlation amplitude decreases with increasing baseline firing rate ν_{0}, but varying ν_{0} modulates the cross-correlation amplitude more weakly than varying σ. Conversely, the peak time of C(t) is relatively insensitive to σ, for both the LIF and EIF models. The peak time decreases with increasing firing rate, in agreement with the predictions of the dominant timescale approximation.
For the sake of comparison with experimental studies, in Figure 3, C and E, the peak value of the crosscorrelation function is given for a synaptic conductance corresponding to a postsynaptic current amplitude of 60 pA and a synaptic decay time τ_{s} = 3 ms, yielding a postsynaptic potential (PSP) peak of 0.5 mV, a typical order of magnitude for AMPA-dependent EPSPs in the cortex (Markram et al., 1997; Sjöström et al., 2001; Holmgren et al., 2003; Barbour et al., 2007). For background synaptic inputs corresponding to a firing rate ν_{0} = 30 Hz and CV = 0.9 (σ = 8 mV in the LIF model), our analysis predicts a peak crosscorrelation of 0.15, meaning that the synaptic inputs from the presynaptic neuron increase the firing rate of the postsynaptic neuron by 15% with respect to the baseline firing rate. If the background noise amplitude is halved to σ = 4 mV, the CV only decreases to 0.7, but the amplitude of the crosscorrelations doubles to 0.3.
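The 1/σ scaling behind these numbers can be checked with a two-line calculation (the proportionality constant k below is hypothetical, chosen only so that the quoted values are reproduced):

```python
def peak_ccf_lif(psc_pa, sigma_mv, k=0.02):
    """Peak crosscorrelation ~ k * PSC / sigma for the LIF model.
    k is an illustrative constant, not derived from the model."""
    return k * psc_pa / sigma_mv

c_high_noise = peak_ccf_lif(60.0, 8.0)  # sigma = 8 mV -> 0.15
c_low_noise = peak_ccf_lif(60.0, 4.0)   # sigma = 4 mV -> 0.30
```

Halving σ doubles the peak, exactly as in the worked example above.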
Influence of the firing statistics of the presynaptic neuron
If the firing of the presynaptic neuron is not Poissonian, the autocorrelation of the presynaptic firing also plays a role in the crosscorrelation function (see Eq. 27). We examined the effect of the presynaptic autocorrelation for the LIF model, for which the power spectrum Ã_{pre}(ω) can be calculated analytically (see Appendix E). Depending on the level of background synaptic noise, the LIF neuron can be found in one of the three following regimes, illustrated in Figure 5A: (1) for weak background noise (CV < 0.5), the neuron fires rhythmically, and its autocorrelation function exhibits peaks at multiples of the mean interspike interval; (2) for intermediate background noise (0.5 < CV < 1), the presynaptic neuron fires irregularly but exhibits a noise-dependent refractory period around zero in the autocorrelation function; (3) for strong background noise (CV > 1), the neuron tends to fire in bursts, which results in positive autocorrelation at short time lags. The effect of the presynaptic autocorrelation on the crosscorrelation function is most prominent at negative times (corresponding to the postsynaptic neuron firing before the presynaptic one), where C(t) is essentially given by the autocorrelation function of the presynaptic neuron. This is illustrated in Figure 5B in the three firing regimes of the presynaptic neuron. For weak noise in the presynaptic neuron, C(t) displays periodic secondary peaks at both positive and negative times, corresponding to the period T of the firing of the presynaptic neuron. The peaks correspond to increased postsynaptic spiking arising from the presynaptic spikes at (…, −2T, −T, 0, T, 2T, …). For intermediate noise in the presynaptic neuron, regular spiking disappears, and C(t) simply displays a dip close to zero attributable to the refractory period of the presynaptic neuron.
For strong noise in the presynaptic neuron, C(t) displays positive values close to zero as a result of the bursts in the presynaptic neuron. It is important to note that the firing statistics of the presynaptic neuron only marginally affect the amplitude of the crosscorrelation function, in contrast to the firing statistics of the postsynaptic neuron. Interestingly, the amplitude of the CCF increases when the presynaptic CV increases, whereas it decreases when the postsynaptic CV increases. Figure 5C displays the effect of the presynaptic firing rate on C(t) at intermediate presynaptic noise: the effect of the presynaptic refractory period increases with increasing presynaptic firing rate.
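The regime boundaries above can be turned into a minimal classifier based on the CV of the interspike intervals (the CV thresholds 0.5 and 1 are taken from the text; the example spike trains are synthetic):

```python
import numpy as np

def firing_regime(spike_times):
    """Classify firing from the ISI coefficient of variation."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    cv = isi.std() / isi.mean()
    if cv < 0.5:
        return cv, "rhythmic"   # weak noise: AC peaks at multiples of the mean ISI
    if cv <= 1.0:
        return cv, "irregular"  # intermediate noise: refractory dip near zero
    return cv, "bursty"         # strong noise: positive autocorrelation at short lags

regular = np.arange(0.0, 1.0, 0.02)                            # constant ISI -> CV ~ 0
bursty = np.cumsum([0.002] * 9 + [0.5] + [0.002] * 9 + [0.5])  # short/long ISI mix
```

In practice the full autocorrelogram, not just the CV, distinguishes the regimes, but the CV already sorts the three cases described here.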
Shunting effects
If the synaptic reversal potential E_{syn} is close to the effective resting potential μ of the postsynaptic neuron, membrane conductance variations have an important influence on the crosscorrelation function. Figure 6 displays the total crosscorrelation function C(t) = C_{E}(t) + C_{shunt}(t) for E_{syn} close to the effective rest potential μ. For E_{syn} = μ, at the crossing point between inhibition and excitation, the full crosscorrelation is given by C_{shunt}. If the input is subthreshold (μ < V_{T}), R_{shunt} is biphasic, so that C(t) is biphasic with a fast inhibitory phase at short times and a slower excitatory phase at long times (Fig. 6A). For E_{syn} close to μ, depending on the sign of E_{syn} − μ, the excitatory or inhibitory phase of C_{shunt}(t) is amplified by the crosscorrelation C_{E}(t) arising from current variations. For E_{syn} − μ negative but small, C_{E}(t) approximately cancels the slow excitatory phase of C_{shunt}, and only the fast inhibitory part of C_{shunt} remains in C(t). Note that this inhibitory part is as fast as the postsynaptic current and thus significantly faster than crosscorrelations arising from current variations alone.
If the input is suprathreshold, R_{shunt} is monophasic, and C_{shunt}(t) is purely excitatory for E_{syn} = μ. C(t) becomes biphasic when the effective resting potential μ is slightly below the synaptic reversal potential (Fig. 6C), owing to a competition between an excitatory shunting effect and an inhibitory effect arising from the negative effective driving force. The mean background input level at which the transition between an excitatory and an inhibitory CCF occurs thus depends on the baseline firing rate.
If the effective driving force E_{syn} − μ is of the order of tens of millivolts, current variations dominate, and shunting hardly affects the amplitude of C(t). However, it modifies the time course of C(t) with respect to the CCF C_{E}(t) purely induced by current variations: the time course of C(t) is faster than that of C_{E}(t) in the case of inhibition and slower in the case of excitation. This is illustrated in Figure 6B for subthreshold inputs and E_{syn} − μ = 10 mV and in Figure 6D for suprathreshold input and E_{syn} − μ = 30 mV.
To summarize, we have shown that the primary peak of the crosscorrelation function arising from a synaptic connection corresponds to a filtered postsynaptic current. The amplitude and shape of this peak therefore depend on the peak postsynaptic current and the decay time of the corresponding synapse. However, the amplitude and shape are not determined by synaptic properties alone but are strongly modulated by the baseline firing statistics of the postsynaptic neuron, in particular the regularity of its firing. Conversely, the firing statistics of the presynaptic neuron have only a minor effect on correlations at short times. Finally, if the synaptic reversal potential is close to the effective rest potential of the postsynaptic neuron, shunting effects can either accelerate (in the case of inhibition) or slow down (in the case of excitation) the time course of the crosscorrelation function.
Correlations arising from common inputs
Correlations between the spike trains of two neurons can be induced by common or correlated inputs to the two neurons, even in the absence of a direct synaptic connection between them (Sears and Stagg, 1976; Binder and Powers, 2001; Türker and Powers, 2001, 2002; de la Rocha et al., 2007; Shea-Brown et al., 2008; Tchumatchenko et al., 2008). Assuming that the two neurons, labeled 1 and 2, receive a spike train n_{pre} from N_{pre} common presynaptic neurons on top of other, uncorrelated background inputs (Fig. 7A), the cross-spectrum between neurons 1 and 2 can be expressed as follows:

C̃(ω) = ν_{0}^{(pre)} Ã_{pre}(ω) R̃_{n}^{(1)}(ω) g̃_{syn}^{(1)}(ω) [R̃_{n}^{(2)}(ω) g̃_{syn}^{(2)}(ω)]*, (29)

where the asterisk denotes complex conjugation, R̃_{n}^{(1)} and R̃_{n}^{(2)} are the linear response functions of neurons 1 and 2, g̃_{syn}^{(1)} and g̃_{syn}^{(2)} are the Fourier transforms of the synaptic conductance time courses in the two neurons after a single presynaptic spike, and ν_{0}^{(pre)} and Ã_{pre}(ω) are the stationary rate and power spectrum of the activity n_{pre} in the common presynaptic network of the two neurons (n_{pre} is obtained by superposing the spike trains of all N_{pre} neurons in the common presynaptic network). Here we assume that the synaptic properties (reversal potential, peak conductance, and decay time) for each neuron are identical for all inputs from the common presynaptic network. Details of the derivation are provided in Appendix D.
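Numerically, this product structure makes the CCF easy to evaluate: build each factor on a frequency grid, multiply, and invert by FFT. The sketch below does this for illustrative choices of the factors (a single-pole response function and an exponential PSC; these functional forms and all parameter values are assumptions for illustration, not the model's exact R̃_{n}):

```python
import numpy as np

def ccf_common_poisson(tau_r1, tau_r2, tau_s, g0=1.0, nu_pre=10.0, a0=1.0,
                       n=4096, dt=1e-4):
    """CCF from common Poisson input (flat presynaptic spectrum A(w) = a0),
    composed in the frequency domain and inverted by FFT. Uses illustrative
    single-pole response functions R(w) = 1/(1 + i*w*tau_r) and an exponential
    PSC with Fourier transform g0*tau_s/(1 + i*w*tau_s)."""
    iw = 2j * np.pi * np.fft.fftfreq(n, dt)
    r1 = 1.0 / (1.0 + iw * tau_r1)
    r2 = 1.0 / (1.0 + iw * tau_r2)
    gs = g0 * tau_s / (1.0 + iw * tau_s)
    c_w = nu_pre * a0 * (r1 * gs) * np.conj(r2 * gs)  # cross-spectrum product
    c_t = np.fft.ifft(c_w).real
    lags = (np.arange(n) - n // 2) * dt
    return lags, np.fft.fftshift(c_t)

lags, c = ccf_common_poisson(tau_r1=0.005, tau_r2=0.005, tau_s=0.003)
```

For identical neurons the cross-spectrum is real and nonnegative, so the resulting C(t) is symmetric with its maximum at zero lag, as discussed below.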
For each neuron, the response function R̃_{n} is a sum of a currentdependent term R̃_{E}(E_{syn} − μ) and a conductancedependent term R̃_{shunt}. We consider only the currentdominated regime in which E_{syn} − μ is large so that C(t) is proportional to g_{0}^{2}(E_{syn} − μ)^{2}N_{pre}^{2}, i.e., to the square of the peak PSC of a single synapse, multiplied by the number N_{pre} of common presynaptic neurons. Within this approximation, inhibitory and excitatory common inputs lead to identical crosscorrelations. For E_{syn} close to the effective rest potential μ, the shunting term prevents C(t) from going to zero but does not change the qualitative shape of C(t) (data not shown).
We first examine the crosscorrelations resulting from uncorrelated Poisson common inputs, corresponding to constant presynaptic power spectrum Ã_{pre}(ω) = A_{0} and later discuss the additional effect of synchrony in common inputs.
Asynchronous inputs
If the two neurons have identical properties, meaning that their firing statistics and synaptic inputs are identical, Equation 29 predicts that the crosscorrelation function C(t) is symmetric around its maximum. Here we consider only the situation in which the common inputs reach the two cells simultaneously, in which case the maximum is located at the origin; if the inputs reach the two cells with different delays, the maximum is shifted away from the origin. Figure 7, A and B, displays the crosscorrelation function C(t) in the two cases of low and high background noise. For low background noise (CV = 0.2), C(t) displays a central peak as well as secondary peaks at multiples of the firing period, as a result of the resonance present in R̃_{E}. Hence, common stochastic inputs induce oscillatory synchronization between the two neurons. This observation corresponds to the well known phenomenon of noise-induced synchronization (Pikovsky et al., 1997; Ermentrout et al., 2008).
As noise is increased, the amplitude of the secondary peaks decreases, and, for moderately high background noise (CV = 0.8), C(t) has a single central peak. In that case, using the dominant timescale approximation for R̃_{E}, the crosscorrelation function can be expressed as the difference of two exponentials:

C(t) = C_{0}[τ_{1}exp(−|t|/τ_{1}) − τ_{s}exp(−|t|/τ_{s})]/(τ_{1} − τ_{s}),

where τ_{s} is the decay time of the synaptic conductance, τ_{1} is the dominant timescale in R_{n}(t) (Fig. 1F), and C_{0} is a constant given in Appendix D. As shown in Figure 7B, this approximation typically underestimates the peak of C(t) at zero but captures well the decay of C(t), the timescale of which is given by the maximum of the synaptic timescale τ_{s} and the neuronal timescale τ_{1}.
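For reference, a difference-of-two-exponentials shape of this kind is easy to evaluate; the helper below uses one consistent parameterization, normalized so that C(0) = C_{0} (the weighting is an illustrative choice, with all prefactors absorbed into C_{0}):

```python
import numpy as np

def ccf_two_exp(t, tau_1, tau_s, c0=1.0):
    """Difference of two exponentials in |t|: symmetric around zero,
    equal to c0 at t = 0, tail decaying on max(tau_1, tau_s)."""
    a = np.abs(np.asarray(t, dtype=float))
    return c0 * (tau_1 * np.exp(-a / tau_1)
                 - tau_s * np.exp(-a / tau_s)) / (tau_1 - tau_s)

t = np.linspace(-0.05, 0.05, 501)
c = ccf_two_exp(t, tau_1=0.01, tau_s=0.003)
```

With τ_{1} = 10 ms and τ_{s} = 3 ms, the tail for |t| well beyond τ_{s} decays as exp(−|t|/τ_{1}), i.e., the slower of the two timescales dominates.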
The synaptic parameters of the common inputs have an important influence on the shape and amplitude of the crosscorrelation function. The dominant timescale approximation shows that the decay of the crosscorrelation function cannot be faster than the timescale τ_{s} of synaptic decay. The amplitude of the crosscorrelation function is proportional to the square of the peak synaptic conductance but also depends on the synaptic timescale τ_{s}. Figure 7C shows that the peak value of C(t) increases approximately linearly with τ_{s}, in contrast to the case of correlations generated by a direct synaptic connection between the two neurons, where τ_{s} has a weaker effect on the amplitude of C(t). Note that this linear trend is not captured by the dominant timescale approximation.
The shape and amplitude of the crosscorrelation function are not fully determined by the synaptic properties of the common inputs but are again strongly modulated by the firing statistics of the two neurons, which are set by the independent background inputs to each of them. In particular, Figure 7D shows that changing the regularity of firing while keeping the firing rate constant strongly modulates the amplitude of the crosscorrelation function, the peak of which decreases approximately quadratically with the SD σ of the background noise in the LIF model. In the EIF model, the dependence on σ is somewhat weaker but nevertheless important. The amplitude of the crosscorrelation function is also modulated by the baseline firing rate ν_{0}, but in a weaker manner. In both LIF and EIF models, the maximum of C(t) decreases with increasing ν_{0}.
Effects of heterogeneity
The crosscorrelation function C(t) is symmetric around the origin as long as all properties of the two neurons are identical. This symmetry is, however, broken if the firing properties (firing rate and coefficient of variation) of the two neurons are different. Figure 8, A and B, displays the crosscorrelation functions for neurons firing at different rates but with identical CVs, compared with the symmetric case. For high background noise, even if the heterogeneity is strong (neuron 1 firing at 10 Hz and neuron 2 firing at 100 Hz), the crosscorrelation function remains highly symmetric and very close to the homogeneous case. Note that, if the two neurons have different firing thresholds but identical firing rate and CV, the crosscorrelation function is perfectly symmetric. For low background noise (Fig. 8A), the two different firing rates lead to secondary peaks at different periods on the two sides of the origin. This is a consequence of causality: an input modifies spiking after its arrival, not before. Qualitatively, observing a spike in neuron 1 at t = 0 increases the probability that an excitatory input has arrived just before t = 0. Because some of the inputs are common to the two neurons, this also increases the probability of a spike in neuron 2 at approximately t = 0, and subsequently at approximately t = T_{2}, 2T_{2}, …, when neuron 2 is spiking regularly with period T_{2}. Because conditioning the firing of neuron 1 on the spikes of neuron 2 amounts to time inversion, the same reasoning explains why, for negative t, the peaks in the correlation function occur at the firing period of neuron 1. The full crosscorrelation function is therefore highly asymmetric, even though the two neurons receive the same input arriving with the same delay.
Altogether, these results suggest that the symmetry of the crosscorrelation function allows common inputs to be reliably distinguished from a direct synaptic connection in the case of strong background noise, but not in the case of low background noise, the two cases being distinguishable by the presence of secondary peaks.
Effects of the autocorrelation of common inputs
So far, we have considered only the situation in which the activity in the common presynaptic network does not display correlations. If that activity displays some amount of short-term temporal correlation, an important question is to what extent these correlations are transmitted to the two neurons receiving the common inputs. If the presynaptic activity is correlated on a timescale τ_{pre}, i.e., its autocorrelation is of the form A_{0}exp(−|t|/τ_{pre}), these correlations induce an additional timescale τ_{pre} in C(t). Figure 9 displays the crosscorrelation function C(t) obtained for two values of τ_{pre}, compared with the C(t) obtained from asynchronous inputs. If τ_{pre} is shorter than the intrinsic correlation time (the maximum of the synaptic decay timescale τ_{s} and the dominant neural timescale τ_{1}; 4.5 ms for the parameters of Fig. 9), the crosscorrelation function is essentially identical to the case of asynchronous inputs. If τ_{pre} is larger than max(τ_{s},τ_{1}), the decay timescale of C(t) is given by τ_{pre}. Correlations in the presynaptic activity can therefore only broaden the CCF between the two neurons, not sharpen it.
In summary, the crosscorrelation function arising from common inputs is in general highly symmetric. This is an important distinction from the highly asymmetric crosscorrelation function generated by a direct synaptic connection. However, if the firing rates of the two neurons differ significantly and the amplitude of background noise is low in at least one of the two neurons, the crosscorrelation function arising from common inputs is asymmetric, so that, in that case, asymmetry does not imply that a direct connection is present between the two neurons. The width and the amplitude of the crosscorrelation function depend on the properties of the synapses mediating the common inputs. The amplitude of the CCF is, however, modulated much more strongly by the firing statistics of the neurons, which are set by independent background inputs: for a fixed amplitude of common inputs, the amplitude of the crosscorrelations is much larger when the neurons fire regularly than when they fire irregularly.
Other simple microcircuits
The knowledge of the crosscorrelation functions for the two cases of directly connected neurons and neurons receiving common inputs provides us with the basic tools for calculating the crosscorrelation function in more complex circuits. Within the linear approximation, any circuit can be decomposed into a superposition of simpler ones, and the crosscorrelation function for that circuit can be obtained as a sum of the crosscorrelation functions of the simpler circuits. This approach is illustrated here for two basic, experimentally relevant microcircuits.
Mutually connected neurons
The circuit, consisting of two neurons mutually connected by two synapses (Fig. 10A), is one of the connectivity patterns found to occur with high probability in the cortex (Markram et al., 1997; Sjöström et al., 2001; Song et al., 2005). A number of theoretical (Van Vreeswijk et al., 1994; Ernst et al., 1995; Lewis and Rinzel, 2003) studies have examined the activity in mutually coupled pairs of neurons, especially the synchronization properties between the two neurons in the case in which they fire regularly (low background synaptic noise).
The crosscorrelation between the two neurons can be written as follows (Fig. 10A):

C(t) = C_{1→2}(t) + C_{2→1}(t),

where C_{1→2} (resp. C_{2→1}) is the crosscorrelation of the monosynaptic circuit in which neuron 1 projects a synapse on neuron 2 (resp. neuron 2 on neuron 1). If the two neurons are identical and receive identical background inputs, then C_{2→1}(t) = C_{1→2}(−t).
Here we examine only the case of high background synaptic noise. This will allow us to determine to what extent the main findings of previous low-noise studies extend to high noise. We moreover assume that the synaptic reversal potential E_{syn} is significantly different from the mean membrane potential of the two neurons, so that synaptic inputs are dominated by current inputs.
Previous studies have found that the delay in synaptic transmission plays a key role in synchronizing the two neurons (Ernst et al., 1995). If C_{1→2,0}(t) is the crosscorrelation function for a single synaptic connection without delay, the crosscorrelation function C_{δs}(t) for two mutually coupled neurons with a synaptic delay δ_{s} is given by the following:

C_{δs}(t) = C_{1→2,0}(t − δ_{s}) + C_{1→2,0}(−t − δ_{s}).

Figure 10B displays the crosscorrelation function as the delay is increased, for the case of inhibitory synapses. In the absence of a synaptic delay, inhibitory coupling induces strong anticorrelations at short times, but at zero lag the crosscorrelation is equal to zero, i.e., no synchrony is present. For a synaptic delay of 1 ms, the inhibitory anticorrelations persist, but positive correlations appear at zero time lag, so that the synchronization of the two neurons is increased. This increase is attributable to the disinhibition caused by the effective refractory period at negative times in C_{1→2,0}(t) (Fig. 5). If the synaptic delay is further increased to 5 ms, the maximum of C_{δs}(t) shifts away from zero, so that the coupling no longer promotes zero-lag synchrony. Our results therefore confirm the role of synaptic delays in the synchronization of two mutually coupled neurons, even for high background noise.
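This superposition is easy to evaluate numerically; the sketch below builds the CCF of a mutually coupled pair from a single-connection CCF (here a hypothetical causal exponential bump, not the model's actual C_{1→2,0}) by delaying it and adding its time-reversed copy:

```python
import numpy as np

def mutual_ccf(lags, c_mono, delta):
    """C_delta(t) = C0(t - delta) + C0(-t - delta) for two identical,
    mutually coupled neurons (c_mono = single-connection CCF samples,
    delta = synaptic delay); values outside the grid are taken as zero."""
    fwd = np.interp(lags - delta, lags, c_mono, left=0.0, right=0.0)
    bwd = np.interp(-lags - delta, lags, c_mono, left=0.0, right=0.0)
    return fwd + bwd

lags = np.linspace(-0.05, 0.05, 101)
c_mono = np.where(lags > 0, np.exp(-np.abs(lags) / 0.005), 0.0)  # toy causal CCF
c_pair = mutual_ccf(lags, c_mono, delta=0.005)
```

For identical neurons the result is symmetric for any delay; the delay controls where the (mirrored) extrema sit relative to zero lag, which is what shifts the circuit into or out of zero-lag synchrony.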
Another striking result of low-noise studies is that inhibitory synapses between mutually coupled neurons promote synchrony, whereas excitatory synapses instead promote antisynchrony (Van Vreeswijk et al., 1994; Ernst et al., 1995). Within our linear approximation in the regime of current-dominated synaptic interactions, the crosscorrelation function C_{δs}^{(e)}(t) for neurons coupled by excitatory synapses is simply obtained as C_{δs}^{(e)}(t) = −C_{δs}^{(i)}(t), where C_{δs}^{(i)}(t) is the crosscorrelation function for neurons coupled by inhibitory synapses. It is thus immediately clear that, whereas inhibition induces zero-lag correlations and anticorrelations at small times, excitation instead leads to anticorrelations at zero lag and positive correlations at small times, i.e., out-of-phase synchrony.
Feedforward inhibition
It has been experimentally observed that inhibitory and excitatory inputs to neurons are often not independent but instead cooccur with a precise timing (Pouille and Scanziani, 2001; Wehr and Zador, 2003; Brunel et al., 2004; Mittmann, 2005). Such a coordination of excitatory and inhibitory inputs can be implemented via feedforward inhibition. In this simple microcircuit, illustrated in Figure 11, common excitatory inputs arrive at neuron 1, an inhibitory interneuron, and at neuron 2. Neuron 2 also receives direct synaptic inputs from neuron 1. Neuron 2 thus receives excitatory inputs, closely followed by inhibitory inputs elicited by the excitation of neuron 1. The crosscorrelation function C(t) for feedforward inhibition can be obtained as the sum of the crosscorrelations arising from common inputs and the crosscorrelations arising from the direct synaptic connection. It should be noted that the crosscorrelation function depends linearly on the inhibitory synaptic weight but quadratically on the excitatory weights. The precise shape of the crosscorrelation function is thus not simply scaled by the input strengths but depends on their relative weights. Figure 11 illustrates the shape of the crosscorrelation function for feedforward inhibition in the case of strong background synaptic noise in both neurons. The common inputs lead to a broad central peak in C(t), which is truncated by the inhibition, thus improving the precision of synchrony between the two neurons. A crosscorrelogram with the same characteristic shape has recently been obtained from in vivo recordings in a cerebellar circuit identified as potential feedforward inhibition by in vitro studies (Léna et al., 2008).
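A toy decomposition makes the truncation visible: a broad symmetric component from the shared excitation plus a delayed negative component from the disynaptic inhibition (all shapes, amplitudes, and the 2 ms delay below are illustrative placeholders, not values from the model):

```python
import numpy as np

lags = np.linspace(-0.05, 0.05, 1001)
# broad symmetric peak from shared excitatory inputs (toy shape)
c_common = 0.3 * np.exp(-np.abs(lags) / 0.01)
# delayed disynaptic inhibition, truncating the positive-lag side
delay = 0.002
c_inhib = np.where(lags > delay,
                   -0.25 * np.exp(-(lags - delay) / 0.005), 0.0)
c_ffi = c_common + c_inhib
```

The sum keeps its maximum near zero lag but falls off much faster on the positive-lag side, reproducing the characteristic truncated shape described above.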
Discussion
The aim of this study was to examine the relation between the CCF of two neural spike trains and the underlying connectivity, synaptic properties, and firing statistics of the two neurons. To this end, we used integrateandfire models of neurons that incorporate some of the essential biophysical properties of real neurons but remain simple enough to be fully analyzed mathematically. We explicitly modeled two neurons only and took into account the activity of the surrounding network, which we assumed to be stationary, as a fluctuating background input that sets each of the neurons in a baseline state with a nonzero firing rate. The response properties to synaptic inputs in this baseline state can be determined analytically and give access to the CCFs within a linear approximation (Brunel and Hakim, 1999; Brunel, 2000; Lindner and SchimanskyGeier, 2001; Lindner et al., 2005; Richardson, 2007). This approximation remains accurate for crosscorrelations of amplitude up to 0.3. We therefore expect it to be relevant for the strong background noise usually observed in vivo (Anderson et al., 2000). Using this approach, we determined the CCFs for different patterns of connectivity, starting with the case of a direct synaptic connection from one neuron to the other and the case of common inputs to the two neurons. We then showed how the results for these two simple circuits can be exploited to study the CCF in more complex situations.
Modulating the functional interactions
The functional interactions between two neurons, as quantified by the CCF between their spike trains, depend in an essential manner on the properties of the synapses providing the inputs. To a first approximation, the amplitude of the CCF is proportional to the peak PSC in the case of a direct synaptic connection, and to its square in the case of common inputs, so that plastic modifications of synaptic strength obviously modulate functional interactions. The full time course of the CCF is determined by a temporal filtering of the PSC by the firing response of the postsynaptic neuron(s). The properties of this firing response, however, depend on the statistics of the baseline firing of the postsynaptic neuron(s).
The amplitude and shape of functional interactions are therefore not set by the synaptic properties alone but strongly depend on the background synaptic inputs to the postsynaptic neuron(s), i.e., the activity of the surrounding network, which determines their firing statistics. If the fluctuations in background inputs are weak and the firing of the postsynaptic neuron regular, additional synaptic inputs have a much larger effect than if the background fluctuations are strong and the firing of that neuron very irregular. As a consequence, for a fixed PSC amplitude, the amplitude of the CCF varies in a highly nonlinear manner with the strength of background noise, in agreement with previous observations (Poliakov et al., 1996). Changes of firing regularity through background inputs therefore modulate the functional interactions in a much stronger and more flexible manner than plastic modifications of synaptic weights. Thus, background noise affects both the gain of neurons (Chance et al., 2002), i.e., their steady-state direct-current response, and their correlations, which are determined by their response at intermediate frequencies.
Reading out the connectivity
An important question is whether the CCF can be used to distinguish monosynaptically connected neurons from neurons receiving common inputs. For two identical neurons firing with identical statistics, the CCF is highly asymmetric in the former case, whereas it is perfectly symmetric in the latter. Symmetry has therefore been commonly used as a criterion to distinguish between the two situations (Alonso and Martinez, 1998; Fujisawa et al., 2008). It was, however, a priori not clear how accurate this criterion would remain for two neurons with very different intrinsic properties or firing statistics, because such heterogeneities disrupt the symmetry of the CCF in response to common inputs. We found that, even for high degrees of heterogeneity between the two neurons, in the case of strong background fluctuations, the asymmetry remains much weaker in the case of common inputs than for a direct synaptic connection. In contrast, for low background noise, the CCF arising from common inputs can be highly asymmetric. Symmetry is therefore a robust criterion for distinguishing a direct synaptic connection from common inputs only in the case of strong background noise.
In large recurrent networks, two neurons that are part of the same network receive common input from the network that can potentially have a nontrivial temporal structure as a result of the collective network dynamics (Brunel and Hakim, 1999; Brunel, 2000). In this situation, an interesting question is whether the crosscorrelation is dominated by the collective dynamics of the network (making identification of synaptic connectivity difficult if not impossible) or by the direct (monodirectional or bidirectional) synaptic connection. For networks of binary neurons, Ginzburg and Sompolinsky (1994) showed that, if the network is in an asynchronous state and far from bifurcations leading to synchronized states, crosscorrelations can be dominated by direct connections, whereas close to such bifurcations the effect of direct connections is very small compared with the collective dynamics of the network. For more realistic networks of spiking neurons, crosscorrelations have been studied analytically only in extremely simplified architectures [homogeneous and fully connected networks (Meyer and Van Vreeswijk, 2002)] or through numerical simulations (Amit and Brunel, 1997). The question of the relative impact of direct connections and collective dynamics on crosscorrelations therefore remains an open issue in realistic networks of spiking neurons.
Comparison with previous studies
Previous theoretical studies have examined separately the crosscorrelations in the case of a direct synaptic connection (Knox, 1974; Ashby and Zilm, 1982; Fetz and Gustafsson, 1983; Poliakov et al., 1996, 1997; Herrmann and Gerstner, 2001, 2002) and in the case of common inputs (de la Rocha et al., 2007; Shea-Brown et al., 2008; Tchumatchenko et al., 2008). Here, we have developed a common framework for these two and other circuits and systematically included the effects of the surrounding network by taking into account background synaptic inputs to the neurons. For a direct synaptic connection, our approach is similar in spirit to the study of Herrmann and Gerstner (2001). These authors also made use of linear response theory, but they modeled background synaptic inputs as “escape” noise instead of the diffusion noise used in our work. For the case of common inputs, our approach is also closely related to the one adopted by de la Rocha et al. (2007) and Shea-Brown et al. (2008) to study spike-count correlations. However, the spike-count correlation of two spike trains is related to the time integral of the CCF (Bair et al., 2001) and is therefore different from the amplitude of the CCF (Kohn and Smith, 2005). Moreover, different normalizations are commonly used for the two quantities (see Materials and Methods). Although spike-count correlations arising from common inputs depend on the firing rate of the neurons, they were found to be insensitive to the regularity of the firing (de la Rocha et al., 2007), in contrast to our results for the CCFs.
For the situation of a direct synaptic connection, previous studies (Knox, 1974; Ashby and Zilm, 1982; Fetz and Gustafsson, 1983; Poliakov et al., 1996, 1997; Herrmann and Gerstner, 2001, 2002) have sought to relate the postsynaptic potential to the CCF, and it has been debated whether the PSP, its time derivative, or a combination of the two determines the shape of the CCF. The reason for considering the PSP (or its derivative) rather than the PSC, as we do, was that, in the absence of background synaptic inputs, the PSP determines when the membrane potential crosses the threshold. In the presence of background synaptic inputs, a probabilistic reasoning becomes necessary, and we find it more natural to express the CCF in terms of the PSC shape rather than the PSP. Because the PSP is obtained by filtering the PSC through the membrane, our results do not conflict with the previous observation that both the PSP and its time derivative influence the CCF.
Interpreting experimental data
The present results should be helpful for the interpretation of experimental results. Here, we do not attempt to provide an exhaustive survey of the literature but instead point out a couple of examples. In an impressive recent study, Fujisawa et al. (2008) were able to track the variations of the CCF between neurons from the prefrontal cortex of the rat at different epochs of a working memory task. They observed strong modulations of the CCF amplitude and interpreted them as evidence of short-term plasticity. Our results suggest that, alternatively, such modulations may result from modulations of the firing regularity of the postsynaptic neuron, arising from changes in background inputs from the surrounding network. The present analysis, however, assumed stationary background inputs, and the implications of time-varying background inputs certainly deserve additional examination. In another study (Csicsvari et al., 1998), strong crosscorrelations were observed on the timescale of 1 ms. Our results, which relate the neuronal and synaptic timescales to the timescales of the CCF, suggest that extremely fast neuronal and synaptic timescales underlie such fast correlations. A number of other experimental studies have observed crosscorrelations at timescales significantly longer than the synaptic and neural timescales (Kohn and Smith, 2005). These long timescales could arise either from intrinsic neuronal currents acting on slow timescales, such as firing rate adaptation, or from slow dynamics of a network projecting to both recorded neurons.
Finally, several recent theoretical studies have focused on reproducing quantitatively the precise time course of arbitrary CCFs (Rosenberg et al., 1998; Truccolo et al., 2004). Powerful methods, based on the framework of point processes, are now available to fit models to experimental CCFs (Pillow et al., 2008) and extract effective parameters for neuronal interactions. However, these effective parameters do not have a direct biophysical interpretation and are not directly linked to the underlying circuitry. It would thus be interesting to combine the tools developed for statistical inference with the present analytical results to extract biophysically constrained parameters from the fitting of models to experimental data.
Appendix A: linear response to synaptic inputs
In this section, we calculate the linear response of the firing rate to a variation g(t) of the synaptic conductance.
The dynamics of the membrane potential of the postsynaptic cell are given by the following: We divide both sides by g_{m} + g(t), write δg = g(t)/g_{m}, and keep only first-order terms in δg, so that Equation 33 becomes the following: where τ_{m} = c_{m}/g_{m} is the membrane time constant, and μ = I_{0}/g_{m} represents the effective resting value that the membrane potential would reach in the absence of a threshold. Equation 34 can be seen as a perturbed version of the following equation: with the parameters τ_{m}, μ, σ^{2}, and V_{T} (in the EIF model only) perturbed to the values τ_{m}(1 − δg), μ(1 − δg) + δg(t)E_{syn}, σ^{2}(1 − δg), and V_{T} + Δ_{T}δg, respectively.
In the absence of the perturbation corresponding to synaptic inputs, the inputs in Equation 35 are constant and lead to a stationary firing rate ν_{0} = F(μ,σ), where F is the transfer function (or f–I curve), which can be calculated analytically (Tuckwell, 1988). The presence of a time-varying perturbation leads to a temporal variation of the firing rate, which at the linear level can be written as follows: or, in the frequency domain, where R̃_{n}(ω) is the Fourier transform of the linear response kernel, and g̃(ω) is the Fourier transform of the synaptic conductance g(t).
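As a concrete illustration, the stationary rate F(μ,σ) of the LIF model can be evaluated from the classical mean first-passage-time (Siegert) formula. The sketch below is a minimal Python implementation; the parameter values (membrane time constant, reset, threshold) are illustrative, not those used in this study.

```python
import math

def lif_rate(mu, sigma, tau_m=0.02, v_r=10.0, v_th=20.0, n=2000):
    """Stationary firing rate of an LIF neuron driven by white noise
    (Siegert formula): 1/nu = tau_m * sqrt(pi) * int_{y_R}^{y_T} e^{u^2} (1 + erf(u)) du,
    with y_T = (v_th - mu)/sigma and y_R = (v_r - mu)/sigma."""
    y_t = (v_th - mu) / sigma
    y_r = (v_r - mu) / sigma
    # trapezoidal integration of the integrand e^{u^2} (1 + erf(u))
    du = (y_t - y_r) / n
    total = 0.0
    for i in range(n + 1):
        u = y_r + i * du
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.exp(u * u) * (1.0 + math.erf(u))
    integral = total * du
    return 1.0 / (tau_m * math.sqrt(math.pi) * integral)
```

As expected from the f–I curve, the rate increases with the mean drive μ and decreases when the threshold is raised.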
From Equation 34, it is apparent that the linear response to a synaptic conductance change can be expressed as the sum of linear responses to the varied parameters as follows: where R̃_{E}, R̃_{σ2}, R̃_{τm}, and R̃_{VT} are the linear responses in frequency of the firing rate to variations of parameters μ, σ^{2}, τ_{m}, and V_{T} (EIF model only), each divided by g_{m} with the chosen normalization.
The linear response functions to variations of any parameter in Equation 35 have been analyzed in previous studies using the Fokker–Planck equation. In particular, R̃_{E} is the response to the variation of the external input current (Brunel and Hakim, 1999; Brunel et al., 2001). For the LIF model, R̃_{E} and R̃_{σ2} can be expressed explicitly (Brunel and Hakim, 1999; Brunel et al., 2001; Lindner and SchimanskyGeier, 2001): and where y_{T} = (V_{th} − I_{0})/σ, y_{R} = (V_{r} − I_{0})/σ, and u(y,ω) is given in terms of a combination of hypergeometric functions or equivalently as the solution of the following differential equation: with the condition that u is bounded as y → −∞.
Note that the amplitude of R̃_{E} decays asymptotically to zero as 1/√ω at high frequencies.
The response R̃_{τm} to the variation of the membrane timescale is simply a constant (for both LIF and EIF models) (Richardson, 2007): Any temporal variation of the membrane time constant is thus perfectly reproduced by the temporal variation of the firing rate.
For the EIF model, the response functions R̃_{E}, R̃_{σ2}, and R̃_{VT} have been studied by Richardson (2007), who developed an efficient semi-numerical method, based on the Fokker–Planck equation, to evaluate the response R̃_{α} to the variation of any parameter α as follows: where the function j_{r} in the denominator is independent of the varied parameter α. We used the same method in this study. Note that, for the EIF model, R̃_{E} is a low-pass filter, the amplitude of which decays as 1/ω. In contrast to the LIF model, R̃_{σ2} is also a low-pass filter, with an amplitude also decreasing as 1/ω. R̃_{VT} is, however, a high-pass filter, favoring fast variations of V_{T}.
Note that R̃_{n} can be written as follows: where R̃_{shunt} is independent of E_{syn} and corresponds to the shunting part of the synaptic conductance variation (for the LIF model, Δ_{T} = 0).
As a consequence, if the synaptic reversal potential is sufficiently different from the mean membrane potential μ, the response to synaptic inputs is dominated by the response to the input current variation. In contrast, if E_{syn} ≈ μ, the response to synaptic inputs is dominated by the response to the variation of the membrane conductance.
Appendix B: relation between the response function and the spiketriggered average current
The linear response function to an input current, R_{E}, is closely related to the spike-triggered average (STA) current. It is not, however, fully equivalent to the Wiener kernel and STA current studied previously (Poliakov et al., 1997; Paninski, 2006). Here we describe this relation explicitly.
Consider a neuron that receives a mean input current and two time-varying inputs with flat power spectra: The first white-noise input η(t) represents background noise and is different in each trial. The second white-noise input η_{s}(t) also has a flat power spectrum, but its time course is identical in each trial, so that it represents the signal the neuron receives.
If the amplitude σ_{s} of the signal is small, the variation of the firing rate (averaged over trials or, equivalently, over the background noise) around its mean ν_{0} is given by the convolution with the response function R_{E}: The linear response function R_{E} is the optimal Wiener kernel in the presence of a background noise of amplitude σ and therefore corresponds to the spike-triggered average signal current. The cases studied by Poliakov et al. (1997) and Paninski (2006) correspond to the situation in which background noise is absent, so that the entire time-varying input corresponds to the signal.
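The signal-plus-noise decomposition above can be illustrated numerically: a frozen signal current is repeated across trials while the background noise is redrawn on each trial, and the spike-triggered average of the signal is accumulated. The sketch below uses illustrative parameters (not those of the study); for a depolarizing drive, the STA of the signal just before a spike should be positive.

```python
import numpy as np

def sta_signal(n_trials=100, T=5.0, dt=5e-4, seed=1):
    """Spike-triggered average of a frozen 'signal' white noise, with fresh
    background noise on every trial (all parameters illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    tau, mu, v_r, v_th = 0.02, 15.0, 10.0, 20.0
    sigma, sigma_s = 4.0, 2.0            # background and signal noise amplitudes
    amp = np.sqrt(2 * dt / tau)
    s = rng.standard_normal(n)           # frozen signal, identical across trials
    win = int(0.02 / dt)                 # 20 ms window preceding each spike
    sta = np.zeros(win)
    count = 0
    for _ in range(n_trials):
        xi = rng.standard_normal(n)      # background noise, redrawn each trial
        v = v_r
        for i in range(n):
            v += (mu - v) * dt / tau + amp * (sigma * xi[i] + sigma_s * s[i])
            if v >= v_th:
                v = v_r
                if i >= win:
                    sta += s[i - win:i]  # accumulate the signal preceding the spike
                    count += 1
    return sta / count
```

The average signal in the last few milliseconds before a spike comes out positive, as expected for an input that helps drive the membrane across threshold.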
Appendix C: dominant timescale approximation for R_{n}
The linear response function R̃_{n}(ω) is an analytic function. We call ζ the analytic extension of R̃_{n}, defined by its values on the imaginary axis: The complex function ζ possesses a set of simple poles {z_{j}}_{j>0}, with Re(z_{j}) < 0 for all j, labeled such that Re(z_{j1}) > Re(z_{j2}) if j_{1} < j_{2}, so that z_{1} is the dominant (most slowly decaying) pole. These poles are the zeros of the denominator of R̃_{n}, i.e., the zeros of u(y_{T}) − u(y_{R}) (see Eq. 40) in the case of the LIF model and the zeros of j_{r} (see Eq. 43) in the case of the EIF model. Note that R̃_{E} and R̃_{shunt} have the same poles, so that the present analysis is not restricted to R̃_{E} alone.
The function ζ can be written as an expansion over its poles: where a_{j} is the residue of ζ at its jth pole.
Using Equation 49, the linear kernel R_{n}, the inverse Fourier transform of R̃_{n}, can be expressed as follows: R_{n}(t) = θ(t) Σ_{j} a_{j}e^{z_{j}t}, where θ(t) = 1 if t > 0 and θ(t) = 0 otherwise.
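The correspondence between simple poles of ζ and decaying exponentials in R_{n}, and the dominant timescale approximation that keeps only the slowest pole, can be checked on a toy example. The poles and residues below are hypothetical illustrations, not values computed from a neuron model.

```python
import numpy as np

# Toy check of the pole expansion: two hypothetical simple poles z1, z2
# (Re < 0); the kernel is a sum of decaying exponentials, and the term with
# the largest real part dominates at long times.
z1, z2 = -10.0, -50.0          # illustrative poles, not derived from a neuron model
a1, a2 = 1.0, 0.5              # illustrative residues

def kernel(t):
    """R_n(t) = theta(t) * (a1 exp(z1 t) + a2 exp(z2 t))."""
    return (t > 0) * (a1 * np.exp(z1 * t) + a2 * np.exp(z2 * t))

def kernel_ft(omega, T=2.0, dt=1e-4):
    """Numerical Fourier transform int_0^inf R_n(t) exp(-i omega t) dt (midpoint rule)."""
    t = (np.arange(int(T / dt)) + 0.5) * dt
    return np.sum(kernel(t) * np.exp(-1j * omega * t)) * dt

def poles_ft(omega):
    """Closed form from the pole expansion: sum_j a_j / (i omega - z_j)."""
    return a1 / (1j * omega - z1) + a2 / (1j * omega - z2)
```

At short times both exponentials contribute, while at times long compared with 1/|z1| the dominant-pole term alone reproduces the kernel.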
Appendix D: calculation of crosscorrelation functions
Direct synaptic connection
In the situation in which one neuron makes a direct synaptic connection onto the other neuron, the dynamics of the membrane potentials of the presynaptic and postsynaptic neurons are given by the following: and The postsynaptic conductance g(t) is given by the following: where n_{pre} is the train of spikes emitted by the presynaptic neuron, and g_{syn} is the postsynaptic conductance change attributable to a single presynaptic spike, defined in Equation 13.
We compute the crosscorrelation function between the two neurons in a linear approximation in which the interaction between the two neurons is treated as a perturbation of the case when the two neurons are not connected. A similar approach was used previously by Lindner et al. (2005).
Within the linear approximation, the probability that the postsynaptic neuron spikes at time t can be written as follows:
where ν_{0,0}^{(post)} is the firing rate that the postsynaptic neuron would have in the absence of the connection from the singled-out presynaptic neuron. The crosscorrelation function between the spike trains of these two neurons is therefore as follows:
Taking the Fourier transform with respect to both times t and t′ gives the equivalent form in frequency as follows:
From Equation 53, the Fourier transform of the conductance g̃(ω) is proportional to the Fourier transform of the presynaptic spike train as follows:
so that the following holds true:
One can express the averages on both sides of Equation 58 using the power spectrum of the presynaptic spike train, Ã_{pre}(ω), and the cross-spectrum between the presynaptic and postsynaptic neurons, as defined and normalized in Equations 9 and 11. We also note that the stationary firing rate of the postsynaptic neuron is modified when the singled-out presynaptic connection is taken into account and is given to linear order by the following:
Finally, we obtain the following:
In the dominant timescale approximation, R_{n}(t) ≈ a_{1}e^{z_{1}t}θ(t), where z_{1} is the pole of ζ with the largest real part (see Appendix C).
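The qualitative prediction of this derivation, a CCF peak at positive lags for a direct excitatory connection, can be checked by direct simulation. The sketch below simplifies the conductance-based synapse of Equation 53 to a current-based exponential synapse, and all parameter values are illustrative rather than taken from the study.

```python
import numpy as np

def simulate_connected_pair(T=100.0, dt=2e-4, w=20.0, seed=0):
    """Two LIF neurons; neuron 1 projects to neuron 2 through a current-based
    exponential synapse (a simplification of the conductance-based synapse of
    Eq. 53). All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    tau, tau_s = 0.02, 0.003                      # membrane and synaptic time constants (s)
    mu, sigma, v_th, v_r = 15.0, 5.0, 20.0, 10.0  # drive, noise, threshold, reset (mV)
    noise = sigma * np.sqrt(2 * dt / tau) * rng.standard_normal((2, n))
    v1 = v2 = v_r
    i_syn = 0.0
    spk1, spk2 = [], []
    for i in range(n):
        v1 += (mu - v1) * dt / tau + noise[0, i]
        v2 += (mu + i_syn - v2) * dt / tau + noise[1, i]
        i_syn -= i_syn * dt / tau_s               # exponential decay of the synaptic current
        if v1 >= v_th:
            v1 = v_r
            spk1.append(i * dt)
            i_syn += w                            # presynaptic spike kicks the synaptic current
        if v2 >= v_th:
            v2 = v_r
            spk2.append(i * dt)
    return np.array(spk1), np.array(spk2)

def ccf_asymmetry(spk1, spk2, lag=0.005):
    """Counts of postsynaptic spikes within `lag` after vs before each presynaptic spike."""
    pos = sum(int(np.sum((spk2 > t) & (spk2 <= t + lag))) for t in spk1)
    neg = sum(int(np.sum((spk2 >= t - lag) & (spk2 < t))) for t in spk1)
    return pos, neg
```

For an excitatory connection, the positive-lag count clearly exceeds the negative-lag count, reflecting the asymmetric CCF discussed in the main text.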
Common inputs
In the situation in which two neurons labeled 1 and 2 receive common inputs, the dynamics of the membrane potentials of the two neurons are given by the following: where i = 1,2.
The conductance g^{(i)} of neuron i attributable to synaptic inputs from the common presynaptic network is given by the following: where n_{pre} is the spike train obtained by superposing the spike trains of all common presynaptic neurons. Here we assume that the synapses made by the presynaptic neurons are identical and depend only on the identity of the postsynaptic neuron, but the derivation can easily be extended to more general cases.
Within the linear approximation, the firing probability of neuron i for a given presynaptic train n_{pre} can be written as follows: where the average conductance 〈g^{(i)}〉 attributable to the common presynaptic input has been subtracted in the integral term, so that ν_{0}^{(i)} is the average firing rate of neuron i, arising from both the common inputs and the inputs specific to neuron i. Note also that, even for two identical neurons, the response functions R_{n}^{(1)} and R_{n}^{(2)} are different when the inputs (I_{1},σ_{1}) and (I_{2},σ_{2}) are different.
The correlation between the spike trains of neurons 1 and 2 is obtained by averaging over the common inputs as follows:
or equivalently
Moreover, the crosscorrelation between the conductances of the two neurons can be related to the autocorrelation function of the common presynaptic spike train n_{pre} as follows:
Finally, taking the Fourier transform of Equation 68 with respect to the time difference t′ − t relates the crosscorrelation function between the spike trains of neuron 1 and neuron 2 to the autocorrelation function of the common presynaptic spike train. With the normalization of Equation 9, it reads as follows:
In the dominant timescale approximation, R_{n}(t) ≈ a_{1}e^{z_{1}t}θ(t), where z_{1} is the pole of ζ with the largest real part (see Appendix C).
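In contrast to the direct-connection case, common inputs produce a CCF peak straddling zero lag. This can be illustrated by simulating two unconnected LIF neurons whose input noise shares a common component, a simplified stand-in for the common presynaptic spike train of Equation 62; all parameters below are illustrative.

```python
import numpy as np

def simulate_common_input_pair(T=100.0, dt=2e-4, c=0.8, seed=0):
    """Two unconnected LIF neurons whose white-noise inputs share a common
    component carrying a fraction c of the variance (a stand-in for the
    common presynaptic drive; parameters illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    tau, mu, sigma, v_th, v_r = 0.02, 15.0, 5.0, 20.0, 10.0
    amp = sigma * np.sqrt(2 * dt / tau)
    shared = rng.standard_normal(n)       # common input, identical for both neurons
    priv = rng.standard_normal((2, n))    # private background noise
    v = [v_r, v_r]
    spk = ([], [])
    for i in range(n):
        for k in (0, 1):
            xi = np.sqrt(c) * shared[i] + np.sqrt(1 - c) * priv[k, i]
            v[k] += (mu - v[k]) * dt / tau + amp * xi
            if v[k] >= v_th:
                v[k] = v_r
                spk[k].append(i * dt)
    return np.array(spk[0]), np.array(spk[1])
```

Counting spike pairs in small windows on either side of zero lag shows an excess over chance at both positive and negative lags, i.e., a symmetric central peak, rather than the one-sided peak of a direct connection.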
Appendix E: power spectrum for the LIF model
The autocorrelation function can be deduced directly from the Fourier transform of the interspike interval distribution f(t) (Gerstner and Kistler, 2002). Indeed, if we denote by A(t) the full autocorrelation function, its positive part A_{+}(t) = ν_{0}A(t)θ(t) obeys the renewal relation A_{+}(t) = f(t) + ∫_{0}^{t} f(t′)A_{+}(t − t′)dt′. Taking the Fourier transform, we have Ã_{+}(ω) = f̃(ω)/(1 − f̃(ω)), and as Ã(ω) = (1 + 2Re(Ã_{+}(ω)))/ν_{0}, the power spectrum follows directly. The Fourier transform of the interspike interval distribution can be calculated analytically for the LIF neuron (Tuckwell, 1988), and it reads as follows: where u is the function defined by Equation 41.
In conclusion, the power spectrum of the LIF neuron is given by the following:
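The renewal relation underlying this derivation can be checked numerically in a case with a known answer: for Poisson firing, f(t) = νe^{−νt}, the real part of f̃/(1 − f̃) vanishes, so the resulting power spectrum is flat. A minimal sketch with an illustrative rate:

```python
import numpy as np

nu = 10.0                        # Poisson rate in Hz (illustrative)
dt, T = 1e-4, 2.0                # integration grid; exp(-nu*T) ~ 2e-9 is negligible
t = (np.arange(int(T / dt)) + 0.5) * dt
f = nu * np.exp(-nu * t)         # exponential interspike interval density

def f_tilde(omega):
    """Numerical Fourier transform of the ISI density (midpoint rule)."""
    return np.sum(f * np.exp(-1j * omega * t)) * dt
```

The numerical transform matches the analytic result ν/(ν + iω), and the real part of f̃/(1 − f̃) is numerically zero at all tested frequencies, confirming the flat spectrum of a Poisson spike train.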
Footnotes

This work was supported by Agence Nationale de la Recherche Grant ANR05NEUR030 and by Region Ile-de-France (S.O.). We are grateful to Clément Léna, Wulfram Gerstner, and Tatjana Tchumatchenko for helpful discussions. We thank C. Léna for a careful reading of this manuscript.
 Correspondence should be addressed to Srdjan Ostojic, Laboratoire de Physique Statistique, 24, rue Lhomond, F-75005 Paris, France. srdjan@lps.ens.fr