We model the effects of excitation and inhibition on the gain of cortical neurons. Previous theoretical work has concluded that excitation or inhibition alone will not cause a multiplicative gain change in the curve of firing rate versus input current. However, such gain changes in vivo are measured in the curve of firing rate versus stimulus parameter. We find that when this curve is considered, and when the nonlinear relationships between stimulus parameter and input current and between input current and firing rate in vivo are taken into account, then simple excitation or inhibition alone can induce a multiplicative gain change. In particular, the power-law relationship between voltage and firing rate that is induced by neuronal noise is critical to this result. This suggests an unexpectedly simple mechanism that may underlie the gain modulations commonly observed in cortex. More generally, it suggests that a smaller input will multiplicatively modulate the gain of a larger one when both converge on a common cortical target.
Gain modulation is a roughly multiplicative or divisive change in the tuning curve of a neuron to one stimulus parameter as some other parameter or state is modified. Such gain changes are observed frequently in the responses of cortical neurons and are thought to play an important role in neural computations (for review, see Salinas and Thier, 2000). A particularly well studied example exists in monkey posterior parietal cortex, in which the responses of neurons to the retinal position of a visual stimulus are multiplicatively scaled by eye position (Andersen and Mountcastle, 1983; Andersen et al., 1985). Similar modulation of responses by eye position is seen in a variety of visual areas (Galletti and Battaglini, 1989; Boussaoud et al., 1993; Bremmer et al., 1997b; Trotter and Celebrini, 1999). This gain modulation has been proposed to underlie coordinate transforms computed by these neurons, which are necessary for visually guided reaching (Zipser and Andersen, 1988; Salinas and Abbott, 1995, 2001; Pouget and Sejnowski, 1997; Pouget and Snyder, 2000). Gain modulations have also been observed in the enhancement of neural responses by attention. McAdams and Maunsell (1999a) showed that attention can increase the gain of the orientation tuning curves of neurons in areas V1 and V4 of macaque visual cortex, whereas Treue and Martinez-Trujillo (1999) found that attention increases the gain of direction tuning curves in macaque area MT. Modulation of responses in V1 by stimuli outside the classical receptive field seems to be divisive in character (Cavanaugh et al., 2002; Palmer and Nafziger, 2002; Muller et al., 2003). Gain modulation can also be induced pharmacologically. Fox et al. (1990) found that by iontophoretically applying NMDA to neurons in cat V1, they could increase the gain of the contrast-response (CR) curve of the neuron.
Despite the apparent importance of multiplicative gain modulation in the cortex, the mechanisms responsible for producing such gain changes are not well understood (but, see Srinivasan and Bernard, 1976; Fox and Daw, 1992; Mel, 1993; Salinas and Abbott, 1996; Chance et al., 2002; Doiron et al., 2002; Smith et al., 2002). In particular, it has been concluded that simple excitation or inhibition alone cannot achieve a gain change (Holt and Koch, 1997; Chance et al., 2002), except at low firing rates (Doiron et al., 2001). Instead, it has been shown recently that concurrent, balanced increases in background excitation and inhibition together, which cause an increase in current noise and in conductance with no net depolarization or hyperpolarization, can serve to divisively decrease gain (Chance et al., 2002). These conclusions were based on examining the gain of the relationship between injected current and firing rate. However, multiplicative gain changes in cortex in vivo are observed in the relationship between a “stimulus parameter” and firing rate. Here, we consider the nonlinear relationship between stimulus parameter and injected current, as well as the nonlinear relationship between injected current and firing rate. We show that multiplicative gain changes arise robustly from the simple addition of excitation or inhibition alone, provided the modulating excitation or inhibition is small relative to the peak of the tuning curve of the driving excitation. That is, the observed cortical gain changes can be induced if the modulating influence simply adds or subtracts excitation or inhibition.
An important part of our model is the large background synaptic conductances to which neurons are subject in vivo, which give rise to a noisy subthreshold membrane potential (Destexhe and Paré, 1999; Anderson et al., 2000). A noisy subthreshold membrane potential in turn gives rise to an expansive power-law relationship between the average membrane potential and the firing rate of a neuron (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002). This nonlinear relationship between voltage or input current and firing rate, along with the nonlinear relationship between stimulus parameter and input current, together cause excitation or inhibition alone to yield roughly multiplicative gain changes in neuronal responses. We demonstrate this using both numerical simulations and a simple analytical model.
Materials and Methods
We simulate a cortical neuron using two models: a conductance-based, integrate-and-fire model and a Hodgkin-Huxley-type model.
Integrate-and-fire model. The integrate-and-fire model is described by the following equation:

C dV/dt = gleak(Eleak − V) + Σi gi(Ei − V) + Iinj   (1)

where C is the capacitance, gleak is the leak conductance and Eleak is its reversal potential, Iinj is the injected current, and the gi are conductances with corresponding reversal potentials Ei. When the voltage reaches the spike threshold Vthresh, it is reset to Vreset and held there for a refractory period (trefract).
The parameters for the integrate-and-fire model and its synaptic and noise currents were selected to match cortical properties, primarily in the course of previous work (Troyer and Miller, 1997; Troyer et al., 1998; Krukowski and Miller, 2001; Palmer and Miller, 2002) but with minor adjustments in the present work. In particular, the parameters were designed without reference to (and before obtaining) the results presented here. The values for the following parameters are the same in each simulation: gleak = 10 nS, C = 488 pF, Eleak = -70 mV, Vthresh = -54 mV, Vreset = -60 mV, and trefract = 1.7 msec. The value of C was chosen so that, after taking into account additional background (noise) conductances described below, the resting membrane time constant is 20 msec. This is consistent with values of 15-24 msec observed in vivo for cortical neurons (Hirsch et al., 1998).
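For illustration, Equation 1 can be integrated with a simple forward-Euler scheme. The sketch below is not the simulator used for the results reported here: the function name, SI-unit defaults, and fixed 0.1 msec time step are our own choices, and the noise and synaptic machinery described below are omitted.

```python
import numpy as np

def simulate_if(i_inj=0.0, synapses=(), t_stop=1.0, dt=1e-4,
                c_m=488e-12, g_leak=10e-9, e_leak=-70e-3,
                v_thresh=-54e-3, v_reset=-60e-3, t_refract=1.7e-3):
    """Forward-Euler integration of Equation 1 (SI units).

    `synapses` is a sequence of (g_i, E_i) pairs: fixed conductances
    with their reversal potentials.  Returns spike times in seconds.
    """
    v = e_leak
    refract_until = -1.0
    spikes = []
    for step in range(int(round(t_stop / dt))):
        t = step * dt
        if t < refract_until:
            v = v_reset                     # held at reset while refractory
            continue
        i_total = g_leak * (e_leak - v) + i_inj
        for g_i, e_i in synapses:
            i_total += g_i * (e_i - v)
        v += dt * i_total / c_m
        if v >= v_thresh:                   # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset
            refract_until = t + t_refract
    return np.array(spikes)
```

With a constant 300 pA injected current, this reduced model fires tonically at roughly 50 Hz; with no input it stays at rest.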
There are two excitatory synaptic conductances, NMDA and AMPA, and two inhibitory synaptic conductances, GABAA and GABAB. Their reversal potentials are ENMDA = 0 mV, EAMPA = 0 mV, EGABAA = -70 mV, and EGABAB = -90 mV. The NMDA conductance is voltage dependent in accordance with the model described by Jahr and Stevens (1990), using [Mg2+] = 1.2 mM. The voltage we use to compute the NMDA conductance is the "shadow voltage" Vs, obtained by integrating the membrane potential continuously in time in the absence of a spike threshold (i.e., it is not reset when it reaches the spike threshold). This is meant to approximate the potential experienced by NMDA channels located on the dendrites of the neuron and to eliminate discontinuities in the conductance at spike times. This model yields an NMDA conductance that is 3.7% of maximum at the resting potential of the neuron (where maximum is defined to be the conductance at a voltage of +100 mV) and 10.6% of maximum when Vs = -51 mV (the most depolarized mean shadow voltage in this study).
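The voltage dependence just described can be reproduced with the Jahr and Stevens (1990) magnesium-block expression. The constants 3.57 mM and 0.062/mV below are the values commonly used for that model; with [Mg2+] = 1.2 mM the expression recovers approximately the 3.7% figure quoted above.

```python
import math

MG_MM = 1.2  # [Mg2+] in mM, as in the model

def nmda_unblocked(v_mv, mg_mm=MG_MM):
    """Fraction of NMDA conductance unblocked at membrane potential v_mv
    (mV), using the magnesium-block fit of Jahr and Stevens (1990)."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

def nmda_fraction_of_max(v_mv, mg_mm=MG_MM):
    """Conductance relative to the 'maximum' defined at +100 mV, as in
    the text."""
    return nmda_unblocked(v_mv, mg_mm) / nmda_unblocked(100.0, mg_mm)
```

At −70 mV this gives a fraction close to 3.7%, and at −51 mV a fraction near the 10.6% quoted above.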
The time courses of AMPA, GABAA, and GABAB conductances after presynaptic action potentials are modeled as a difference of single exponentials:

g(t) = ḡ Σj [exp(−Δtj/τfall) − exp(−Δtj/τrise)]   (2)

Here, Δtj is defined as (t − tj), where tj is the time of the jth presynaptic action potential and tj < t, and ḡ sets the amplitude of the conductance. The time constants are τAMPArise = 0.25 msec, τAMPAfall = 1.75 msec, τGABAArise = 0.75 msec, τGABAAfall = 5.25 msec, τGABABrise = 40 msec, and τGABABfall = 80 msec. Parameters for GABAergic synaptic conductances are set to roughly match experimental data (Connors et al., 1988; Benardo, 1994). NMDA conductances decay as a double exponential with a fast and a slow component:

gNMDA(t) = ḡ Σj [ffast exp(−Δtj/τfastfall) + fslow exp(−Δtj/τslowfall) − exp(−Δtj/τNMDArise)]   (3)

Parameters for NMDA conductances are taken from experimental data for adult rats (Carmignoto and Vicini, 1992): τfastfall = 63 msec, τslowfall = 200 msec, ffast = 0.88, and fslow = (1 - ffast). We set τNMDArise = 5.5 msec to match the experimentally observed 10-90% rise time for NMDA receptor-mediated postsynaptic currents (Lester et al., 1990).
The sizes of synaptic conductances evoked by presynaptic action potentials are set in terms of their total conductance integrated over time in units of nS·msec. The relative strengths of AMPA and NMDA conductances are set to match those observed in thalamocortical slices (Crair and Malenka, 1995). This, along with [Mg2+] = 1.2 mM, yields the result that, at Vthresh, the time integrated conductance for NMDA is 2.57 times that of AMPA. The AMPA conductance evoked by a single excitatory presynaptic action potential is set to 2.8 nS·msec. At Vthresh, the evoked NMDA conductance is 7.2 nS·msec. The GABAA conductance resulting from a single inhibitory presynaptic action potential is set to 8 nS·msec, and the GABAB conductance to 2 nS·msec.
In addition to the above described synaptic conductances, the model includes two fluctuating background conductances, an inhibitory conductance [gI(t)] with Erev = -80 mV and an excitatory conductance [gE(t)] with Erev = 0 mV. These conductances are meant to simulate the background synaptic input received by cortical neurons in vivo. They are produced by an Ornstein-Uhlenbeck process, as described by Destexhe et al. (2001):

dg(t)/dt = −[g(t) − g0]/τ + √(2σ²/τ) χ(t)   (4)

where g0 is the mean conductance, τ is a noise time constant, σ² is the variance of the conductance, and χ(t) is a Gaussian random variable with 0 mean and a SD of 1. Parameters were chosen by beginning with the parameters used by Palmer and Miller (2002) and adjusting these to produce membrane potential fluctuations of ∼5 mV at rest and a rest potential of approximately -70 mV. This is in accordance with recordings from cortical cells in vivo (Anderson et al., 2000). For the inhibitory background conductance, g0 = 12.0 nS, τ = 34.1 msec, and σ = 4.3 nS. For the excitatory background conductance, g0 = 2.4 nS, τ = 34.1 msec, and σ = 2.4 nS. The noise time constant of 34.1 msec was chosen to cause the power spectrum of the voltage noise to match that seen in experimental data generously provided by Jeff Anderson and David Ferster (Northwestern University, Evanston, IL) (this matching was performed by S. E. Palmer in the laboratory of K.D.M.) (Palmer and Miller, 2002). The mean background conductance, combined with gleak, gives the cell a resting input resistance (R) of 41 MΩ. The capacitance (C) is set to give the cell a resting membrane time constant (τm = RC) of 20 msec.
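A background conductance obeying Equation 4 can be sampled with the exact discrete-time update of the Ornstein-Uhlenbeck process. The sketch below is illustrative only; the function name and the 0.1 msec step are our choices, and units are nS and msec as above.

```python
import numpy as np

def ou_conductance(g0, tau, sigma, dt, n_steps, rng):
    """Sample an Ornstein-Uhlenbeck conductance (Eq. 4) using the exact
    discrete-time update: the process decays toward g0 with time constant
    tau and has stationary standard deviation sigma."""
    decay = np.exp(-dt / tau)
    noise_amp = sigma * np.sqrt(1.0 - decay**2)
    g = np.empty(n_steps)
    g[0] = g0
    for i in range(1, n_steps):
        g[i] = g0 + (g[i - 1] - g0) * decay + noise_amp * rng.standard_normal()
    return g

# Background inhibitory conductance of the integrate-and-fire model
# (nS and msec; the 0.1 msec step is our choice for illustration)
rng = np.random.default_rng(0)
g_inh = ou_conductance(g0=12.0, tau=34.1, sigma=4.3, dt=0.1,
                       n_steps=200_000, rng=rng)
```

Over a long sample, the empirical mean and SD of `g_inh` approach the g0 and σ given above.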
Hodgkin-Huxley model. Simulations of a more biophysically detailed single compartment model were produced using the NEURON simulation environment (Hines and Carnevale, 1997). This model includes fluctuating background conductances produced by an Ornstein-Uhlenbeck process, voltage-dependent sodium and potassium conductances to model action potentials, and a noninactivating potassium conductance responsible for spike frequency adaptation as described by Destexhe et al. (2001) for a single compartment neuron model. Parameters for the model were taken from Destexhe et al. (2001): gleak = 0.045 mS/cm2, Eleak = -80 mV, C = 1 μF/cm2. For the inhibitory background conductance, g0 = 57.0 nS, τ = 10.5 msec, and σ = 15.84 nS. For the excitatory background conductance, g0 = 12 nS, τ = 2.7 msec, and σ = 7.2 nS. The densities of voltage-dependent sodium and potassium channels are 480 pS/μm2 and 100 pS/μm2, respectively. The density of spike adaptation potassium channels is 3 pS/μm2. The surface area of the simulated neuron is 34,636 μm2.
The model also includes a hyperpolarization-activated conductance (Ih) with kinetics modeled as described by Migliore (2003). This conductance has a reversal potential of -43 mV (Stuart and Spruston, 1998) and a density of 0.05 mS/cm2.
Results
Our results are divided into three sections. First, we present the results of a series of numerical simulations of a single model neuron of the visual cortex. The model neuron is a conductance-based integrate-and-fire neuron. Its input includes noise conductances designed to match voltage noise observed in vivo (Anderson et al., 2000), which make the RMS voltage noise ∼5 mV. In the second section, we provide a simplified, but more intuitive, analytical model of the neuron that shows the generality and robustness of the results obtained with the integrate-and-fire model. Finally, to further illustrate the robustness of the results, we show that the assumptions of the analytical model and the results of the integrate-and-fire model all hold for a more biophysically detailed Hodgkin-Huxley-type model neuron that also receives noise conductances. This model includes conductances responsible for spike generation and spike frequency adaptation, as well as a subthreshold hyperpolarization-activated conductance (Ih). We refer to the two numerical models as either the integrate-and-fire model or the Hodgkin-Huxley model.
Gain is defined here as the slope of a plot of a stimulus parameter, such as contrast or orientation, versus the response of the neuron. A pure gain change is one in which the curve of response versus stimulus parameter is multiplicatively scaled, so that the gain is scaled by a constant factor for all values of the stimulus parameter.
Contrast-response curve gain
Neurons in the visual cortex respond to stimuli of increasing contrast with an increasing firing rate. Plots of contrast versus firing rate are often well fit by a hyperbolic ratio function (Albrecht and Hamilton, 1982; Sclar et al., 1990):

R = Rmax C^n/(C^n + C50^n) + S   (5)

where C is the contrast, R is the firing rate, Rmax is the maximum stimulus-induced firing rate, n is an exponent that determines the steepness of the curve, S is the spontaneous activity, and C50 is the contrast that gives a half-maximal stimulus-induced firing rate. We refer to a plot of contrast versus firing rate as a CR curve. The gain of the neuron is the slope of the CR curve.
We assume that our model neuron receives a stimulus-driven excitatory Poisson input. The rate of this input is a hyperbolic ratio function of stimulus contrast, with Rmax = 2000 Hz, C50 = 0.133, n = 1.2, and S = 0. This is designed to model synaptic input from a previous stage of visual processing. We then study how the gain of the response of the model neuron to the stimulus is altered by additional excitation or inhibition in the form of glutamate or GABA receptor-binding drugs or direct injected current.
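The stimulus-driven input rate can be written as a one-line helper (a hypothetical function using the parameter values above):

```python
def input_rate(contrast, r_max=2000.0, c50=0.133, n=1.2, s=0.0):
    """Rate (Hz) of the stimulus-driven Poisson input as a hyperbolic
    ratio function of contrast in [0, 1] (Eq. 5 applied to the input)."""
    return r_max * contrast**n / (contrast**n + c50**n) + s
```

By construction the rate is zero at zero contrast, half-maximal at C = C50, and saturates toward 2000 Hz at full contrast.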
Iontophoresis. We modeled iontophoretic application of drugs binding to NMDA, AMPA, GABAA, or GABAB receptors by opening a constant conductance of the appropriate type and studied the effects on the CR curve of the neuron (Fig. 1A).
We first simulated the iontophoresis of NMDA onto the neuron by opening a constant NMDA conductance equivalent to 10 nS if the neuron were held at +100 mV. Because of the voltage dependence of NMDA channels, the mean iontophoretic NMDA conductance is 0.48 nS at zero contrast and increases to 1.1 nS at maximum contrast. This conductance increased the firing rate of the model neuron at all contrasts. The maximum firing rate (at C = 1.0) increased to 50 Hz from 34 Hz in the baseline curve. The firing rate at C = 0.0 increased to 0.96 Hz from 0.26 Hz. This corresponds to a 46% increase in the average slope of the CR curve. One might imagine that this slope increase is caused by the voltage-dependent increase of the NMDA conductance. However, when we simulated the iontophoresis of AMPA by opening a constant 1.0 nS AMPA conductance, we observed a similar increase in firing rates. The maximum firing rate increased to 48 Hz, and the firing rate at C = 0.0 increased to 0.81 Hz. The average slope of the CR curve increased by 40%. Because the AMPA conductance is not voltage dependent, this implies another mechanism of gain change.
We next simulated the iontophoresis of inhibitory drugs binding to GABAA or GABAB receptors. In both cases, the firing rate of the model neuron was reduced at all contrasts. A constant 2.0 nS GABAA conductance decreased the maximum firing rate from 34 to 28 Hz and the firing rate at C = 0.0 from 0.26 to 0.15 Hz. The average slope of the CR curve decreased by 17%. A constant 2.0 nS GABAB conductance had a larger effect, decreasing the maximum firing rate to 20 Hz and the firing at C = 0.0 to 0.06 Hz. The average slope decreased by 41%. The different effects of GABAA and GABAB can be attributed to their different reversal potentials.
We scaled the iontophoretic CR curves to optimally fit the baseline curve (Fig. 1B). The overlap of the scaled CR curves indicates that changes in the firing rate caused by iontophoretic conductances are very close to pure, multiplicative gain changes. Nonetheless, there are clearly systematic deviations from a purely multiplicative scaling. These deviations are made clear by analyzing the fits of the CR curves to hyperbolic ratio functions (Fig. 1A, solid lines). Fit parameters are given in Table 1. The most significant of these deviations are left and right shifts of the CR curve, indicated by changes in C50 (Eq. 5). NMDA and AMPA conductances shifted the baseline curve left, decreasing C50 by 11 and 13%, respectively. GABAA and GABAB shifted the curve right, increasing the baseline C50 by 6 and 17%, respectively. NMDA and AMPA also increased spontaneous activity (C = 0.0) somewhat, whereas GABAA and GABAB reduced it.
Injected current. In our model, iontophoresis causes both polarizing ionic current and changes in conductance. The conductance changes are small relative to the mean resting conductance of the neuron of ∼24 nS, rendering it unlikely that the conductance changes are a significant cause of the gain change. However, to directly separate the effects of injected current and conductance change, we next simulated the direct injection of current into the model neuron.
The effects of current injection on CR curves are very much like the effects of iontophoretic drugs (Fig. 2). Injection of depolarizing current has effects similar to an iontophoretic AMPA or NMDA conductance: a 50 pA current increased the maximum firing rate from 34 to 47 Hz and the firing rate at C = 0.0 from 0.26 to 0.73 Hz, increasing the average slope of the CR curve by 37%. Conversely, injection of hyperpolarizing current has effects similar to an iontophoretic GABAA or GABAB conductance: a -50 pA current decreased the maximum firing rate to 24 Hz and the firing rate at C = 0.0 to 0.09 Hz, decreasing the average slope of the CR curve by 29%.
The changes in firing rate caused by injected current are close to purely multiplicative gain changes (Fig. 2B). Deviations from purely multiplicative changes are indicated by the fit parameters in Table 2. These deviations are in the same direction and of similar magnitude to those seen above for iontophoresis. Positive current shifted the baseline CR curve left, decreasing C50 by 12%. Negative current shifted the curve right, increasing the baseline C50 by 12%. These results imply that polarizing current, and not a change in conductance, is the primary factor responsible for changing the gain of the model neuron.
Our model of the effects of injecting current can be directly compared with existing in vivo data. Sanchez-Vives et al. (2000) examined the effect on visual cortical CR curves of injecting a hyperpolarizing current into the cell (Fig. 3A). Effects in the model are strikingly similar (Fig. 3B). The simulation produces a more purely multiplicative effect than is seen in the average experimental data (Fig. 3D), but the deviations are in the same directions in simulation as in the data. In particular, the hyperpolarization induces a statistically significant (Sanchez-Vives et al., 2000) increase in C50 in the experimental data, as predicted by the model. It should be noted that the experimental data are averaged over many cells; if each cell were modulated multiplicatively but different cells had different CR curves and were modulated by different factors, then the modulation of the average would not be purely multiplicative. Thus, it is possible that individual cells show a more purely multiplicative effect than the average data. In any case, the overall resemblance suggests that our simple model gives a reasonable representation of neurons in vivo.
Tuning curve gain
A number of influences, including attention and eye position, have been shown to change the gain of tuning curves in cortex. Although the mechanisms by which these gain changes occur are not clear, we demonstrate that, in our model neuron, one synaptic input can modulate the gain of the response to another. To do so, we introduce two Poisson inputs to the neuron. The first is an excitatory driving input, the rate R of which is a Gaussian function of an arbitrary stimulus parameter θ:

R(θ) = Rmax exp(−θ²/2σ²) + S   (6)

The second is a modulatory input, either excitatory or inhibitory, the rate of which is independent of θ. For the driving input, σ = 1.0, Rmax = 2000 Hz, and S = 0 Hz.
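As for the CR curves, the driving input rate can be expressed as a small helper (hypothetical; parameter defaults are the values above):

```python
import math

def driving_rate(theta, r_max=2000.0, sigma=1.0, s=0.0):
    """Rate (Hz) of the excitatory driving input as a Gaussian function
    of the stimulus parameter theta (Eq. 6)."""
    return r_max * math.exp(-theta**2 / (2.0 * sigma**2)) + s
```

The rate peaks at 2000 Hz at θ = 0 and falls to a few tens of Hz at θ = ±3.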
We constructed tuning curves for the model neuron by plotting average firing rate versus θ, with and without modulatory inputs (Fig. 4A). The driving input alone produced a maximum firing rate at θ = 0.0 of 41 Hz and a minimum firing rate at θ = ±3 of 0.29 Hz. We then added modulatory inputs to the neuron. A 250 Hz excitatory input increased the firing rate of the neuron for all values of θ. The maximum response increased to 55 Hz, and the firing rate at θ = ±3 increased to 0.86 Hz. A 250 Hz inhibitory input decreased the firing rate of the model neuron for all values of θ. The maximum firing rate decreased to 31 Hz, and the firing rate at θ = ±3 decreased to 0.12 Hz.
We scaled the tuning curves with modulatory input to best fit the baseline tuning curve (Fig. 4B). As in Figures 1 and 2, the tuning curves with modulatory input can be scaled to nearly fit the baseline curve. Parameters for Gaussian fits of the data in Figure 4A are shown in Table 3. These fits show that, in addition to the multiplicative scaling, there are systematic changes in the width of the tuning curve caused by modulatory input. Excitatory modulation caused an 8% increase in the width of the tuning curve. Inhibitory modulation caused a 6% decrease in the width of the tuning curve.
Results obtained by injecting constant current, instead of adding a modulatory Poisson input, are very similar (data not shown). None of the modulatory inputs used in any of these simulations had an effect on the response variability of the model neuron, as measured by the coefficient of variation of interspike intervals.
Although we have chosen the parameters for the integrate-and-fire model carefully to match the experimentally measured properties of cortical neurons, our results do not depend on the detailed parameters of the simulations. For example, nothing qualitatively changes if the membrane time constant is doubled or halved (by correspondingly changing the capacitance) or if the stimulus-dependent Poisson input is replaced by an injected current (so long as the amplitude of the current remains the same nonlinear function of the stimulus parameter). The reason for this robustness is shown by the following simple analytical model, which shows the more general conditions required for these results to hold.
We express the state of the neuron in terms of the "shadow voltage," defined to be the voltage the neuron would have if it did not spike or undergo postspike voltage resets. The effect of the voltage noise in the model neuron is to make the firing rate f of the neuron depend on its mean shadow voltage V as a power law (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002) (Fig. 5):

f = kV^α   (7)

Here, α is ∼3.4, and V is measured relative to the resting potential. Furthermore, the shadow voltage is linear in the mean input, so it can be thought of as the sum of a driving input and a modulatory input:

V = d(p) + m   (8)

The modulatory input m is a constant, whereas the driving input d(p) is a function of an input parameter p. In our numerical simulations, p corresponds to either contrast or the tuning parameter θ. Likewise, m corresponds to iontophoretic application of drugs, injected current, or modulatory synaptic inputs. Using these two equations, we obtain an expression for firing rate with respect to d(p) and m:

f = k(d(p) + m)^α   (9)

This equation is already sufficient to largely explain the results of our simulations. Letting d(p) be a sigmoid (Fig. 6A) or a Gaussian (Fig. 6B), we can compare k(d(p))^α (Fig. 6, solid lines) with k(d(p) + m)^α for positive m, renormalized to best match k(d(p))^α (Fig. 6, dotted lines). Here, positive m represents an excitatory modulatory input. We find that the effect of excitatory modulation is to approximately multiply a sigmoidal CR curve, but with a slight left shift of the curve (Fig. 6A), and to approximately multiply a Gaussian tuning curve, but with a slight widening of the curve (Fig. 6B). In both cases, the result of the simple model given by Equation 9 is essentially identical to the results of the integrate-and-fire simulations.
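The near-multiplicative action of Equation 9 is easy to reproduce numerically. In this hypothetical sketch (the Gaussian drive, its 10-unit peak, and the modulation m = 1 are arbitrary choices), the modulated curve, rescaled onto the baseline as in Figures 1B, 2B, and 4B, deviates from the baseline by only a small fraction of its peak:

```python
import numpy as np

alpha, k = 3.4, 1.0
p = np.linspace(-3.0, 3.0, 201)
d = 10.0 * np.exp(-p**2 / 2.0)     # Gaussian driving input (arbitrary units)
m = 1.0                            # modulatory input, 10% of the peak drive

f_base = k * d**alpha              # Equation 9 with m = 0
f_mod = k * (d + m)**alpha         # Equation 9 with excitatory modulation

# Best multiplicative fit of the modulated curve onto the baseline
scale = (f_mod @ f_base) / (f_mod @ f_mod)
max_residual = np.max(np.abs(scale * f_mod - f_base)) / f_base.max()
```

Here the rescaled modulated curve matches the baseline to within a few percent of the peak rate, even though the modulation itself changed the peak substantially.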
To further understand why Equation 9 leads to a gain change, we define the gain as the derivative of the firing rate with respect to the input parameter. Taking the derivative of Equation 9 with respect to p, we obtain an expression for the gain:

df/dp = αk(d(p) + m)^(α−1) d′(p)   (10)

Because α > 1, the gain depends directly on the modulatory input m: excitatory modulation (m > 0) increases the slope df/dp at every value of p, and inhibitory modulation (m < 0) decreases it.
However, Chance et al. (2002) point out that a change in df/dp for a given p can arise from either a "true" gain change or a simple left or right shift of the nonlinear f versus p curve. To distinguish between these two cases, they plot df/dp versus f. Using Equation 9, we can rewrite Equation 10 as:

df/dp = αk^(1/α) f^((α−1)/α) d′(p)   (11)

When d(p) is linear in p, d′(p) is a constant. Therefore, df/dp as a function of f has no dependence on m, and changes in modulatory inputs (m) do not affect the gain. This can be seen more directly from Equation 9: if d(p) = ap + b, then f = k(ap + b + m)^α, so that m simply left or right shifts the curve of f versus p without changing its shape. In the study by Chance et al. (2002), the driving input is an injected current and d(p) is a linear function of p. In our results, and in many biologically relevant situations, d(p) is not linear. In this case, d′(p) in Equation 11 is not constant but depends on p; when reexpressed in terms of f, there will also be a dependence on m. That is, the value of df/dp as a function of f depends on m. Thus, changes in modulatory inputs do not simply shift the f versus p curve; they change its gain, its slope for a given value of f.
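The contrast between linear and nonlinear d(p) can be verified numerically. In this hypothetical sketch (both drive functions and the modulation strength 0.5 are arbitrary choices), the slope df/dp at matched f is essentially unchanged by m when d(p) is linear, but changes markedly when d(p) saturates:

```python
import numpy as np

alpha, k = 3.4, 1.0
p = np.linspace(0.1, 2.0, 400)

def slope_vs_f(d, m):
    """Sample (f, df/dp) along p for f = k*(d + m)**alpha (Eq. 9)."""
    f = k * (d + m)**alpha
    return f[:-1], np.diff(f) / np.diff(p)

def gain_mismatch(d, m):
    """Largest difference in df/dp at matched f between the m = 0 and
    modulated curves, relative to the peak baseline slope."""
    f0, s0 = slope_vs_f(d, 0.0)
    f1, s1 = slope_vs_f(d, m)
    mask = (f0 >= f1[0]) & (f0 <= f1[-1])   # compare only where f overlaps
    s1_at_f0 = np.interp(f0[mask], f1, s1)
    return np.max(np.abs(s1_at_f0 - s0[mask])) / s0.max()

d_linear = 2.0 * p                  # linear drive: modulation only shifts f(p)
d_saturating = 4.0 * p / (p + 1.0)  # saturating drive: modulation changes gain

lin = gain_mismatch(d_linear, 0.5)
nonlin = gain_mismatch(d_saturating, 0.5)
```

For the linear drive the mismatch is at the level of numerical error, whereas for the saturating drive it is an order of magnitude larger, consistent with Equation 11.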
The gain changes observed in our numerical simulations are very nearly multiplicative; the slope at each point is changed by roughly the same factor. The above arguments explain why there should be a gain change but not why it should be nearly multiplicative. To make this clearer, it is useful to look at the expansion of Equation 9 to first order in m/d(p):

f = k(d(p) + m)^α ≈ k d(p)^α + αkm d(p)^(α−1)   (12)

With no modulatory input (m = 0), the firing rate is simply k d(p)^α. If a nonzero modulatory input had a purely multiplicative effect on firing rate, we would expect it to add to the firing rate an amount proportional to d(p)^α. Equation 12 shows that a nonzero m actually adds an amount proportional to d(p)^(α−1). Thus, the multiplicative effects of modulatory inputs in our model depend on d(p)^(α−1) being similar in shape to d(p)^α, which in turn depends on α being substantially larger than 1. Of course, d(p)^α and d(p)^(α−1) cannot be perfectly identical in shape unless d(p) is a constant, so small discrepancies from a perfectly multiplicative scaling are predicted; similar discrepancies are seen in some experimental results, as addressed in Discussion.
Of the inputs represented in our simulations, the relationship between d(p)^α and d(p)^(α−1) is most easily seen when d(p) is a Gaussian function. In this case, d(p)^α ∝ exp(−αθ²/2σ²) and d(p)^(α−1) ∝ exp(−(α−1)θ²/2σ²). So, when m = 0, the firing rate of the neuron is a Gaussian, which is narrower than the Gaussian input d(p) by a factor of √α. A positive m adds an amount proportional to a slightly wider Gaussian (narrower than the Gaussian input by a factor of √(α−1)). This produces a new tuning curve that is multiplicatively scaled and wider than the curve with m = 0 by an amount that depends on m and α. The width should increase by an amount no less than zero and no more than a factor of √(α/(α−1)). For α = 3.4, the width should increase no more than 19%, which is consistent with results from the integrate-and-fire simulation, in which modulatory excitation increased the width of the tuning curve by 8%, and with the slight widening observed in Figure 6B.
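These width factors can be checked numerically (a hypothetical sketch; the unit-width Gaussian drive is arbitrary):

```python
import numpy as np

alpha = 3.4
theta = np.linspace(-5.0, 5.0, 2001)
d = np.exp(-theta**2 / 2.0)        # unit-width Gaussian drive

def profile_width(y, x):
    """SD of a curve treated as an unnormalized Gaussian profile."""
    w = y / y.sum()
    return np.sqrt((w * x**2).sum())

w_alpha = profile_width(d**alpha, theta)            # ~ 1/sqrt(alpha)
w_alpha_m1 = profile_width(d**(alpha - 1), theta)   # ~ 1/sqrt(alpha - 1)
max_widening = w_alpha_m1 / w_alpha                 # ~ sqrt(alpha/(alpha - 1))
```

For α = 3.4, `max_widening` comes out near 1.19, the 19% upper bound on widening cited above.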
The relationship between d(p)^α and d(p)^(α−1) is less clear when d(p) is a hyperbolic ratio function. However, it is easy to show that if d(C) = C^n/(C^n + C50^n), then [d(C)]^α reaches its half-maximum at C = C50(2^(1/α) − 1)^(−1/n), which increases with increasing α. That is, d(p)^(α−1) should reach its half-maximum at a slightly lower value of C than d(p)^α, so that adding d(p)^(α−1) to d(p)^α should cause a slight left shift of the sigmoid curve. This is the effect of excitatory modulation seen in the simulations and in Figure 6A.
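The half-maximum expression can be captured in a one-line helper (hypothetical; the defaults are the input parameters used in our CR simulations):

```python
def half_max_contrast(alpha, c50=0.133, n=1.2):
    """Contrast at which [d(C)]**alpha reaches half its maximum, for
    d(C) = C**n / (C**n + c50**n): solving d(C)**alpha = 1/2 for C gives
    C = c50 * (2**(1/alpha) - 1)**(-1/n)."""
    return c50 * (2.0**(1.0 / alpha) - 1.0)**(-1.0 / n)
```

At α = 1 this reduces to C50 itself, and the half-maximum contrast grows monotonically with α, so d^(α−1) indeed reaches half-maximum at a lower contrast than d^α.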
The approximation in Equation 12 is justified in cases when m/d(p) is small. Clearly, this is not always the case in our simulations; for instance, d(p) goes to zero when the contrast is zero. In this case, the value of the function goes to km^α, and the relative error of the approximation is very large. However, as long as m is sufficiently small, the absolute error is small and the approximation is useful. The usefulness of this approximation in describing our numerical results is shown in Figure 7, which plots the right side of Equation 12 using values for d(p) and m taken from our integrate-and-fire simulations. d(p) is simply the mean shadow voltage (without modulatory inputs) of the model neuron in the simulated CR and tuning curves; m is the mean difference between the shadow voltage of the neuron with a modulatory input and the baseline. The firing rates reconstituted using Equation 12 (Fig. 7, solid lines) correspond well with the actual firing rates in the integrate-and-fire simulations (Fig. 7, symbols), indicating that the approximation is a reasonable one for describing these simulations.
The analytical model assumes that inputs cause additive changes in the shadow voltage. This is true for input currents. However, input conductances need not translate additively into input currents. To completely describe the effects of an input conductance, one has to consider reversal potential effects, by which the current flowing through a conductance depends not only on the size of the conductance but also on the driving force. As such, the analytical model does not completely account for the effects of conductances, particularly shunting inhibitory conductances with reversal potentials close to rest. Nonetheless, we have found that shunting (GABAA) conductances behave similarly to injected currents in our numerical model, causing multiplicative changes in the curve of stimulus parameter versus firing rate. This is probably because, over the range of shadow voltages for which the firing rate of the neuron is significantly different from zero, the changes in the driving force are relatively small. Similar arguments lead to the result that shunting conductances, like injected currents, have an additive effect on the curve of input current versus firing rate (Holt and Koch, 1997; Chance et al., 2002).
We are arguing that the addition of two inputs, followed by raising to a power, gives an approximate multiplication. If the input-output function were an exponential (Gabbiani et al., 2002) rather than a power law, this relationship would be exact: e^(a+b) = e^a e^b. This raises the question of whether a better analytic approximation to our results might be given by an exponential rather than a power law. However, an exponential input-output relationship for the stimulus-induced firing rate must have the form f = k(e^V − 1) so that f = 0 when V = 0 (where 0 represents rest). We tried fitting a function of this form to the input-output relationship of the neuron, but the fit is visibly worse than that shown in Figure 5 for a power law. Furthermore, the fits to the simulation data using this equation (equivalents of Figs. 6 and 7) are quite poor, particularly for excitatory modulatory input. We conclude that the power law gives the better description of our simulations.
The success of this simple analytical model in describing the more complex biophysical model used in simulations demonstrates the robustness of our results. The analytical model shows that achievement of multiplicative gain modulation depends on only two features of the biophysical model: the shadow voltage should be a roughly linear function of the mean input, and the output rate should be a power law of the shadow voltage with an exponent significantly larger than 1. These are both attributes of a wide variety of biophysical models with a wide variety of parameters. In particular, the robustness with which noise induces a power law in a series of models has been demonstrated previously (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002), and we have also verified this: the power α = 3.39 found in our integrate-and-fire model becomes α = 3.33 if only AMPA and no NMDA is used for excitatory currents; α = 4.06 if the time constant is doubled; and α = 3.16 if the time constant is halved (the time constant was manipulated by changing the capacitance), with excellent fits of a power law in all cases. The change in α with a change in time constant is as expected theoretically: it has been shown that α should primarily depend on the distance from rest to threshold in units of the SD of the noise (Miller and Troyer, 2002); doubling the time constant decreases the noise and hence increases this distance, which increases α, whereas halving the time constant increases the noise and hence decreases this distance, which decreases α.
Do these two features hold for real cortical neurons? The assumption that the voltage of a neuron is approximately a linear sum of its inputs is often made, and, indeed, it is under this assumption that a mechanism of gain modulation by inhibition or excitation alone has proven elusive; if inputs can multiplicatively influence one another, multiplicative gain modulations are likely to be easier to attain. Nonetheless, our model provides such a mechanism under the assumption of linear input summation. Much evidence exists that summation in cortical or hippocampal pyramidal neurons can be linear (Jagadeesh et al., 1993, 1997; Cash and Yuste, 1998, 1999). However, pyramidal neurons contain voltage-dependent conductances that can affect the summation of inputs (for review, see Reyes, 2001) and can cause inputs to summate in a nonlinear manner (Schwindt and Crill, 1998; Nettleton and Spain, 2000; Wei et al., 2001). In addition, dendritic integration can be nonlinear, although nonlinear conductances can correct this and linearize dendritic integration (Bernander et al., 1994; Cash and Yuste, 1998, 1999). A recent modeling study suggests that integration of multiple inputs on a single thin apical dendrite may be nonlinear but that integration between dendrites is remarkably linear (Poirazi et al., 2003a,b).
The second key assumption, of a power-law relationship between voltage and firing rate, seems likely to hold in many cortical neurons. This is a general outcome of the presence of neural noise in a variety of neural models (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002), and many studies indicate the presence of substantial voltage noise in cortex (Arieli et al., 1996; Paré et al., 1998; Azouz and Gray, 1999; Tsodyks et al., 1999; Anderson et al., 2000; Hô and Destexhe, 2000). More specific evidence is provided by the finding by Anderson et al. (2000) that voltage noise can transform contrast-invariant voltage tuning into contrast-invariant spiking tuning in visual cortical neurons. As shown by Miller and Troyer (2002), a power-law transformation from voltage to spiking rate is the only such transformation that can achieve this, thus indicating that such a transformation is found in visual cortical cells.
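The way voltage noise converts a hard threshold into an approximate power law can be sketched with a rectified (threshold-linear) caricature of a neuron; the threshold and noise SD below are illustrative values, not fits to any recorded cell:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rectified ("threshold-linear") caricature: the instantaneous rate is
# max(V + noise - theta, 0). Threshold and noise SD are illustrative.
theta, sigma = 10.0, 3.0                  # mV
noise = sigma * rng.standard_normal(200_000)
Vmean = np.linspace(2.0, 12.0, 30)        # mean voltages spanning threshold

# Monte Carlo estimate of the trial-averaged rate at each mean voltage.
rate = np.array([np.maximum(V + noise - theta, 0.0).mean() for V in Vmean])

# Log-log regression: the slope is the effective power-law exponent,
# substantially greater than 1, as the gain-modulation mechanism requires.
alpha, _ = np.polyfit(np.log(Vmean), np.log(rate), 1)
print(f"effective exponent alpha = {alpha:.2f}")
```

The trial-averaged rate of this unit is smooth through threshold and well described by an expansive power law of the mean voltage, the behavior shown analytically by Miller and Troyer (2002).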
The above arguments suggest that the two assumptions of our model may hold in many cortical cells. To provide further evidence that our proposed mechanism of gain modulation can apply to real cortical neurons, we now present numerical simulations of a model that includes some of the more detailed biophysical properties of cortical pyramidal neurons.
The Hodgkin-Huxley neuron model includes three potentially important conductances not present in our integrate-and-fire simulations: voltage-activated spiking conductances, a spike frequency adaptation conductance, and a hyperpolarization-activated mixed cation conductance (Ih). To demonstrate that the mechanism of gain change we are proposing is valid in this neuron model, we first show that the mean voltage of the neuron is approximately a linear function of the input it receives and that the firing rate of the neuron is related to the mean voltage by a power law. We then generate CR and Gaussian tuning curves for the model neuron with and without modulatory inputs. These curves are generated in the same way as the curves in the integrate-and-fire simulations presented above, but for simplicity the synaptic inputs (both modulatory and driving inputs) have been replaced with constant excitatory (Erev = 0 mV) or inhibitory (Erev = -80 mV) conductances. Replacing these inputs with injected currents yields very similar results (data not shown).
A power-law relationship between mean voltage and firing rate has been reported previously for a Hodgkin-Huxley-type model neuron with a noisy membrane potential (Hansel and van Vreeswijk, 2002). We confirm that this relationship is also present in the model neuron studied here (Fig. 8B). The mean voltage of the model neuron is related to the output firing rate by a power law with α = 2.98 and k = 0.024. In addition, we find that the relationship between excitatory input conductance and mean voltage is roughly linear over this range of output firing rates (Fig. 8A), although some deviation from linearity is seen as voltage approaches threshold.
We construct a CR curve by introducing a stimulus-driven excitatory conductance, the magnitude of which is a hyperbolic ratio function of contrast with parameters Rmax = 20 nS, C50 = 0.133, n = 1.2, and S = 0 (see Eq. 5). The modulatory inputs do not vary with contrast and are either a 3.5 nS excitatory conductance or an 8.5 nS inhibitory conductance. Modulatory inputs have an effect on the CR curve very similar to that in the integrate-and-fire model (Fig. 9, compare Fig. 2). A 3.5 nS excitatory conductance increased the maximum firing rate from 36 to 49 Hz and the firing rate at C = 0.0 from 0.7 to 2.3 Hz. The average slope of the CR curve increased by 32%. An 8.5 nS inhibitory conductance decreased the maximum firing rate to 25 Hz and the firing rate at C = 0.0 to 0.18 Hz. The average slope of the CR curve decreased by 30%.
The changes in firing rate caused by the modulatory inputs are close to purely multiplicative gain changes (Fig. 9B). Deviations from purely multiplicative changes are indicated by the fit parameters in Table 4. Excitatory modulatory conductance shifted the baseline CR curve left, decreasing C50 by 19%. Inhibitory modulatory conductance shifted the curve right, increasing the baseline C50 by 23%.
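A minimal numerical sketch of this behavior, assuming a linear mapping from input to shadow voltage (in arbitrary units) and an illustrative constant modulatory offset of ±2 in place of the simulated 3.5 and 8.5 nS conductances:

```python
import numpy as np

# Hyperbolic ratio drive (Eq. 5) with the parameters quoted in the text;
# the input -> shadow-voltage mapping is taken as linear, arbitrary units.
Rmax, C50, n = 20.0, 0.133, 1.2
C = np.linspace(0.0, 1.0, 501)
drive = Rmax * C**n / (C**n + C50**n)

alpha, k = 2.98, 0.024                   # power law fitted in the text

def rate(V):
    # expansive power-law input-output function, rectified at rest
    return k * np.maximum(V, 0.0) ** alpha

base = rate(drive)
excited = rate(drive + 2.0)              # illustrative constant excitation
inhibited = rate(drive - 2.0)            # illustrative constant inhibition

def c50_of(curve):
    # contrast at which the curve reaches half of its own maximum
    return C[np.argmin(np.abs(curve - curve.max() / 2.0))]

print(c50_of(excited), c50_of(base), c50_of(inhibited))
```

The ordering of the three C50 values reproduces the leftward shift with excitation and rightward shift with inhibition, alongside the roughly multiplicative change in amplitude.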
We construct a tuning curve by introducing a stimulus-driven excitatory conductance, the magnitude of which is a Gaussian function of a stimulus parameter θ with parameters Rmax = 17.5 nS, σ = 1.0, and S = 0 (see Eq. 6). The modulatory inputs are the same as for the CR curve: a 3.5 nS excitatory conductance or an 8.5 nS inhibitory conductance.
Modulatory inputs have an effect on the tuning curve very similar to that in the integrate-and-fire model (Fig. 10, compare Fig. 4). The driving input alone produced a maximum firing rate at θ = 0.0 of 33 Hz and a minimum firing rate at θ = ±2.5 of 0.7 Hz. We then added modulatory inputs to the neuron. The excitatory modulatory input increased the firing rate of the neuron for all values of θ. The maximum response increased to 46 Hz, and the firing rate at θ = ±2.5 increased to 2.3 Hz. The inhibitory modulatory input decreased the firing rate of the model neuron for all values of θ. The maximum firing rate decreased to 22 Hz, and the firing rate at θ = ±2.5 decreased to 0.18 Hz.
The changes in firing rate caused by modulatory inputs are close to purely multiplicative gain changes (Fig. 10B). Deviations from purely multiplicative changes are indicated by the fit parameters in Table 5. An excitatory modulatory conductance caused a 12% increase in tuning curve width, whereas an inhibitory modulatory conductance caused an 11% decrease in width.
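A minimal sketch of these width changes, assuming a linear mapping from input to shadow voltage (arbitrary units) and illustrative modulatory offsets of ±2 rather than the simulated conductances:

```python
import numpy as np

# Gaussian drive (Eq. 6) with the parameters quoted in the text; the
# input -> shadow-voltage mapping is taken as linear, arbitrary units,
# and the modulatory offsets (+/-2) are illustrative.
theta = np.linspace(-2.5, 2.5, 2001)
drive = 17.5 * np.exp(-theta**2 / 2.0)   # Rmax = 17.5, sigma = 1, S = 0

alpha, k = 2.98, 0.024                   # power law fitted in the text

def rate(V):
    return k * np.maximum(V, 0.0) ** alpha

base = rate(drive)
excited = rate(drive + 2.0)
inhibited = rate(drive - 2.0)

def half_width(curve):
    # full width at half of the curve's own maximum
    above = theta[curve >= curve.max() / 2.0]
    return above[-1] - above[0]

print(half_width(excited), half_width(base), half_width(inhibited))
```

The excitatory offset widens the tuning curve and the inhibitory offset narrows it, matching the direction of the deviations reported in Table 5.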
For both the CR curve and the tuning curve, deviations from purely multiplicative changes are in the same direction as in the integrate-and-fire model and are similar in magnitude (although somewhat larger), which suggests that our proposed mechanism of gain change also operates in this more detailed biophysical model.
The results demonstrate that changes in excitatory or inhibitory inputs alone can approximately multiplicatively change the gain of a neuron's response to a stimulus-dependent input. These gain changes observed in our model are primarily dependent on two things: an expansive nonlinearity (a power law with exponent substantially larger than one) relating the average membrane potential and the firing rate of the neuron, and an appropriately nonlinear dependence (e.g., sigmoidal or Gaussian) of the stimulus-dependent input rate on the corresponding stimulus parameter. Power-law input-output functions are likely to be ubiquitous in cortex because of voltage noise (Arieli et al., 1996; Paré et al., 1998; Azouz and Gray, 1999; Tsodyks et al., 1999; Anderson et al., 2000; Hô and Destexhe, 2000; Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002), and cortical firing rates commonly have a sigmoidal or approximately Gaussian dependence on stimulus parameters. Hence, multiplicative gain changes should be common in cortex, and convergent inputs onto a neuron should multiplicatively modulate one another's gain, so long as one input (the “modulatory” input) is small relative to the peak input evoked by the other. This seems to be a natural result of the properties of cortical neurons and the input that they receive and could help explain the ubiquity of such gain changes observed experimentally (Andersen et al., 1985; Galletti and Battaglini, 1989; Boussaoud et al., 1993; Bremmer et al., 1997a; McAdams and Maunsell, 1999a,b; Treue and Martinez-Trujillo, 1999; Trotter and Celebrini, 1999; Salinas and Thier, 2000).
The most obvious prediction of our model is that at least some of the multiplicative gain changes observed in cortex will be found to arise from purely excitatory or purely inhibitory modulation (or more generally, from unbalanced modulatory inputs yielding a net excitation or inhibition). In addition, our work suggests a number of clues that would be consistent with a mechanism involving net excitation or inhibition.
Net excitatory or inhibitory modulatory inputs should cause small systematic deviations from a purely multiplicative gain change: gain increases should lead to decreases in C50 and increases in tuning width, whereas gain decreases should yield opposite changes. However, the predicted deviations are small (∼10-20%) and may not be observable in practice. Even so, this is a fundamental prediction of our model and may become more experimentally accessible in the future. Furthermore, the size of the deviations should be proportional to the size of the gain change, so larger changes in gain should give larger deviations from multiplication. This has two important implications. First, the deviations may become observable with sufficiently large gain changes. Second, modulation by unbalanced inputs should yield a correlation between the change in the amplitude of a tuning curve and the change in its width (for Gaussian tuning curves) or its C50 (for sigmoidal tuning curves).
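This proportionality can be checked in a toy cascade in which a Gaussian drive is offset by a modulatory input m and then raised to a power; α = 3 and all magnitudes below are illustrative:

```python
import numpy as np

# Toy cascade: Gaussian drive, additive modulatory offset m, then an
# expansive power law. Alpha and all magnitudes are illustrative.
alpha = 3.0
theta = np.linspace(-2.5, 2.5, 4001)
drive = 17.5 * np.exp(-theta**2 / 2.0)

def half_width(curve):
    # full width at half of the curve's own maximum
    above = theta[curve >= curve.max() / 2.0]
    return above[-1] - above[0]

w0 = half_width(drive**alpha)            # unmodulated width
widths = []
for m in (0.5, 1.0, 2.0):
    w = half_width((drive + m) ** alpha)
    gain = ((17.5 + m) / 17.5) ** alpha  # peak-response gain
    widths.append(w)
    print(f"m={m}: peak gain x{gain:.2f}, width change {100 * (w / w0 - 1):+.1f}%")
```

Both the peak gain and the fractional widening grow monotonically with m, so larger gain changes carry larger deviations from pure multiplication.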
We also predict that multiplicative gain changes in vivo induced by unbalanced inputs should be accompanied by an upward or downward shift in mean voltage, but little change in total conductance or response variability. Such gain changes should occur in neurons that have substantial voltage noise and, thus, have power-law input-output functions. If the gain change is caused by a tonic excitation or inhibition (rather than a modulatory excitation or inhibition that only accompanies stimulus-induced input), then spontaneous activity levels should be modulated to the same degree as stimulus-induced responses.
Previous theory and experiment
We predict that tuning curves in cortex measured during intracellular current injection will exhibit multiplicative gain changes similar to those in our simulations. A gain decrease in the CR functions of neurons in primary visual cortex has been observed after injection of hyperpolarizing current (Fig. 3) (Sanchez-Vives et al., 2000). Accompanying this gain decrease was a statistically significant increase in the parameter C50 describing the curve (Eq. 5), as predicted by our model.
Fox et al. (1990) examined the effects of externally applied glutamate receptor-binding drugs on the responses of cortical neurons to visual stimulation. They found that NMDA increased the gain of the CR curve of the neuron, whereas quisqualate shifted the curve upward. Our model suggests that simple depolarization induced by NMDA application, rather than nonlinearities in NMDA-induced responses, caused the gain increase. The seemingly straightforward effects of quisqualate are more difficult to explain. They may have been confounded by the fact that quisqualate has a number of effects besides activating non-NMDA ionotropic glutamate receptors, including binding to metabotropic glutamate receptors (Pin and Duvoisin, 1995; Chu and Hablitz, 2000) and glutamate transporters (Chase et al., 2001). We predict that a selective agonist of AMPA receptors, such as AMPA, would also cause a gain change.
A model proposed by Fox and Daw (1992) to account for their experimental results assumes that quisqualate acts exclusively at ionotropic non-NMDA receptors and that the firing rate of the neuron is linearly related to its membrane potential. In this case, the quisqualate-induced shifts are easily explained. However, given a more realistic power-law relationship between membrane potential and firing rate, it is difficult to account for the shifts induced by quisqualate based on its ionotropic action alone. To explain the effects of NMDA, they assumed cooperativity in binding between externally applied NMDA and synaptic glutamate released during visual stimulation, which requires that NMDA and glutamate bind to a common set of receptors. This was not always the case in their experiments; some cells exhibited gain increases during NMDA application, but no significant gain change during application of APV (an antagonist of NMDA receptors), implying that both bound to nonvisual NMDA receptors. In light of this, depolarization seems a more plausible explanation for NMDA-induced gain changes.
Our model complements the recent model of Chance et al. (2002). They showed that a balanced change in inhibitory and excitatory inputs could cause a multiplicative gain change as assessed by a change in a curve of firing rate versus current of a neuron. We show that, when one instead considers the curve of firing rate versus stimulus parameter, simple excitation alone or inhibition alone is sufficient to produce a gain change. This gain change is primarily dependent on the hyperpolarization or depolarization induced by the modulatory input, which causes only small changes in the conductance of the cell. In contrast, the gain changes described by Chance et al. (2002) require a relatively large change in both current noise and total conductance. A balanced change in excitation and inhibition together, as described by Chance et al. (2002), also produces a gain change in our model (data not shown). We conclude that a wide range of modulatory inputs, balanced or unbalanced, should induce a multiplicative gain change.
In cortical areas in which eye position modulates neural responses to visual stimuli, some visually driven neurons can also be driven directly by eye position alone (Boussaoud et al., 1993; Squatrito and Maioli, 1996, 1997; Bremmer et al., 1997a,b). Our model explains both the modulatory and driving effects of eye position on these neurons as resulting from a single excitatory input. This requires that direct responses to eye position be small relative to visual responses in such neurons, which seems consistent with experiments (Bremmer et al., 1997a,b; Squatrito and Maioli, 1997). This suggests some advantage of our proposal over that of Chance et al. (2002), which seems to require two different types of eye-position-evoked inputs for such neurons: a set of balanced inhibitory and excitatory inputs that modulates the gain of visual responses and another excitatory input that drives direct responses to eye position.
Changes in attention have been shown to multiplicatively scale the orientation and direction tuning curves of cortical neurons (McAdams and Maunsell, 1999a,b; Treue and Martinez-Trujillo, 1999). One interpretation of these experimental results is that the neurons are responding with increased gain to the same visually induced input, as in our model. However, more recent results (Reynolds et al., 2000; Martinez-Trujillo and Treue, 2002) show that attention can cause a shift, rather than a gain change, in the CR function of these neurons. This suggests that the effects of attention may represent an increase in the effective contrast of the stimulus and not a change in the response gain of the neuron.
Treue and Martinez-Trujillo (1999) observed a slight (8%), but not statistically significant, widening of MT direction tuning curves during feature-based attention. This widening, if substantiated by more data, would be in accordance with our model, which predicts a widening of tuning curves with increasing gain (Fig. 4). Changes in attention do not seem to cause changes in the width of orientation tuning curves in area V4 (McAdams and Maunsell, 1999a). Furthermore, although attention multiplicatively scaled tuning curves in V4 (including responses at nonpreferred orientations), it did not systematically affect spontaneous activity in the absence of visual stimulation. A net excitatory modulatory input, if tonically active, would have scaled spontaneous activity in the same manner as stimulus-evoked activity. Thus, if one models the effects of attention as a gain modulation rather than an increase in effective stimulus contrast, the model of Chance et al. (2002) may better explain the results from area V4. However, the widening of tuning curves in MT with attention is more in accordance with unbalanced or purely excitatory modulation; it would be interesting to determine whether spontaneous activity is affected by attention in MT and to determine conclusively whether attention affects the width of direction tuning curves in this area.
Our proposal is similar in spirit to that of Gabbiani et al. (2002), who suggested that an insect neuron does multiplication by subtracting one input from another at the level of voltages and having an exponential input-output relationship (although they found that a power law fit their input-output relationship better than an exponential). However, it differs in being based on the properties of cortical neurons.
Since this work was completed, two papers have appeared that address the role of shunting inhibition in gain modulation. Both highlight the importance of voltage noise. Prescott and De Koninck (2003) showed in a modeling study that dendritic saturation of the excitatory input, along with voltage noise, could cause shunting inhibition to divisively alter firing rate. The effect of saturation seems similar to that of a nonlinear, sigmoidal relationship between stimulus parameter and input rate in our model. Mitchell and Silver (2003) studied a cerebellar granule cell that received relatively few excitatory synaptic inputs, all of which had large unitary conductances. As a result, an increase in input rate caused a significant increase in voltage noise. This, in turn, caused shunting inhibition to have a partially divisive effect on the curve of input rate versus output rate. If voltage noise does not increase significantly with input rate, as in the present and most previous studies of gain modulation, then shunting inhibition causes a subtractive shift in this curve (Chance et al., 2002).
Implications for neuronal computation
The question of whether a single neuron can biophysically multiply its inputs has long been of interest to those concerned with the computational capabilities of single neurons (Torre and Poggio, 1978; Mel, 1993; Koch, 1998; Gabbiani et al., 2002). We are proposing that a cortical neuron that adds its inputs at the level of voltages, but raises this net input to a power significantly greater than one to produce an output, can effectively compute a multiplication of the inputs [or, more strictly, of functions of the inputs: the output R is given by R ≈ f(i1)g(i2), where i1 and i2 are the inputs and f and g are some functions]. Furthermore, if the input voltages are nonlinear functions of a stimulus parameter, then this multiplication will not produce a mere left or right shift of the curve of output versus parameter. Multiplication computed in this manner is only approximate. The approximation is accurate, although small systematic differences remain, when the modulatory input is substantially smaller than the driving input over the range of the tuning curve in which the driving input produces substantial responses.
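This approximate factorization can be tested numerically: if R = f(i1)g(i2) held exactly, the matrix of responses over a grid of input pairs would have rank 1, so the fraction of variance captured by the leading singular component measures how close the power-law cascade comes to exact multiplication. The exponent and input ranges below are illustrative, with the driving input kept dominant:

```python
import numpy as np

# Response to summed inputs raised to a power; alpha and input ranges
# are illustrative, with the driving input dominating the modulatory one.
alpha = 3.0
d = np.linspace(5.0, 20.0, 40)          # driving inputs
m = np.linspace(0.0, 2.0, 20)           # small modulatory inputs

R = (d[:, None] + m[None, :]) ** alpha  # response matrix over input pairs

# Exact multiplication f(d) g(m) would make R rank 1; measure closeness
# as the fraction of squared singular values in the leading component.
s = np.linalg.svd(R, compute_uv=False)
separability = s[0] ** 2 / np.sum(s ** 2)
print(f"rank-1 fraction: {separability:.4f}")
```

The rank-1 fraction is very close to 1, confirming that addition followed by an expansive power law behaves almost exactly like a product of separate functions of the two inputs when the modulatory input is small.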
Given a few basic assumptions about the common properties of cortical neurons, specifically the nonlinear ways that their input firing rates depend on the properties of a stimulus and their output firing rates depend on their input, we have shown that it should be expected that a smaller input will multiplicatively modulate the gain of the response to a larger input. No special mechanisms are required to account for these multiplicative interactions. Although other mechanisms may also play a role in experimentally observed gain changes, we are proposing that multiplicative gain changes are a normal property of the cortex, the natural outcome of these simple attributes of cortical neurons.
This work was supported by National Institutes of Health Grant R01-EY11001 to K.D.M. and by a predoctoral fellowship to B.K.M. from a Burroughs-Wellcome Institutional Award at the Scientific Interface to the Program in Quantitative Biology at University of California, San Francisco.
Correspondence should be addressed to Kenneth Miller, Department of Physiology, University of California, San Francisco, 513 Parnassus, San Francisco, CA 94143-0444. E-mail:.
Copyright © 2003 Society for Neuroscience 0270-6474/03/2310040-12$15.00/0