Abstract
Contrast adaptation is a psychophysical phenomenon, the neuronal bases of which reside largely in the primary visual cortex. The cellular mechanisms of contrast adaptation were investigated in the cat primary visual cortex in vivo through intracellular recording and current injections. Visual cortex cells, and to a much lesser extent, dorsal lateral geniculate nucleus (dLGN) neurons, exhibited a reduction in firing rate during prolonged presentations of a high-contrast visual stimulus, a process we termed high-contrast adaptation. In a majority of cortical and dLGN cells, the period of adaptation to high contrast was followed by a prolonged (5–80 sec) period of reduced responsiveness to a low-contrast stimulus (postadaptation suppression), an effect that was associated, and positively correlated, with a hyperpolarization of the membrane potential and an increase in apparent membrane conductance. In simple cells, the period of postadaptation suppression was not consistently associated with a decrease in the grating-modulated component of the evoked synaptic barrages (the F1 component).
The generation of the hyperpolarization appears to be at least partially intrinsic to the recorded cells, because the induction of neuronal activity with the intracellular injection of current resulted in both a hyperpolarization of the membrane potential and a decrease in the spike response to either current injections or visual stimuli. Conversely, high-contrast visual stimulation could suppress the response to low-intensity sinusoidal current injection.
We conclude that control of the membrane potential by intrinsic neuronal mechanisms contributes importantly to the adaptation of neuronal responsiveness to varying levels of contrast. This feedback mechanism, internal to cortical neurons, provides them with the ability to continually adjust their responsiveness as a function of their history of synaptic and action potential activity.
Contrast adaptation was first described in psychophysical studies in which exposure to a high-contrast grating led to aftereffects consisting of a decrease of contrast sensitivity (Blakemore and Campbell, 1969; Dealy and Tolhurst, 1974; Swift and Smith, 1982; Georgeson and Harris, 1984; Berkley, 1990; Määttänen and Koenderink, 1991; Hammett et al., 1994) and of a decrease of the perceived contrast compared to preadaptation (Blakemore et al., 1973; Georgeson, 1985; Ross and Speed, 1996; Snowden and Hammett, 1996), requiring tens of seconds to recover (Blakemore and Campbell, 1969; Blakemore et al., 1973; Lorenceau, 1987; Ho and Berkley, 1988). The psychophysical correlate of high-contrast adaptation itself consists in a perceived fading of the contrast (Blakemore et al., 1973; Hammett et al., 1994).
Neuronal correlates of these phenomena occur in the primary visual cortex. The response of neurons to the prolonged presentation of a high-contrast stimulus progressively decreases (adapts) with a time constant of seconds (Maffei et al., 1973; Vautin and Berkley, 1977; Albrecht et al., 1984; Ohzawa et al., 1985; Marlin et al., 1988). The aftereffects of contrast adaptation consist in a decreased spontaneous activity level (Vautin and Berkley, 1977) and in a reduced response to low-contrast stimuli compared to the preadaptation level (Maffei et al., 1973; Movshon and Lennie, 1979; Dean, 1983; Albrecht et al., 1984; Ohzawa et al., 1985; Saul and Cynader, 1989a; Sclar et al., 1989; Allison et al., 1993).
Psychophysical studies have shown that contrast threshold changes after adaptation are greatest for test stimuli having an orientation and a spatial frequency close to that of the adapting stimulus (Blakemore and Campbell, 1969; Blakemore and Nachmias, 1971; Dealy and Tolhurst, 1974; Swift and Smith, 1982; Georgeson and Harris, 1984; Berkley, 1990; Määttänen and Koenderink, 1991; Ross and Speed, 1996; Snowden and Hammett, 1996). In addition, contrast adaptation shows an interocular transfer (Blakemore and Campbell, 1969; Bjorklund and Magnussen, 1981). Because neurons in the lateral geniculate nucleus (LGN) and retina are monocular, poorly tuned to orientation, and broadly tuned to spatial frequency, this has led to the notion that contrast adaptation is largely a cortical phenomenon. Electrophysiological studies further showed that, whereas adaptation and postadaptation changes are pronounced in primary visual cortex, at best moderate changes take place in the retina and lateral geniculate nucleus (Maffei et al., 1973; Ohzawa et al., 1985; Saul and Cynader, 1989a; Bonds, 1991; Mukherjee and Kaplan, 1995; Shou et al., 1996; Smirnakis et al., 1997).
Several mechanisms have been proposed to explain contrast adaptation, such as “fatigue” of cortical cells after intense firing (Swift and Smith, 1982; Georgeson and Harris, 1984), prolonged inhibition (Dealy and Tolhurst, 1974; Ohzawa et al., 1985), synaptic facilitation on inhibitory neurons (Wilson and Humanski, 1993), synaptic depression on excitatory neurons (Finlayson and Cynader, 1995; Chance et al., 1998; Adorján et al., 1999), and network interactions (Vidyasagar, 1990; Ahmed et al., 1997). Recently, Carandini and Ferster (1997) demonstrated that contrast adaptation is associated with a hyperpolarization of the membrane potential in cat area 17 neurons and suggested that this could reflect a decrease in tonic synaptic excitation by a mechanism of synaptic depression. In the present study we show that adaptation to high contrast leads to a hyperpolarization of the membrane potential that is largely an intrinsic cell property and that it contributes to the postadaptation suppression of activity. In the companion paper (Sanchez-Vives et al., 2000), we demonstrate that the long-lasting activation of area 17 neurons in vitro, mimicking contrast adaptation, results in a prolonged hyperpolarization through the activation of Ca2+- and Na+-dependent K+ conductances.
Part of these results has been presented in abstract form (Sanchez-Vives et al., 1997).
MATERIALS AND METHODS
Cat preparation. Adult cats (2.5–3.5 kg) were anesthetized with ketamine (12–15 mg/kg, i.m.) and xylazine (1 mg/kg, i.m.). Atropine (0.05 mg/kg, s.c.) was given to reduce secretions. A forelimb vein was cannulated for intravenous perfusion, a tracheal tube was inserted for active ventilation, and wires were placed through the skin for EKG recording. The cat was then mounted in a stereotaxic frame and ventilated with either a 2:1 mixture of nitrous oxide and oxygen with halothane (1.5%) or with oxygen and isoflurane (2.5%). Silver wires were inserted above the frontal cortex for epidural recording of the EEG. To minimize pulsation arising from the heartbeat and respiration, a cisternal drainage and a bilateral pneumothorax were performed, and the animal was suspended by the rib cage to the stereotaxic frame. A craniotomy (3–4 mm wide) was made overlying the representation of the area centralis of area 17. In some experiments, another craniotomy was made at Horsley–Clarke (H–C) coordinates 5.5 mm anterior and 9 mm lateral to access the dLGN.
After surgery, the animals were paralyzed with pancuronium bromide (Pavulon; 3 mg/kg for induction followed by a constant intravenous perfusion at 3 mg · kg−1 · hr−1 in a Ringer's solution containing 5% dextrose). The nictitating membranes were retracted using ophthalmic phenylephrine, and the pupils were dilated and accommodation paralyzed with ophthalmic atropine. The area centralis and optic disks were projected onto a screen at a distance of 114 cm from the eyes, and the eyes were focused using corrective, gas-permeable contact lenses.
During recording, anesthesia was maintained with 0.4–1% halothane or with 0.5–2% isoflurane. The heart rate, expiratory CO2 concentration, rectal temperature, and blood O2 concentration were monitored throughout the experiment and maintained at 150–180 bpm, 3–4%, 37–38°C, and >95%, respectively. The EEG and the absence of reaction to noxious stimuli were regularly checked. After the recording session, the animal was given a lethal injection of sodium pentobarbital. This protocol was approved by the Yale University Institutional Animal Care and Use Committee and conforms to the guidelines recommended in Preparation and Maintenance of Higher Mammals During Neuroscience Experiments, National Institutes of Health publication No. 91–3207.
Recording and electrophysiological signal acquisition. Extracellular and intracellular recordings were performed in area 17 within an area 10° wide centered on the area centralis. Extracellular recordings were also performed in the dLGN within the same visual field position as the cortical recordings (Sanderson, 1971). Tungsten-in-glass microelectrodes (Merrill and Ainsworth, 1972) were used for extracellular recording of single units in the dLGN and area 17. For intracortical recordings, a small opening was made in the dura, and a microelectrode was positioned just above the cortical surface. Stability was achieved by application of agar (4% in artificial CSF) to the cortical surface before penetrating the cortex.
Intracellular recordings were obtained using conventional “sharp” electrodes, pulled on a P-80 micropipette puller (Sutter Instruments, Novato, CA) from medium-walled glass capillaries (1BF100; World Precision Instruments, Sarasota, FL), filled with 2 M K+ acetate and 2% biocytin, and beveled to a final resistance of 50–100 MΩ on a Sutter Instruments beveler. Intracellular recordings were included if they showed stable membrane potentials more negative than −55 mV at rest and an input resistance >20 MΩ.
Intracellular signals were amplified with an Axoclamp-2B amplifier (Axon Instruments, Foster City, CA) and recorded on tape and acquired, without filtering, on-line and off-line with a 1401 interface and Spike2 software (Cambridge Electronic Design, Cambridge, UK), with digitization rates of between 200 and 50,000 Hz. The timing of action potentials was collected at 10 μsec resolution.
Protocols of intracellular current injection. The electrophysiological properties of each stable cortical neuron were examined with the intracellular injection of current pulses and classified as fast-spiking, regular-spiking, intrinsic bursting, or chattering (McCormick et al., 1985; Gray and McCormick, 1996; Azouz et al., 1997).
Sinusoidal current injections (2 Hz) were used to characterize the long-lasting adaptation and postadaptation effects in 34 cortical cells and consisted of low-intensity (±0.15–0.5 nA, preadaptation period) current injection, adjusted to elicit a reliable low-frequency firing, followed by 20 sec of high intensity (± 0.5–1.2 nA; adaptation period) and back to the initial low intensity for at least 30 sec (postadaptation period). We termed these protocols “sine-sine-sine”. In a variant of this protocol, the low-intensity sinusoidal injection was replaced by 200–300 msec hyperpolarizing square pulses at 0.5–1 Hz. This protocol (pulse-sine-pulse) allowed us to quantify changes in input resistance occurring after the high-intensity sinusoidal current injection. The current injection protocols were repeated two to six times for subsequent averaging.
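For illustration of the structure of this protocol, the following sketch generates a sine-sine-sine command waveform; the specific amplitudes, durations, and sampling rate are arbitrary choices within the ranges stated above, not the values used in any particular experiment.

```python
import numpy as np

def sine_sine_sine(low_amp_nA=0.3, high_amp_nA=0.8, freq_hz=2.0,
                   pre_s=30.0, adapt_s=20.0, post_s=30.0, fs=10_000):
    """Build a sine-sine-sine current command (in nA).

    Low-amplitude 2 Hz preadaptation period, 20 sec high-amplitude
    adaptation period, then low amplitude again (postadaptation).
    """
    make = lambda dur, amp: amp * np.sin(2 * np.pi * freq_hz * np.arange(0, dur, 1 / fs))
    return np.concatenate([make(pre_s, low_amp_nA),
                           make(adapt_s, high_amp_nA),
                           make(post_s, low_amp_nA)])

command = sine_sine_sine()  # ~80 sec of 2 Hz sinusoidal current
```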
Visual stimulation. The receptive field location, as well as the length and velocity preferences, was first determined with a handheld projector. Subsequently, visual stimuli were generated and presented through a VSG-Series 3 computer system (Cambridge Research Systems, Cambridge, UK) on a 19 inch color monitor (80 Hz noninterlaced refresh; 1024 × 768 resolution). The preferred orientation and spatial frequency were determined from peristimulus time histograms (PSTHs) calculated on line.
The response to the best spatial frequency was used off-line to classify cells as simple or complex. For this purpose, a PSTH of the spike response, triggered on each cycle of the grating and with a width equal to the period of the drift, was Fourier-analyzed (after subtraction of the mean spontaneous activity level). The F0 (mean response, or DC component) and F1 (first harmonic of the response, which corresponds to the modulation of the response at the frequency of the grating drift) components were extracted. The ratio of F1/F0, or “relative modulation index” (Skottun et al., 1991) was used to classify cells as simple or complex. The distribution of the relative modulation indices was clearly bimodal, with a gap at 0.7. Based on this distribution, we considered cells as simple when the relative modulation index was >0.7 and complex when it was <0.7.
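For illustration, the F0 and F1 components and the relative modulation index can be computed from a cycle-triggered PSTH roughly as in the following sketch. The 16-bin histogram, the subtraction of the spontaneous rate, and the 0.7 criterion follow the text; the factor of 2 used to convert the Fourier coefficient of the first harmonic into a modulation amplitude is an assumed convention.

```python
import numpy as np

def relative_modulation_index(psth, spont_rate):
    """F1/F0 ratio from a cycle-triggered PSTH spanning one drift cycle."""
    r = np.asarray(psth, dtype=float) - spont_rate   # remove spontaneous level
    spectrum = np.fft.rfft(r) / r.size
    f0 = spectrum[0].real            # mean (DC) response
    f1 = 2.0 * np.abs(spectrum[1])   # first-harmonic (modulated) amplitude
    return f1 / f0

# Classification rule from the text: simple if F1/F0 > 0.7, complex otherwise.
example_psth = [2, 5, 14, 30, 42, 38, 25, 12, 6, 3, 2, 1, 1, 2, 2, 2]  # 16 bins
cell_type = "simple" if relative_modulation_index(example_psth, 1.0) > 0.7 else "complex"
```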
For studying contrast adaptation, the stimulus consisted of a sinusoidal drifting grating with the preferred orientation and spatial frequency and a drift velocity of 1.56 or 3.12 cycles/sec. It was presented in a circular patch of 3–10° diameter, centered on the receptive field. Outside the patch the monitor display was a homogeneous gray with a luminance equal to the mean luminance of the grating. The contrast adaptation protocol was preceded by a 2 min period during which the cell was adapted to the low-contrast stimulus. The adaptation protocol consisted of presenting the grating at a low contrast [Michelson contrast, C(%) = 100 × (Lmax − Lmin)/(Lmax + Lmin)] of 5–20% for 30 sec (preadaptation period), then at high contrast (30–80%) for 30 or 60 sec (adaptation period), then at low contrast anew for 60 or 120 sec (postadaptation period). The whole cycle (preadaptation, adaptation, postadaptation) was repeated 4–10 times in most cases.
The contrast adaptation protocol used in this study consisted of a presentation of only two different contrasts for an extended period of time. This differs from protocols used in several studies (Movshon and Lennie, 1979; Ohzawa et al., 1985; Bonds, 1991; Carandini and Ferster, 1997; Ahmed et al., 1997) in which several test contrasts were presented for short durations to determine changes in the contrast–response function resulting from contrast adaptation. Although our protocol did not enable us to study adaptation-dependent changes of the contrast–response function, it allowed us to study the time course of the changes, both during and after high-contrast adaptation, and allowed the comparison of our results to psychophysical studies on changes of contrast sensitivity. Finally, the contrast adaptation protocol we used could be mimicked with sinusoidal current injections both in vivo and in vitro, which enabled us to study the potential role of membrane conductances in contrast adaptation.
In some experiments (“hybrid protocols”) either the high contrast was replaced by high-intensity sinusoidal current injection, or the low contrast by low-intensity sinusoidal current injection. To determine the effects of a tonic hyperpolarization on the contrast–response function, this function was determined after movement of the membrane potential to each of several different levels with the intracellular injection of DC (see Fig. 12). A hysteresis protocol was used (Bonds, 1991), consisting in the presentation of nine different contrasts in ascending then descending order. Only responses to the ascending series of contrasts are presented in this paper. Each contrast was presented for 1.5 sec. Increments constituted a geometric series (increment by √2). The lowest contrast was set either at 2.5%, yielding a highest contrast of 40%, or at 5%, yielding a highest contrast of 80%. Contrast ramps were separated from each other by a 10 sec period during which the contrast remained at 0% to allow measurements of spontaneous activity as well as recovery from adaptation. For each membrane potential, 5–20 ramps were presented, and the results were averaged together.
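The ascending contrast series of the hysteresis protocol can be reconstructed from the description above; a minimal sketch:

```python
import numpy as np

def contrast_ramp(lowest_contrast=2.5, n_steps=9):
    """Ascending contrast series in sqrt(2) increments (a geometric series).

    With lowest_contrast = 2.5 the series ends at 40%; with 5 it ends at 80%,
    matching the two ramp ranges described in the text.
    """
    return lowest_contrast * np.sqrt(2.0) ** np.arange(n_steps)

print(np.round(contrast_ramp(2.5), 1))  # 2.5, 3.5, 5, 7.1, 10, 14.1, 20, 28.3, 40
print(np.round(contrast_ramp(5.0), 1))  # 5, 7.1, 10, 14.1, 20, 28.3, 40, 56.6, 80
```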
Data analysis. Similar analysis was used for data from contrast adaptation and from the sinusoidal current injection protocols.
Spike responses. The presentation of the high-contrast grating, or of the high-intensity current, was expected to lead to a reduction of the firing rate during the postadaptation period with respect to the preadaptation value. The first step consisted of determining the significance of the changes induced by the high-contrast or high-intensity stimulus. For this purpose we calculated a PSTH with a bin width of 5 sec (more rarely 2.5 sec), in which the spike count was not normalized (Fig. 1, insets). From the histogram, the mean spike count per bin (m) for the preadaptation period (6 bins, 30 sec) was calculated and used to calculate the lower 95% confidence limit, using the formula: lower 95% limit = m − (2.58 × √m) (Abeles, 1982; this formula is not valid for m < 30). A second lower 95% confidence limit was also calculated for the period corresponding to the end of the postadaptation period (last 6 bins, last 30 sec of the postadaptation period). This proved necessary in some cases in which the activity underwent slow changes. Postadaptation suppression was considered significant when at least one bin was below the 95% confidence limit (Fig. 1). When the mean for the preadaptation period and the mean for the end of the postadaptation period were different, we always used the least favorable 95% confidence limit (Fig. 2A3).
The duration of the postadaptation suppression was taken as the time between the end of the high-contrast stimulus and the middle of the first of two adjacent bins that crossed back above the 95% confidence limit (Figs. 1, 2A3).
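A minimal sketch of this significance and duration measurement is given below. The bin layout (six 5 sec preadaptation bins followed by the adaptation and postadaptation bins) and the use of a single confidence limit computed from the preadaptation bins are simplifying assumptions; the 2.58 factor is taken directly from the text.

```python
import numpy as np

def postadaptation_suppression(bin_counts, bin_s=5.0, pre_bins=6, adapt_bins=6):
    """Detect postadaptation suppression and estimate its duration.

    bin_counts: raw (non-normalized) spike counts per bin over one averaged
    preadaptation / adaptation / postadaptation run.
    """
    counts = np.asarray(bin_counts, dtype=float)
    m = counts[:pre_bins].mean()
    lower_limit = m - 2.58 * np.sqrt(m)          # valid only for m >= 30

    post = counts[pre_bins + adapt_bins:]
    suppressed = post < lower_limit
    significant = bool(suppressed.any())

    # Duration: from the end of the high-contrast stimulus to the middle of
    # the first of two adjacent postadaptation bins back above the limit.
    duration_s = None
    if significant:
        above = ~suppressed
        for i in range(len(above) - 1):
            if above[i] and above[i + 1]:
                duration_s = (i + 0.5) * bin_s
                break
    return significant, duration_s
```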
For determining the time course of the response changes during the high-contrast visual stimulation or during the high-intensity current stimulation, a PSTH was calculated using a 1 or 0.5 sec bin width (Fig. 2A1). The response was fitted using one or two exponential curves. The time constants of the exponentials were used as the measure of the firing rate decay time course.
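For illustration, a single-exponential fit of the firing-rate decay can be obtained with a standard least-squares routine, as sketched below; the initial guesses are arbitrary assumptions, and a sum of two exponentials can be fitted analogously.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, amp, tau, offset):
    """Single exponential decay used to describe the firing-rate time course."""
    return amp * np.exp(-t / tau) + offset

def fit_adaptation_time_constant(t, rate):
    """Fit the firing rate during high-contrast (or high-intensity) stimulation.

    t: bin centres (sec) relative to the onset of the high-contrast stimulus;
    rate: firing rate per bin. Returns the decay time constant tau (sec).
    """
    t, rate = np.asarray(t, float), np.asarray(rate, float)
    p0 = [rate[0] - rate[-1], 5.0, rate[-1]]   # illustrative initial guesses
    popt, _ = curve_fit(single_exp, t, rate, p0=p0, maxfev=10_000)
    return popt[1]  # tau
```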
Both the F0 (mean firing rate) and F1 components were measured as a function of time within the adaptation protocols. This was achieved by constructing PSTHs (16 bins) of the spiking response elicited by each cycle of the drifting grating (or of the sinusoidal current) with the same ordinal position across the several repeats of the adaptation runs. Hence, for an adaptation protocol repeated five times, the responses for the five drift cycles with the same time of occurrence in each run were averaged together. For contrast adaptation protocols, these PSTHs have a width corresponding to the period of the drifting grating (0.64 or 0.32 sec). For the sinusoidal current injection protocol, the width corresponds to a period of 0.5 sec. For a contrast adaptation run of 210 sec duration, this resulted in the calculation of 656 or 328 PSTHs, depending on the drift velocity. These PSTHs were Fourier-analyzed, and the F0 and F1 were extracted. The result of this analysis consisted of series of F0 values (for all cells) and F1 values (LGN and simple cells) as a function of time within the adaptation or current injection protocols. The height of each of the bins of the PSTHs presented in Figure 1, for example, corresponds to the F0 value for the drift cycle that occurred at that time. When hyperpolarizing current pulses were injected to measure the input resistance (see Fig. 7), the PSTHs were not Fourier-analyzed. Instead, the mean firing rate was derived from the spiking activity for the period outside of the pulse itself.
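The cycle-by-cycle analysis can be summarized as in the sketch below; it assumes the cycle-triggered histograms have already been binned into an array of shape (repeats, cycles, bins), and the scaling of F1 is again an assumed convention.

```python
import numpy as np

def f0_f1_time_series(cycle_counts):
    """F0 and F1 as a function of time within the adaptation protocol.

    cycle_counts: array of shape (n_repeats, n_cycles, n_bins) holding the
    16-bin histogram of every drift cycle in every repeat of the run.
    Cycles with the same ordinal position are averaged across repeats and
    each averaged histogram is Fourier-analysed, as described in the text.
    """
    mean_cycles = np.asarray(cycle_counts, dtype=float).mean(axis=0)   # (n_cycles, n_bins)
    spectra = np.fft.rfft(mean_cycles, axis=1) / mean_cycles.shape[1]
    f0 = spectra[:, 0].real            # one mean-rate value per drift cycle
    f1 = 2.0 * np.abs(spectra[:, 1])   # modulated component per drift cycle
    return f0, f1
```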
The strength of adaptation during high-contrast visual stimulation or high-intensity current injection was calculated as follows (Fig. 2A1): the F0 values of the first five cycles of the high-contrast (high-intensity) stimulus were averaged (F0beghigh), as well as those for the last five cycles of the high contrast (high intensity) (F0endhigh). The “adaptation ratio” was expressed as a percentage of the firing at the end of the high contrast with respect to the beginning (100 × F0endhigh/F0beghigh). The same calculation was made for the F1 component in dLGN and simple cells. The strength of the postadaptation reduction of firing rate was determined as a “postadaptation ratio” (%), 100 × F0post/F0pre, where F0pre represents the mean of the F0 values for the 30 sec of preadaptation, and F0post represents the mean of the first five F0 values of the postadaptation period (Fig. 2A2). The calculation was also done for the F1 component in dLGN and simple cells (100 × F1post/F1pre). See also legend of Figure 2.
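A minimal sketch of these two ratios, computed from the F0 time series described above; the numbers of preadaptation and adaptation cycles depend on the protocol and are passed in as parameters.

```python
import numpy as np

def adaptation_and_postadaptation_ratios(f0, pre_cycles, high_cycles):
    """Adaptation ratio and postadaptation ratio (%) from an F0 time series.

    f0: one F0 value per drift cycle of an averaged run, ordered as
    preadaptation | adaptation | postadaptation.
    """
    f0 = np.asarray(f0, dtype=float)
    pre = f0[:pre_cycles]
    high = f0[pre_cycles:pre_cycles + high_cycles]
    post = f0[pre_cycles + high_cycles:]

    adaptation_ratio = 100.0 * high[-5:].mean() / high[:5].mean()
    postadaptation_ratio = 100.0 * post[:5].mean() / pre.mean()
    return adaptation_ratio, postadaptation_ratio
```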
Measurements for contrast–response functions (see Fig. 12) were constructed from the F0 (complex cells) or F1 component (simple cells) obtained after fast Fourier transform of the cycle-triggered PSTH calculated for each contrast value. The contrast–response functions for the ascending series of contrasts were fitted with a modified Hill equation (Sclar et al., 1989) of the form r = Rmax × C^s/(C^s + C50^s) + M, where Rmax is the maximal response, s the slope (exponent) coefficient, C50 the contrast that gives 50% of the maximal spike response, and M a constant term corresponding to the offset introduced by the spontaneous activity.
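The fit of the contrast–response function can be implemented with a standard nonlinear least-squares routine; in the sketch below the initial parameter guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, r_max, s, c50, m):
    """Modified Hill equation: r = Rmax * C^s / (C^s + C50^s) + M."""
    return r_max * c**s / (c**s + c50**s) + m

def fit_contrast_response(contrast_pct, response):
    """Fit a contrast-response function (contrast in %, response in spikes/sec)."""
    contrast_pct = np.asarray(contrast_pct, float)
    response = np.asarray(response, float)
    p0 = [response.max(), 2.0, 15.0, response.min()]   # illustrative guesses
    popt, _ = curve_fit(hill, contrast_pct, response, p0=p0, maxfev=10_000)
    r_max, s, c50, m = popt
    return r_max, s, c50, m
```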
Measurements for intracellular signals. Voltage signals from intracellular recording performed in vivo show large fluctuations resulting from ongoing spontaneous activity, necessitating the use of averaging and statistical tests. For both contrast adaptation and sinusoidal current injection data, the response elicited by each cycle of the drifting grating or of the sinusoidal current with the same ordinal position was averaged across the several repeats of the whole cycle. These averages were Fourier-analyzed, and the F0 (average membrane potential) and F1 component (in simple cells) were extracted. The result of this analysis consists of series of F0 (see Figs. 4, 5C) or F1 values (Fig. 5D) as a function of time within the adaptation or current injection protocols.
A similar procedure was followed for input resistance measurements: all the pulses with the same ordinal position within the different repeats of the adaptation protocol were averaged together; the mean membrane potential outside the current pulse and the mean over a 100–200 msec period within the plateau of the negative pulse response were then extracted, and these two values were used to calculate the input resistance. Because in these cases the F0 component could not be calculated using the Fourier method, the mean membrane potential corresponds to the values calculated outside the current pulse. The result then consisted in a series of resistance and mean membrane potential measurements as a function of time within the adaptation protocol (see Fig. 7D–F).
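This input resistance calculation reduces to a ratio of the voltage deflection to the injected current; a minimal sketch, in which the window lengths are illustrative defaults rather than the exact values used:

```python
import numpy as np

def input_resistance(vm, t, pulse_start_s, pulse_end_s, pulse_nA,
                     plateau_s=0.15, baseline_s=0.15):
    """Apparent input resistance from an averaged hyperpolarizing pulse.

    vm: averaged membrane potential (mV); t: time (sec). The baseline is
    taken just before the pulse and the plateau over the last 100-200 msec
    of the pulse, as described in the text. Returns Rn in MOhm (mV / nA).
    """
    vm, t = np.asarray(vm, float), np.asarray(t, float)
    baseline = vm[(t >= pulse_start_s - baseline_s) & (t < pulse_start_s)].mean()
    plateau = vm[(t >= pulse_end_s - plateau_s) & (t < pulse_end_s)].mean()
    return (baseline - plateau) / abs(pulse_nA)
```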
The distribution of F0, F1, and Rn approximated a normal distribution. This allowed the use of parametric tests to determine the significance of the changes in the postadaptation period. This was done by running a t test comparing the first five values of F0, F1, or of Rn immediately after the end of the high visual contrast (or high current intensity) with all of the values of the control period (30 sec). Similarly, the significance of the changes during the high-contrast adaptation was determined by comparing the first five and the last five F0, F1, or Rn values.
The amplitude of the changes after adaptation was determined in a way similar to that used for the firing rates. The preadaptation value corresponds to the average of the F0, F1, or Rn values for the 30 sec of the preadaptation period, and the values for the beginning of the postadaptation correspond to the average of the F0, F1, or Rn values obtained for the first five cycles of the low-contrast (or low-intensity) stimulus that immediately follows the high-contrast (or high-intensity) stimulus. The amplitude of the postadaptation changes was then expressed as the postadaptation value minus the preadaptation value. This yields the amplitude of the hyperpolarization for the F0 subtraction, or the reduction of the modulated response component for the F1 subtraction. The Rn changes were expressed as percentages (100 × Rnpost/Rnpre). Changes in membrane potential during high contrast were measured as the F0 (or F1) of the last five cycles of the high-contrast (high-intensity) stimulus minus the F0 (or F1) obtained for the first five cycles of high contrast (high intensity).
The time series of membrane potential parameters remained very noisy despite the averaging procedure. For the measurement of the duration of the postadaptation changes, the time series were smoothed (15 or 29 point running average). The time course of the postadaptation changes only rarely displayed a clearly exponential or linear shape. This made the use of fitting procedures difficult. Therefore, the duration of the postadaptation changes simply corresponds to the time at which the smoothed version of the time series crosses the mean control value. Note that the measurement of significance and amplitude of changes were made on nonsmoothed data.
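A sketch of this duration measurement follows; the handling of the edges of the running average and the assumption that the series recovers upward toward the preadaptation mean (as for a hyperpolarization) are implementation choices made for illustration.

```python
import numpy as np

def postadaptation_duration(values, times, pre_mean, post_start_s, window=15):
    """Duration of a postadaptation change in a membrane-potential time series.

    values/times: per-cycle F0 (or F1, or Rn) values and their times within
    the averaged run. The series is smoothed with a running average (15 or
    29 points in the text) and the duration is the time at which the
    smoothed series first returns to the preadaptation mean after the end
    of the high-contrast stimulus.
    """
    values, times = np.asarray(values, float), np.asarray(times, float)
    smoothed = np.convolve(values, np.ones(window) / window, mode="same")

    after = times >= post_start_s
    recovered = np.where(after & (smoothed >= pre_mean))[0]
    if recovered.size == 0:
        return None                      # no recovery within the run
    return times[recovered[0]] - post_start_s
```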
Population data are given as the mean ± SD. The median is given additionally for data presenting skewed distributions.
RESULTS
The results presented here are based on extracellular recordings from 12 dLGN cells and 6 cortical neurons and intracellular recordings from 81 cortical neurons. For a representative sample of 60 cortical cells, the input resistance was 43.9 ± 17.6 MΩ, and the time constant 9.3 ± 4.2 msec. The cortical neurons were characterized as regular-spiking (n = 49), fast-spiking neurons (n = 3), chattering cells (n = 13), and intrinsic bursting neurons (n = 4). Twelve cells either could not be characterized or these data were not available.
In our investigations of contrast adaptation, we used several different protocols. Initially, we examined the events associated with contrast adaptation using a protocol consisting of the presentation of a drifting sinewave grating at the orientation and spatial frequency optimal for the cell under study, first at low contrast for 30 sec, then at high contrast for 30–60 sec, then at low contrast anew for 60 or 120 sec (repeated 4–10 times for averaging). The low contrast (5–20%) was chosen to generate action potential activity that was consistent and large enough to enable detection of changes, and it evoked a mean firing rate of 10.7 ± 5.8 Hz in LGN cells and 9.4 ± 5.1 Hz and 8.7 ± 5.0 Hz in simple and complex cells, respectively. The high contrast (30–80%) was chosen to evoke a strong visual response, yielding a mean firing rate of 24.7 ± 11.2 Hz in LGN cells, 49.4 ± 35.3 Hz in simple cells, and 44.1 ± 25.8 Hz in complex cells at the beginning of the high-contrast stimulation period. Both dLGN (n = 12; all recorded extracellularly) and cortical neurons (n = 39; 6 extracellularly and 33 intracellularly recorded; complex n = 20; simple n = 19) were studied with this contrast adaptation protocol.
The presentation of a high-contrast visual stimulus resulted in an increase of activity, which was followed by a subsequent decay in the firing rate in all but three cells (1 dLGN, 1 simple, 1 complex) (Fig. 1). This decay of the discharge rate during high contrast will be referred to as “adaptation” throughout this paper.
After the presentation of the high-contrast grating, the response to the low-contrast stimulus was, in many cases, statistically significantly lower than what it was before (Fig. 1A–C, insets). Cells exhibiting this postadaptation suppression will be referred to as “postadapting cells”. Adaptation during high-contrast stimulation was observed both in postadapting and nonpostadapting cells.
Adaptation of firing during high-contrast stimulation
Comparing the three cell types (simple, complex, and LGN) revealed that the adaptation strength—quantified as shown in Figure 2A1—was markedly different between them (Fig. 2B; Mann–Whitney U test; p < 0.02 in all cases). The least adaptation occurred in LGN cells (mean adaptation ratio 89.4 ± 12.7%), whereas simple cells adapted to 49.9 ± 23.1% of the initial firing rate, and complex cells showed the strongest adaptation (33.7 ± 22.7%).
The time course of the firing rate decay during high-contrast stimulation was quantified by fitting a single or, in some cases, a double exponential (Fig. 2A1). Eight cells could not be adequately fitted. With single exponential fitting, the mean time constant was 23.1 ± 18.7 sec for dLGN cells (n = 10), 4.4 ± 3.0 sec for simple cells (n = 14), and 4.6 ± 3.1 sec for complex cells (n = 14; Fig. 2C). The adaptation time constant was significantly shorter in simple and complex cells compared to dLGN cells (Mann–Whitney U test; p = 0.003 for the two comparisons). Hence, LGN and cortical cells showed not only a difference in terms of adaptation strength, but also in terms of adaptation time course.
Postadaptation suppression of firing rate
The presentation of the high-contrast visual stimulus resulted in a significant postadaptation reduction in firing rate in 67% of the dLGN neurons, 74% of the simple cells, and 40% of the complex cells. The high incidence of postadapting cells in the dLGN suggests that contrast adaptation is not solely a cortical phenomenon (Mukherjee and Kaplan, 1995; Shou et al., 1996; Smirnakis et al., 1997). The incidence of postadapting cells was not significantly different between the different cell categories (χ2 test, p = 0.27 for dLGN vs complex cells, 0.06 for simple vs complex cells, 0.7 for dLGN vs simple cells).
Examining the distributions of postadaptation ratios—calculated as illustrated in Figure 2A2—for LGN, simple, and complex cells with significant postadaptation firing rate reduction revealed that the decrease in firing was markedly stronger for cortical cells (Fig. 2D). The mean postadaptation ratio for LGN cells was 63.4 ± 15.0% (n = 8) of the control firing rate, whereas simple cells showed a significantly stronger reduction to 22.7 ± 15.8% of preadaptation values (n = 14; p = 0.0003; Mann–Whitney U test). Complex cells showed a reduction to 33.0 ± 30.1% (n = 8; median 21.5%), and this value is not significantly different from the reduction observed in simple cells (p = 0.7). Changes in the component of action potential discharge that was modulated at the temporal frequency of the drifting grating (the F1 component) were strongly correlated with changes in the average firing rate (F0 component) in dLGN and simple cells (r = 0.91 and slope = 1.06 for dLGN cells; r = 0.92 and slope = 0.96 for simple cells; data not shown).
The average duration of the postadaptation suppression (Fig. 2E) was 21.3 ± 10.0 sec for dLGN cells, 19.8 ± 19.9 sec for simple cells (median 12 sec), and 18.5 ± 8.1 sec for complex cells. There was no significant difference between cell types (Mann–Whitney U test; p > 0.25 for all cases). Taken together, these results indicate that a large part of high-contrast adaptation and postadaptation suppression is indeed genuinely cortical.
Membrane potential changes with high-contrast stimulation
Intracellular recordings allowed us to study the membrane potential changes underlying changes in spike firing in 31 cells. These 31 cells were distributed as 15 simple cells (11 postadapting, 4 with no significant postadaptation) and 16 complex cells (8 postadapting, 8 with no significant postadaptation).
Intracellular recordings often revealed a progressive hyperpolarization during the high-contrast stimulus, and this hyperpolarization persisted as a prolonged afterhyperpolarization after the cessation of the high-contrast visual stimulus (Fig. 3).
Averages of the membrane potential (Fig. 4Ab,Bb) revealed that the high-contrast stimulation induced a depolarization whose amplitude decreased during adaptation by −2.4 ± 1.7 mV in simple cells and −5.6 ± 3.8 mV in complex cells (which is significantly larger than in simple cells; p = 0.007, Mann–Whitney U test). This decrement in depolarization was significantly correlated with a decrease in firing rate (Fig. 4Ca; Spearman rank correlation, ρ = 0.77, p < 0.0001).
Membrane potential changes after high-contrast stimulation
In some cells, the high-contrast period was followed by a very obvious hyperpolarization (Figs. 3A,B, 4A), during which the membrane potential could be subthreshold for the generation of action potentials (Fig. 3). Although large hyperpolarizations did not occur in every cell (Fig. 4C), in all cases with a significant reduction in the postadaptation firing rate, the high-contrast period was followed by a hyperpolarization with respect to the preadaptation value (Fig. 4Cb). The average amplitude of the afterhyperpolarization (AHP) for the first five cycles of drifting grating after the high contrast (1.6 or 3.2 sec; see Materials and Methods) varied between −11.9 and −0.8 mV (Fig. 4Ab). The average AHP for simple cells was −3.2 ± 3.1 mV (range, −11.9 to −0.8 mV; n = 11 of 15), whereas the average AHP in complex cells was −2.8 ± 2.0 mV (range, −6.8 to −1.3 mV; n = 8 of 16). Note that these values are less than the peak amplitude of the AHP, because they are averages over the first seconds of this hyperpolarization (see Materials and Methods). The duration of the hyperpolarization varied between 5 and 28 sec (simple cells, 12.9 ± 6.5 sec; complex cells, 16.0 ± 6.4 sec; Fig. 4Cc).
For the postadaptation period we also found a significant correlation between firing rate reduction and membrane potential hyperpolarization (Fig. 4Cb; for the whole population: Spearman rank correlation, ρ = 0.80, p < 0.0001; for postadapting cells only: ρ = 0.53, p = 0.02). Furthermore, the duration for the postadaptation hyperpolarization significantly correlates with the duration of the spike response reduction (Fig. 4Cc; ρ = 0.64;p = 0.001).
These results indicate that adaptation during high contrast is paralleled by a progressive repolarization of the membrane potential and that postadaptation suppression is associated with a hyperpolarization of the membrane potential that lasts for >10 sec on average. These data confirm and extend the results obtained by Carandini and Ferster (1997).
Subthreshold adaptation
We examined whether or not action potentials are required to generate the postadaptation hyperpolarization that follows the presentation of the high-contrast grating, and more generally, whether firing of action potentials is required to induce contrast adaptation (Vidyasagar, 1990). For that purpose we ran the contrast adaptation protocol while maintaining the cells (n = 10) sufficiently hyperpolarized with DC injection to prevent action potential generation.
Although action potentials were not generated, a significant hyperpolarization during the postadaptation period still occurred (Fig. 4Ac). Comparing the response of neurons that were tested in both the suprathreshold and subthreshold conditions for action potential generation revealed that most (four of five cells) hyperpolarized in both conditions (Fig. 4Ab,Ac; median subthreshold, −2.8 mV, median suprathreshold, −2.7 mV). One cell did not hyperpolarize under either condition (Fig. 4Bb,Bc).
Changes in the modulated component (F1) with high-contrast adaptation
To assess a possible decrease in synaptic drive arriving in cortical neurons during and after contrast adaptation, we examined the first harmonic (F1) of the membrane potential modulation in simple cells during visual stimulation with sinusoidal drifting gratings. This modulated synaptic response reflects the modulated activity of dLGN thalamocortical neurons (Ferster et al., 1996) and perhaps cortical neurons, including simple cells (Ahmed et al., 1994).
The amplitude of the modulated synaptic component (F1) was augmented with increases in the contrast of the visual stimulus, as expected (Fig. 5A,D). In a subset of cells (5 of 15), the amplitude of this modulated component decreased during the period of high-contrast adaptation (Fig. 5A,D), and in a small number of cases (3 of 15), the F1 component was slightly smaller during the period of postadaptation suppression (Fig. 5A,D).
Even in cells that showed a decrease in the F1 component, this decrease was significantly smaller than the hyperpolarization of the membrane potential. For example, in the cell illustrated in Figure 5, the F1 component was reduced by 1.2 mV during the high-contrast adaptation period and by 0.6 mV during the postadaptation period, whereas the average membrane potential (F0 component) hyperpolarized by a peak of 14 mV (Fig. 5C). This hyperpolarization lasted longer than the reduction in F1 component (Fig. 5C,D).
Of the fourteen simple cells studied with suprathreshold protocols, only five showed a significant decrease in amplitude of the F1 component during high-contrast adaptation (mean decrease of −1.5 ± 1.2 mV; 76.1 ± 17.8%; Fig. 6A). The reduction in F1 component may have led to a decrease in spike responses, because there is a significant correlation between the amplitude of the decrease in the F1 component of the spike response and the F1 component of the membrane potential (Fig. 6B; ρ = 0.65; p = 0.02). However, on average at the population level, there was no significant decrease in the F1 component with high-contrast adaptation (Wilcoxon paired test, p = 0.47; mean difference, −0.3 ± 1.4 mV; 96.8 ± 31.8%), and two cells even showed a significant increase (Fig. 6A). Interestingly, the change in membrane potential F1 component (to 96.8 ± 31.8%) does not differ significantly (Mann–Whitney U test; p = 0.3) from the change in F1 component for the spike response in dLGN cells (84.4 ± 11.6%; n = 12). This general lack of effect of contrast adaptation on the modulated synaptic component cannot be explained by a variable shunting effect of action potentials because, in six cells studied at subthreshold membrane potentials, only one exhibited a significant decrease in the F1 component. At the population level, the mean reduction for the subthreshold protocols was −0.7 ± 0.8 mV, with no significant difference in F1 amplitude between the beginning and the end of the high-contrast period (p = 0.12).
Changes in average firing rate during adaptation (the adaptation ratio, calculated for the F0) were, therefore, not correlated with changes in the amplitude of the F1 component of the membrane potential (ρ = 0.19; p = 0.5; data not shown), whereas they were strongly correlated with changes of the F0 component of the membrane potential (Fig. 4Ca). Although prominent hyperpolarizations during high-contrast adaptation were apparent in some neurons (Fig. 3A), in general we found no significant correlation between the decrease in modulated spike response during high-contrast adaptation and the change in membrane potential (Fig. 6C; ρ = 0.27; p = 0.3). It must be cautioned, however, that the measure of average membrane potential during high-contrast visual stimulation is complicated by the occurrence of strong barrages of synaptic potentials and action potentials.
Changes in the modulated component (F1) during the period of postadaptation suppression
In similarity to the small number of cells exhibiting a decrease in the modulated synaptic component during high-contrast visual stimulation, only 3 of 14 simple cells showed a significant decrease of F1 amplitude during the period of postadaptation suppression (−0.6 to −2.5 mV; Fig. 6D, gray dots), and one cell showed a significant increase (+0.5 mV). As a consequence, no significant difference between preadaptation and postadaptation F1 amplitude was observed at the population level (Wilcoxon paired test; p = 0.5; mean difference, −0.23 ± 0.78 mV).
Changes in the modulated component (F1) were not significantly correlated with either changes in the average membrane potential (F0, membrane potential; ρ = 0.23; p = 0.4; Fig. 6D) or changes in the modulated component of the spike response (F1, spike response; ρ = 0.40; NS; Fig. 6E). In contrast, the amplitude of the hyperpolarization during the period of postadaptation suppression was highly correlated with the decrease in the modulated visually evoked spike responses (Fig. 6F; ρ = 0.84; p = 0.002), supporting the hypothesis that this hyperpolarization was important to the reduced visual responses during this period. This also indicates that the reduction of the modulated component of the spike discharge (F1) is secondary to the slow hyperpolarization, and not to a decrease of the underlying modulated component of the membrane potential (Fig. 6E).
The presence of action potentials during the control period could have led to a shunt in the membrane potential that may have masked the presence of F1 amplitude changes after high-contrast adaptation. However, in subthreshold runs (n = 6 simple cells), only one cell showed a significant F1 amplitude reduction (by 1.1 mV). On average the F1 amplitude was smaller by 0.3 mV in the postadaptation compared to the preadaptation. When expressed as a percentage, the postadaptation F1 amplitude represented 87% of the value before adaptation.
The moderate changes in the modulation of the membrane potential at the level of simple cells (96.0 ± 30.1% when expressed as a percentage) were not significantly different from the postadaptation changes in the F1 component of the firing rate in dLGN cells (Mann–Whitney U test, p = 0.051; percent change, 70.5 ± 32.0% in dLGN cells).
Altogether, these results indicate that changes of F1 component of the membrane potential, when they do occur, are most likely attributable to changes in the activity of LGN cells. These changes are too weak to account for the strong reduction of firing rate at the cortical level, either during or after high-contrast stimulation.
Changes in conductance during contrast adaptation
Possible changes in membrane conductance during the hyperpolarization were examined by measuring the response to the intracellular injection of hyperpolarizing current pulses before, during, and after the presentation of a high-contrast visual stimulus (Fig. 7). In half of the cells (n = 9), the high-contrast stimulus was preceded and followed by a low-contrast (5–20%) one, whereas in the other half (n = 9), the high-contrast visual stimulus was preceded and followed by a uniform gray screen. Of the 18 cells tested in this manner, only three exhibited hyperpolarizations of the membrane potential >3 mV (Fig. 7C), which we considered to be the minimal amplitude of an AHP that would exhibit a consistent change in input conductance in vivo (see Fig. 10C). The average amplitude of the afterhyperpolarization of these three cells was −4.5 ± 0.7 mV. The apparent input resistance was significantly decreased in all of these cells, to an average of 82 ± 11% (Fig. 7C). Of the other 15 cells examined, the average amplitude of the AHP was −1.2 ± 0.9 mV, and the average input resistance was 104 ± 10% (Fig. 7C). When examined on a cell-by-cell basis (Fig. 7C), of the 15 cells that either exhibited a small AHP or no AHP, eleven cells had no significant change in input resistance, whereas four cells actually exhibited a small increase in apparent input resistance.
Contribution of intrinsic mechanisms to adaptation: sinusoidal current injections in cortical neurons
To determine whether the change in visually evoked responses with contrast adaptation is mediated by changes in intrinsic membrane mechanisms or synaptic properties, we performed experiments consisting of the intracellular injection of sinusoidal (2 Hz) current waveforms into cortical neurons recorded in vivo (n = 34). The cells were tested with two different protocols: (1) a "sine-sine-sine" protocol (n = 23; Fig. 8B), which mimicked the visual protocol of contrast adaptation (Fig. 8A) and allowed us to study the changes in firing during and after a high-intensity stimulation without implicating synaptic mechanisms, and (2) a protocol with hyperpolarizing square current pulses and high-intensity sinusoidal current (n = 23; see Fig. 10A). The pulses were used to monitor changes in input resistance. Twelve cells were tested with both protocols.
The intracellular injection of sinusoidal currents resulted in many of the features of contrast adaptation (Fig. 8A), including a decrease in neuronal responsiveness during the presentation of a high-intensity stimulus and a prolonged period of reduced responsiveness after the cessation of the high-intensity stimulus (Fig. 8B). We quantified four different features of both the responses to sinusoidal current injection and visually evoked responses and compared them: the percentage of decrease during high-intensity stimulation, the time constant of adaptation, and the amplitude and duration of the postadaptation hyperpolarization (Fig. 9).
The main feature that was similar between visually induced and current-induced adaptation was the time constant. For the intracellular injection of sinusoidal current, the time constant was 5.7 ± 2.7 sec (Fig. 9C; n = 17 cells that were well fitted by a single exponential; 3 cells that were well fitted by a double exponential and 14 cells that did not show appreciable adaptation were not included), whereas for the presentation of high-contrast visual stimuli, the adaptation time constant was 4.5 ± 3.0 sec (Fig. 9C; n = 28).
In contrast to the similarity in time constants, the two protocols resulted in significantly different degrees of adaptation during the high-intensity stimulus. For sinusoidal current injection, the action potential response decreased to only 87.1 ± 13.8% (n = 34), whereas for visual responses the average decrease was to 41.6 ± 24.01% (n = 39; Fig. 9B). This was true even when the decrease in responsiveness was measured after the same period of current injection or visual stimulation in the same neurons (20 sec; 91.8 ± 14.4% for current injection; 44.6 ± 24.9% for visual stimulation; n = 11; p = 0.008; Wilcoxon paired test). This difference suggests that an intrinsic membrane mechanism cannot account for all of the adaptation observed with the high-contrast visual stimulation.
The intracellular injection of sinusoidal current induced a postadaptation suppression that was remarkably similar to that after high-contrast visual stimuli (Fig. 9A,D). This similarity was both in the amplitude of this suppression (current injection, 27.3 ± 23.3%; n = 17 of 23; Fig. 9Dc; visual stimulation, 26.4 ± 22.0% for simple and complex cells together; n = 22 of 33; Fig. 2D), as well as for the duration of this postadaptation suppression (current injection, 14.0 ± 10.5 sec; data not shown; visual stimulation, 19.3 ± 16.3 sec; Fig. 2E).
As with visual stimuli, the postadaptation suppression obtained with the intracellular injection of current was associated with a membrane hyperpolarization (−4.0 ± 1.8 mV; n = 14 of 23; Fig. 9D). The amplitude of this hyperpolarization was highly correlated with the degree of postadaptation suppression after the high-intensity current injection (ρ = 0.90; p < 0.0001; Fig. 9Db).
Comparing the degree of adaptation during the high-intensity current injection with the amplitude of postadaptation suppression in the same cells revealed that these two measures were significantly correlated (ρ = 0.73; p = 0.005; Spearman rank correlation; Fig. 10B). The same relation was observed when sinusoidal current injections were performed in cortical cells in slices in vitro (Sanchez-Vives et al., 2000, their Fig. 4). This suggests that the mechanisms responsible for the adaptation during high-intensity firing are related to those generating the postadaptation reduction. Note, however, that the regression line in Figure 10B does not cross the 100%–100% coordinates. This results from the fact that some cells showed significant postadaptation firing rate reduction while showing little adaptation during the high-intensity current injection. Similar results to those described in this section were obtained when, instead of sinusoidal current injections, the increase in firing was induced with square depolarizing pulses of 20 sec duration (Sanchez-Vives et al., 2000, their Fig. 12).
Changes in input resistance during the postadaptation suppression with sinusoidal current injections
Possible changes in input resistance during the period of hyperpolarization were assessed with the intracellular injection of hyperpolarizing square current pulses (Fig. 10A). Sixteen of the twenty-three cells in this protocol had a significant afterhyperpolarization that averaged −4.4 mV (±2.2 mV), in similarity to the hyperpolarization occurring in the sine-sine-sine protocol (see above). During the first 2.5 sec of the afterhyperpolarization, the apparent input resistance was reduced to an average of 86.8 ± 10.9% of the preadaptation value. Comparing the apparent input resistance before high-amplitude sinusoidal current injection with that after revealed a statistically significant decrease [Wilcoxon Rank test; p = 0.007 for all cells (n = 23);p = 0.0008 for cells (n = 16) with significant AHP]. On the other hand, the cells that did not show a significant hyperpolarization (n = 7) did not show a significant change of input resistance (to 103.7 ± 15.7%;p = 0.5). Comparing the amplitude of the change in apparent input resistance and the postadaptation hyperpolarization reveals a significant correlation (Fig. 10C; ρ = 0.63; p = 0.003; Spearman rank correlation). Note that cells that have a hyperpolarization of <3 mV are generally associated with a change in apparent input resistance of <15%.
Changes in visual response strength with small changes of membrane potential
Both visual stimulation and current injection induced postadaptation suppressions that were associated with hyperpolarizations in the range of 0.5–11.9 mV and that averaged 3.0 and 4.0 mV, respectively. To examine the effects of such changes in membrane potential on neuronal responses, we intracellularly injected DC into cortical neurons while presenting a constant high-contrast sinusoidal visual stimulus (Fig. 11; n = 5).
As expected, hyperpolarization or depolarization of cortical neurons with the intracellular injection of DC resulted in decreases and increases, respectively, of the action potential response to both high-contrast (Fig. 11) and low-contrast (see below) sinusoidal visual stimuli. Plotting the amplitude of the average firing rate (F0) as well as the modulated component (F1) versus the average membrane potential revealed that in the range of −58 to −79 mV, the neuronal response was strongly reduced by relatively small (−3 to −10 mV) changes in membrane potential (Fig. 11C). In the cell in Figure 11, a 5 mV hyperpolarization, from −65 to −70 mV, resulted in an 18% decrease in the F1 component of the visual response, whereas a 7 mV hyperpolarization, from −70 to −77 mV, resulted in a 63% decrease. Examining the peristimulus histograms at different membrane potentials revealed that the peak spike response was even more strongly affected by hyperpolarization (Fig. 11B).
Previous studies have demonstrated that adaptation to high contrast is associated with a shift of the contrast that yields 50% of the maximal response (C50) to higher contrast levels and a compression in the contrast–response function (Movshon and Lennie, 1979; Dean, 1983; Albrecht et al., 1984; Ohzawa et al., 1985; Saul and Cynader, 1989a; Sclar et al., 1989; Bonds, 1991; Allison et al., 1993). If contrast adaptation is mediated by hyperpolarization of cortical neurons, then hyperpolarization imposed through the intracellular injection of DC should have similar effects. Indeed, the hyperpolarization of cortical neurons by an average of −7.9 mV (±2.1 mV; n = 12) resulted in both of these effects: a statistically significant shift in C50 to higher contrasts (Fig. 12C; from an average of 12.6 ± 5.6 to 16.6 ± 7.6% contrast; Wilcoxon paired test; p = 0.01) and a significant decrease in the maximal response, Rmax (Fig. 12B; from an average of 40.9 ± 23.0 to 28.2 ± 20.0 spikes/sec; p = 0.009). In addition, hyperpolarization also caused a change in the slope factor of the contrast–response function (from 2.6 ± 1 to 3.6 ± 2.7; n = 12), although this effect was not statistically significant (p = 0.2).
Hybrid protocols: visual stimulation and current injections
If the decrease in responsiveness after contrast adaptation is mediated through an intrinsic membrane mechanism, then the combination of both visual stimuli and sinusoidal current injection should yield similar results to visual stimuli alone. To test this hypothesis, we performed two types of hybrid protocols. In the first (hybrid type I), the high-contrast visual stimulus was replaced with a high-amplitude (0.6–1.8 nA) sinusoidal current injection (Fig. 13Ab). In the second (hybrid type II), the low-contrast visual stimulus was replaced with a low-amplitude (0.2–0.8 nA) sinusoidal current injection (Fig. 13Ac), whereas the high-contrast visual stimulus remained unchanged.
Interestingly, the intracellular injection of a strong sinusoidal current during the presentation of a constant low-contrast visual stimulus (hybrid protocol type I) resulted in a reduction of the visual response to an average of 39.9% (Fig. 13Ab,Ba; n = 8 of 9) and a hyperpolarization of the membrane potential (−3.2 ± 1.9 mV). Similarly, the presentation of a high-contrast visual stimulus reduced the neuronal response to the intracellular injection of a low-amplitude sinusoidal current (hybrid protocol type II) to an average of 63.7% (Fig. 13Ac,Ca,b; n = 4 of 6). As with the visual stimulation protocol, the presentation of a high-contrast visual stimulus resulted in a prolonged and statistically significant hyperpolarization of the membrane potential (range, −2 to −0.5 mV; mean and SD, −1.4 ± 0.7 mV).
These results from the hybrid type I and hybrid type II protocols provide strong evidence that the activation of postsynaptic mechanisms contributes to the decreases in neuronal responsiveness that follow presentation of a high-contrast visual stimulus.
DISCUSSION
In similarity to previous studies (Maffei et al., 1973; Vautin and Berkley, 1977; Albrecht et al., 1984; Ohzawa et al., 1985; Marlin et al., 1988; Giaschi et al., 1993; McLean and Palmer, 1996), we found that the prolonged presentation of a high-contrast stimulus results in a progressive reduction in the firing rate of cortical neurons over a period of seconds. As already reported in extracellular recording studies (Maffei et al., 1973; Albrecht et al., 1984; Ohzawa et al., 1985; Bonds, 1991; McLean and Palmer, 1996), the responsiveness of these cells to low-contrast visual stimuli is subsequently suppressed for a period of ∼20 sec on average, similar to that of the psychophysically observed decrease in contrast sensitivity (Blakemore and Campbell, 1969; Lorenceau, 1987). Contrast adaptation and postadaptation suppression were stronger in cortical cells and relatively weak in LGN neurons (Maffei et al., 1973; Ohzawa et al., 1985; Saul and Cynader, 1989a; Mukherjee and Kaplan, 1995; Shou et al., 1996; Ahmed et al., 1997; Smirnakis et al., 1997). We found that both adaptation and postadaptation changes in firing rate were associated with a hyperpolarization of the membrane potential, whereas the component of the synaptic response that was modulated at the temporal frequency of the drifting grating was either unaffected or only weakly affected (Ahmed et al., 1997; Carandini and Ferster, 1997).
Postadaptation mechanisms
Several mechanisms have been proposed to underlie both the reduction in firing rate and reduced responsiveness to low-contrast stimuli associated with adaptation to high-contrast stimuli: fatigue of neuronal spike generating mechanisms, recruitment of long-lasting inhibitory effects, synaptic depression, and network interaction mechanisms (Dealy and Tolhurst, 1974; Swift and Smith, 1982; Georgeson and Harris, 1984; Ohzawa et al., 1985; Vidyasagar, 1990; Wilson and Humanski, 1993; Finlayson and Cynader, 1995; Ahmed et al., 1997;Carandini and Ferster, 1997; Chance et al., 1998; Adorján et al., 1999). Our results strongly support the hypothesis that hyperpolarization of the membrane potential underlies, at least in part, contrast adaptation and furthermore, they provide evidence that this hyperpolarization is generated through intrinsic membrane mechanisms.
Mechanisms relying on synaptic inhibition have been questioned after it was shown that blockade of GABAA (DeBruyn and Bonds, 1986; Vidyasagar, 1990; McLean and Palmer, 1996) or GABAB receptors (McLean and Palmer, 1996) does not block the firing rate reduction that follows adaptation to a high-contrast visual stimulus. In addition, adaptation aftereffects, if resulting from GABAA receptor-mediated inhibition, should be associated with substantial decreases of input resistance (Carandini and Heeger, 1994; Borg-Graham et al., 1998), which have not been detected (Ahmed et al., 1997; Carandini and Ferster, 1997; present study). Furthermore, in preliminary studies, we have observed that fast-spiking cells, which are probably GABAergic interneurons (McCormick et al., 1985; Azouz et al., 1997), display an activity profile identical to the one observed in other cells: one complex fast-spiking cell did not show significant postadaptation, whereas two simple fast-spiking cells displayed a significant postadaptation suppression of action potential discharge. These findings do not support models of contrast adaptation based on long-lasting changes in interneurons, such that their response to low contrast would be maintained or larger after high contrast than before. Together these findings do not support models of contrast adaptation based on feedforward or feedback GABAergic inhibition.
Synaptic depression in neocortex in vitro includes a component with a slow time course (Finlayson and Cynader, 1995; Varela et al., 1997), which would recover slowly and could potentially explain the reduced responsiveness after high-contrast stimulation (Chance et al., 1998). However, consistent with the results of Carandini and Ferster (1997) and Ahmed et al. (1997), we did not consistently observe large changes in the visually modulated component of the synaptic potentials (F1) during the postadaptation period: only 4 of 14 cells showed a significant change, including one increase. Furthermore, synaptic depression observed in vivo with electrical stimulation of the LGN or of intracortical sites is small and, more importantly, recovers within <1 sec in the majority of cases, which is too short to account for the duration of the postadaptation reduction in firing rate (Sanchez-Vives et al., 1998). Although we cannot rule out a role for synaptic depression in the effects of contrast adaptation, our results suggest that if it is involved, it is not the major mechanism underlying these effects.
Our results strongly support the hypothesis that an intrinsically generated hyperpolarization of the membrane potential contributes to the aftereffects of contrast adaptation. We observed a hyperpolarization of the membrane potential after adaptation in both simple and complex cells. The amplitude and duration of this hyperpolarization were significantly correlated with the amplitude and duration of the reduction in the neuronal response to the visual stimulus. Hyperpolarizing neurons by a similar amount through the intracellular injection of current resulted in a decrease in neuronal responsiveness to visual stimuli. In further support of an intrinsic origin, using intracellular current injection to induce a train of action potentials similar to that observed during responses to high-contrast stimuli also resulted in a membrane hyperpolarization similar in amplitude and duration to that obtained after high-contrast visual stimulation. Furthermore, this induced hyperpolarization could decrease responses to low-contrast visual stimuli by an amount similar to the reduction obtained after high-contrast adaptation (Fig. 13, hybrid protocol type I). Because the current injection was limited to a single neuron and because the same visual stimulus was maintained throughout the protocol, this result indicates that a modification at the synaptic level was not required to generate postadaptation suppression in these instances. Conversely, stimulation with a high-contrast visual stimulus reduced the response to low-intensity current injection (hybrid protocol type II), a result that would not have been obtained if the postadaptation suppression of firing rate resulted only from long-lasting synaptic changes along the visual pathway.
In the companion paper (Sanchez-Vives et al., 2000) we provide evidence that this long-lasting hyperpolarization may be generated through the activation of Ca2+- and Na+-activated K+ conductances, although another mechanism that may participate is the activation of an electrogenic sodium/potassium pump (Gustafsson and Wigström, 1983; Thompson and Prince, 1986). If the hyperpolarization were mediated by an increase in K+ conductance, then a hyperpolarization of 3 mV would require only a 7% decrease in apparent input resistance at a membrane potential of −65 mV. In our sample, the three cells that exhibited a >3 mV hyperpolarization after contrast adaptation all exhibited a significant decrease in apparent input resistance (Fig. 7). Although this sample is very small, the result is consistent with a role for an increase in membrane conductance in the generation of this AHP.
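The 3 mV/7% relation follows from a simple parallel-conductance calculation. The sketch below is a minimal single-compartment illustration; the K+ reversal potential of −105 mV is an assumed value for illustration and is not given in the text:

```python
# Minimal single-compartment sketch relating an added K+ conductance to the
# observed ~3 mV hyperpolarization and ~7% drop in apparent input resistance.
# E_K = -105 mV is an assumed reversal potential; the text gives only the
# -65 mV membrane potential and the 3 mV / 7% figures.

E_rest = -65.0   # membrane potential before adaptation (mV)
E_K = -105.0     # assumed K+ reversal potential (mV)
dV = -3.0        # postadaptation hyperpolarization (mV)

V_new = E_rest + dV                      # -68 mV
g_K = (E_rest - V_new) / (V_new - E_K)   # added K+ conductance, in units of g_rest
R_drop = g_K / (1.0 + g_K)               # fractional decrease in input resistance

print(f"g_K = {g_K:.3f} x g_rest")                  # ~0.08 x g_rest
print(f"input resistance decrease = {R_drop:.1%}")  # ~7.5%
```

With these numbers, the conductance increase needed to produce the observed hyperpolarization is small, consistent with the modest changes in apparent input resistance described above.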
Interestingly, we observed that the long-lasting hyperpolarization could be generated through mechanisms that do not require the generation of action potentials in the recorded neuron. This finding does not, however, rule out a contribution of postsynaptic conductances to the generation of this effect. In vivo, Na+ and Ca2+ permeate into cells through both glutamate receptors and voltage-activated channels (Regehr and Tank, 1992; Malinow et al., 1994; Magee and Johnston, 1995; Crill, 1996; Callaway and Ross, 1997). Activation of hyperpolarizing currents, such as Na+- and Ca2+-dependent K+ currents, and of electrogenic pumps may be secondary to the intracellular accumulation of these ions resulting from synaptic activity and the activation of subthreshold voltage-sensitive Na+ and Ca2+ channels.
Early psychophysical studies proposed a “fatigue” model to explain contrast adaptation (Swift and Smith, 1982; Georgeson and Harris, 1984). Fatigue, as defined in these studies, need not imply metabolic exhaustion. Rather, it was defined more loosely as a form of self-inhibition determined by the activity of the cell. The intrinsic mechanism that we propose here to play a role in contrast adaptation fulfills this definition, because the underlying conductance is turned on by cell activation.
This simple model, however, has been challenged by the finding that single cortical cells can display adaptation that is somewhat specific to the spatial frequency of the adapting stimulus (Movshon et al., 1979; Albrecht et al., 1984; Saul and Cynader, 1989a,b; Bonds, 1991; Carandini et al., 1997). For example, the postadaptation reduction of the response to a given spatial frequency is stronger if that frequency was used for the adapting stimulus, even if it was not the spatial frequency that yielded the strongest response. These stimulus-specific effects favored models that explain contrast adaptation by synaptic interactions between neurons. It should be noted, however, that only a small portion of the reduction in neuronal responsiveness after high-contrast adaptation exhibits such specificity; a large fraction is nonspecific with respect to the features of the visual stimulus. Quantitative analyses by Albrecht et al. (1984) and Carandini et al. (1997) indicate that only ∼25% of the adaptation strength can be ascribed some stimulus specificity.
Mechanisms of high-contrast adaptation
One important difference between the response to high-contrast visual stimuli and the response to intracellular current injection was the magnitude of the decay in firing induced by the two protocols: the decay in firing induced by high-contrast stimulation (adaptation ratio, 41.6%) was significantly greater than that induced by current injection (adaptation ratio, 87.1%; Fig. 9B). One important consideration, however, is that the spatial distribution of the increases in intracellular Na+ and Ca2+ concentrations is likely to differ dramatically between adaptation evoked by a visual stimulus and adaptation evoked by intracellular current injection. The intracellular injection of current necessarily occurs at a point source, presumably located in the soma, whereas visually evoked activity involves the arrival of large barrages of synaptic potentials distributed throughout the dendritic arbor. If the ionic channels responsible for generating the hyperpolarization are located largely in the dendrites, then the presentation of a visual stimulus may produce a significant hyperpolarization of this portion of the cell and therefore a reduction in the ability of these synaptic barrages to evoke action potentials.
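For concreteness, the adaptation ratio can be read as the adapted (late) response expressed as a percentage of the initial response, so that smaller values indicate stronger adaptation. The sketch below illustrates this reading; the averaging windows, and the exact definition used in Materials and Methods, may differ:

```python
import numpy as np

def adaptation_ratio(rate, t, initial_window=2.0, steady_window=5.0):
    """Adapted response as a percentage of the initial response.

    rate : instantaneous firing rate (spikes/sec) sampled at times t (sec,
           measured from stimulus onset). The window lengths are illustrative.
    """
    rate, t = np.asarray(rate), np.asarray(t)
    initial = rate[t < initial_window].mean()
    adapted = rate[t > t[-1] - steady_window].mean()
    return 100.0 * adapted / initial

# Under this reading, a ratio of ~42% (visual adaptation) reflects a much
# larger decay in firing than a ratio of ~87% (current injection).
```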
Another possible explanation for the difference between current-evoked and visually evoked adaptation is that additional factors, extrinsic to the recorded neuron, may be involved. Although we did not observe a consistent decrease in the modulated component (F1) of the synaptic drive arriving in simple cells during contrast adaptation, changes in the F0 component could have reflected, in both simple and complex cells, changes in a nonmodulated synaptic drive in addition to the activation of an intrinsic conductance. This leaves open the possibility that synaptic and/or network effects contribute to high-contrast adaptation. The precise mechanisms underlying the large decrease in neuronal responsiveness during adaptation remain to be examined in detail.
Model of contrast adaptation
Altogether, these results indicate an important contribution of intrinsic membrane properties to the generation of some characteristics of visual responses. We propose that the average membrane potential of cortical neurons is dynamically regulated, varying continuously in response to the waxing and waning of barrages of synaptic potentials and the generation of action potentials. With strong activation, cortical neurons may be expected to hyperpolarize, resulting in a reduced sensitivity to smaller barrages of synaptic potentials. The neurons that adapt (hyperpolarize) the most will be those that are both most strongly activated by the visual stimulus and possess a strong intrinsic tendency to adapt, a property that varies from cell to cell. Thus, the neurons whose receptive fields are most highly tuned to the properties of the adapting stimulus should show the strongest adaptation. The massive excitatory interconnectivity of neurons in the cerebral cortex ensures that this adaptation will affect the visual responses of neighboring neurons. Adaptation to high-contrast stimuli may therefore involve both a membrane hyperpolarization and a decreased excitation from neighboring neurons that are adapting to the same stimulus. This hypothesis predicts that the synaptic barrages activated by the adapting stimulus may be selectively depressed, a prediction that remains to be examined.
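The core of this proposal, an activity-dependent hyperpolarizing feedback with a time constant of seconds, can be illustrated with a toy firing-rate unit. This is a minimal sketch of the idea, not the authors' model; the parameter values (time constant, feedback gain, drive levels) are arbitrary choices for illustration:

```python
import numpy as np

def simulate_adapting_unit(drive, dt=0.01, tau_a=10.0, g_a=0.5):
    """Toy firing-rate unit with slow, activity-dependent hyperpolarizing feedback.

    drive : synaptic drive at each time step (arbitrary units)
    tau_a : time constant of the adaptation variable (sec); several seconds,
            to mimic the slow postadaptation hyperpolarization
    g_a   : strength of the hyperpolarizing feedback (arbitrary units)
    """
    a = 0.0                                  # adaptation ("hyperpolarization") variable
    rate = np.zeros_like(drive)
    for i, d in enumerate(drive):
        r = max(d - g_a * a, 0.0)            # rectified response: drive minus adaptation
        a += dt * (r - a) / tau_a            # adaptation builds with activity, decays slowly
        rate[i] = r
    return rate

# 40 sec of strong ("high-contrast") drive followed by 40 sec of weak drive.
dt = 0.01
drive = np.concatenate([np.full(4000, 2.0), np.full(4000, 0.3)])
rate = simulate_adapting_unit(drive, dt=dt)
# The response to the strong drive decays over seconds, and the response to the
# subsequent weak drive is suppressed until the adaptation variable recovers,
# mimicking postadaptation suppression.
```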
In keeping with a role for intracortical interactions, it is important to emphasize that intrinsic mechanisms cannot account for all the effects of contrast adaptation. First, high-contrast adaptation cannot be fully mimicked by intracellular current injection. Second, hyperpolarization by DC injection results in a shift of the contrast–response function that is smaller than the one obtained with contrast adaptation proper: the C50 increased by 40% (Fig. 12), whereas changes by a factor of two have been reported with visual contrast adaptation in macaque (Sclar et al., 1989), and even larger changes have been observed in cat (Albrecht et al., 1984; Ohzawa et al., 1985) and bush baby (Allison et al., 1993). Intracortical amplification of changes taking place at the single-cell level (Douglas and Martin, 1991) might contribute to this discrepancy. However, in the transition from high- to low-contrast stimuli, the decreased responsiveness may occur largely through hyperpolarization of the membrane potential, because the low level of action potential activity at this point may reduce intracortical interactions. Another apparent difference between our results in vivo and those in vitro is that, with visual stimulation, hyperpolarization of the membrane potential causes a significant decrease in the maximal response of the contrast–response function (Fig. 12), whereas in vitro, hyperpolarization does not decrease the peak response to intracellular injection of current (Sanchez-Vives et al., 2000, their Fig. 13). This result suggests that the maximal response amplitude of the contrast–response function is determined by presynaptic or network properties and not by the intrinsic firing properties of the recorded neuron.
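To make the comparison concrete, contrast–response functions of this kind are commonly described by a hyperbolic-ratio (Naka–Rushton) function, R(C) = Rmax · C^n / (C^n + C50^n). This functional form, and the parameter values below, are illustrative assumptions rather than fits to the data reported here, and the changes in Rmax discussed above are ignored for simplicity:

```python
import numpy as np

def hyperbolic_ratio(C, Rmax=1.0, C50=0.13, n=2.0):
    """Hyperbolic-ratio (Naka-Rushton) contrast-response function.
    Parameter values are illustrative, not fitted to the data in this study."""
    C = np.asarray(C, dtype=float)
    return Rmax * C**n / (C**n + C50**n)

contrasts = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
pre       = hyperbolic_ratio(contrasts)                  # preadaptation
dc_shift  = hyperbolic_ratio(contrasts, C50=0.13 * 1.4)  # C50 up 40% (DC hyperpolarization)
adapted   = hyperbolic_ratio(contrasts, C50=0.13 * 2.0)  # C50 doubled (visual adaptation)

# Doubling C50 removes considerably more of the response at low and intermediate
# contrasts than a 40% increase does, illustrating why hyperpolarization alone
# underestimates the full shift produced by visual contrast adaptation.
```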
The continual adjustment of the neuronal membrane potential in relation to previous spike and synaptic activity is a property of neurons throughout the cerebral cortex, and its effects extend well beyond contrast adaptation. For example, the continual adjustment of membrane potential and responsiveness is likely to have dramatic effects on the spatial and temporal properties of the receptive fields of cortical neurons. Strong activation of a cortical neuron with a high-contrast or otherwise strong stimulus may result in hyperpolarization and consequently a shrinking of the spatial extent of the receptive field (Sceniak et al., 1999). It is tempting to propose that such dynamic changes in receptive field properties may underlie, in part, the changes associated with altered sensory stimulation, such as during artificial scotomas in the visual system (Pettet and Gilbert, 1992; Nowak et al., 1999). This hypothesis remains to be fully explored, but it predicts that the receptive field properties of cortical neurons, and therefore the networks within which they operate, are continually adjusting on a time scale of seconds in accordance with the properties of the sensory stimuli being analyzed. Thus, the cerebral cortex may dynamically adjust itself to perform more accurate and sensitive analyses of sensory stimuli.
Footnotes
We thank Drs. E. Kaplan and J. Bullier for their comments on this manuscript and Dr. J. Brumberg for his help with the surgeries. This research was supported by grants from the National Science Foundation and the National Institutes of Health to David A. McCormick. Additional information about these and related findings may be obtained at http://www.mccormicklab.org.
M.V.S. and L.G.N. contributed equally to this work.
Correspondence should be addressed to David McCormick, Section of Neurobiology, Yale University School of Medicine, 333 Cedar Street, New Haven, Connecticut 06510. E-mail: david.mccormick{at}yale.edu.
Dr. Sanchez-Vives' present address: Instituto de Neurociencias, Universidad Miguel Hernández, Apartado 18, 03550 San Juan de Alicante, Spain. E-mail: mavi.sanchez{at}umh.es.