Salk Institute, La Jolla, California 92037
ABSTRACT
Zador, Anthony. Impact of synaptic unreliability on the information transmitted by spiking neurons. J. Neurophysiol. 79: 1219-1229, 1998. The spike generating mechanism of cortical neurons is highly reliable, able to produce spikes with a precision of a few milliseconds or less. The excitatory synapses driving these neurons are by contrast much less reliable, subject both to release failures and quantal fluctuations. This suggests that synapses represent the primary bottleneck limiting the faithful transmission of information through cortical circuitry. How does the capacity of a neuron to convey information depend on the properties of its synaptic drive? We address this question rigorously in an information theoretic framework. We consider a model in which a population of independent unreliable synapses provides the drive to an integrate-and-fire neuron. Within this model, the mutual information between the synaptic drive and the resulting output spike train can be computed exactly from distributions that depend only on a single variable, the interspike interval. The reduction of the calculation to dependence on only a single variable greatly reduces the amount of data required to obtain reliable information estimates. We consider two factors that govern the rate of information transfer: the synaptic reliability and the number of synapses connecting each presynaptic axon to its postsynaptic target (i.e., the connection redundancy, which constitutes a special form of input synchrony). The information rate is a smooth function of both mechanisms; no sharp transition is observed from an "unreliable" to a "reliable" mode. Increased connection redundancy can compensate for synaptic unreliability, but only under the assumption that the fine temporal structure of individual spikes carries information. If only the number of spikes in some relatively long time window carries information (a "mean rate" code), an increase in the fidelity of synaptic transmission results in a seemingly paradoxical decrease in the information available in the spike train. This suggests that the fine temporal structure of spike trains can be used to maintain reliable transmission with unreliable synapses.
INTRODUCTION

A pyramidal neuron in the cortex receives excitatory synaptic inputs from 10^3-10^4 other neurons (Shepherd 1990).
When an action potential invades the presynaptic terminal of one of these synapses, it sometimes triggers the release of a vesicle of glutamate, which causes current to flow into the postsynaptic dendrite. Some of this current then propagates, passively or actively, to the spike generator, where it may contribute to the triggering of an action potential.
In the cortex, the transformation of somatic current into an output spike train appears to be highly reliable (Mainen and Sejnowski 1995; see also Bryant and Segundo 1976), in marked contrast to the unreliability of synaptic transmission (Allen and Stevens 1994; Dobrunz and Stevens 1997; Stratford et al. 1996). In this paper, we use simple biophysical models of spike transduction and stochastic synaptic release to explore the implications of synaptic unreliability for information transmission and neural coding in the cortex. Our goal is to provide a quantitative answer to the question: How much information can the output spike train provide about the synaptic inputs? Our answer will be cast in an information-theoretic framework.
METHODS

Physiology

Standard slice recording methods were used to obtain Fig. 1. Briefly, patch-clamp recordings were obtained under visual guidance by using infrared optics from 400-µm slices from Long Evans rats [postnatal day (P)14-P20]. Recordings were performed at 33-35°C. Slices were continuously perfused with a solution containing (in mM) 120 NaCl, 3.5 KCl, 2.6 CaCl2, 1.3 MgCl2, 1.25 NaH2PO4, 26 NaHCO3, and 10 glucose, which was bubbled with 95% O2-5% CO2 and the pH of which had been adjusted to 7.35. All recordings were obtained in the presence of the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor antagonist 6-cyano-7-nitroquinoxaline-2,3-dione (CNQX, 50 µM). Recording pipettes were filled with (in mM) 170 K gluconate, 10 N-2-hydroxyethylpiperazine-N′-2-ethanesulfonic acid (HEPES), 10 NaCl, 2 MgCl2, 1.33 ethylene glycol-bis(β-aminoethyl ether)-N,N,N′,N′-tetraacetic acid (EGTA), 0.133 CaCl2, 3.5 MgATP, and 1.0 guanosine 5′-triphosphate (GTP), pH 7.2. Resistance to bath was 3-5 MΩ before seal formation.
FIG. 1.
Synaptic variability is dominant source of output variability. Top: spike generator is reliable. Response of a neuron from layer II/III of a slice of rat neocortex to 20 consecutive even-numbered trials in which precisely same synthetic synaptic current was injected through a somatic electrode (see METHODS). Most of spikes are aligned to a precision of ~1 ms, although a few "stray" or "displaced" spikes are also seen. This experiment places a lower bound on precision with which spikes can be generated in response to identically repeated stimuli; remaining variability is due to some combination of experimental noise and intrinsic variability of spike generator. Bottom: noisy synapses introduce output variability. Response to 20 consecutive odd-numbered trials (interleaved with even-numbered trials presented at top) is shown. In this experiment, synthetic currents were generated from same ensemble as at top, using a fixed pattern of presynaptic spikes drawn from a Poisson ensemble, but assuming that, because of synaptic failures, 3/10 spikes failed to elicit an excitatory postsynaptic current (EPSC) (Pr = 0.7). [Current repeatedly injected at top is equivalent to assumption that precisely the same 3/10 spikes failed to elicit an EPSC on every trial.] Under these conditions, effective output reliability is markedly decreased, as seen by poor alignment of spikes giving a haphazard appearance to raster. For this experiment, quantal fluctuations, which would tend to decrease output reliability further, were suppressed (CV = 0). Parameters for synthetic synaptic currents: quantal size (mean): 30 pA; quantal size (coefficient of variation): 0; Pr = 0.7; Nr = 1.
Simulations

All simulations were performed using Matlab 4.2.

Model of spiking

We use an integrate-and-fire mechanism to model the transformation of synaptic inputs into spike trains in cortical neurons. Let isyn(t) be the synaptic current driving a leaky integrator with a time constant τ and a threshold Vthresh. As long as the voltage is subthreshold, V(t) < Vthresh, the voltage is given by

\tau \frac{dV(t)}{dt} = -[V(t) - V_{rest}] + R_n i_{syn}(t) \qquad (1)

where Rn is the input resistance and Vrest is the resting potential. At the instant the voltage reaches the threshold Vthresh, the neuron emits a spike, and the voltage resets to some level Vreset < Vthresh. The five parameters of this model (Vthresh, Vreset, Vrest, τ, and Rn) determine its response to a given input current.

The output of the model is the sequence of times at which V(t) exceeded threshold. If time is finely discretized into bins shorter than the shortest interspike interval, so that the number of spikes in each bin is either zero or one (but not greater than one), then the spike train can be represented as a binary string zo(t), with ones at times when the neuron fired and zeros at other times.
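In MATLAB (the language in which the simulations were written), Eq. 1 can be sketched in a few lines. This is an illustrative reimplementation, not the original code; the parameter values follow the Fig. 2 legend, and the constant-plus-noise input current is a placeholder for the synaptic model described in the next section.

```matlab
% Minimal integrate-and-fire sketch (Eq. 1), forward Euler with 1-ms steps.
% Parameters follow the Fig. 2 legend; the drive is a placeholder.
dt      = 1;                     % time step (ms)
T       = 1000;                  % number of steps
tau     = 50;                    % membrane time constant (ms)
Rn      = 150;                   % input resistance (Mohm)
Vrest   = -60; Vreset = -50; Vthresh = -40;   % potentials (mV)

isyn = 0.15 + 0.05*randn(1, T);  % placeholder drive (nA); Mohm x nA = mV
V    = Vrest*ones(1, T);
zout = zeros(1, T);              % binary output spike train z_o(t)

for i = 1:T-1
    dV = (-(V(i) - Vrest) + Rn*isyn(i)) / tau;   % Eq. 1
    V(i+1) = V(i) + dt*dV;
    if V(i+1) >= Vthresh         % threshold crossing: emit a spike and reset
        zout(i+1) = 1;
        V(i+1)    = Vreset;
    end
end
```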
Model of synaptic drive
We assume that the synaptic current isyn(t) consists of the sum of very brief (essentially instantaneous) individual excitatory postsynaptic currents (EPSCs). This represents a reasonable simplification of the component of the excitatory input to cortical neurons mediated by fast AMPA receptors, which decay with a time constant of 2-3 ms (Bekkers and Stevens 1990), but not of the component mediated by the slower N-methyl-D-aspartate (NMDA) receptor-gated channels.
We consider two sources of synaptic variability, or noise. The first is that the probability Pr that a glutamate-filled vesicle is released after presynaptic activation may be less than unity, both in the hippocampus (Allen and Stevens 1994; Hessler et al. 1993; Rosenmund et al. 1993) and in the cortex (Castro-Alamancos and Connors 1997; Stratford et al. 1996). The second is that the postsynaptic current in response to a vesicle may vary, even at individual terminals (Bekkers and Stevens 1990). This quantal variability may arise, for example, from variable amounts of neurotransmitter filling each vesicle (Bekkers and Stevens 1990); the results of the present study do not, however, depend on the mechanism underlying this variability.
The synaptic current is then

i_{syn}(t) = \sum_j q_j(t) f_j(t) z_j(t) \qquad (2)

where the summation index j is over the input neurons, the random process fj(t) representing synaptic failures is a binary string that is one when transmitter is released and zero otherwise, and qj(t) is a random variable that determines the quantal size of releases when they occur. The processes isyn(ti), zj(ti), fj(ti), and qj(ti) are discrete-time, but for notational convenience we will often suppress the time index i.

When each presynaptic axon makes Nr functional contacts onto its target, the current becomes

i_{syn}(t) = \sum_j \sum_k q_{jk}(t) f_{jk}(t) z_j(t) \qquad (3)

where the summation index k is over functional contacts, each of which is driven by the same sequence of presynaptic action potentials zj(t). In this model, all the terminals k associated with a single presynaptic axon are activated synchronously, but release failures occur at each contact independently.

The net rate Snet at which EPSCs arrive is

S_{net} = A \, N_r \, F_{in} \, P_r \qquad (4)

where A is the number of afferent axons, Nr is the number of functional contacts per axon (assumed to be the same for all axons), Fin is the Poisson rate at which each axon fires (assumed to be the same for all axons), and Pr is the release probability at each functional contact (assumed to be the same for all contacts). Snet determines the average postsynaptic current and thereby the output firing rate R.
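The synaptic drive of Eqs. 2-4 can be sketched in a few lines of MATLAB. This is an illustration only: the values of A, Nr, Pr, and the quantal statistics are examples in the ranges used in the figures, and the per-axon rate is set from Eq. 4 to hold Snet fixed, as in the legends of Figs. 3-5.

```matlab
% Sketch of the synaptic drive (Eqs. 2-4) on a 1-ms grid; values illustrative.
A    = 100;                 % afferent axons
Nr   = 5;                   % functional contacts per axon
Pr   = 0.7;                 % release probability per contact
Snet = 2.4;                 % target net EPSC rate per ms (as in Figs. 3-5)
Fin  = Snet/(A*Nr*Pr);      % per-axon rate chosen so A*Nr*Fin*Pr = Snet (Eq. 4)
q    = 30;  cv = 0.2;       % quantal size: mean (pA) and coefficient of variation
T    = 1000;                % number of 1-ms bins

zin  = rand(A, T) < Fin;    % z_j(t): Poisson (one Bernoulli draw per bin) inputs
isyn = zeros(1, T);
for k = 1:Nr                % contacts share spikes but fail independently (Eq. 3)
    f  = rand(A, T) < Pr;               % f_jk(t): successes vs. failures
    qk = q*(1 + cv*randn(A, T));        % q_jk(t): quantal amplitudes
    isyn = isyn + sum(zin.*f.*qk, 1);   % summed current, pA per bin
end
```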
Information rate of spike trains
A typical pyramidal neuron in the cortex receives synaptic input from 10^3-10^4 other neurons. We define the activity in each of these input neurons as the "signal," and the variability due to the unreliability of synaptic transmission as the "noise."
Methods for estimating spike train information rates
The mutual information between the ensemble of input spike trains Zin and the output spike train Zout can be expressed in terms of the entropy H(Zin) of the ensemble of input spike trains, the entropy H(Zout) of output spike trains, and their joint entropy H(Zin, Zout),

I(Z_{in}; Z_{out}) = H(Z_{in}) + H(Z_{out}) - H(Z_{in}, Z_{out}) \qquad (5)

The entropies H(Zin), H(Zout), and H(Zin, Zout) depend only on the probability distributions P(Zin), P(Zout), and the joint distribution P(Zin, Zout), respectively.

The expression given in Eq. 5 for the mutual information is in practice difficult to evaluate because estimating the distributions P(Zin), P(Zout), and P(Zin, Zout) may require very large amounts of data. For example, suppose that there are 1,000 input spike trains driving the output and that each spike train is divided into segments 100 ms in length and discretized into 1-ms bins. There are then 2^100 possible output spike trains, 2^(100 × 1,000) sets of input spike trains, and 2^(100 × 1,000) × 2^100 possible combinations of input and output spike trains forming the space over which the joint distribution P(Zin, Zout) must be estimated. Although this naive calculation is in practice an overestimate (see Buracas et al. 1996 and de Ruyter van Steveninck et al. 1997 for methods that make use of the fact that most spike trains are very unlikely), it emphasizes the potential problems involved in estimating the mutual information. Below we describe two practical methods for computing information rates.

Reconstruction method

One approach to this dilemma (Bialek et al. 1991, 1993) is to compute a strict lower bound on the mutual information using the reconstruction method. The idea is to "decode" the output and use it to "reconstruct" the input that gave rise to it. The error between the reconstructed and actual inputs is then a measure of the fidelity of transmission and, with a few testable assumptions, can be related to the information. Formally, this method is based on an expression mathematically equivalent to Eq. 5 involving the conditional entropy H(Zin|Zout) of the signal given the spike train

I(Z_{in}; Z_{out}) = H(Z_{in}) - H(Z_{in} \mid Z_{out}) \qquad (6)

In the present context, the quantity reconstructed is the sum of the inputs, \sum_j z_j(t). The entropy H(Zin) is just the entropy of this time series and can be evaluated directly from the Poisson synthesis equation (Eq. 3). Intuitively, Eq. 6 says that the information gained about the spike train by observing the stimulus is just the initial uncertainty about the synaptic drive (in the absence of knowledge of the spike train) minus the uncertainty that remains about the signal once the spike train is known. The reconstruction method estimates the input from the output and then bounds the errors of the estimates from above by assuming they are Gaussian. This method, which provides a lower bound on the mutual information, has been used with much success in a variety of experimental preparations (Bialek et al. 1991; de Ruyter van Steveninck and Bialek 1988; de Ruyter van Steveninck and Laughlin 1996; Rieke et al. 1997).
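Before turning to the direct method, a toy numeric example may make Eq. 5 and its conditional rewriting concrete. The 2 × 2 joint distribution below is invented purely for illustration.

```matlab
% Toy check of Eq. 5 (and of the equivalent conditional form) on a 2x2
% joint distribution. Numbers are illustrative only.
Pj   = [0.4 0.1; 0.1 0.4];          % P(Zin, Zout) for binary input and output
Pin  = sum(Pj, 2);  Pout = sum(Pj, 1);
H    = @(p) -sum(p(p > 0).*log2(p(p > 0)));
I5   = H(Pin) + H(Pout) - H(Pj(:))  % Eq. 5: ~0.278 bits

% conditional form I = H(Zout) - H(Zout|Zin), averaging over the input
Hc   = Pin(1)*H(Pj(1,:)/Pin(1)) + Pin(2)*H(Pj(2,:)/Pin(2));
I6   = H(Pout) - Hc                 % same value, ~0.278 bits
```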
Direct method
In this paper we will use a direct method (DeWeese 1995, 1996; de Ruyter van Steveninck et al. 1997; Stevens and Zador 1996) to estimate the mutual information. Direct methods use another form of the expression Eq. 5 for mutual information
I = H(T) - H(T \mid Z_{in}) \qquad (7)

where H(T) and H(T|Zin) are the total and conditional entropies, respectively, of the ISI distribution. The information rate (units: bits/second) is then just the information per spike (units: bits/spike) times the firing rate R (units: spikes/second)

\dot{I} = I \times R \qquad (8)
The representation of the output spike train as a sequence of firing times {t0, . . . , tn} is entirely equivalent (except for edge effects) to the representation as a sequence of ISIs {T0, . . . , Tn}, where

T_i = t_{i+1} - t_i \qquad (9)

The advantage of using ISIs rather than spike times is that H(T) depends only on the ISI distribution P(T), which is a univariate distribution. This dramatically reduces the amount of data required.

We assume that ISIs are specified only to within some finite temporal precision Δt. The assumption of finite precision keeps the potential information finite. If this assumption is not made, each spike has potentially infinite information capacity; for example, a message of arbitrary length could be encoded in the decimal expansion of a single ISI. The total entropy is then

H(T) = -\sum_i P(T_i) \log_2 P(T_i) \qquad (10)

where P(Ti) is the probability that the length of the ISI was between Ti and Ti + Δt. The distribution of ISIs can be obtained from a single long (ideally, infinite) sequence of spike times. The conditional entropy is obtained by averaging over the ensemble of inputs,

H(T \mid Z_{in}) = \left\langle -\sum_j P(T_j \mid [Z_{in}(t)]_m) \log_2 P(T_j \mid [Z_{in}(t)]_m) \right\rangle_m \qquad (11)

where ⟨·⟩ represents the average. Here P(Tj|[Zin(t)]m) is the probability of obtaining an ISI of length Tj in response to a particular set of input spikes [Zin(t)]m. This conditional ISI distribution depends on the amount of synaptic noise assumed; if there is no noise, the output distribution assumes only a single value and the conditional entropy is zero.
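In code, the direct method reduces to building ISI histograms. The sketch below assumes, hypothetically, that isi holds the ISIs (in ms) from one long run and that isiRep{m} holds the ISIs recorded while the m-th frozen input ensemble was replayed; neither the variable names nor the bin range come from the paper.

```matlab
% Direct method (Eqs. 7-11) from binned ISI histograms; names illustrative.
entropy = @(p) -sum(p(p > 0).*log2(p(p > 0)));

edges = 0:1:500;                          % 1-ms ISI bins
Ptot  = histc(isi, edges)/numel(isi);     % P(T): total ISI distribution
Htot  = entropy(Ptot);                    % H(T), Eq. 10

M = numel(isiRep);
Hcond = 0;
for m = 1:M                               % average over frozen inputs, Eq. 11
    Pm    = histc(isiRep{m}, edges)/numel(isiRep{m});
    Hcond = Hcond + entropy(Pm)/M;        % H(T|Zin)
end
Ispike = Htot - Hcond;                    % bits/spike, Eq. 7
R      = 1000/mean(isi);                  % firing rate (spikes/s), isi in ms
Irate  = Ispike*R                         % bits/s, Eq. 8
```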
Model assumptions
We have assumed a model of neuronal dynamics in which ISIs are independent. This assumption simplifies the estimation of the information rate, because it reduces the estimation of the multidimensional distribution of spike times to the estimation of the one-dimensional ISI distributions P(T) and P(T|Zin(t)), from which the mutual information can be calculated exactly. Under what conditions will ISIs be independent? Because correlated ISIs can arise either from the spike generating mechanism itself or from the input signal, we consider the validity of our assumptions about each in turn.
Real synapses, in contrast to those in our model, are use dependent: the release probability varies as a function of the recent history of presynaptic activity (Dobrunz and Stevens 1997; Markram and Tsodyks 1996; Varela et al. 1997; Zador and Dobrunz 1997). We have made no attempt to explore the potentially important consequences of such use-dependent effects.
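A simple diagnostic for the renewal assumption, under the hypothetical convention that isi holds the measured ISIs in temporal order, is the lag-1 serial correlation, which should be near zero if successive ISIs are independent.

```matlab
% Lag-1 serial correlation of successive ISIs; near zero for a renewal process.
c    = corrcoef(isi(1:end-1), isi(2:end));
rho1 = c(1, 2)
```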
Informative upper bound
The assumption that successive ISIs are independent (i.e., that the spike train is a renewal process) leads to an exact expression (rather than the lower bound provided by the reconstruction method) for the mutual information, subject only to error in the estimation of the ISI distribution. Here we review the well-known result that a Poisson process (the special case where the ISI distribution is exponential) yields the maximum entropy spike train, and give the simple closed-form expression for the entropy in this case.
By discretizing time into bins of width Δt, we have disallowed the possibility of multiple spikes per bin. If the conditional entropy is zero (i.e., if there is no noise whatsoever), then all the entropy is information, and the upper bound on the entropy is equal to the upper bound Iub on the information.

The probability P1 of observing a spike in a bin of width Δt depends on the firing rate R as P1 = R × Δt, and the probability of not observing a spike is P0 = 1 − R × Δt. If spikes are independent (that is, if the probability of observing a spike in one bin does not depend on whether there was a spike in any neighboring bin, so that the spike train is a Poisson process), then the entropy per bin is

H_{bin} = -\sum_i P_i \log_2 P_i = -P_0 \log_2 P_0 - P_1 \log_2 P_1

At low firing rates, P0 ≈ 1 and −P0 log2 P0 ≈ 0, so the entropy per bin is approximately −P1 log2 P1 = R Δt log2 [1/(R Δt)]. The entropy rate (entropy per unit time) is then the entropy per bin divided by the time per bin Δt, or

\dot{H}_{ub} = R \log_2 \frac{1}{R \, \Delta t} \qquad (12)
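A quick numeric check of Eq. 12 (with illustrative values, not a calculation from the paper) compares the exact per-bin entropy with the low-rate approximation.

```matlab
% Numeric check of Eq. 12; R and dt are example values.
R  = 40;        % firing rate (spikes/s)
dt = 0.001;     % bin width (s), i.e., 1 ms
P1 = R*dt;  P0 = 1 - P1;
Hbin  = -P0*log2(P0) - P1*log2(P1);   % exact entropy per bin (bits)
Hrate = Hbin/dt                       % exact entropy rate: ~242 bits/s
Hub   = R*log2(1/(R*dt))              % Eq. 12 approximation: ~186 bits/s
% the approximation drops the -P0*log2(P0) term and so undershoots at this rate
```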
RESULTS
Synaptic variability is the dominant source of output variability
Mainen and Sejnowski (1995) have previously shown that the timing of spikes produced by cortical neurons in response to somatic current injection can be highly reliable. The currents they injected were obtained by passing a Gaussian signal through a low-pass filter representing the time course of an EPSC and adding a constant offset. Although such a Gaussian current is obtained in the limit as the number of inputs becomes large, Mainen and Sejnowski (1995) did not explicitly relate the current they injected to the underlying synaptic drive.
In Fig. 1A, most of the spikes are aligned to a precision of ~1 ms, although a few "stray" or "displaced" spikes are also seen. In agreement with the observations of Mainen and Sejnowski (1995), these results show that cortical neurons can generate precisely repeated outputs in response to precisely repeated inputs, even when the driving current corresponds to a synthetic synaptic current generated by an ensemble of independent inputs. The small remaining output variability seen in Fig. 1A is due to some combination of experimental instability and the intrinsic imprecision of the spike generator. Experiments in which precisely the same current is injected establish a limit on the output precision of which these neurons are capable. The output variability increases as other sources of variability, such as synaptic noise, are considered.
In contrast to Fig. 1A, in which precisely the same current was injected on each trial, in this experiment a somewhat different waveform, corresponding to the random removal of 3/10 spikes from the input ensemble, was injected on every trial. Figure 1B shows that spikes are no longer well aligned, indicating that under these conditions synaptic failures are the dominant source of output variability.
Information rate depends on firing rate
Experiments like those shown in Fig. 1 suggest that synaptic noise represents an important source of output variability. Such experiments can be used to estimate information rates in cortical neurons by using techniques developed elsewhere (Buracas et al. 1996; de Ruyter van Steveninck et al. 1997). In an experimental setting, however, information estimates can be distorted by nonstationarity, finite data sizes, variability between neurons, and a number of other factors. Although it is possible to correct for such factors (subject to certain reasonable assumptions), here we focus on the results from a model neuron in which all assumptions are explicit; this permits us to focus specifically on the role of synaptic variability in governing transmitted information.

FIG. 2. Dependence of entropy and information on firing rate in a model neuron. Left: entropy and information per spike are plotted as a function of firing rate in a model integrate-and-fire neuron. Dashed line: total entropy, which quantifies total output variability of spike train. Dotted line: conditional entropy, which quantifies variability that remains when signal is held constant. Solid line: mutual information between input and output, which is difference between these quantities. Right: corresponding entropy and information rates in bits/ms are shown. Parameters: Vthresh = −40 mV; Rn = 150 MΩ; τ = 50 ms; Vreset = −50 mV; Vrest = −60 mV; quantal size (mean): 30 pA; quantal size (coefficient of variation): 0.2; Pr = 1; and Nr = 1. Spike rate was varied by increasing presynaptic Poisson input rate. Smooth curves shown represent fit of a high-order polynomial to values computed at a large number of firing rates. In this and all other simulations presented, a binsize of 1 ms was used.

Figure 2A shows that the information per spike decreases as the firing rate increases, even as the entropy rate, which grows as R log2 (1/R), is increasing (see Eq. 12). Figure 2B illustrates the entropy and information rates (units: bits/second) corresponding to the curves shown in Fig. 2A. Because of our assumption that time is discretized into bins of length Δt, each containing at most one spike, the information declines back to zero at very high firing rates (not shown).
Information rate depends on release probability
The invasion of a synaptic terminal by an action potential often fails to induce a postsynaptic response, both in the hippocampus (Allen and Stevens 1994; Dobrunz and Stevens 1997) and in the cortex (Stratford et al. 1996). Although the release probability Pr varies across synapses onto the same neuron (Castro-Alamancos and Connors 1997; Hessler et al. 1993; Rosenmund et al. 1993) and as a function of history of use (Abbott et al. 1997; Dobrunz and Stevens 1997; Markram and Tsodyks 1996; Varela et al. 1997), for simplicity we make the assumption here that the release probability Pr is the same at all terminals.

FIG. 3. Information depends on synaptic release probability. Left: information rate is plotted as a function of firing rate for 4 values of release probability, Pr = 1, 0.9, 0.6, and 0.3 (top to bottom), in a model integrate-and-fire neuron. Top curve is same as the middle curve shown in Fig. 2, bottom. Right: information rate is plotted as a function of release probability Pr at F = 40 Hz. In each simulation, Pr was same at all synapses. To maintain net Poisson input rate Snet constant, Poisson rate at each synapse was increased to compensate for decrease due to synaptic failures. Thus for all curves, EPSCs arrived at a net rate of 2.4/ms (see Model of synaptic drive for details). Except as indicated, parameters are same as in Fig. 2, bottom.

Information rate depends on the number of functional contacts per axon

A single axon may sometimes make multiple synapses onto a postsynaptic target, or a single synapse (such as the neuromuscular junction) might have multiple release sites. To avoid ambiguity, we use "functional contact" to refer to any release site from a presynaptic axon onto a postsynaptic target, whether it involves multiple synapses per axon or multiple release sites per bouton. At the neuromuscular junction, functional contacts are counted by the thousands (Katz 1966). At excitatory synapses in the cortex, the number of functional contacts is much smaller, but still sometimes greater than one (Markram and Tsodyks 1996; Sorra and Harris 1993). We have therefore explored the consequences of multiple functional contacts on the information rate.

FIG. 4. Information rate depends on number of functional contacts. Left: information rate is plotted as a function of release probability Pr for 3 values of the number of functional contacts, Nr = 1, 5, and 20 (bottom to top), in a model integrate-and-fire neuron. Bottom curve is same as that shown in Fig. 3, bottom. Right: information rate is plotted as a function of number of functional contacts for Pr = 0.5, F = 40 Hz. To maintain net Poisson input rate Snet constant, Poisson rate at each synapse was increased to compensate for changes in Snet due to synaptic failures or number of functional contacts; thus for all curves, EPSCs arrived at a net rate of 2.4/ms (see Model of synaptic drive for details). Except as indicated, parameters are same as in Fig. 2.
Reliability of mean rate coding
It may seem obvious that because multiple functional contacts increase the fidelity with which a presynaptic signal is propagated, they can overcome the noise induced by synaptic failures and quantal fluctuations and thereby increase the fidelity of neuronal signaling. In the previous section we quantified this intuition under the hypothesis that the precise timing of spikes carries information. To what extent does this conclusion depend on the particular assumptions we have made about the neural code?
We used the Fano factor to assess the reliability of coding under the mean rate hypothesis. The Fano factor is defined as the variance σ²N divided by the mean µN of the spike count N in some time window W. The Fano factor can be viewed as a kind of "noise-to-signal" ratio; it is a measure of the reliability with which the spike count could be estimated from a time window that on average contains several spikes. In fact, for a renewal process like the neuronal spike generator considered here, the distribution PN(N, W) of spike counts can be shown (Feller 1971) by the central limit theorem to be normally distributed (asymptotically, as the number of trials becomes large), with

\mu_N = \frac{W}{\mu_{isi}}, \qquad \sigma_N^2 = \frac{W \sigma_{isi}^2}{\mu_{isi}^3}

where µisi and σisi are, respectively, the mean and the standard deviation of the ISI distribution P(Ti). Thus the Fano factor F is related to the coefficient of variation CV = σisi/µisi of the associated ISI distribution by

F = \frac{\sigma_N^2}{\mu_N} = \frac{\sigma_{isi}^2}{\mu_{isi}^2} = C_V^2
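The asymptotic relation F = CV² can be checked empirically. The sketch below uses gamma-distributed ISIs (an arbitrary renewal-process choice, not the paper's model) and the 250-ms counting window of Fig. 5.

```matlab
% Empirical check that F -> CV^2 for a renewal spike train; values illustrative.
nisi  = 1e5;  shape = 4;  mu = 25;                    % gamma ISIs, mean 25 ms (40 Hz)
isi   = mu/shape * sum(-log(rand(shape, nisi)), 1);   % gamma via sum of exponentials
cv2   = var(isi)/mean(isi)^2                          % CV^2 of ISIs (~1/shape = 0.25)

t      = cumsum(isi);  W = 250;                       % spike times; 250-ms window
edges  = 0:W:max(t);
counts = histc(t, edges);  counts(end) = [];          % spike count N per window
fano   = var(counts)/mean(counts)                     % approaches CV^2 for large W
```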
FIG. 5.
Information is inversely proportional to number of functional contacts in a mean rate code. In these simulations, input Poisson rate Snet was held constant. Fano factor (variance divided by mean of spike count) during a 250-ms window is plotted as a function of number of functional contacts. This measure can be thought of as an effective "noise-to-signal" ratio for a mean rate code, because it reflects how well spike count can be estimated. A larger ratio indicates that spike count is harder to estimate. Curve illustrates that an increase in number of functional contacts leads to an increase in variance of synaptic current driving neuron and thereby an increase in Fano factor. To maintain rate of Poisson input Snet constant, Poisson rate at each synapse was increased to compensate for changes in Snet because of synaptic failures or number of functional contacts; thus for all curves, EPSCs arrived at a net rate of 2.4/ms (see Model of synaptic drive for details). Except as indicated, parameters are same as in Fig. 2.
DISCUSSION

We have estimated the mutual information between the synaptic drive and the resulting output spike train in a model neuron. We have adopted a framework in which the time at which individual spikes occur carries information about the input. In this formulation, the exact sequence of action potentials arriving at each of the presynaptic terminals is the "signal," and the "noise" is any variability in the response to repeated trials on which precisely the same sequence is presented. We found that the information was a smooth function of both synaptic reliability and connection redundancy: no sharp transition was observed from an "unreliable" to a "reliable" mode. However, connection redundancy can only compensate for synaptic unreliability under the assumption that the fine temporal structure of individual spikes carries information. If only the number of spikes in some relatively long time window carries information (a "mean rate" code), an increase in the fidelity of synaptic transmission results in a seemingly paradoxical decrease in the information available in the spike train.
Related work

Information rates for sensory neurons in a wide variety of experimental systems have now been measured for both static (Golomb et al. 1997; Optican and Richmond 1987; Richmond and Optican 1990; Tovee et al. 1993) and time-varying (Bair et al. 1997; Bialek et al. 1991; Buracas et al. 1996; Dan et al. 1996; de Ruyter van Steveninck and Bialek 1988; Gabbiani and Koch 1996; Gabbiani et al. 1996; Rieke et al. 1997; Warland et al. 1997) stimuli. Most of the work on time-varying stimuli used reconstruction methods to obtain a lower bound on the transmitted information; typical values were in the range of 1-3 bits/spike. De Ruyter van Steveninck and Laughlin (1996) applied similar techniques to estimate information rates across graded synapses in the blowfly.
The approach taken here is similar to that of Stevens and Zador (1996) and closely related to that in DeWeese (1996). Both used a direct rather than a reconstruction method to estimate the information in a spiking neuron model. In Stevens and Zador (1996), the key assumption was that ISIs were independent, whereas in DeWeese (1996), the key assumption was that spikes were independent.
Neural code

Although it is generally agreed that the spike train output by a neuron encodes information about the inputs to that neuron, the code by which the information is transmitted remains unclear (see Ferster and Spruston 1995; Stevens and Zador 1995 for recent discussions). One idea (the conventional view in systems physiology) is that it is the mean firing rate alone that encodes the signal and that variability about this mean is noise (Shadlen and Newsome 1994, 1995). An alternative view that has recently gained increasing support is that it is the variability itself that encodes the signal, i.e., that the information is encoded in the precise times at which spikes occur (Abeles et al. 1994; Bialek et al. 1991; Rieke et al. 1997; Softky 1995).
A comparable role for spike timing in mammalian cortex has been more controversial. It has been suggested that motion-sensitive neurons in area MT of awake monkeys encode only fractions of a bit per second and that all of the encoded information is available in the spike count over a relatively long time window (Britten et al. 1992). However, more recent experiments (Bair et al. 1997; Buracas et al. 1996) suggest that these neurons encode information at rates (1-2 bits/spike) comparable with those of the H1 neuron of the fly when presented with visual stimuli that have appropriately rich temporal structure. Thus it may be wrong to speak of the neural code: it may well turn out that some components of the input stimulus (e.g., those that are changing rapidly) are encoded by precise firing times, whereas others are not.
Information and synaptic unreliability

The present paper is the first to interpret information rates in single cortical neurons in terms of the underlying biophysical sources of the signal and noise. Here the signal is the set of firing times over the ensemble of presynaptic neurons, whereas the noise is the synaptic variability that leads to variability in the firing times of the postsynaptic neuron.

If such principles apply to cortical computation, then the cortex may have evolved strategies to compensate for synaptic unreliability, given other constraints. One such strategy is exemplified by the neuromuscular junction (Katz 1966), where the number of release sites Nr per terminal is large enough to guarantee a high-fidelity connection under normal conditions. But such multirelease synapses are large, and the cortex may be under an additional constraint to minimize size.
Moreover, the release probability Pr at central synapses is not fixed but varies as a function of the history of use (Dobrunz and Stevens 1997; Fisher et al. 1997; Magleby 1987; Markram and Tsodyks 1996; Tsodyks and Markram 1997; Varela et al. 1997; Zador and Dobrunz 1997; Zucker 1989). We speculate that a dynamic Pr is essential to cortical computation. A dynamic Pr could function as a form of gain control (Abbott et al. 1997; Tsodyks and Markram 1997; Varela et al. 1997). More generally, it could be used to permit efficient computation on time-varying signals (Maass and Zador 1998). Thus we propose that the "reason" that Pr does not simply approach unity may be that cortical computation requires that Pr retain a large dynamic range.
It is interesting to note that information rates measured in anesthetized and in alert (Buracas et al. 1996) primate visual cortex are in the same range.
ACKNOWLEDGEMENTS

This work was supported by The Sloan Center for Theoretical Neurobiology at the Salk Institute and by a grant to Charles F. Stevens from the Howard Hughes Medical Institute.

FOOTNOTES

Received 12 September 1997; accepted in final form 19 November 1997.
0022-3077/98 $5.00 Copyright ©1998 The American Physiological Society