Intrinsic Dynamics in Neuronal Networks. I. Theory

P. E. Latham,1 B. J. Richmond,2 P. G. Nelson,3 and S. Nirenberg1

 1Department of Neurobiology, University of California at Los Angeles, Los Angeles, California 90095;  2Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health; and  3Laboratory of Developmental Neurobiology, National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, Maryland 20892


    ABSTRACT

Latham, P. E., B. J. Richmond, P. G. Nelson, and S. Nirenberg. Intrinsic Dynamics in Neuronal Networks. I. Theory. J. Neurophysiol. 83: 808-827, 2000. Many networks in the mammalian nervous system remain active in the absence of stimuli. This activity falls into two main patterns: steady firing at low rates and rhythmic bursting. How are these firing patterns generated? Specifically, how do dynamic interactions between excitatory and inhibitory neurons produce these firing patterns, and how do networks switch from one firing pattern to the other? We investigated these questions theoretically by examining the intrinsic dynamics of large networks of neurons. Using both a semianalytic model based on mean firing rate dynamics and simulations with large neuronal networks, we found that the dynamics, and thus the firing patterns, are controlled largely by one parameter, the fraction of endogenously active cells. When no endogenously active cells are present, networks are either silent or fire at a high rate; as the number of endogenously active cells increases, there is a transition to bursting; and, with a further increase, there is a second transition to steady firing at a low rate. A secondary role is played by network connectivity, which determines whether activity occurs at a constant mean firing rate or oscillates around that mean. These conclusions require only conventional assumptions: excitatory input to a neuron increases its firing rate, inhibitory input decreases it, and neurons exhibit spike-frequency adaptation. These conclusions also lead to two experimentally testable predictions: 1) isolated networks that fire at low rates must contain endogenously active cells and 2) a reduction in the fraction of endogenously active cells in such networks must lead to bursting.


    INTRODUCTION

In the absence of stimuli, neurons in many areas of the mammalian CNS remain active. This activity falls into two main patterns: low, steady firing in the 1- to 5-Hz range and rhythmic bursting. The former has been widely observed in sensory cortex, motor cortex, and spinal cord and has been seen in both awake-behaving and anesthetized animals (Collins 1987; Gilbert 1977; Herrero and Headley 1997; Lamour et al. 1985; Leventhal and Hirsch 1978; Mednikova and Kopytova 1994; Ochi and Eggermont 1997; Salimi et al. 1994; Szente et al. 1988; Zurita et al. 1994); the latter is fundamental to central pattern generators, which generate rhythmic behavior such as respiration, locomotion, and chewing (for reviews see Marder and Calabrese 1996; Rekling and Feldman 1998). Both patterns may occur in the same neural tissue, with neuromodulators inducing a reversible switch between steady firing and bursting (Berkinblit et al. 1978; Kudo and Yamada 1987; Zoungrana et al. 1997).

From a theoretical point of view, it has been relatively easy to understand how large neuronal networks might generate rhythmic bursting (Feldman and Cleland 1982; Perkel and Mulloney 1974; Rekling and Feldman 1998; Rybak et al. 1997; Smith 1997) but difficult to understand how such networks might generate steady, low firing rates (Abeles 1991). Several investigators have explored the latter problem theoretically (Amit and Treves 1989; Buhmann 1989; Treves and Amit 1989). Their approach was to use simulated networks and attempt to generate low firing rates through dynamic interactions between excitatory and inhibitory neurons. The networks they used were (1) highly interconnected, (2) isolated from any external sources of input, and (3) comprised of neurons whose resting membrane potentials were far enough below threshold that several near-synchronous excitatory postsynaptic potentials (EPSPs) were necessary to trigger an action potential. Networks with these seemingly standard properties were unable to produce maintained low firing rates; the lowest numerically generated mean rates were ~20 Hz, significantly larger than the background firing rates observed in biological networks.

These experimental and numerical findings raise three main questions. First, why were networks with the above "standard" properties unable to fire at low rates? Did the numerical studies simply miss a parameter regime that would have generated low firing rates, or were the properties actually too restrictive? Second, what are the conditions that allow networks to fire robustly at low rates? And third, what is the relation between steadily firing networks and rhythmically bursting ones? In particular, is there a single parameter, or a small set of parameters, that allow networks to switch naturally from one state to the other?

To answer these questions, we developed a model that allowed us to explore analytically the intrinsic dynamics of large, isolated networks in the low firing rate regime. Our main assumptions were conventional: excitatory input to a neuron increases its firing rate, inhibitory input decreases it, and neurons exhibit spike-frequency adaptation (their firing rates decrease during repetitive firing). We found theoretically, and verified using large, simulated networks of spiking neurons, that a single parameter plays a dominant role in controlling intrinsic firing patterns. That parameter is the fraction of endogenously active cells; i.e., the fraction of cells that fire without input. Our primary result is that there is a sharp distinction between networks in which the fraction of endogenously active cells is zero and those in which it is greater than zero. When the fraction is zero, a stable, low firing rate equilibrium cannot exist; networks are either silent or fire at high rates. Only when the fraction is greater than zero are low average firing rates possible. In this regime there is a further subdivision into networks that burst and those that fire steadily, the former having fractions below a threshold and the latter having fractions above threshold. Connectivity also plays a role in shaping firing patterns, but to a lesser degree; it determines whether activity occurs at a constant mean firing rate or oscillates around that mean.

These theoretical results imply that an isolated network that fires at low rates must contain endogenously active cells. We tested this strong, parameter-free prediction experimentally in cultured neuronal networks and consistently found a large fraction of endogenously active cells, ~30% on average, in networks that displayed low firing rates. We also investigated experimentally the transition between steady firing and bursting as the fraction of endogenously active cells changed, and we found that networks that fired steadily could be induced to burst by reducing the fraction of endogenously active cells. Both of these experimental results are presented in the following paper (Latham et al. 2000).

The above analysis applies to networks that receive external input as well as to isolated ones. This is because cells that receive sufficient external input to make them fire are effectively endogenously active, in the sense that they can fire without receiving input from other neurons within the network. These pseudoendogenously active cells control firing patterns in the same way that truly endogenously active ones control firing patterns in isolated networks. In particular, for networks receiving external input, firing patterns follow the same set of transitions as discussed above: as activity produced by external input increases there is a transition first from silence to bursting, and then from bursting to steady firing.


    METHODS

In RESULTS we make a series of predictions about the behavior of large neuronal networks, then test those predictions by performing large-scale network simulations. In this section we describe the single neuron equations and synaptic coupling used in the simulations, discuss our choice of simulation parameters, and provide a particular realization of a reduced network model that is used to illustrate the general principles derived in RESULTS.

Network simulations

Our predictions about the behavior of large neuronal networks are based on a simplified firing rate model. To verify these predictions, we perform large-scale simulations with synaptically coupled model neurons. Following are the single neuron equations used in the simulations, the prescription for choosing single neuron parameters and synaptic coupling, and a complete list of network parameters.

SINGLE NEURON EQUATIONS. Our simulated network consists of N synaptically coupled neurons, NE of which are excitatory and NI of which are inhibitory. The time evolution equation for the membrane potential of neuron i, Vi, may be written
C_{cell} \frac{dV_i}{dt} + I_{spike,i} + I_{fAHP,i} + I_{sAHP,i} + I_{syn,i} = I_{a,i}  (1)
where Ccell is the cell capacitance, Ispike consists of the currents responsible for generating action potentials, IfAHP and IsAHP are fast and slow afterhyperpolarization currents, respectively, Isyn represents the synaptic current, and Ia is a constant depolarizing current. The fast afterhyperpolarization current enforces a relative refractory period; the slow afterhyperpolarization current, which introduces a slowly decaying hyperpolarization each time a neuron emits an action potential, is responsible for spike-frequency adaptation.

Rather than using conductance-based currents for Ispike, we model this quantity as
I_{spike,i} = -\frac{(V_i - V_r)(V_i - V_t)}{R_{cell}\,\Delta V}  (2)
where ΔV ≡ Vt − Vr is the nominal gap between resting (Vr) and threshold (Vt) voltages and Rcell is the membrane resistance of the cell (Fig. 1). Typically, Vr = -65 mV and Vt = -50 mV, producing a nominal gap of 15 mV. In the absence of synaptic drive and afterhyperpolarization currents, Eq. 1 with Ispike,i given by Eq. 2 is identical to the θ-neuron model introduced by Ermentrout and Kopell (1986) and explored further by Ermentrout (1996). This model describes the behavior of type I neurons, neurons that can support arbitrarily low-frequency oscillations (Hodgkin 1948). The advantage of using the θ-neuron rather than a conductance-based model is that relatively large time steps can be used, on the order of 1 ms rather than the 0.1- to 0.2-ms time steps required to accurately represent the rapidly varying membrane potential during a true action potential.



Fig. 1. Current-voltage relation for Ispike, the current that generates action potentials. Arrows indicate voltage trajectories in the absence of synaptic input and afterhyperpolarization currents. Arrows pointing to the right correspond to regions where the voltage increases in time; arrows pointing to the left correspond to regions where it decreases. These arrows indicate that Vr, the resting membrane potential, is a stable equilibrium and Vt, the threshold, is an unstable one. Inset: action potential generated by giving the neuron an initial voltage slightly above threshold.

If we were to adhere strictly to the θ-neuron model [which is related to our model by the change of variables Vi ~ tan(θ/2) + constant], an action potential would occur whenever Vi reaches +∞ (the spike apex), at which point the voltage would be reset to −∞ (the spike repolarization). For numerical work, however, it is necessary to use finite spike apex and repolarization voltages. The apex we use in our simulations, denoted Vapex, is 20 mV, and the reset value, Vrepol, is -80 mV.
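As a worked sketch of that change of variables (the particular shift and scale below are our own choice; the text fixes the substitution only up to a constant), setting
V_i = \frac{V_r + V_t}{2} + \frac{\Delta V}{2} \tan\!\left(\frac{\theta_i}{2}\right)
in Eqs. 1 and 2, with the afterhyperpolarization and synaptic currents omitted, gives
2\tau_{cell}\, \frac{d\theta_i}{dt} = (1 - \cos\theta_i) + \left(\frac{4 R_{cell} I_{a,i}}{\Delta V} - 1\right)(1 + \cos\theta_i)
which is the canonical θ-neuron: the spike apex and repolarization correspond to θi = ±π, and the neuron is endogenously active exactly when RcellIa,i > ΔV/4, the threshold used below.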

The fast and slow afterhyperpolarization currents may be written
I_{fAHP,i} = g_{K,i}\,(V_i - \mathcal{E}_K)  (3a)

I_{sAHP,i} = g_{K\text{-}Ca,i}\,(V_i - \mathcal{E}_K)  (3b)
where ℰK is the potassium reversal potential and gK,i and gK-Ca,i are potassium conductances. The latter quantities obey the time evolution equations
\frac{dg_{K,i}}{dt} = \frac{g_{K,\infty} - g_{K,i}}{\tau_K} + \delta g_K \sum_{\mu} \delta(t - t_i^{\mu})  (4a)

\frac{dg_{K\text{-}Ca,i}}{dt} = \frac{g_{K\text{-}Ca,\infty} - g_{K\text{-}Ca,i}}{\tau_{K\text{-}Ca}} + \delta g_{K\text{-}Ca} \sum_{\mu} \delta(t - t_i^{\mu})  (4b)
where gK,∞ and gK-Ca,∞ are the equilibrium values of the fast and slow potassium conductances, τK and τK-Ca are their time constants, t_i^µ is the time of the µth spike on neuron i, and the δ-function, δ(t − t_i^µ), provides an impulse whenever t = t_i^µ. Those impulses cause discrete increases, δgK and δgK-Ca, in gK,i and gK-Ca,i, respectively. For simplicity we assume that the equilibrium conductances, gK,∞ and gK-Ca,∞, and the time constants, τK and τK-Ca, are independent of voltage. Then, without loss of generality, we may assume that gK,i and gK-Ca,i are zero at rest, which implies that gK,∞ = gK-Ca,∞ = 0.

Combining Eqs. 1-4, the network evolution equations become
\frac{dV_i}{dt} = \frac{1}{\tau_{cell}} \left[ \frac{(V_i - V_r)(V_i - V_t)}{\Delta V} + \hat{I}_{a,i} - (\hat{g}_{K,i} + \hat{g}_{K\text{-}Ca,i})(V_i - \mathcal{E}_K) - \hat{I}_{syn,i} \right]  (5a)

\frac{d\hat{g}_{K,i}}{dt} = -\frac{\hat{g}_{K,i}}{\tau_K} + \delta\hat{g}_K \sum_{\mu} \delta(t - t_i^{\mu})  (5b)

\frac{d\hat{g}_{K\text{-}Ca,i}}{dt} = -\frac{\hat{g}_{K\text{-}Ca,i}}{\tau_{K\text{-}Ca}} + \delta\hat{g}_{K\text{-}Ca} \sum_{\mu} \delta(t - t_i^{\mu})  (5c)
where τcell ≡ RcellCcell is the cell time constant and quantities with a hat have been multiplied by Rcell: Îa,i ≡ RcellIa,i, etc. Note that both Îa,i and Îsyn,i have units of voltage, whereas the conductances, ĝK and ĝK-Ca, and their increments, δĝK and δĝK-Ca, are dimensionless.
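As a concrete illustration of Eq. 5, the following minimal sketch (not the authors' simulation code) integrates a single, synaptically isolated neuron with forward Euler, applying the spike-apex/reset rule described above; all parameter values are placeholders rather than the entries of Table 1.

# Minimal sketch of Eq. 5 for one isolated neuron (forward Euler; illustrative
# placeholder parameters, not the values in Table 1).
import numpy as np

tau_cell, V_r, V_t = 10.0, -65.0, -50.0      # ms, mV, mV
Delta_V = V_t - V_r                          # nominal gap
V_apex, V_repol, E_K = 20.0, -80.0, -80.0    # mV
tau_K, tau_KCa = 10.0, 1000.0                # ms
dg_K, dg_KCa = 0.5, 0.02                     # dimensionless conductance increments
I_a = 5.0                                    # normalized applied current (mV); > Delta_V/4, so endogenously active

def simulate(T=2000.0, dt=1.0):
    """Integrate Eq. 5 with no synaptic input; return the spike times (ms)."""
    V, g_K, g_KCa = V_r, 0.0, 0.0
    spikes = []
    for step in range(int(T / dt)):
        # Eq. 5a: quadratic spike-generating term + applied current - AHP currents
        dVdt = ((V - V_r) * (V - V_t) / Delta_V + I_a
                - (g_K + g_KCa) * (V - E_K)) / tau_cell
        V += dt * dVdt
        # Eq. 5b and 5c between spikes: exponential decay of the AHP conductances
        g_K -= dt * g_K / tau_K
        g_KCa -= dt * g_KCa / tau_KCa
        if V >= V_apex:                      # spike: reset voltage, increment AHP conductances
            spikes.append(step * dt)
            V = V_repol
            g_K += dg_K
            g_KCa += dg_KCa
    return np.array(spikes)

spikes = simulate()
print(f"{spikes.size} spikes in 2 s; interspike intervals (ms): {np.diff(spikes)}")

Because the slow conductance ĝK-Ca accumulates over successive spikes, the interspike intervals lengthen over time; this is the spike-frequency adaptation referred to in the text.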

The normalized synaptic current, Îsyn, is written
\hat{I}_{syn,i} = \sum_j W_{ij}\, s_{ij}(t)\,(V_i - \mathcal{E}_j)  (6)
where Wij is the strength of the connection from neuron j to neuron i, sij(t) is the fraction of channels on neuron i that open when neuron j emits an action potential, and ℰj is the reversal potential associated with the ligand-gated receptor on the cell that is postsynaptic to neuron j. The weight matrix, Wij, is always nonnegative. This ensures that excitatory drive (activity from neurons with ℰj greater than the threshold for the emission of an action potential) increases the probability that a postsynaptic neuron fires, whereas inhibitory drive (activity from neurons with ℰj less than threshold) decreases the probability.

The conductance change on a postsynaptic neuron is mediated by the fraction of open channels, sij. That fraction increases instantaneously when a spike occurs at neuron j and decays exponentially between spikes: for all i
\frac{ds_{ij}}{dt} = -\frac{s_{ij}}{\tau_s} + r_s \sum_{\mu} \delta(t - t_j^{\mu})  (7)
The parameter rs determines how many closed channels open each time neuron j fires. If we had included adaptation, rs would have been called the use parameter (Tsodyks and Markram 1997); we adopt that name here.

Equations 5-7, along with the rule that whenever Vi reaches Vapex an action potential is emitted and the voltage is reset to Vrepol, constitute our complete network equations. Implementing this model as it stands, however, is costly numerically. This is because the fraction of open channels, sij(t), must be integrated separately for each synapse. These separate integrations can be avoided by noting that the relevant s-dependent quantity is the sum that appears on the right hand side of Eq. 6. That sum can be divided into two terms
\hat{I}_{syn,i} = V_i \tilde{I}_i - \tilde{I}_{\mathcal{E},i}  (8)
where
\tilde{I}_i \equiv \sum_j W_{ij} s_{ij}  (9a)

\tilde{I}_{\mathcal{E},i} \equiv \sum_j W_{ij} s_{ij} \mathcal{E}_j  (9b)
Combining the definitions in Eq. 9 with the time evolution for the fraction of open channels, Eq. 7, we see that Ĩi and Ĩℰ,i evolve according to
\frac{d\tilde{I}_i}{dt} = -\frac{\tilde{I}_i}{\tau_s} + r_s \sum_{j,\mu} W_{ij}\, \delta(t - t_j^{\mu})  (10a)

\frac{d\tilde{I}_{\mathcal{E},i}}{dt} = -\frac{\tilde{I}_{\mathcal{E},i}}{\tau_s} + r_s \sum_{j,\mu} W_{ij} \mathcal{E}_j\, \delta(t - t_j^{\mu})  (10b)
With this formulation there is no need to keep track of the sij individually; Eq. 8 along with the time evolution equations for Ĩi and Ĩℰ,i, Eq. 10, can be used to determine the synaptic current, and sij drops out of the equations.
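A minimal sketch of this bookkeeping (placeholder weights, reversal potentials, and spike patterns; not the simulation code used here):

# Sketch of Eqs. 8-10: two per-neuron accumulators replace the per-synapse variables s_ij.
import numpy as np

rng = np.random.default_rng(0)
N, tau_s, r_s, dt = 100, 5.0, 0.5, 1.0                 # illustrative values
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)    # nonnegative weights, ~10% connectivity
E_rev = np.where(rng.random(N) < 0.8, 0.0, -70.0)      # reversal potential of each presynaptic cell (mV)
V = np.full(N, -65.0)                                  # placeholder membrane potentials (mV)

I_tilde = np.zeros(N)       # sum_j W_ij s_ij        (Eq. 9a)
I_tilde_E = np.zeros(N)     # sum_j W_ij s_ij E_j    (Eq. 9b)

for step in range(50):
    spiked = rng.random(N) < 0.02                      # which presynaptic neurons fired this step
    I_tilde += dt * (-I_tilde / tau_s) + r_s * (W @ spiked)                # Eq. 10a
    I_tilde_E += dt * (-I_tilde_E / tau_s) + r_s * (W @ (spiked * E_rev))  # Eq. 10b
    I_syn = V * I_tilde - I_tilde_E                    # normalized synaptic current, Eq. 8

print("synaptic current on the first 5 neurons:", I_syn[:5])

The point of the reformulation is that no per-synapse state has to be stored or integrated; only the two per-neuron accumulators are updated each time step.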

DISTRIBUTION OF APPLIED CURRENTS. The applied current, Îa,i, determines how close a neuron is to the threshold for generating an action potential, or, if it is above threshold, its endogenous firing rate. This quantity is chosen probabilistically: Îa,i has a boxcar distribution, uniform between 0 and Îmax and 0 outside those two values. We use this distribution because it gives us precise control over the number of neurons that are endogenously active. The precise control arises because neuron i is endogenously active if and only if Îa,i > ΔV/4 ≡ Îthresh. Consequently, when Îmax < Îthresh no neurons are endogenously active, and when Îmax > Îthresh the fraction of endogenously active cells is (Îmax − Îthresh)/Îmax.
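The relation between Îmax and the endogenously active fraction can be checked numerically; the sketch below uses illustrative values of ΔV and Îmax.

# Sketch: boxcar-distributed applied currents and the endogenously active fraction.
import numpy as np

rng = np.random.default_rng(1)
Delta_V = 15.0                   # mV, nominal gap V_t - V_r
I_thresh = Delta_V / 4.0         # endogenously active iff I_a > Delta_V/4
I_max = 6.0                      # mV (illustrative); > I_thresh, so some cells are active
N = 10000

I_a = rng.uniform(0.0, I_max, size=N)                  # boxcar distribution on [0, I_max]
measured = np.mean(I_a > I_thresh)
predicted = (I_max - I_thresh) / I_max if I_max > I_thresh else 0.0
print(f"endogenously active fraction: measured {measured:.3f}, predicted {predicted:.3f}")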

NETWORK CONNECTIVITY. Connectivity is specified by the weight matrix, Wij. This quantity determines both whether two neurons are connected and, if they are, the strength of that connection. Like the applied current, the weight matrix is chosen probabilistically, and the probability that neuron j connects to neuron i is denoted Pij. If there is a connection, the strength of that connection is set by considerations of EPSP and inhibitory postsynaptic potential (IPSP) sizes. If there is no connection, Wij is set to zero.

We use two models of connectivity, infinite range and local, when constructing Pij. For infinite range connectivity, Pij depends only on the types of neurons i and j
P_{ij} = P^{\infty}_{T_i, T_j}
where T specifies type
T_i = \begin{cases} E & \text{neuron } i \text{ is excitatory} \\ I & \text{neuron } i \text{ is inhibitory} \end{cases}
We do not allow autapses, which means Pii = 0.

The connection probabilities consist of four numbers, corresponding to the four combinations of pairs of excitatory and inhibitory neurons. We parameterize these four numbers with the quantities KTj and BTj, where KTj is the mean number of postsynaptic neurons a presynaptic neuron connects to and BTj is the connectivity bias
K_{T_j} \equiv N_E P^{\infty}_{E T_j} + N_I P^{\infty}_{I T_j}  (11a)

B_{T_j} \equiv \frac{P^{\infty}_{I T_j}}{P^{\infty}_{E T_j}}  (11b)
where, recall, NE and NI are the number of excitatory and inhibitory neurons, respectively. Note that BTj > 1 indicates a bias toward inhibitory neurons, whereas BTj < 1 indicates a bias toward excitatory ones.

Inverting Eq. 11 yields the expressions for the connection probabilities in terms of the mean number of connections and bias
P^{\infty}_{E T_j} = \frac{K_{T_j}}{N_E + N_I B_{T_j}}  (12a)

P^{\infty}_{I T_j} = \frac{K_{T_j} B_{T_j}}{N_E + N_I B_{T_j}}  (12b)
Equation 12 is sufficient to specify connectivity in the infinite range regime.
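As a sketch of how Eq. 12 is used (the network sizes, K, and B below are illustrative, not the entries of Table 2):

# Sketch of Eq. 12: connection probabilities from mean out-degree K and bias B.
def connection_probs(K, B, N_E, N_I):
    """Return (P_inf onto excitatory cells, P_inf onto inhibitory cells) for a type-j presynaptic neuron."""
    P_E = K / (N_E + N_I * B)          # Eq. 12a
    P_I = K * B / (N_E + N_I * B)      # Eq. 12b
    return P_E, P_I

N_E, N_I = 8000, 2000                  # illustrative network sizes
P_E, P_I = connection_probs(K=1000, B=1.0, N_E=N_E, N_I=N_I)
print(P_E, P_I, N_E * P_E + N_I * P_I)   # the last number recovers K, as in Eq. 11a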

For local connectivity, Pij depends on the distance between neurons. For "distance between neurons" to make sense we need to assign each neuron a spatial location. We work in a two-dimensional space and specify the location of a neuron in polar coordinates, (r, θ), where r is radius and θ is azimuthal angle. Neurons are randomly assigned a position according to the probability distribution P(r, θ), where P(r, θ) r dr dθ is the probability that a neuron falls within the area bounded by [r, r + dr] and [θ, θ + dθ]. For P(r, θ) we use the azimuthally symmetric function
P(r, \theta) = \frac{1 - \tanh[(r^2 - 1)/\Delta_r]}{\pi \Delta_r \ln[1 + \exp(2/\Delta_r)]}  (13)
where Δr is the width of the transition region between approximately constant density and near-zero density; i.e., the distribution P(r, θ) is approximately flat for radius r < 1 − Δr and drops rapidly to nearly zero in a transition region of width 2Δr.

Using the probability distribution given in Eq. 13 to assign a position, xi ≡ (ri cos θi, ri sin θi), to every neuron i, the distance between two neurons is then given simply by the Euclidean norm, dij ≡ |xi − xj|. We use a Gaussian profile for local connectivity, for which the probability of making a connection is modulated by the factor exp(−dij²/2σTj²), where σTj is a parameter that determines the axonal spread of neurons of type j. Thus
P_{ij} = P^{\infty}_{T_i T_j}\, \frac{e^{-d_{ij}^2 / 2\sigma_{T_j}^2}}{Z_{T_j}}
where ZTj is chosen so that the mean connection probability of a neuron placed at the origin (r = 0) is equal to P∞TiTj. In our calculations we use an approximate expression for the normalization, valid in the limit that Δr ≪ 1 [so that P(r, θ) = 1/π when r ≤ 1 and zero when r > 1]. In this limit
Z_{T_j} = \int_0^{2\pi} \frac{d\theta}{\pi} \int_0^1 r\, dr\, \exp(-r^2/2\sigma_{T_j}^2) = 2\sigma_{T_j}^2 \left[1 - \exp(-1/2\sigma_{T_j}^2)\right]
This expression for ZTj is valid as long as ZTj ≥ P∞TiTj, which can always be satisfied by making σTj large enough (in the limit σTj → ∞, ZTj → 1). We impose the condition ZTj ≥ P∞TiTj in all our simulations.

We consolidate the infinite range and local connectivity models by using the local connectivity description and setting σTj → ∞ whenever we want to recapture the infinite range case.
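The sketch below puts these pieces together with illustrative parameter values: positions drawn uniformly on the unit disk (the Δr → 0 limit of Eq. 13), Gaussian distance dependence, and the closed-form normalization ZTj derived above.

# Sketch of local connectivity: uniform positions on the unit disk (the
# Delta_r -> 0 limit of Eq. 13), Gaussian distance dependence, normalization Z.
import numpy as np

rng = np.random.default_rng(2)
N, P_inf, sigma = 500, 0.3, 0.4          # illustrative: neurons, P_infinity, axonal spread

r = np.sqrt(rng.random(N))               # sqrt gives uniform density over the disk
phi = 2.0 * np.pi * rng.random(N)
x = np.column_stack((r * np.cos(phi), r * np.sin(phi)))

d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)       # pairwise distances d_ij
Z = 2.0 * sigma**2 * (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))     # normalization (Z >= P_inf here)
P = P_inf * np.exp(-d**2 / (2.0 * sigma**2)) / Z
np.fill_diagonal(P, 0.0)                                         # no autapses
connected = rng.random((N, N)) < P                               # realize the random connectivity

print("mean number of connections per neuron:", connected.sum(axis=0).mean())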

The strength of a connection, once one is made, is determined by the size of Wij. To ensure that Wij is in a biophysically reasonable range, we need a relation between it and the sizes of both excitatory and inhibitory PSPs. To derive such a relation, we use Eq. 5a to compute, as a function of Wij, the peak voltage in response to a single presynaptic action potential. For definiteness we consider a neuron whose resting membrane potential is the nominal one, Vr (which implies that Îa = 0), and assume that the neuron has not produced an action potential for a sufficiently long time that both ĝK and ĝK-Ca have decayed to zero. Under these conditions, Eq. 5a for the membrane potential can be combined with Eq. 6 for the synaptic current to yield
\tau_{cell} \frac{dV_i}{dt} = \frac{(V_i - V_r)(V_i - V_t)}{\Delta V} - \sum_j W_{ij}\, s_{ij}(t)\,(V_i - \mathcal{E}_j)
Letting δVi ≡ Vi − Vr, linearizing the resulting equation, and assuming the synaptic drive is small, we find that δVi evolves according to
\tau_{cell} \frac{d\,\delta V_i}{dt} + \delta V_i = -\sum_j W_{ij}\, s_{ij}\, (V_r - \mathcal{E}_j)
If neuron j fires at time t = 0, the fraction of open channels, sij, jumps instantaneously from 0 to rs and subsequently decays exponentially: sij(t < 0) = 0, sij(t ≥ 0) = rs exp(−t/τs) (recall that τs is the decay time of the open channels and rs is the use parameter; see Eq. 7). Combining this time dependence for sij with the initial condition δVi(t = 0) = 0, the above equation has the solution
\delta V_i(t) = W_{ij}\, (\mathcal{E}_j - V_r)\, r_s\, \frac{\exp(-t/\tau_{cell}) - \exp(-t/\tau_s)}{\tau_{cell}/\tau_s - 1}
This expression implies that |δVi(t)| is maximal when t = ln(τcell/τs)/(τs⁻¹ − τcell⁻¹) ≡ t0. Defining VPSPij ≡ δVi(t0), it is straightforward to show that
W_{ij} = \frac{V_{PSP_{ij}}}{\mathcal{E}_j - V_r}\, \frac{1}{r_s}\, \frac{\tau_{cell}}{\tau_s}\, \exp\!\left[\frac{\ln(\tau_{cell}/\tau_s)}{\tau_{cell}/\tau_s - 1}\right]  (14)
In our simulations, rather than specifying the size of Wij directly, we specify the sizes of the excitatory and inhibitory PSPs (VPSPij with the type of neuron j chosen appropriately) and then use Eq. 14 to determine Wij.
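A minimal sketch of that last step (the default parameter values below are illustrative, not those of Table 1):

# Sketch of Eq. 14: connection weight needed to produce a PSP of a given peak size.
import numpy as np

def weight_from_psp(V_psp, E_rev, V_r=-65.0, r_s=0.5, tau_cell=10.0, tau_s=5.0):
    """Invert the linearized PSP peak (Eq. 14) to obtain W_ij (all values illustrative)."""
    ratio = tau_cell / tau_s
    return (V_psp / (E_rev - V_r)) * (1.0 / r_s) * ratio * np.exp(np.log(ratio) / (ratio - 1.0))

# a 1.0-mV EPSP (reversal at 0 mV) and a -1.5-mV IPSP (reversal at -80 mV)
print(weight_from_psp(1.0, 0.0), weight_from_psp(-1.5, -80.0))

Note that the IPSP case yields a positive weight, as required: Wij is always nonnegative, and the sign of the PSP is carried by ℰj − Vr.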

PARAMETERS. Table 1 contains a list of all parameters used in our simulations. Parameters followed by an asterisk are the ones we vary; for those, a range or set of values is given. Parameters not followed by an asterisk remain constant throughout the simulations. The grouping into single-cell, synaptic, and network parameters is in most cases clear; the exceptions are VEPSP and VIPSP, the nominal amplitudes of the excitatory and inhibitory postsynaptic potentials. These are usually thought of as synaptic or single-cell properties, but in our simulations we use them only to determine the connectivity matrix, Wij, via Eq. 14. Specifically, VPSPij = VEPSP when neuron j is excitatory and VPSPij = VIPSP when neuron j is inhibitory. Thus they are listed under network parameters.


                              
Table 1. Parameters used in the simulations

We performed simulations with two networks, which we denote network A and network B. The former was chosen to simulate networks of cortical neurons, the latter to simulate networks of cultured mouse spinal cord neurons in which we did experiments relevant to the theoretical predictions made here (Latham et al. 2000). They differ primarily in their connectivity and postsynaptic potential amplitude, network A having high, infinite range connectivity and small PSPs and network B having low, local connectivity and large PSPs. The parameters specific to each network are given in Table 2, which contains all the parameters marked with an asterisk in Table 1. A dash in Table 2 indicates parameters that are varied during the course of the simulations.


                              
Table 2. Networks used in the simulations

Most of the parameters in Tables 1 and 2 are either dimensionless or have physical units. There are, however, three normalized parameters: Îmax, which has units of voltage, and δĝK and δĝK-Ca, which are both dimensionless. To convert these to their correct physical units (amperes for current and siemens for conductance), divide by the cell membrane resistance.

Simulations were performed with a fourth-order Runge-Kutta integration scheme. In all simulations reported we used a time step of 1 ms. We occasionally checked that this was short enough by decreasing the time step to 0.5 ms, and in no cases did we see a change in our results.
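For reference, a generic classical fourth-order Runge-Kutta step of the kind used for the smooth part of the dynamics between spikes looks as follows (a sketch, not the authors' integrator).

# Sketch of one classical 4th-order Runge-Kutta step for dy/dt = f(t, y).
def rk4_step(f, y, t, dt):
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# usage: one 1-ms step of the conductance decay in Eq. 5b (assuming tau_K = 10 ms for illustration)
print(rk4_step(lambda t, g: -g / 10.0, 1.0, 0.0, 1.0))   # ~exp(-0.1) = 0.9048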

Choice of simulation parameters

In choosing parameters we attempted to stay close to values observed in two different systems: networks of cortical neurons, and networks of cultured mouse spinal cord neurons in which we did experiments relevant to the theoretical predictions made here (Latham et al. 2000). These two systems differ primarily in their connectivity and postsynaptic potential amplitude, the former having high connectivity and small PSPs, the latter low connectivity and large PSPs. Thus most parameters were the same for the two model networks used in our simulations, and these were relatively standard; the parameters that differed pertained to connectivity and PSP amplitude, mirroring the differences between cortical and cultured spinal cord networks.

As is well known, connectivity in cortex is high; each neuron connects to approximately 7,000 others (Braitenberg and Schüz 1991). The size of a local circuit, assuming such a thing exists, is more difficult to determine. However, given that axonal spread is measured in hundreds of micrometers and the density of neurons is ~10⁵/mm³ (Braitenberg and Schüz 1991), a local circuit on the order of 100,000 neurons is a reasonable estimate. Unfortunately, we do not yet have the computational power to model such large networks. Consequently, we considered networks of only 10,000 neurons, each of which made ~1,000 connections. To make up for the reduced connectivity in our model network we increased the size and frequency of the postsynaptic potentials: instead of using the more realistic numbers of 0.2 mV for EPSPs (Komatsu et al. 1988; Matsumura et al. 1996) and -1 mV for IPSPs (Tamás et al. 1998), we typically used 1.0 and -1.5 mV. The frequency of PSPs was increased in our model by ignoring failures, so neurotransmitter was released every time an action potential occurred.

Unlike cortex and spinal cord in vivo, cultured spinal cord networks are intrinsically localized. However, connectivity in our particular preparation has not been characterized. We thus made estimates as follows. The number of connections a neuron makes is AσPA, where A is the area of the axonal arborization, σ is the number of neurons per area, and PA is the probability that a neuron connects to another neuron within its axonal arborization. We measured axonal arborization from neurons visualized after intracellular injection with horseradish peroxidase (Neale et al. 1978), and found it to be 2.0 ± 0.86 mm² (mean ± SD, n = 12 neurons). The density of our cultures, σ, was 23.7 ± 10.2 neurons/mm² (n = 10 culture dishes). Thus a single neuron is capable of connecting to ~475 others. Finally, in paired recordings (Nelson et al. 1981), approximately one-half the cell pairs tested were connected. This gives a connectivity of ~240 connections/neuron. We used 200 connections/neuron in most of our simulations, although we explored a range of values.

The size of the postsynaptic potentials is large in cultured spinal cord neurons; on the order of 3-10 mV for EPSPs and approximately -6 mV for IPSPs (Nelson et al. 1981, 1983). In contrast to cortex, transmission failures are not observed; this is because a presynaptic neuron makes a large number of connections (~100) on each of its postsynaptic targets (Nelson et al. 1983; Pun et al. 1986). In our simulations we typically used EPSP and IPSP amplitudes of 4 and -6 mV, respectively, and, for simplicity, we did not include variation.

The parameters given in Tables 1 and 2 were the ones we used in the bulk of our simulations. In APPENDIX A we studied the effects of varying these parameters. In no cases did changing parameters produce network behavior that was inconsistent with our model. This suggests that our model made robust predictions and that our simulation results were not due simply to a particular choice of parameters.

Reduced network model

The reduced network model we use in our analysis is based on the Wilson and Cowan equations (Wilson and Cowan 1972), in which the dynamical variables are the mean excitatory and inhibitory firing rates. Those equations are given in generic form in Eq. 16 of RESULTS. Here we write down a particular realization of Wilson and Cowan-type equations that we augment by including spike-frequency adaptation.

Although it would be desirable to start with the equations describing the full network simulations and derive from those a set of mean firing rate equations, this is essentially impossible to do analytically except in highly idealized cases (van Vreeswijk and Sompolinsky 1998), and extremely difficult numerically. Thus we propose a set of reduced equations designed to capture qualitative, but not quantitative, features of large-scale networks. Denoting ν̄E and ν̄I as the mean excitatory and inhibitory firing rates (averaged over neurons), and GE and GI as the mean level of spike-frequency adaptation of the excitatory and inhibitory populations, we let these variables evolve according to
\tau_E \frac{d\bar{\nu}_E}{dt} = g\!\left[ A\,(C_{EE}\bar{\nu}_E - C_{EI}\bar{\nu}_I + \theta_{max} - G_E)/(1 + \bar{\nu}_I) \right] - \bar{\nu}_E  (15a)

\tau_I \frac{d\bar{\nu}_I}{dt} = g\!\left[ A\,(C_{IE}\bar{\nu}_E - C_{II}\bar{\nu}_I + \theta_{max} - G_I)/(1 + \bar{\nu}_I) \right] - \bar{\nu}_I  (15b)

\tau_{SFA} \frac{dG_E}{dt} = \delta G\, \bar{\nu}_E - G_E  (15c)

\tau_{SFA} \frac{dG_I}{dt} = \delta G\, \bar{\nu}_I - G_I  (15d)
where τE and τI are the time scales for relaxation to a firing rate equilibrium, τSFA is the characteristic time scale for changes in the level of spike-frequency adaptation, A is the overall amplitude of the gain functions, the CLM, L, M = E, I, correspond to connectivity among the excitatory (E) and inhibitory (I) populations, δG corresponds to the coupling between firing rate and spike-frequency adaptation, θmax controls network excitability, and g(x) is a thresholding function that saturates at νmax, the maximum firing rate of the neurons,
g(x) \equiv \nu_{max} \begin{cases} 0 & x \le 0 \\ \tanh(x/\nu_{max}) & x > 0 \end{cases}
The mean levels of spike-frequency adaptation, GE and GI, are related loosely to gK-Ca in the network simulations, δG is related to δĝK-Ca, and θmax is related to Îmax.
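The sketch below integrates Eq. 15 with forward Euler; the parameter values are illustrative placeholders, not the values used to generate the figures in RESULTS.

# Sketch of the reduced model, Eq. 15, integrated with forward Euler
# (illustrative parameter values only).
import numpy as np

tau_E, tau_I, tau_SFA = 10.0, 10.0, 2000.0        # ms
A, dG, theta_max, nu_max = 1.0, 2.0, 0.5, 100.0   # placeholders
C_EE, C_EI, C_IE, C_II = 1.5, 1.0, 1.5, 0.5       # connectivity coefficients C_LM

def g(x):
    """Thresholding gain that saturates at nu_max (the g(x) defined above)."""
    return nu_max * np.tanh(x / nu_max) if x > 0.0 else 0.0

def simulate(T=20000.0, dt=0.5):
    nu_E = nu_I = G_E = G_I = 0.0
    trace = []
    for _ in range(int(T / dt)):
        drive_E = A * (C_EE * nu_E - C_EI * nu_I + theta_max - G_E) / (1.0 + nu_I)
        drive_I = A * (C_IE * nu_E - C_II * nu_I + theta_max - G_I) / (1.0 + nu_I)
        nu_E += dt * (g(drive_E) - nu_E) / tau_E      # Eq. 15a
        nu_I += dt * (g(drive_I) - nu_I) / tau_I      # Eq. 15b
        G_E += dt * (dG * nu_E - G_E) / tau_SFA       # Eq. 15c
        G_I += dt * (dG * nu_I - G_I) / tau_SFA       # Eq. 15d
        trace.append(nu_E)
    return np.array(trace)

nu_E_trace = simulate()
print("mean / max excitatory rate:", nu_E_trace.mean(), nu_E_trace.max())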

The qualitative features that these equations capture are 1) the gain functions, g(...), are generally increasing functions of excitatory firing rate and decreasing functions of inhibitory firing rate [the CLM are all positive and g(x) is a nondecreasing function of x], 2) increasing the firing rate increases spike-frequency adaptation (δG is positive), 3) increasing the level of spike-frequency adaptation reduces firing rates, 4) the gain functions are approximately threshold-linear, consistent with the numerical gain functions in Fig. 1A of APPENDIX B, and 5) the gain is reduced by a divisive term proportional to the inhibitory firing rate. The divisive term, which has been proposed as a mechanism for gain control in cortical neurons (Carandini and Heeger 1994; Carandini et al. 1997; Heeger 1992; Nelson 1994), was included for two reasons. First, it was observed in our simulations: Fig. 1A in APPENDIX B shows a pronounced drop in gain as the inhibitory firing rate increases. Second, curved nullclines are essential for the existence of the saddle-node bifurcation that produces bursting (Fig. 5), and the divisive term increases nullcline curvature, thus making bursting more robust.

We are interested in understanding the dynamics represented by Eq. 15 in the regime in which τE and τI are on the order of the membrane time constant, ~10 ms, and τSFA is on the order of the spike-frequency adaptation time, which can be as high as several seconds. Thus τE, τI ≪ τSFA, and we can use the fast/slow dissection proposed by Rinzel (1987) to analyze the dynamics. With this approach, the full four-dimensional system is reduced to two subsystems: a fast one corresponding to the population firing rates, ν̄E and ν̄I, and a slow one corresponding to the level of spike-frequency adaptation, GE and GI. Although this represents a major simplification, it still leaves us with three main behaviors: steady firing, oscillations, and bursting. The first two can be readily understood in terms of the two-dimensional firing rate dynamics with GE and GI held fixed; only the third, bursting, requires the interaction of all four variables. We thus briefly discuss a method for its analysis.

The idea behind the fast/slow dissection is that ν̄E and ν̄I relax rapidly to an attractor (either a steady state or oscillations) compared with the time scale over which GE and GI change. Motion in the GE-GI plane is then determined by Eqs. 15c and 15d with ν̄E and ν̄I evaluated on their attractor. In the purely bursting regime, for attainable levels of spike-frequency adaptation the network is either silent or fires steadily; the attractor is a fixed point for given values of GE and GI. In this steady-state regime, the mean inhibitory firing rate, ν̄I, is a single-valued function of the mean excitatory rate, ν̄E (see Fig. 3B). The dynamics associated with Eq. 15 can thus be reduced to three dimensions, and the important features of the dynamics are captured by a three-dimensional bifurcation diagram that consists of a sheet whose height above the GE-GI plane represents the value of ν̄E. Bifurcations (sudden changes in firing rate) occur at folds in the sheet.

For a particular set of parameters, the trajectory in the GE-GI plane is typically a closed curve that exhibits sudden changes of direction at bifurcation points. If the trajectory collapses onto, or almost onto, a single curve, we may make a further, approximate, reduction to a two-dimensional bifurcation diagram of ν̄E versus GE. For the parameters considered in RESULTS (Fig. 6), the curve in the GE-GI plane collapsed almost to a line, resulting in the two-dimensional bifurcation diagrams shown in Fig. 6, C-E. For each of these diagrams the lines were found numerically using least-squares regression. The equations for the lines were as follows: Fig. 6C (θmax = 0), GI = 0.00 + 0.64GE, R² = 1.0000; Fig. 6D (θmax = 0.25), GI = 0.01 + 0.65GE, R² = 0.9996; Fig. 6E (θmax = 0.52), GI = 0.03 + 0.65GE, R² = 0.9999.
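For the last step, the least-squares reduction can be done directly; the sketch below uses a placeholder (GE, GI) trajectory rather than one actually computed from Eq. 15.

# Sketch of the least-squares reduction: fit G_I = a + b*G_E to a trajectory.
import numpy as np

def fit_line(G_E, G_I):
    """Return intercept a, slope b, and R^2 of the least-squares line G_I = a + b*G_E."""
    b, a = np.polyfit(G_E, G_I, 1)
    resid = G_I - (a + b * G_E)
    r_squared = 1.0 - resid.var() / G_I.var()
    return a, b, r_squared

# placeholder trajectory, nearly collapsed onto a line
G_E = np.linspace(0.0, 1.0, 200)
G_I = 0.01 + 0.65 * G_E + 0.001 * np.sin(6.0 * np.pi * G_E)
print(fit_line(G_E, G_I))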


    RESULTS

In this section we investigate how single neuron and network properties affect intrinsic firing patterns. We use as our model system large, isolated networks with random, sparse connectivity and develop a theory that describes the firing patterns in such networks in the low firing rate regime. We then test specific predictions of this theory by performing simulations with large networks of spiking neurons. Robustness of the theory is verified in APPENDIX A, where we examine a broad range of single neuron and network parameters.

Theory

The theoretical development is divided into three parts: 1) analysis of networks that do not exhibit adaptation, 2) analysis of the more realistic, and more biologically relevant, case of networks that do exhibit adaptation, and 3) analysis of the stability of low firing rate equilibria.

LOW FIRING RATE EQUILIBRIA IN SPARSE, RANDOMLY CONNECTED NETWORKS WITHOUT ADAPTATION. To describe the dynamics of large, sparse, randomly connected networks, we start with the model proposed by Wilson and Cowan (1972). We use as our dynamical variables the mean excitatory and inhibitory firing rates, denoted ν̄E and ν̄I, respectively. In terms of these variables, the Wilson and Cowan model can be cast in the form
\tau_E \frac{d\bar{\nu}_E}{dt} = \Phi_E(\bar{\nu}_E, \bar{\nu}_I) - \bar{\nu}_E  (16a)

\tau_I \frac{d\bar{\nu}_I}{dt} = \Phi_I(\bar{\nu}_E, \bar{\nu}_I) - \bar{\nu}_I  (16b)
where τE and τI are time constants that determine how fast the network relaxes to its equilibria and ΦE and ΦI are the excitatory and inhibitory gain functions. The gain functions determine the network firing rates at one instant of time given the firing rates at an earlier time. Specifically, if the average excitatory and inhibitory firing rates at time t are ν̄E and ν̄I, then ΦE and ΦI are the average excitatory and inhibitory firing rates at times t + τE and t + τI, respectively. In the original Wilson and Cowan equations the gain functions consisted of two terms, one associated with the neurons' refractory period and one with the "subpopulation response function" (Wilson and Cowan 1972). Because we are interested in low firing rates, we have ignored the term associated with the refractory period.

The first step in the analysis of Eq. 16 is to examine the equilibria. The equilibrium equations, which are found by setting both dν̄E/dt and dν̄I/dt to zero, are
0 = \Phi_E(\bar{\nu}_E, \bar{\nu}_I) - \bar{\nu}_E  (17a)

0 = \Phi_I(\bar{\nu}_E, \bar{\nu}_I) - \bar{\nu}_I  (17b)
The solutions to Eqs. 17a and 17b are curves in the (ν̄E, ν̄I) plane. These curves are referred to as the excitatory and inhibitory nullclines, respectively (for a discussion of nullclines similar to ours, see Rinzel and Ermentrout 1998). The aim of this section is to understand how the nullclines depend on single neuron and network properties. Because the shapes of the nullclines are completely determined by the gain functions, we begin by examining how the gain functions depend on these properties.

At high firing rates the gain functions are stereotypically sigmoidal, independent of single neuron and network parameters: when ν̄I is large, both gain functions approach zero as ν̄E approaches zero, rise rapidly as ν̄E increases past some critical value, and approach the maximum neuronal firing rate as ν̄E becomes large. Similarly, when ν̄E is large, the gain functions are sigmoidal versus ν̄I but with negative slope. At very low firing rates, however, the gain functions become sensitive to both single neuron and network properties. Single neuron properties have the larger effect, because at very low firing rates the gain functions are determined primarily by the responses of neurons at rest to a very small number of EPSPs. This led us to divide networks into three regimes based on single neuron response properties. Each regime produces qualitatively different gain functions and thus, as we will show, qualitatively different nullclines. Those regimes are as follows:
1) None of the cells have their resting membrane potential within one EPSP of threshold for the generation of an action potential; i.e., no cell fires in response to a single EPSP, which in turn implies that no cells are endogenously active.
2) Some cells have their resting potential within one EPSP of threshold, but none are endogenously active.
3) Some cells are endogenously active.

In Fig. 2A we plot ΦE(ν̄E, 0) versus ν̄E for these three regimes. We chose ν̄I = 0 in this plot because it accentuates the differences among the gain functions. The three regimes are distinguished by the value and the slope of the gain function at the origin of the (ν̄E, ν̄I) plane. In regime 1 the cells are far enough from threshold that a single action potential cannot cause any neurons to fire, so the gain function and its slope are both zero at the origin. In regime 2, as in regime 1, the neurons cannot fire without input, so ΦE(0, 0) is again zero. However, a single action potential can cause cells to fire and thus ignite network activity; consequently the origin is unstable and the slope there is greater than one. In regime 3 a fraction of the neurons fire without input, so ΦE(0, 0) > 0.



Fig. 2. Schematic of the excitatory and inhibitory gain functions in the 3 regimes discussed in the main text. A: excitatory gain functions. In regime 1, ΦE(ν̄E, 0) is zero when ν̄E = 0 (neurons do not fire without input), and its slope at ν̄E = 0 is also zero (a single action potential cannot make a neuron fire, and thus cannot produce any activity in a silent network). In regime 2 the neurons still cannot fire without input, so ΦE(0, 0) is again zero. However, because a single action potential can cause cells to fire and thus ignite network activity, the slope at ν̄E = 0 is greater than one. The condition for a slope greater than one is that at least 1/KEE of the excitatory neurons have their resting membrane potential within one excitatory postsynaptic potential (EPSP) of threshold, where KEE is the mean number of excitatory connections made by an excitatory neuron. Because KEE is large, this could easily occur. In regime 3 some of the neurons fire without input, so ΦE(0, 0) > 0. B: inhibitory gain functions. In regimes 1 and 2, ΦI(0, ν̄I) = 0, whereas in regime 3, ΦI(0, 0) > 0.

In Fig. 2B we plot ΦI(0, ν̄I) versus ν̄I, again for the three regimes listed above. The behavior here is somewhat simpler: in regimes 1 and 2, ΦI(0, ν̄I) = 0 for all values of ν̄I, whereas in regime 3, ΦI(0, ν̄I) is greater than zero when ν̄I = 0 and decreases monotonically with increasing ν̄I.

Armed with the shapes of the gain functions in the three regimes, we are now in a position to construct the nullclines. That construction is done graphically: starting with the excitatory nullcline, we plot the function ΦE(ν̄E, ν̄I) versus ν̄E for various values of ν̄I and look for intersections with the line ΦE = ν̄E. This is shown in Fig. 3A for regime 1 (no cells within one EPSP of threshold). Each of the intersections, including the one at the origin, is a solution to Eq. 17a and thus corresponds to a point on the excitatory nullcline. Dashed lines connect the intersections to the excitatory nullcline, which is drawn in gray in the two-dimensional (ν̄E, ν̄I) plane directly below the gain curves (Fig. 3C). Note that the gain curves in Fig. 3A that correspond to low values of the inhibitory firing rate have three intersections with the line ΦE = ν̄E. In addition, all gain curves intersect at the origin, which implies that the ν̄I axis is part of the excitatory nullcline. For a similar construction in an all-excitatory network, see O'Donovan et al. (1998).
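The graphical construction just described is easy to automate. The sketch below uses a toy gain function, chosen only so that ΦE(0, ν̄I) = 0 with zero slope at the origin (regime 1-like behavior); it is not the gain function measured from the simulations.

# Sketch: trace the excitatory nullcline by collecting, for each value of nu_I,
# every root of Phi_E(nu_E, nu_I) = nu_E. The gain function is a toy placeholder.
import numpy as np

def Phi_E(nu_E, nu_I):
    """Toy regime-1-like gain: zero value and zero slope at nu_E = 0, saturating at 100 Hz."""
    theta = 15.0 + 3.0 * nu_I            # inhibition shifts the half-activation point
    return 100.0 * nu_E**2 / (nu_E**2 + theta**2)

def excitatory_nullcline(nu_E_grid, nu_I_values):
    points = []
    for nu_I in nu_I_values:
        f = Phi_E(nu_E_grid, nu_I) - nu_E_grid            # roots of f lie on the nullcline
        crossings = np.where(np.diff(np.sign(f)) != 0)[0]
        for k in crossings:                               # linear interpolation between grid points
            x0, x1, f0, f1 = nu_E_grid[k], nu_E_grid[k + 1], f[k], f[k + 1]
            points.append((x0 - f0 * (x1 - x0) / (f1 - f0), nu_I))
    return np.array(points)

pts = excitatory_nullcline(np.linspace(0.0, 100.0, 2001), np.linspace(0.0, 20.0, 81))
print(f"{len(pts)} nullcline points; 3 per value of nu_I at low inhibition, 1 at high inhibition")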



Fig. 3. Gain functions and nullclines when none of the cells have their resting membrane potential within one EPSP of threshold (regime 1). A: excitatory gain curves, ΦE(ν̄E, ν̄I), vs. ν̄E for various values of ν̄I. Only the curve with ν̄I = 0 is labeled; the other curves correspond to increasingly larger values of ν̄I. Each intersection between a gain curve and the line ΦE = ν̄E corresponds to a point on the excitatory nullcline, as indicated by the connecting dashed lines. B: inhibitory gain curves, ΦI(ν̄E, ν̄I), vs. ν̄I for various values of ν̄E. Again, only the curve with ν̄E = 0 is labeled (the straight line at ΦI = 0); the other curves correspond to increasingly larger values of ν̄E. Again, each intersection between a gain curve and the line ΦI = ν̄I corresponds to a point on the inhibitory nullcline, as indicated by the connecting dashed lines. C: the associated excitatory (gray) and inhibitory (black) nullclines. These nullclines separate regions where the time derivatives of the firing rates are positive from regions where they are negative: dν̄E/dt is positive below the excitatory nullcline and negative above it, whereas dν̄I/dt is positive to the right of the inhibitory nullcline and negative to its left. This rule is illustrated by the horizontal and vertical arrows, which indicate the signs of dν̄E/dt and dν̄I/dt, respectively.

The excitatory nullcline in Fig. 3C has the following interpretation: for small enough inhibition the network can either fire stably at high rate (the negatively sloped region of the excitatory nullcline) or be completely silent (the ν̄I axis). Between those two extremes is a threshold. That threshold, which lies on the region of the nullcline with positive slope, represents an unstable equilibrium at fixed inhibitory drive. These differences in stability naturally divide the excitatory nullcline: the region with positive slope is an unstable branch; the regions with negative slope, including the ν̄I axis (which is considered to have a slope of −∞), are stable branches.

Construction of the inhibitory nullcline is also done graphically: again we plot Phi I(ν̄E, ν̄I) versus ν̄I for various values of ν̄E and look for intersections with the line Phi I = ν̄I (Fig. 3B). These intersections correspond to points on the inhibitory nullcline, which is drawn in black in the two-dimensional (ν̄E, ν̄I) plane to the right of the gain curves (Fig. 3C). In contrast to the excitatory one, which has both stable and unstable branches, the inhibitory nullcline has a single branch that is everywhere stable at fixed ν̄E.
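
For readers who prefer a computational statement of this construction, the following sketch finds nullcline points numerically by scanning a gain curve for intersections with the identity line. The sigmoidal gain functions and all parameter values in it are illustrative placeholders, not the gain functions of our model (which are specified in METHODS).

# Sketch of the graphical nullcline construction described above.
# The gain functions below are generic sigmoids chosen for illustration only.
import numpy as np

def phi(x):
    """Generic sigmoidal gain (Hz); purely illustrative."""
    return 100.0 / (1.0 + np.exp(-x))

def Phi_E(nu_E, nu_I):
    # net drive = recurrent excitation minus inhibition minus a constant offset
    return phi(0.2 * nu_E - 0.3 * nu_I - 4.0)

def Phi_I(nu_E, nu_I):
    return phi(0.2 * nu_E - 0.1 * nu_I - 3.0)

def nullcline_points(Phi, fixed_vals, vary_on='E', grid=np.linspace(0.0, 120.0, 4001)):
    """For each fixed value of the other rate, find self-consistent rates where
    Phi(...) equals the varied rate (intersections with the identity line)."""
    points = []
    for fixed in fixed_vals:
        if vary_on == 'E':
            f = Phi_E(grid, fixed) - grid        # Phi_E(nu_E, nu_I) - nu_E
        else:
            f = Phi_I(fixed, grid) - grid        # Phi_I(nu_E, nu_I) - nu_I
        for i in np.where(np.diff(np.sign(f)) != 0)[0]:
            # linear interpolation between grid[i] and grid[i+1]
            root = grid[i] - f[i] * (grid[i + 1] - grid[i]) / (f[i + 1] - f[i])
            points.append((root, fixed) if vary_on == 'E' else (fixed, root))
    return points

exc_nullcline = nullcline_points(Phi_E, fixed_vals=np.linspace(0, 40, 9), vary_on='E')
inh_nullcline = nullcline_points(Phi_I, fixed_vals=np.linspace(0, 120, 9), vary_on='I')
print(f"{len(exc_nullcline)} excitatory and {len(inh_nullcline)} inhibitory nullcline points")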

Construction of the nullclines in the other two regimes is a straightforward extension of the above method. We thus turn our attention to how those nullclines affect the stability and location of firing rate equilibria. We begin with regime 1, in which no cells are within one EPSP of threshold.

In regime 1 we find three intersections between the excitatory and inhibitory nullclines, and thus three equilibria (Fig. 4A). A necessary condition for the local stability of an equilibrium is that the inhibitory nullcline intersect the excitatory one from below (Rinzel and Ermentrout 1998) (see also the section STABILITY OF LOW FIRING RATE EQUILIBRIA). Consequently, the lower intersection on the unstable branch (marked "U" in Fig. 4A) is unstable. There are thus two possible locally stable equilibria in this figure, one at zero firing rate (marked "S," meaning absolutely stable because there are no endogenously active cells to ignite activity once the network falls silent) and the other on the rightmost branch of the excitatory nullcline (marked "M" for metastable, meaning the lifetime of the equilibrium is finite). The rightmost branch corresponds to high firing rate, so neither of these equilibria is at low, but nonzero, firing rate. Consequently, this configuration of nullclines cannot generate the low firing rates observed in vivo.



Fig. 4. Nullclines in the 3 regimes described in the main text. Excitatory and inhibitory nullclines are shown with gray and black lines, respectively. As in Fig. 3, dν̄E/dt is positive below the excitatory nullcline and negative above it, whereas dν̄I/dt is positive to the right of the inhibitory nullcline and negative to its left. S, M, and U indicate stable, metastable, and unstable equilibria, respectively. A: regime 1, none of the cells have their resting membrane potential within one EPSP of threshold. There is a stable equilibrium at zero firing rate and a metastable one at high firing rate, but the low firing rate equilibrium is unstable. B: regime 1, with strong curvature introduced in the inhibitory nullcline to create a metastable, low firing rate state. C: regime 1, in the very high connectivity limit where the nullclines are straight. In this limit 3 properties conspire to eliminate the possibility of a low firing rate equilibrium: the inhibitory nullcline is tied to the origin, there is a gap between the origin and the unstable branch of the excitatory nullcline, and the nullclines are straight. D: regime 2, some cells are within one EPSP of threshold. A metastable state can exist. This state persists even in the high connectivity limit, where the nullclines become straight. E: regime 3, endogenously active cells are present. A single, globally attracting equilibrium can exist. F: same as E except in the very high connectivity regime.

Is it possible to achieve a locally stable equilibrium at low firing rate in regime 1? Reexamining Fig. 4A, we see that such an equilibrium could be achieved by bending the inhibitory nullcline up so that it intersects the unstable branch of the excitatory nullcline from below (Fig. 4B; equilibrium marked M). Although such an intersection is possible, it occurs in an extremely restricted parameter regime: small changes in either network or single neuron parameters would shift the equilibrium to high firing rate or eliminate it altogether. This problem is especially severe in high connectivity networks where the nullclines are straight, as shown in Fig. 4C (van Vreeswijk and Sompolinsky 1996, 1998). Moreover, even if the equilibrium does exist and is locally stable, it is only metastable: relatively small downward fluctuations in firing rate can drive the network to the nearby equilibrium at zero firing rate, where it would remain forever.

The reason it is difficult to achieve robust, low firing rates in regime 1 is that there is a gap between the origin of the (ν̄E, ν̄I) plane and the unstable branch of the excitatory nullcline, as shown in Fig. 4, A-C. The gap arises because neurons cannot fire below 0 Hz. This lower bound effectively cuts off the bottom portion of the excitatory nullclines, as seen in the nullcline construction, Fig. 3, A and C. Thus it is the lower bound on firing rate that ultimately constrains the nullclines to have the shapes depicted in Fig. 4, A-C. Given those shapes, simple geometrical arguments determine the location and stability of the firing rate equilibria.

In regime 2, in which some neurons have their resting membrane potential within one EPSP of threshold, a single action potential can cause a chain reaction that ignites the network. Consequently, an arbitrarily small excitatory firing rate is sufficient to produce network activity. This eliminates the gap in Fig. 4, A-C, resulting in the nullclines illustrated in Fig. 4D. Figure 4D shows a low firing rate equilibrium (marked M, again for metastable) that does not require the nullclines to have strong curvature, so it can exist even in high connectivity networks. However, the parameter range in which the equilibrium is stable is still relatively narrow: small changes in single neuron properties can cause resting membrane potentials to shift downward so that none of the cells are within one EPSP of threshold, or upward so that some cells cross threshold and become endogenously active. In addition, because fluctuations can cause a drop to zero firing rate, where the network will remain forever, the equilibrium is metastable. Again, this effect can be traced to the fact that neurons cannot fire below 0 Hz.

The nullclines for regime 3, in which some cells are endogenously active, are presented in Fig. 4E. The addition of endogenously active cells eliminates the equilibrium along the ν̄I axis, transforming the excitatory nullcline from two distinct curves into one continuous one. This transformation allows a robust, globally stable, low firing rate equilibrium at the intersection between the excitatory and inhibitory nullclines (marked S in Fig. 4E). The corresponding set of nullclines in the high connectivity regime is shown in Fig. 4F.

The existence of a robust, stable low firing rate equilibrium when endogenously active cells are present, and the absence of such an equilibrium when they are not, leads to the following prediction: an isolated network that fires steadily at low rate for an infinitely long time must contain endogenously active cells. This does not mean that such cells are absolutely essential for finite-lifetime, low firing rate states; long-lived metastable states can exist in networks where all cells have their resting membrane potential below threshold. However, the parameter regime for this is narrow, and, as we will see below, it vanishes altogether when spike-frequency adaptation is taken into account.

These conclusions apply whether endogenous activity arises through an intrinsic membrane property (e.g., a persistent inward current), noise-driven voltage fluctuations, or any other mechanism. They also apply to networks that receive external input. For example, consider a network that has no endogenously active cells, but instead receives sufficient external input that some cells fire without input from neurons within the network. The existence of these pseudoendogenously active cells places the network in regime 3, which allows a robust low firing rate equilibrium to exist. If, on the other hand, the same network receives external input that is too weak to cause any neurons to fire, then a robust low firing rate equilibrium is not possible.

EFFECT OF ADAPTATION ON FIRING PATTERNS. So far we have assumed that the gain curves, and thus the nullclines, are static. In fact, this is not the case for real neurons. Because of spike-frequency and/or synaptic adaptation, nullclines are dynamic objects that depend on the history of activity, not just the activity at a single point in time. To incorporate this history into our analysis, we assume that neurons adapt slowly compared with the time it takes a network to reach equilibrium. Consequently, at any instant in time the equilibrium firing rates occur at the intersection of the excitatory and inhibitory nullclines, just as they did above. However, the equilibria are not fixed: adaptation causes slow changes in the nullclines, and thus in the equilibrium firing rates.

This process, slow changes in the level of adaptation accompanied by rapid relaxation to a local firing rate equilibrium, can have two distinct outcomes. One is that the firing patterns change very little: the firing rates may simply shift slightly, or, perhaps, oscillate slowly as adaptation affects the equilibrium and vice versa. The other is that adaptation-induced shifts in firing rate may be large enough that firing rate equilibria are created and destroyed. This much more dramatic effect occurs as follows. Consider a network that starts in a state characterized by the nullclines given in Fig. 5A. The black dot at the intersection of the excitatory and inhibitory nullclines corresponds to a firing rate equilibrium at a fixed level of adaptation. As the level of adaptation increases, the nullclines shift. Small shifts lead to a bistable state (Fig. 5B) but produce only a small change in firing rate. With larger shifts, however, the nullclines pull apart (Fig. 5C), and bistability gives way to a single stable state at zero firing rate. When this happens the network firing rate crashes to zero, at which point the level of adaptation starts to decrease. When the adaptation has dropped enough that the equilibrium at zero firing rate disappears (Fig. 5A via Fig. 5B), firing resumes and a new cycle starts.



Fig. 5. Excitatory and inhibitory nullclines at fixed levels of adaptation in a regime where adaptation causes bursting. For clarity, only the parts of the nullclines at low firing rate are shown. Because the level of adaptation changes slowly, we refer to the intersections of the excitatory and inhibitory nullclines as equilibria. These "equilibria," which in fact change with time, are marked by black dots. A: the level of adaptation is minimum, and there is only 1 firing rate equilibrium; the equilibrium near zero has just disappeared via a saddle-node bifurcation. B: the level of adaptation has increased, leading to a bistable state. Barring large fluctuations, the network will stay near the higher firing rate state. C: the level of adaptation is large enough that the equilibrium at higher firing rate is eliminated, again via a saddle-node bifurcation. The resulting crash to zero causes the level of adaptation to begin to decrease. With time, this results in a shift to the low firing rate equilibrium in B. With a further decrease in adaptation, the low firing rate equilibrium disappears (A), and the firing rate jumps to a higher value. At this point the cycle repeats.

The second behavior, which results in bursting, is very different from the first, which is characterized by a shift in firing rate and/or slow oscillations. Because it is the endogenously active cells that prohibit an equilibrium at zero firing rate, bursting requires the effective elimination of these cells. Because such elimination cannot happen if the adaptation is purely synaptic, synaptic adaptation alone cannot cause the periodic crashes to zero firing rate that are characteristic of bursting: even with total elimination of synaptic coupling, firing rates remain finite because of the existence of endogenously active cells (but see O'Donovan et al. 1998 for an alternate view in the context of a reduced firing rate model). Periodic transitions between low firing rates (involving only the endogenously active cells) and high rates (involving most of the cells in the network) may be driven by synaptic adaptation, but this kind of bursting is not, to our knowledge, typically observed.

In contrast to synaptic adaptation, spike-frequency adaptation (adaptation that results in a decrease in a neuron's firing rate during repetitive firing; see METHODS) can introduce a hyperpolarizing current sufficient to temporarily eliminate endogenously active cells, ultimately leading to bursting. Thus, in the remainder of this paper, the only form of adaptation we consider is spike-frequency adaptation.

The probability of spike-frequency adaptation eliminating endogenously active cells is highest if there are few such cells to begin with. Consequently, we expect a transition from steady firing to bursting as the fraction of endogenously active cells decreases. In addition, spike-frequency adaptation should eliminate long-lived metastable states. This is because those states rely on the existence of neurons close to threshold, but neurons that exhibit spike-frequency adaptation experience a drop in their resting membrane potential during repetitive firing, and thus are pushed well below threshold. These two observations lead to the following prediction: at a fixed, but nonzero, level of spike-frequency adaptation, a network fires steadily when there are many endogenously active cells, makes a sudden transition to bursting when the number of endogenously active cells falls below a threshold, and makes a second sudden transition to silence when all endogenously active cells disappear.

To test this prediction in a simplified setting, we investigated a firing rate model based on the Wilson and Cowan equations but augmented by spike-frequency adaptation, Eq. 15 of METHODS. In this model we regulate neuronal excitability with a single variable, theta max, which can be thought of as the amount of constant depolarizing current injected into each neuron. As long as theta max > 0, the fraction of endogenously active cells increases monotonically with theta max; when theta max = 0, the fraction of endogenously active cells vanishes; and as theta max becomes negative, neurons are pushed below the threshold for the emission of an action potential. The behavior of the model is shown in Fig. 6. Figure 6A confirms that theta max, and thus the fraction of endogenously active cells, controls firing patterns: the network fires steadily when theta max is large (many endogenously active cells are present), makes a transition to bursting when theta max falls below a threshold, and becomes silent when theta max drops below zero and endogenously active cells vanish from the network. Figure 6B summarizes the behavior of the model in the bursting regime: the mean excitatory firing rate, ν̄E, periodically makes sudden jumps, whereas the mean level of spike-frequency adaptation for the excitatory population, denoted GE, increases slowly when the network is active and decreases slowly when the network is silent. Bifurcation diagrams showing the equilibrium firing rate as a function of the level of spike-frequency adaptation (the heavy black curve) and the GE nullclines (the heavy gray line) are given in Fig. 6, C-E, for three regimes: silence (C), bursting (D), and steady firing (E). The thin, gray dashed path in the bifurcation diagrams indicates the trajectory in firing rate/spike-frequency adaptation space. It is the sudden jumps in this trajectory, at the "knees" of the heavy black curve, that produce bursting.
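
A minimal numerical sketch of this kind of model is given below. It integrates Wilson and Cowan-like excitatory and inhibitory rate equations plus a slow adaptation variable and sweeps theta max; the gain function, the way theta max and adaptation enter the equations, and the parameter values are assumptions made for illustration and are not the equations or parameter set behind Fig. 6.

# A minimal fast-slow sketch in the spirit of Eq. 15 of METHODS: Wilson-Cowan-like
# rate equations for nu_E and nu_I plus a slow adaptation variable G_E.
# The gain function and all numerical choices here are placeholders.
import numpy as np

def gain(x, A=20.0, nu_max=100.0):
    """Threshold-linear gain with a hard saturation (Hz)."""
    return np.clip(A * x, 0.0, nu_max)

def run(theta_max, T=30.0, dt=0.0005, tau_E=0.010, tau_I=0.010, tau_SFA=2.0,
        C_EE=0.9, C_EI=1.0, C_IE=1.0, C_II=1.4, G=0.2):
    """Forward-Euler integration of the excitatory/inhibitory rates and the mean
    excitatory adaptation variable G_E; returns the nu_E trace."""
    n = int(T / dt)
    nu_E, nu_I, G_E = 1.0, 1.0, 0.0
    trace = np.empty(n)
    for i in range(n):
        drive_E = C_EE * nu_E - C_EI * nu_I - G * G_E + theta_max
        drive_I = C_IE * nu_E - C_II * nu_I + theta_max
        nu_E += (dt / tau_E) * (-nu_E + gain(drive_E))
        nu_I += (dt / tau_I) * (-nu_I + gain(drive_I))
        G_E  += (dt / tau_SFA) * (-G_E + nu_E)   # builds up while active, decays when silent
        trace[i] = nu_E
    return trace

# Sweep the excitability parameter.  Over the second half of each run, a large spread
# between the minimum and maximum of nu_E indicates bursting or oscillation, a small
# spread indicates steady firing, and rates pinned near zero indicate silence.
for theta_max in (2.0, 0.5, -1.0):
    late = run(theta_max)[-30000:]
    print(f"theta_max={theta_max:5.1f}: min={late.min():7.2f}  mean={late.mean():7.2f}  max={late.max():7.2f}")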



Fig. 6. Particular realization of the Wilson and Cowan equations, Eq. 15 of METHODS, showing transitions from steady firing to bursting to silence. A: equilibrium firing rate of the excitatory population as a function of theta max, which regulates network excitability: theta max > 0 corresponds to the existence of endogenously active cells, theta max < 0 corresponds to their absence (see text and METHODS). When theta max > 0.51 there is a steady, low firing rate equilibrium, indicated by the single solid line to the right of the gray region. As theta max decreases, there is a transition to bursting. In the bursting regime (0 < theta max < 0.51) the firing rate periodically jumps between 0 Hz and the solid black line (the mean firing rate averaged over a burst) above the gray region. When theta max < 0, corresponding to the absence of endogenously active cells, the network crashes to zero firing rate and stays there. B: mean excitatory firing rate (ν̄E, black line) and mean level of excitatory spike-frequency adaptation (GE, gray line) vs. time in the bursting regime, theta max = 0.25. Note that the level of spike-frequency adaptation, GE, increases when the network is active and decreases when it is silent. C-E: approximate 2-dimensional bifurcation diagrams (see METHODS). Black solid curves in each plot denote the stable branches of the excitatory firing rate equilibrium; the black dashed curve that connects them denotes the unstable one. Thick gray line is the spike-frequency adaptation nullcline associated with Eq. 15c. Thin dashed gray line is the trajectory. C: theta max = 0, so there is a steady-state equilibrium at zero firing rate. A trajectory is shown that starts near ν̄E = 6, GE = 0, traverses the bifurcation diagram, and ends at the point marked "S" (for stable) at ν̄E = GE = 0. D: theta max = 0.25, placing the network in the bursting regime. The trajectory in this regime, which cycles indefinitely, reveals a relaxation oscillator; we refer to the resulting dynamics as bursting because individual neurons fire repetitively during the active phase. E: theta max = 0.52, placing the network just inside the steady firing rate regime. A trajectory is shown that starts near ν̄E = 0, GE > 0, traverses the bifurcation diagram, and ends at the point marked "S." The transition between bursting and steady firing occurs at the point predicted by the bifurcation diagram: the spike-frequency adaptation nullcline (the heavy gray line) passes onto the stable branch of the excitatory firing rate equilibrium at theta max approx 0.52. Parameters (see Eq. 15): tau E = tau I = 10 ms, tau SFA = 2,000 ms, A = 20, CEE = 0.9, CEI = 1.0, CIE = 1.0, CII = 1.4, G = 0.2, and nu max = infinity (chosen large because the firing rate is far from saturation). Plots were made using XPP, developed by G. B. Ermentrout.

STABILITY OF LOW FIRING RATE EQUILIBRIA. Although endogenously active cells are necessary for a low firing rate equilibrium to exist and be stable, they do not guarantee either of these properties. As parameters change, a low firing rate equilibrium can become unstable via a Hopf bifurcation (Marsden and McCracken 1976), or it can be driven to high firing rate. In this section we investigate stability, then discuss conditions under which a network is both stable and fires at low rate.

The stability of a firing rate equilibrium can be determined by linearizing around a fixed point of Eq. 16 and solving the resulting eigenvalue equation (Rinzel and Ermentrout 1998). Consider first the case without adaptation. In that case, the two eigenvalues of the linearized firing rate equation, denoted lambda ±, are
\lambda_{\pm} = \frac{T \pm \sqrt{T^{2} - 4D}}{2}

where T and D are the trace and determinant, respectively, of the matrix

D\Phi \equiv \begin{pmatrix} \tau_{E}^{-1}(\Phi_{E,E} - 1) & \tau_{E}^{-1}\Phi_{E,I} \\ \tau_{I}^{-1}\Phi_{I,E} & \tau_{I}^{-1}(\Phi_{I,I} - 1) \end{pmatrix}

Phi L,M ≡ ∂Phi L/∂ν̄M, L, M = E, I, and the partial derivatives are evaluated at the equilibrium. This last quantity, Phi L,M, is proportional to coupling from neurons of type M to neurons of type L; e.g., Phi I,E is proportional to excitatory to inhibitory coupling.

For an equilibrium to be stable both lambda + and lambda - must be negative. This requires that the determinant of the above matrix, D, be positive and the trace, T, negative. After minor manipulations of the determinant, we find that to satisfy the first condition, D > 0, we must have
(\Phi_{E,E} - 1)(1 - \Phi_{I,I}) < (\Phi_{I,E})(-\Phi_{E,I})  (18)
(The above form was adopted because both Phi I,I and Phi E,I are negative.) It is not hard to show that Eq. 18 is satisfied as long as the slope of the excitatory nullcline is less than the slope of the inhibitory one. As can be seen in Fig. 4E, an equilibrium with this property is guaranteed to exist in networks with endogenously active cells.
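
The equivalence invoked here follows from implicit differentiation of the nullcline equations, using the fact (noted above) that Phi E,I and Phi I,I are negative:

\left.\frac{d\bar{\nu}_I}{d\bar{\nu}_E}\right|_{E\text{-nullcline}} = \frac{1 - \Phi_{E,E}}{\Phi_{E,I}}, \qquad \left.\frac{d\bar{\nu}_I}{d\bar{\nu}_E}\right|_{I\text{-nullcline}} = \frac{\Phi_{I,E}}{1 - \Phi_{I,I}}

Dividing Eq. 18 by the positive quantity (1 - \Phi_{I,I})(-\Phi_{E,I}) gives

\frac{\Phi_{E,E} - 1}{-\Phi_{E,I}} < \frac{\Phi_{I,E}}{1 - \Phi_{I,I}}

and the left- and right-hand sides are precisely the slopes of the excitatory and inhibitory nullclines, respectively.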

The second condition for stability, T < 0, may be written
\frac{1 - \Phi_{I,I}}{\tau_{I}} > \frac{\Phi_{E,E} - 1}{\tau_{E}}  (19)
This condition implies that excitatory-excitatory (E-E) coupling is destabilizing. It also tells us that inhibitory-inhibitory (I-I) coupling is stabilizing. The latter result is somewhat counterintuitive: why should I-I coupling, which reduces the level of inhibition, have a stabilizing effect? The reason is that I-I coupling is the only coupling that provides a restoring force near an equilibrium: E-E coupling is locally repelling, whereas E-I and I-E coupling produce only rotation around the equilibrium. Consequently, I-I coupling must be strong enough to drive the network back to equilibrium. When it becomes too weak, the trace, T, becomes positive and the previously stable fixed point turns into a stable limit cycle via a Hopf bifurcation. Such a limit cycle is shown in Fig. 11 of Wilson and Cowan (1972).
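
The two conditions are easy to check numerically. The sketch below builds the matrix DPhi from a set of partial derivatives Phi L,M (the numerical values are arbitrary placeholders, not measured quantities) and reports its eigenvalues along with the conditions of Eqs. 18 and 19.

# Numerical check of the stability conditions: D > 0 is equivalent to Eq. 18 and
# T < 0 to Eq. 19, since dividing by the positive factor 1/(tau_E*tau_I) does not
# change their signs.
import numpy as np

def stability(Phi_EE, Phi_EI, Phi_IE, Phi_II, tau_E=0.010, tau_I=0.010):
    """Return eigenvalues of the linearized rate dynamics and the two conditions."""
    D_Phi = np.array([[(Phi_EE - 1.0) / tau_E,  Phi_EI / tau_E],
                      [ Phi_IE / tau_I,         (Phi_II - 1.0) / tau_I]])
    T, D = np.trace(D_Phi), np.linalg.det(D_Phi)
    return np.linalg.eigvals(D_Phi), D > 0, T < 0

# Example with Phi_EI and Phi_II negative (inhibitory input lowers firing rates);
# the derivative values are illustrative placeholders.
eigs, eq18, eq19 = stability(Phi_EE=1.5, Phi_EI=-2.0, Phi_IE=2.0, Phi_II=-1.0)
print("eigenvalues:", eigs)
print("Eq. 18 (determinant > 0):", eq18, "  Eq. 19 (trace < 0):", eq19)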

How does spike-frequency adaptation modify this picture? Although it would be straightforward to formally incorporate such adaptation into the above stability analysis, the resulting eigenvalue equation is not especially informative. However, the qualitative effects of spike-frequency adaptation are relatively straightforward to understand. This kind of adaptation results in negative feedback: increasing firing rates raises the level of spike-frequency adaptation, which leads to a reduction in firing rates; decreasing firing rates lowers the level of spike-frequency adaptation, which leads to an increase in firing rates. Because the negative feedback enters with a delay, spike-frequency adaptation makes a network more likely to oscillate. Its primary effect, then, is to change the threshold for oscillations; qualitatively, the strict inequality that guarantees stability, Eq. 19, should be replaced by an inequality of the form (1 - Phi I,I)/tau I - (Phi E,E - 1)/tau E > epsilon , where epsilon  depends primarily on the level of spike-frequency adaptation.

Coupling affects overall firing rate as well as stability. We can determine the relation between coupling and firing rate by first categorizing its effect on the nullclines and then examining how shifts in the nullclines affect firing rate. This is a straightforward application of previous analysis [e.g., increasing I-I coupling lowers the inhibitory nullcline in the (ν̄E, ν̄I) plane, which in turn increases the excitatory firing rate]. The results, however, depend on the location of the equilibrium; i.e., whether it occurs on the unstable (positively sloped) branch of the excitatory nullcline, as in Fig. 4E, or on the stable (negatively sloped) branch, as would occur if the inhibitory nullcline in Fig. 4E were raised slightly. In APPENDIX B we show that the excitatory nullcline has its minimum at a very small value of the excitatory firing rate. The precise value depends on single neuron and network properties, but for high connectivity networks typical of the mammalian cortex, we estimate in APPENDIX B that the minimum occurs below ~0.1 Hz. This implies that if excitatory firing rates above ~0.1 Hz are observed in a network firing at low rates, then the equilibrium must be on the unstable branch of the excitatory nullcline. In fact, networks exhibiting purely inhibitory activity are unusual: whole cell recordings in vivo (Calabresi et al. 1990; Metherate and Ashe 1993; Volgushev et al. 1992) and in our cultures (Latham et al. 2000) show a preponderance of EPSPs. In addition, there is recent experimental evidence indicating that the equilibrium in cortex occurs on the unstable branch (Tsodyks et al. 1997). Thus here and in the remainder of this paper we assume that all low firing rate equilibria occur on the unstable branch of the excitatory nullcline, as in Fig. 4E.

With the location of the equilibrium set, it is straightforward to show the effect of coupling on firing rate. That effect can be summarized as follows: increasing same-type coupling (E-E or I-I) raises firing rates, increasing opposite-type coupling (E-I or I-E) lowers firing rates. Combining these effects with the above results on stability, we see that the existence of a low firing rate equilibrium that does not oscillate requires low E-E coupling, high opposite-type coupling, and I-I coupling in an intermediate range: not so high that it drives up firing rates, but not so low that it causes oscillations.

SUMMARY OF THEORETICAL ANALYSIS. The tool we used to analyze network behavior was a simplified firing rate model, in which the dynamics of networks containing more than 10^4 neurons was reduced to a small number of equations describing average quantities. Rather than using a particular firing rate model, we used geometrical arguments to derive generic network behavior. Our primary assumptions were that, on average, excitatory input to a neuron increases its firing rate, inhibitory input decreases it, and neurons exhibit spike-frequency adaptation. With these assumptions, we were able to show that 1) endogenous activity is necessary for low firing rates, 2) decreasing the fraction of endogenously active cells causes a network to burst, and 3) firing patterns, especially transitions between steady firing and oscillations, are dependent on network connectivity. A specific realization of a simplified firing rate model corroborated these predictions.

Simulations

To test the predictions made in the previous section, we performed simulations with large networks of spiking model neurons, as described in METHODS. We examined 1) the effect of the distribution of applied depolarizing currents, Ia, on firing patterns and 2) how relative connection strengths among excitatory and inhibitory populations affect network behavior.

We used two networks, denoted A and B. These networks differed primarily in their connectivity and the sizes of the PSPs of their constituent neurons. Network A was designed to simulate cortical networks. It contained 8,000 excitatory and 2,000 inhibitory neurons, and each neuron connected, on average, to 1,000 others. Connectivity was infinite range (meaning the probability of 2 neurons connecting did not depend on the distance between them; see METHODS), and PSPs were on the order of 1 mV. Network B was chosen to be consistent with our experiments (Latham et al. 2000) and to test whether the predictions made for infinite range connectivity apply also to local connectivity, for which the connection probability depends on distance between neurons. It had fewer connections per neuron than Network A (200 instead of 1,000), local rather than infinite range connectivity, larger PSPs (on the order of 5 mV rather than 1 mV), and a higher fraction of inhibitory neurons (30% instead of 20%). A complete list of the parameters for each network is given in Tables 1 and 2 of METHODS.
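
To make the distinction between the two connectivity schemes concrete, the following sketch generates toy versions of them: a distance-independent (infinite range) random graph and a distance-dependent (local) one with a Gaussian falloff. The neuron positions, the Gaussian form of the falloff, its normalization, and the network sizes are assumptions chosen for illustration; the actual construction of Networks A and B is specified in METHODS.

# Toy versions of the two connectivity schemes.  Entry [i, j] = True means
# neuron i connects to neuron j; self-connections are not excluded here.
import numpy as np

rng = np.random.default_rng(0)
N, K = 2000, 200                                  # toy sizes, not Network A or B
positions = rng.uniform(0.0, 1.0, size=(N, 2))    # lay cells out on a unit square

def infinite_range(N, K, rng):
    """Each possible connection is made independently with probability K/N."""
    return rng.random((N, N)) < K / N

def local(positions, K, sigma, rng):
    """Connection probability decays as a Gaussian of the pairwise distance,
    rescaled so the expected number of connections per neuron is roughly K."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    p = np.exp(-d**2 / (2.0 * sigma**2))
    p *= K / p.sum(axis=1, keepdims=True)          # normalize each row's expected out-degree
    return rng.random(p.shape) < np.clip(p, 0.0, 1.0)

A_like = infinite_range(N, K, rng)
B_like = local(positions, K, sigma=0.1, rng=rng)
print("mean connections per neuron:", A_like.sum(axis=1).mean(), B_like.sum(axis=1).mean())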

Networks A and B behaved similarly, so in the bulk of this section we concentrate on Network A. At the end of the section we summarize the results of Network B.

ENDOGENOUS ACTIVITY. For networks without spike-frequency adaptation, the theoretical analysis presented above led to the following predictions. If endogenously active cells are present, then a stable, low firing rate equilibrium is possible. If endogenously active cells are absent, a low firing rate equilibrium is still possible. However, it is metastable; the network will eventually decay to a zero firing rate state. In this case there is a further distinction between networks in which some cells have their resting membrane potential within one EPSP of threshold and networks in which none do. This distinction has primarily qualitative bearing on firing patterns. In networks containing cells that are one EPSP from threshold, a single action potential can ignite activity in a silent network, so finite firing rate states tend to be long lived. In networks with no cells within one EPSP of threshold, finite firing rate states tend to have shorter lifetimes.

Networks with spike-frequency adaptation differ from those without it in two ways. First, as discussed in detail in the previous section, networks with spike-frequency adaptation exhibit a transition from steady firing to bursting as the fraction of endogenously active cells decreases. Second, in networks containing cells that are one EPSP from threshold but no endogenously active ones, the lifetime of the metastable state is extremely short---on the order of the spike-frequency adaptation timescale, which is the inverse of the mean network firing rate.

To test this set of predictions, and especially to examine how networks with and without spike-frequency adaptation differed, we performed simulations in which we varied the distribution of applied current, Ia, both with and without spike-frequency adaptation. We used a boxcar distribution in which Ia was restricted to values between 0 and Îmax. This distribution, which is completely determined by Îmax, may be divided into three regimes. These correspond to the three regimes just discussed and to the ones listed in Theory.
1) Îmax < ÎE1, where ÎE1 is the normalized applied current above which the resting membrane potential is within one EPSP of threshold. None of the cells have their resting membrane potential within one EPSP of threshold for the generation of an action potential, which implies that none are endogenously active.
2) ÎE1 < Îmax < Îthresh, where Îthresh is the threshold for endogenous activity. Some cells have their resting potential within one EPSP of threshold, but none are endogenously active.
3) Îthresh < Îmax. Endogenously active cells are present.

For values of Îmax ranging from below ÎE1 to above Îthresh we ran simulations for 100 s and determined the excitatory and inhibitory firing rates as follows. For networks that fired for the full 100 s of the simulations, firing rates were simply averaged over neurons and time. For networks that burst, the averages were again over neurons, but only over the active phase of the burst. For networks that crashed to zero firing rate, we extrapolated to infinite time and assigned a value of zero to the firing rate.
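
The bookkeeping implied by this procedure is summarized in the sketch below: applied currents are drawn from the boxcar distribution, the regime is read off by comparing Îmax with ÎE1 and Îthresh, and the fraction of endogenously active cells is the fraction of cells with Îa > Îthresh. The values of ÎE1 and Îthresh are those quoted for Network A in the Fig. 7 caption; everything else follows from the uniform distribution.

# Sketch of the boxcar distribution of applied currents and the regime bookkeeping.
import numpy as np

def sample_boxcar(I_max, n_neurons, rng):
    """Normalized applied currents drawn uniformly between 0 and I_max."""
    return rng.uniform(0.0, I_max, size=n_neurons)

def classify_regime(I_max, I_E1, I_thresh):
    if I_max < I_E1:
        return "regime 1: no cells within one EPSP of threshold"
    if I_max < I_thresh:
        return "regime 2: some cells within one EPSP of threshold, none endogenously active"
    return "regime 3: endogenously active cells present"

rng = np.random.default_rng(1)
I_E1, I_thresh = 3.715, 3.75              # Network A values quoted in the Fig. 7 caption
for I_max in (3.6, 3.73, 3.82):
    Ia = sample_boxcar(I_max, n_neurons=10_000, rng=rng)
    frac_endogenous = np.mean(Ia > I_thresh)   # = max(0, 1 - I_thresh/I_max) in expectation
    print(f"I_max={I_max:4.2f}: {classify_regime(I_max, I_E1, I_thresh)}; "
          f"fraction endogenously active ~ {frac_endogenous:.3f}")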

Figure 7A shows a plot of firing rate versus Îmax for networks whose neurons do not exhibit spike-frequency adaptation. As predicted, the networks fired steadily for the full 100 s of the simulations when Îmax exceeded Îthresh (which placed the networks in regime 3, endogenously active cells present). In addition, long-lived (>100 s) metastable states were observed both in regime 2, where some cells are within one EPSP of threshold, and regime 1, where no cells are within one EPSP of threshold. Also as predicted, the parameter regime that supported these long-lived metastable states was small; it corresponded to a range in maximum applied current of 0.07/Rcell, or 2.3 pA for a 30 MΩ cell. This is only 2% (0.07/Îthresh) of the rheobase current for a model cell whose resting membrane potential is 15 mV below threshold.
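
For reference, both numbers follow directly from the conversion given in the Fig. 7 caption (normalized currents are in mV and convert to nA on division by the membrane resistance in MΩ):

\frac{0.07\ \mathrm{mV}}{30\ \mathrm{M\Omega}} \approx 2.3\ \mathrm{pA}, \qquad \frac{0.07}{\hat{I}_{\mathrm{thresh}}} = \frac{0.07}{3.75} \approx 2\%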



Fig. 7. Mean excitatory firing rate (ν̄E) vs. Îmax for Network A. (The inhibitory rate, which is not shown, is ~2.5 times higher for all data points.) The gray region in B indicates bursting networks; for such networks, the top and bottom squares show the firing rates in the active and silent phases of the burst, respectively. Squares at zero that are not in the gray region indicate networks that crashed to zero and remained there. Each firing pattern has associated with it a set of nullclines with a particular underlying structure (see Fig. 4). Firing patterns are matched to generic examples of their associated nullclines with arrows. Bursting networks alternate between a set of nullclines with a stable, low firing rate equilibrium and a set of nullclines in which the only equilibrium is at zero firing rate. As discussed in the main text (see especially Fig. 5B), between these 2 extremes is a bistable state supporting both steady firing and silence. A: no spike-frequency adaptation. The networks either fire steadily for 100 s, the length of the simulations, or crash to zero firing rate. Some equilibria with Îmax < Îthresh (no endogenously active cells) are metastable but relatively long-lived. B: the cells exhibit spike-frequency adaptation. This abolishes the long-lived metastable states and allows bursting. For both A and B the parameters were taken from Network A of Table 2 with BE = 1.4 and BI = 1.1; the difference between the 2 networks lies in delta ĝK-Ca, the amount by which the normalized potassium conductance increases on each spike. In A, delta ĝK-Ca = 0; in B, delta ĝK-Ca = 0.01. For these network parameters, ÎE1 approx 3.715 (determined numerically) and Îthresh = 3.75 (determined analytically; see the section DISTRIBUTION OF APPLIED CURRENTS in METHODS). To convert Îmax, ÎE1, and Îthresh from their normalized units, mV, to nA, divide by the cell membrane resistance in MΩ.

With spike-frequency adaptation present (Fig. 7B), network behavior differed in two respects. First, the distinction between regimes 1 and 2 became irrelevant; in both regimes the lifetimes of the metastable states dropped considerably, to <5 s (data not shown). This was much shorter than the metastable lifetimes observed in Fig. 7A, which were >100 s. Second, for Îmax above Îthresh the network burst. Bursting occurred because there is an intermediate, bistable state (e.g., the nullclines illustrated in Fig. 5B) that supports both steady firing and silence. Transitions between steady firing and silence occur when the bistability gives way to a single stable state, as indicated by the nullclines in Fig. 5, A and C. The theoretical prediction is that these transitions should be network-wide; this is confirmed in Fig. 8, which shows spike rasters for 200 neurons along with a single-neuron trace of membrane voltage.



Fig. 8. Firing patterns in the bursting regime. Top: raster plots of 200 randomly chosen neurons in the burst regime. The 1st 40 (1-40) are inhibitory and the last 160 are excitatory. Bottom: voltage trace for neuron 40. During the burst, voltage fluctuations are PSP driven; between bursts, the whole network is silent so the voltage decays to the resting membrane potential, -50 mV. Because there is no noise in these simulations, between bursts the voltage is constant. Parameters are taken from Fig. 7 with Îmax = 3.82.

Comparing Fig. 7B with 6A, we see that the network simulations and the reduced firing rate model, Eq. 15, produced similar results. There was, however, a difference in behavior at the transition from bursting to steady firing: in the network simulations the firing rate varied smoothly at this transition, whereas in the reduced model the firing rate exhibited a downward jump. Thus, although the particular realization of the simplified firing rate model captured the main qualitative result (that there is a transition from steady firing to bursting to silence as the fraction of endogenously active cells decreases), it did not capture all the quantitative details. This is not surprising, because we made no attempt to tune the parameters of the reduced model to match the network simulations.

The boxcar distribution of applied current used in the above network simulations is convenient because it produces a sharp transition between networks with and without a sufficient number of endogenously active cells to ignite network activity. However, it raises the possibility that the agreement we saw between the predictions of the Wilson and Cowan model and the network simulations was an artifact of a current distribution with a sharp cutoff. To test this we performed two additional sets of simulations. In one we used a smoother distribution of applied currents than the boxcar used to produce the results summarized in Fig. 7: the distribution was flat for 0 <= Îa <= 3.5 and had a Gaussian tail for Îa > 3.5. In the other, we used a boxcar distribution but introduced noise-driven fluctuations in the membrane voltage capable of generating endogenous activity. Both sets of simulations produced essentially identical results, and those results were largely in agreement with the sharp cutoff/no noise simulations. When there was no spike-frequency adaptation, as the width of the Gaussian tail or the noise increased we saw a transition from networks in which essentially only the endogenously active cells fired to networks in which most of the neurons were active. When spike-frequency adaptation was present, we observed transitions from silence to bursting to steady firing with increasing tail width or noise. The only new phenomenon was a narrow regime in which the network displayed irregular bursts. These bursts were associated with the small number of endogenously active cells, either in the tail of the distribution or produced by a relatively low level of noise. Those cells caused random transitions between the equilibria labeled "S" and "M" in Fig. 4B.
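
One way to realize such a flat-plus-Gaussian-tail distribution is sketched below: the density is constant up to 3.5 and matched continuously to a Gaussian above it. The tail width and the continuity-based normalization are our own choices for illustration, since the exact form used in the simulations is not reproduced here.

# Sampler for a distribution that is flat on [0, flat_max] and has a Gaussian tail
# above flat_max, with the two pieces matched continuously at flat_max.
import numpy as np

def sample_flat_with_gaussian_tail(n, flat_max=3.5, sigma=0.2, rng=None):
    """Draw n applied currents from the flat-plus-Gaussian-tail density."""
    rng = rng or np.random.default_rng()
    tail_mass = sigma * np.sqrt(np.pi / 2.0)       # area of the half-Gaussian tail
    p_tail = tail_mass / (flat_max + tail_mass)    # fraction of samples in the tail
    from_tail = rng.random(n) < p_tail
    Ia = rng.uniform(0.0, flat_max, size=n)
    Ia[from_tail] = flat_max + np.abs(rng.normal(0.0, sigma, size=from_tail.sum()))
    return Ia

Ia = sample_flat_with_gaussian_tail(10_000, rng=np.random.default_rng(2))
print("fraction endogenously active (Ia > 3.75):", np.mean(Ia > 3.75))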

CONNECTIVITY. The theoretical predictions concerning the effect of coupling on stability and firing rate were 1) decreasing I-I and increasing E-E coupling both lead to oscillations in firing rate and 2) increasing same-type coupling (E-E or I-I) causes mean firing rates to go up, whereas increasing opposite-type coupling (E-I or I-E) causes mean firing rates to go down.

To test these predictions we performed simulations in which we varied the relative number of connections among the excitatory and inhibitory populations. This was done by adjusting the bias parameters, BE and BI (see METHODS), defined by BE ≡ PIE∞/PEE∞ and BI ≡ PII∞/PEI∞, where PLM∞ is the probability of making a connection from a neuron of type M to a neuron of type L. Increasing BE increases E-I (excitatory to inhibitory) coupling and decreases E-E coupling; increasing BI increases I-I coupling and decreases I-E coupling. Both occur without changing the mean number of connections per neuron.
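
One simple way to realize these definitions is sketched below: the connection probabilities are scaled so that PIE∞/PEE∞ = BE and PII∞/PEI∞ = BI while the expected number of connections made by each presynaptic neuron stays fixed at K. Whether this matches the exact normalization used in METHODS is an assumption; only the ratios themselves are taken from the text.

# Connection probabilities consistent with the bias-parameter definitions, under the
# assumption that each presynaptic neuron makes K connections on average.
def biased_probabilities(B_E, B_I, N_E=8000, N_I=2000, K=1000):
    """P_LM is the probability that a neuron of type M connects to a given neuron of type L."""
    P_EE = K / (N_E + B_E * N_I)       # excitatory axon: N_E*P_EE + N_I*P_IE = K
    P_IE = B_E * P_EE
    P_EI = K / (N_E + B_I * N_I)       # inhibitory axon: N_E*P_EI + N_I*P_II = K
    P_II = B_I * P_EI
    return {"EE": P_EE, "IE": P_IE, "EI": P_EI, "II": P_II}

print(biased_probabilities(B_E=1.4, B_I=1.1))   # bias values used for Fig. 7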

In terms of the bias parameters, the above predictions imply that 1) increasing BI and decreasing BE cause firing rates to go up and 2) decreasing both BE and BI leads to oscillations. In addition, because spike-frequency adaptation is most pronounced at high firing rates, we should see a transition to bursting when the firing rate is high enough.

Mean excitatory and inhibitory firing rates are plotted in Fig. 9 versus time for a range of bias parameters. These plots confirm the above predictions: increasing BI and decreasing BE both produced higher firing rates, networks with small BI oscillated, and high firing rates were accompanied by bursting. Bursting, however, was not guaranteed: we increased BI to 2.4 with BE fixed at 1.0 without observing bursting, even though the excitatory firing rate at these parameters approached 40 Hz.



Fig. 9. Mean firing rate vs. time for excitatory (---) and inhibitory (- - -) populations at different values of the bias parameters, BE and BI. A and D: because the I-I coupling is low (small BI), these networks oscillate. B, C, and E: with increasing I-I coupling (increasing BI), oscillations are eliminated and average firing rates rise. The small fluctuations in these plots are due to the finite size of the network. F: larger I-I coupling further raises average firing rates and produces bursting. The gray line on this plot is the normalized slow potassium conductance averaged over excitatory neurons (see Eq. 5c). This quantity corresponds to the mean level of spike-frequency adaptation, so it increases when the neurons are firing and decreases when they are silent. (To clearly show the bursting, the time scale in this panel is different from the other ones and the inhibitory firing rate, which was ~30% lower, is not shown.) The difference between oscillations (A and D) and bursting (F) is that in the former the rise and fall of the firing rate is slow compared with the period, whereas in the latter it is fast. Parameters for all panels were taken from Network A of Table 2 with Îmax = 5.0 and delta ĝK-Ca = 0.01. Firing rates were computed by convolving the spike trains with a Gaussian of width 10 ms.

For the bursting network, Fig. 9F, we plotted in gray the slow potassium conductance averaged over excitatory neurons. This quantity, which we denote ḡK-Ca,E, corresponds to the mean level of excitatory spike-frequency adaptation (ḡK-Ca,E ≡ (1/NE) Σi∈E ĝK-Ca,i, where the sum is over excitatory neurons; see Eq. 5c); ḡK-Ca,E is analogous to GE in the reduced firing rate model (Eq. 15c). Like GE in Fig. 6B, ḡK-Ca,E increases when the network is active and decreases when the network is silent.

The transition from oscillations to steady firing was gradual and never quite complete; even at high values of BI, where the fixed points should be attracting, there was an oscillatory component to the firing rates. However, the amplitude of the oscillations decreased when the number of neurons increased: in Fig. 9, B, C, E, and F, the variance in the firing rate (for Fig. 9F the variance only during bursts) dropped by a factor of ~2 when we doubled the number of neurons (data not shown). This indicates that the oscillations at large BI result from the finite size of the network, which allows fluctuations that are converted to oscillations by the dynamics. In contrast, doubling the number of neurons had virtually no effect on the variance in the firing rate when the inhibitory-inhibitory coupling was low enough to produce oscillations, as in Fig. 9, A and D.
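
The rate traces themselves were obtained by convolving spike trains with a 10-ms Gaussian (see the Fig. 9 caption). A minimal version of that computation, applied to synthetic spike times rather than the simulation output, is sketched below.

# Minimal sketch of the rate estimate used for Figs. 9 and 11: bin the population
# spike train and convolve with a Gaussian kernel of width 10 ms.
import numpy as np

def population_rate(spike_times_s, n_neurons, t_max_s, dt_s=0.001, sigma_s=0.010):
    """Return (time, rate in Hz per neuron) from a flat array of spike times."""
    edges = np.arange(0.0, t_max_s + dt_s, dt_s)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    # Gaussian kernel normalized to unit area so the smoothed counts keep their units.
    k = np.arange(-4 * sigma_s, 4 * sigma_s + dt_s, dt_s)
    kernel = np.exp(-k**2 / (2.0 * sigma_s**2))
    kernel /= kernel.sum()
    rate = np.convolve(counts, kernel, mode="same") / (n_neurons * dt_s)
    return edges[:-1], rate

# Synthetic example: 1,000 neurons firing as independent 5-Hz Poisson processes for 2 s.
rng = np.random.default_rng(3)
spikes = rng.uniform(0.0, 2.0, size=rng.poisson(1000 * 5 * 2))
t, rate = population_rate(spikes, n_neurons=1000, t_max_s=2.0)
print(f"mean rate = {rate.mean():.2f} Hz, std = {rate.std():.2f} Hz")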

Finally, to ensure that the oscillations in Fig. 9A were not caused by spike-frequency adaptation, we performed simulations with the same parameters except that spike-frequency adaptation was eliminated (delta ĝK-Ca was set to zero; see METHODS). We found a decrease in the oscillation period, from 180 to 120 ms, but no other change in the firing pattern.

LOCAL CONNECTIVITY. The above simulations were repeated with parameters corresponding to Network B, in which the connectivity is local rather than infinite range (meaning neurons that are close together are more likely to connect than those that are far apart), the number of connections per neuron is smaller than in Network A, and the PSPs are larger (see Tables 1 and 2). The results are summarized in Figs. 10 and 11.



Fig. 10. Mean excitatory firing rate (ν̄E) vs. Îmax for Network B. (The inhibitory rate, which is not shown, is a factor of ~2.5 higher for all data points.) See Fig. 7 caption for details. For both A and B the parameters were taken from Network B of Table 2 with BE = 1.4 and BI = 1.0; the difference between the 2 networks is that in A, delta ĝK-Ca = 0, whereas in B, delta ĝK-Ca = 0.08. For these network parameters, ÎE1 approx 3.230 (determined numerically) and Îthresh = 3.75 (determined analytically).



Fig. 11. Mean firing rate vs. time for excitatory (---) and inhibitory (- - -) populations at different values of the bias parameters, BE and BI. In contrast to Network A, Fig. 9, this network (Network B) did not oscillate at any of the parameters shown, although oscillations were observed at lower values of the I-I coupling (smaller BI). A, D, and E: steady firing. B, C, and F: bursting. As in Fig. 9F, the gray line is the mean value of the normalized slow potassium conductance averaged over excitatory neurons. Note that the time scales in B, C, and F differ from those in A, D, and E, and that the inhibitory firing rate is not shown. The parameters for all panels were taken from Network B of Table 2 (local connectivity) with Îmax = 5.0 and delta ĝK-Ca = 0.08. The firing rates were computed by convolving the spike trains with a Gaussian of width 10 ms.

Figure 10 shows network firing rate versus the distribution of applied currents, which, as in Fig. 7, is parameterized by Îmax. The results were substantially the same as for the network with infinite range connectivity: with no spike-frequency adaptation (Fig. 10A), the network fired for the full 100 s of the simulation for a range of Îmax extending somewhat below ÎE1. With spike-frequency adaptation present (Fig. 10B), the network burst for Îmax close to, but slightly above, Îthresh, and the lifetimes of the metastable state dropped considerably: for Îmax < Îthresh, the lifetimes were <14 s (data not shown).

Mean excitatory and inhibitory firing rates are plotted in Fig. 11 versus time for a range of bias parameters. For the networks that burst (Fig. 11, B, C, and F), we plotted in gray the mean level of spike-frequency adaptation averaged over excitatory neurons, <A><AC>g</AC><AC>&cjs1171;</AC></A>K-Ca,E. Again the results were substantially the same as for the network with infinite range connectivity. There were, however, differences in the details. For example, we did not see regular oscillations for any of the examples shown: for all plots the variance in the firing rate dropped by a factor of ~2 when we doubled the number of neurons, indicating that the observed fluctuations were caused by the finite size of the network. Oscillations did occur when we reduced I-I coupling, but not until BI was below ~0.6. In addition, the local connectivity network burst more easily than the infinite range one and the bursting was more irregular. Bursting was observed for BE = 1.0 and BI >=  1.0, whereas in Fig. 9 bursting was not observed at all for BE = 1.0.

These plots show that, in spite of the differences between Networks A and B, the two networks generated strikingly similar results, indicating that our model is robust to changes in connectivity (especially infinite range vs. local) as well as to the number of connections per neuron and PSP size.

SUMMARY OF SIMULATION RESULTS. We performed simulations with two very different networks: an infinite range, high connectivity network with small PSPs, and a local, low connectivity network with large PSPs. We also examined a broad range of parameters, including the distribution of applied currents, spike-frequency adaptation, and network connectivity. The results of all simulations were as predicted: for networks without spike-frequency adaptation, we observed transitions from silence to steady firing as the fraction of endogenously active cells increased. For networks with spike-frequency adaptation, there was an intermediate regime in which the networks burst. Finally, when the I-I coupling was strengthened, we observed transitions from oscillations to steady firing to bursting, along with an increase in firing rate.


    DISCUSSION

We have investigated, both theoretically and through simulations with spiking model neurons, the intrinsic dynamics in large networks of neurons. The goal of this work was twofold: 1) to understand how dynamic interactions between excitatory and inhibitory neurons lead to stable, low firing rates, such as those widely observed in the mammalian central nervous system, and 2) to determine the conditions under which networks switch from steady firing to bursting.

An understanding of the mechanism for generating stable, low firing rates in large, isolated neuronal networks has been elusive (Abeles 1991; Amit and Treves 1989; Buhmann 1989; Treves and Amit 1989). The elusiveness stems from the difficulty in controlling the powerful recurrent excitation that exists in such networks. To compensate for this recurrent excitation, inhibitory feedback is required. To stabilize low firing rates, that feedback must be strong enough that the inhibitory firing rate is more sensitive to input from excitatory cells than the excitatory firing rate. Or, in the language of dynamics, the inhibitory nullcline in average firing rate space must be steeper than the excitatory one at the equilibrium (e.g., Fig. 4E). We found that for isolated networks, the above condition occurs robustly at low firing rate only when endogenously active cells are present; without such cells networks are either silent or fire at high rate. This led to the following prediction: if low firing rates are observed in an isolated network, then that network must contain endogenously active cells. This prediction was confirmed by large-scale network simulations, which included the exploration of a broad range of parameters to ensure that the simulations were robust (APPENDIX A). It was also corroborated by experiments in cultured neuronal networks, as described in the accompanying paper (Latham et al. 2000).

Although endogenously active cells are necessary for the existence of a low firing rate equilibrium, they do not guarantee its stability. In particular, we found that a high level of spike-frequency adaptation leads to bursting: repetitive firing can introduce a hyperpolarizing current sufficient to temporarily eliminate endogenously active cells, resulting in a crash to zero firing rate; after the cells stop firing, the hyperpolarizing current decays and firing resumes. (Interestingly, our analysis implies that synaptic adaptation, because it does not affect the fraction of endogenously active cells, cannot produce bursting.) The probability of bursting is highest if there are few endogenously active cells to begin with, and for that reason the fraction of such cells plays a key role in determining firing patterns. Specifically, networks with no endogenously active cells are typically silent; at some finite fraction of endogenously active cells there is a transition to bursting; and at an even higher fraction there is a second transition to steady firing at low rate. This scenario was also confirmed by network simulations, and it was corroborated by experiments in neuronal cell culture (Latham et al. 2000).

Although the fraction of endogenously active cells plays the dominant role in determining the primary firing patterns (silence, bursting, or steady firing), another parameter, network connectivity, influences secondary features of the firing patterns. Specifically, when inhibitory-inhibitory coupling becomes too weak or excitatory-excitatory coupling becomes too strong, a low firing rate equilibrium can be destabilized via a Hopf bifurcation. This leads to oscillations in firing rate that can occur in both the "steady" firing and the bursting regimes. In the latter case, the oscillations occur during the active phase of the burst.

These theoretical findings were based on the analysis of isolated networks with random, infinite-range connectivity. However, theoretical considerations indicate that they apply to networks that receive external input, and simulations indicate that they are valid for networks with local connectivity, in which the connection probability is a decreasing function of distance between neurons. The validity of the model for local connectivity suggests that these findings will hold for the more structured architecture that exists in the cortex.

Implications

Two firing patterns that are ubiquitous in the mammalian CNS are steady firing at low rates and rhythmic bursting. We have shown that, in isolated networks, both firing patterns are controlled largely by a single parameter, the fraction of endogenously active cells. As long as the fraction of such cells is above a threshold, steady firing is possible; below that threshold the network bursts. The threshold depends on the degree of spike-frequency adaptation, a property of neurons that has been shown to be modulatable [e.g., by neuromodulators (Burke and Hablitz 1996; Cole and Nicoll 1984; McCormick et al. 1993; Pedarzani and Storm 1993; Spain 1994)]. Thus the model described here both accounts for the low firing rates observed in networks throughout the mammalian CNS and provides a natural mechanism for switching between steady firing and bursting. A switch between these two patterns is necessary for behavior that is activated episodically, such as locomotion, scratching, and swallowing (Berkinblit et al. 1978; Kudo and Yamada 1987; Zoungrana et al. 1997).

Although endogenously active cells may account for the observed firing patterns in mammalian neuronal networks, they are not necessarily responsible for those firing patterns. This is because the brain is neither isolated nor comprised of a single network. Instead, it receives sensory input and consists of distinct areas that interact through long-range connections. Because external input to a network can drive cells even when they are not receiving input from within the network (making those cells effectively endogenously active), it is possible that the low firing rates observed in cortex are generated by sensory input, or, as has been proposed by Amit and Brunel (1997), by input from other brain areas. The source of the input does not affect our model, however; the model applies whether the input is external or endogenous.

The theory developed here provides a framework for understanding intrinsic firing patterns and their dynamics in large networks of neurons. This does more than simply explain observed firing patterns; it provides a link between basic properties of networks and their firing patterns, and a way to understand how those patterns could be modified by patterned external input. This link is critical for developing models of how computations are performed by real neuronal networks.


    APPENDIX A: ROBUSTNESS TO VARIATION IN PARAMETERS

To exhaustively check the robustness of our model would require a complete exploration of the full parameter space, which is impractical for the 26-dimensional space that describes our model. Instead, we varied a selected subset of the parameters. Our starting point was a set of parameters taken from two networks, Networks A and B. Recall that most of the parameters of those networks are given in Tables 1 and 2; the parameters whose values are not listed there were given the following values: Îmax = 5.0, BE = BI = 1.0, and ΔĝK-Ca = 0. This set of parameters resulted in mean excitatory firing rates of 5.59 and 4.51 Hz for Networks A and B, respectively, and inhibitory rates of 5.59 and 4.46 Hz. The firing patterns were steady; i.e., no bursting or oscillations. Starting with this initial set of parameters, the ones we varied to explore the robustness of our model were as follows: 1) the amplitudes of the excitatory and inhibitory postsynaptic potentials, VEPSP and VIPSP, 2) the fraction of inhibitory cells, 3) the voltage threshold, Vt, 4) the cell time constant, τcell, 5) the mean number of connections per neuron, K, 6) the number of neurons, N, 7) the distribution of applied current, and 8) for Network B (the one with local connectivity), the axonal spreads of the excitatory and inhibitory populations, σE and σI.

The first four items in this list had the strongest effect on firing rate. For both networks, a 50% increase in EPSP amplitude, a 50% decrease in IPSP amplitude, and a 25% decrease in the fraction of inhibitory cells each caused an increase in firing rate of ~50%. Increasing the distance between the nominal resting membrane potential and threshold, Vt - Vr, from 15 to 25 mV while keeping the fraction of endogenously active cells fixed reduced the firing rate by ~50%, and doubling the cell time constant reduced it by ~40%.

Increasing the mean number of connections per neuron, K, also had a moderately strong effect on firing rate: a 20% increase in K resulted in a decrease in firing rate of 12% for Network A and 14% for Network B. Somewhat surprisingly, increasing the amplitudes of both EPSPs and IPSPs did not have the same effect as increasing the number of connections: a 20% increase in VEPSP and VIPSP led to a 7% decrease in firing rate for Network A and a 4% increase in firing rate for Network B. The reason for the difference is as follows. Although increasing PSP amplitude and connectivity have approximately the same effect on mean synaptic drive, the former has a larger effect on variance. Because larger input variance leads to higher firing rate, increasing PSP amplitude is more effective in causing a cell to fire than increasing connectivity. This is why firing rates were higher with a 20% increase in PSP amplitude than they were with a 20% increase in the number of connections.
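
The variance argument can be made explicit with a schematic shot-noise (Campbell's theorem) estimate; the symbols μsyn and σ²syn are shorthand introduced here, not quantities defined in the main text. For K independent Poisson input trains of rate ν, each delivering PSPs of amplitude VPSP, the synaptic drive has

\[
\mu_{\mathrm{syn}} \propto K\, V_{\mathrm{PSP}}\, \nu ,
\qquad
\sigma^{2}_{\mathrm{syn}} \propto K\, V_{\mathrm{PSP}}^{2}\, \nu
\]

so a 20% increase in VPSP and a 20% increase in K raise the mean identically, but the variance grows by a factor of 1.44 in the first case and only 1.2 in the second.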

Increasing the number of neurons had almost no effect on firing rate (<7% change when the number of neurons doubled). However, there was a substantial reduction in fluctuations: doubling the number of neurons in Network A caused the variance in the firing rate to drop by a factor of 2.3, and doubling the number in Network B reduced the variance by a factor of 1.93. Both of these are close to the factor of 2 suggested by 1/N scaling.
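
The factor of 2 follows from simple averaging: if fluctuations in the rates of individual neurons are approximately uncorrelated, each with variance σ², the population-averaged rate has variance

\[
\mathrm{Var}(\bar{\nu}) \approx \frac{\sigma^{2}}{N},
\qquad
\frac{\mathrm{Var}_{2N}}{\mathrm{Var}_{N}} \approx \frac{1}{2}
\]

Residual correlations and sampling error presumably account for the deviations from exactly 2.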

The analysis in the main text was based on gain curves constructed from a unimodal distribution of applied depolarizing currents, Ia. Such a distribution implies that the neurons in the network have a continuous range of resting membrane potentials and, if there are endogenously active cells, that the active neurons have a continuous range of firing rates. Different distributions, especially multimodal ones, can lead to different nullclines and thus different network behavior (Wilson and Cowan 1972). We thus considered a bimodal distribution in which the normalized applied current, Îa, at each neuron was either 0 mV (resting membrane potential 15 mV below threshold) or 5 mV (endogenously active). For the two bimodal distributions we examined, with 30 and 50% of the cells endogenously active, the firing rates changed by <5%. Thus, at least for these parameters, changing the distribution of applied currents had little effect on firing rate.
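
As a concrete illustration, here is a minimal sketch of the two ways of assigning the applied current; the function names and exact parameterization are ours, introduced only for illustration.

    # Minimal sketch (hypothetical helper functions, not the simulation code):
    # drawing the normalized applied current I_a for each neuron from either
    # the boxcar (uniform) distribution used in the main text or the two-point
    # (bimodal) distribution considered here.
    import numpy as np

    rng = np.random.default_rng(0)

    def boxcar_currents(n_neurons, i_max=5.0):
        """Unimodal case: I_a uniform on [0, i_max] (normalized units, mV)."""
        return rng.uniform(0.0, i_max, size=n_neurons)

    def bimodal_currents(n_neurons, frac_active=0.3, i_active=5.0):
        """Bimodal case: I_a is 0 mV (rest 15 mV below threshold) or i_active mV
        (endogenously active), with a fraction frac_active of cells active."""
        active = rng.random(n_neurons) < frac_active
        return np.where(active, i_active, 0.0)

    ia_unimodal = boxcar_currents(10000)
    ia_bimodal = bimodal_currents(10000, frac_active=0.3)   # or 0.5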

The effect of changing the axonal spread depended on whether we changed the spread for the inhibitory or the excitatory neurons. Increasing the normalized radius of the inhibitory axonal spread, σI, from 0.12 to 0.25 and then to 0.50 increased the firing rate by 50 and 132%, respectively. (Recall that the normalized radius of the network is 1.) These results are consistent with the view that inhibition acts to control local hot spots of excitatory activity, so overly diffuse inhibition is less effective. Increasing the excitatory axonal spread, σE, on the other hand, led to a fairly small decrease in firing rate: increasing σE from 0.12 to 0.25 and 0.50 dropped the firing rate by 4 and 13%, respectively. Our interpretation is that longer-range excitation allows the recruitment of additional inhibitory neurons, thus lowering firing rates.
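
A minimal sketch of the kind of local connectivity being varied here follows; the Gaussian fall-off with distance and the out-degree normalization are illustrative assumptions, not necessarily the exact rule used in the simulations.

    # Minimal sketch (the Gaussian fall-off and the out-degree normalization
    # are illustrative assumptions): distance-dependent connectivity in a disk
    # of normalized radius 1, with axonal spread sigma.
    import numpy as np

    rng = np.random.default_rng(1)

    def place_in_disk(n):
        """Uniform positions in the unit disk (network radius normalized to 1)."""
        r = np.sqrt(rng.random(n))              # sqrt gives uniform area density
        theta = 2.0 * np.pi * rng.random(n)
        return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

    def local_connections(pos_pre, pos_post, sigma, k_mean):
        """Bernoulli connections with probability falling off as a Gaussian of
        distance, scaled so the expected out-degree is roughly k_mean
        (probabilities are clipped at 1)."""
        d = np.linalg.norm(pos_post[None, :, :] - pos_pre[:, None, :], axis=2)
        p = np.exp(-d**2 / (2.0 * sigma**2))
        p *= k_mean / p.sum(axis=1, keepdims=True)
        return rng.random(p.shape) < np.clip(p, 0.0, 1.0)

    pos = place_in_disk(1000)
    conn_narrow = local_connections(pos, pos, sigma=0.12, k_mean=50)
    conn_broad = local_connections(pos, pos, sigma=0.25, k_mean=50)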

In all cases, the above parameter changes produced changes in firing rate consistent with our model, indicating that the model is relatively robust. This should not be surprising in view of Fig. 4E, which guarantees an equilibrium at a reasonably low firing rate.

We end with a simulation closer to cortical parameters than we have been using: Network A with IPSP and EPSP amplitudes of -1 and 0.2 mV, respectively, 20,000 neurons of which 15% were inhibitory, a threshold voltage of -40 mV, corresponding to a nominal gap between resting and threshold of 25 mV, and a maximum for the boxcar distribution, Îmax, equal to 10 mV. The network with these parameters fired steadily with a mean rate of 0.9 Hz.


    APPENDIX B: ESTIMATE OF THE MINIMUM OF THE EXCITATORY NULLCLINE

In this appendix we estimate the size of ν̄*E, the average excitatory firing rate at the minimum of the excitatory nullcline. To do this we derive a differential equation for ΦE(ν̄E, ν̄I) valid when ν̄E is small, solve it to determine the shape of the excitatory gain function, and then find the tangential intersection with the line ΦE(ν̄E, ν̄I) = ν̄E. We start with the relation
\[
\frac{\partial \Phi_E(\bar{\nu}_E, \bar{\nu}_I)}{\partial \bar{\nu}_E}
= \frac{\partial \Phi_E(\bar{\nu}_E, \bar{\nu}_I)}{\partial I_{\mathrm{syn},E}}\,
  \frac{\partial I_{\mathrm{syn},E}}{\partial \bar{\nu}_E}
\tag{B1}
\]
where Isyn,E is the average excitatory synaptic drive to the neurons in the network. Because Isyn,E is equal to the rate of incoming EPSPs times the charge transferred to the postsynaptic neuron per EPSP, and the rate of incoming EPSPs is the firing rate times the average number of connections, we have
\[
\frac{\partial I_{\mathrm{syn},E}}{\partial \bar{\nu}_E} \approx Q_{\mathrm{EPSP}} \times K_{EE}
\tag{B2}
\]
where QEPSP is the average charge per EPSP and KEE has the same definition as in Fig. 2, the mean number of excitatory connections made by an excitatory neuron. The charge, QEPSP, is computed by integrating the voltage transient induced by the arrival of an EPSP and then dividing by the postsynaptic cell's membrane resistance
\[
Q_{\mathrm{EPSP}} = \int \mathrm{d}t\, \frac{\Delta V(t)}{R_{\mathrm{cell}}}
\equiv \frac{V_{\mathrm{EPSP}}\,\Delta t}{R_{\mathrm{cell}}}
\tag{B3}
\]
where ΔV(t) is the change in membrane potential caused by the arrival of an EPSP, Rcell is the cell membrane resistance, VEPSP is the amplitude of a typical EPSP, and Δt is the characteristic width of an EPSP. This last quantity, Δt, is defined by the second equality in Eq. B3.
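
For concreteness, and using only values quoted elsewhere in these appendices (an EPSP amplitude of 0.2 mV as in the cortical-parameter simulation at the end of APPENDIX A, Δt ≈ τcell = 10 ms, and Rcell = 30 MΩ from the estimates below), Eq. B3 gives a charge per EPSP of roughly
\[
Q_{\mathrm{EPSP}} \approx \frac{(0.2\ \mathrm{mV})(10\ \mathrm{ms})}{30\ \mathrm{M}\Omega}
\approx 6.7 \times 10^{-14}\ \mathrm{C} \approx 0.07\ \mathrm{pC}
\]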

When the response to an EPSP is linear, it is not hard to show that Δt ≥ τcell. We will assume that EPSPs are small enough that the linear approximation holds, in which case Eq. B2 can be replaced with the inequality
\[
\frac{\partial I_{\mathrm{syn},E}}{\partial \bar{\nu}_E}
\gtrsim \tau_{\mathrm{cell}}\, \frac{V_{\mathrm{EPSP}}\, K_{EE}}{R_{\mathrm{cell}}}
\tag{B4}
\]
The other term in Eq. B1, ∂ΦE/∂Isyn,E, is the rate of change of the average firing rate of a network of neurons with respect to the synaptic drive. This can be estimated as follows. For a single cell, the slope of the frequency-current (F-I) curve is small below some threshold current and approximately linear above it. [For the type I neurons used here, spike initiation occurs via a saddle-node bifurcation, which leads to a square-root F-I curve with respect to constant injected current (Rinzel and Ermentrout 1998). However, the square-root singularity is considerably flattened both by synaptic variability and by the heterogeneity of the network. This results in an approximately threshold-linear F-I curve, as indicated by the approximately threshold-linear excitatory gain curves illustrated in Fig. 1A.] For a network of cells, the value of ∂ΦE/∂Isyn,E is the slope above threshold, denoted F′, times the fraction of cells above threshold, denoted ρ. We thus have
\[
\frac{\partial \Phi_E(\bar{\nu}_E, \bar{\nu}_I)}{\partial I_{\mathrm{syn},E}} \approx F'\rho
\]
Inserting this expression into Eq. B1, we arrive at
\[
\frac{\partial \Phi_E(\bar{\nu}_E, \bar{\nu}_I)}{\partial \bar{\nu}_E} \approx \beta\rho
\tag{B5}
\]
where
\[
\beta \equiv \frac{\partial I_{\mathrm{syn},E}}{\partial \bar{\nu}_E}\, F'
\tag{B6}
\]
To solve for ΦE(ν̄E, ν̄I) as a function of ν̄E, we need to know how ρ depends on the firing rates, ν̄I and ν̄E. We can estimate this dependence as follows. Let ΔIsyn,E be the minimum current that will make almost all cells in the network fire. Then, the excitatory firing rate that will make almost every cell fire is ΔIsyn,E/(∂Isyn,E/∂ν̄E). Defining
\[
\Delta\nu \equiv \frac{\Delta I_{\mathrm{syn},E}}{\partial I_{\mathrm{syn},E}/\partial \bar{\nu}_E}
\tag{B7}
\]
we see that as ν̄E increases from 0 to Δν, the fraction of active cells increases from its value at ν̄E = 0 Hz, which we denote ρ0(ν̄I), to 1. For clarity, in the remainder of this analysis we will drop the dependence of ρ0 on ν̄I. Assuming that the fraction of active cells increases linearly for ν̄E < Δν, we can write ρ = ρ0 + (1 − ρ0)ν̄E/Δν. Then, Eq. B5 becomes, for small values of ν̄E
\[
\frac{\partial \Phi_E(\bar{\nu}_E, \bar{\nu}_I)}{\partial \bar{\nu}_E}
= \beta\left[\rho_0 + (1 - \rho_0)\,\bar{\nu}_E/\Delta\nu\right]
\tag{B8}
\]
Solving Eq. B8 yields
\[
\Phi_E(\bar{\nu}_E, \bar{\nu}_I) = \Phi_E(0, \bar{\nu}_I) + \beta\rho_0\,\bar{\nu}_E
+ \beta(1 - \rho_0)\,\frac{\bar{\nu}_E^{2}}{2\Delta\nu}
\tag{B9}
\]
The minimum of the excitatory nullcline occurs where the curve given in Eq. B9 intersects tangentially with the line ΦE = ν̄E. It is straightforward to show that the value of the average excitatory firing rate at that tangential intersection, ν̄*E, is given by
\[
\bar{\nu}_E^{*} = \frac{\Delta\nu\,(1 - \beta\rho_0)}{\beta\,(1 - \rho_0)}
\tag{B10}
\]
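To spell out the intermediate step: tangency with the line ΦE = ν̄E requires that the slope of the curve in Eq. B9 equal 1, so setting the right-hand side of Eq. B8 to 1 and solving for ν̄E gives Eq. B10 directly
\[
\beta\left[\rho_0 + (1 - \rho_0)\,\frac{\bar{\nu}_E^{*}}{\Delta\nu}\right] = 1
\quad\Longrightarrow\quad
\bar{\nu}_E^{*} = \frac{\Delta\nu\,(1 - \beta\rho_0)}{\beta\,(1 - \rho_0)}
\]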
Finally, when β > 1, the right-hand side of Eq. B10 is maximum when ρ0 = 0. Consequently, we can bound ν̄*E by
\[
\bar{\nu}_E^{*} \le \frac{\Delta\nu}{\beta}
\tag{B11}
\]
Recall that Δν depends on ΔIsyn,E, the minimum current needed to make all the cells in the network fire (Eq. B7). The minimum current consists of two parts: the minimum rheobase current plus the total inhibitory synaptic current delivered to a cell. The latter quantity is ν̄I ∂Isyn,I/∂ν̄I. Denoting the minimum rheobase current as Irh and using the same argument as above to express ∂Isyn,I/∂ν̄I in terms of physiological quantities, we arrive at
\[
\Delta I_{\mathrm{syn},E} = I_{\mathrm{rh}} + \bar{\nu}_I\,\frac{\partial I_{\mathrm{syn},I}}{\partial \bar{\nu}_I}
\gtrsim I_{\mathrm{rh}} + \bar{\nu}_I\,\tau_{\mathrm{cell}}\,\frac{V_{\mathrm{IPSP}}\, K_{EI}}{R_{\mathrm{cell}}}
\tag{B12}
\]
where KEI is the mean number of excitatory connections made by an inhibitory neuron. Combining Eqs. B4, B7, and B12, we arrive at the expression for Δν
\[
\Delta\nu \lesssim \frac{I_{\mathrm{rh}}\, R_{\mathrm{cell}}}{\tau_{\mathrm{cell}}\, V_{\mathrm{EPSP}}\, K_{EE}}
+ \bar{\nu}_I\, \frac{V_{\mathrm{IPSP}}\, K_{EI}}{V_{\mathrm{EPSP}}\, K_{EE}}
\]
Thus the condition on the minimum of the excitatory nullcline, Eq. B11, becomes
\[
\bar{\nu}_E^{*} \le \frac{\nu_{\mathrm{rh}} + \xi\,\bar{\nu}_I}{\beta}
\tag{B13}
\]
where we have defined
\[
\nu_{\mathrm{rh}} \equiv \frac{I_{\mathrm{rh}}\, R_{\mathrm{cell}}}{\tau_{\mathrm{cell}}\, V_{\mathrm{EPSP}}\, K_{EE}}
\tag{B14a}
\]
\[
\xi \equiv \frac{V_{\mathrm{IPSP}}\, K_{EI}}{V_{\mathrm{EPSP}}\, K_{EE}}
\tag{B14b}
\]
To evaluate ν̄*E numerically, we need to know the sizes of β, νrh, and ξ. A bound on the first of these, β, is found by combining Eq. B4 with the definition of β in Eq. B6, yielding
\[
\beta \ge \tau_{\mathrm{cell}}\, F'\, \frac{V_{\mathrm{EPSP}}\, K_{EE}}{R_{\mathrm{cell}}}
\tag{B15}
\]
For a 30-MΩ cell with a time constant of 10 ms, τcellVEPSP/Rcell = 3.3 × 10⁻⁴ nA/Hz × VEPSP (mV). Because KEEVEPSP (mV) is typically at least 10³, the product of all the factors on the right-hand side of Eq. B15 other than F′ should be larger than ~0.33 nA/Hz. The remaining factor, the slope of a cell's F-I curve, F′, is typically on the order of 250 Hz/nA (McCormick et al. 1985). (This is consistent with our model neurons: for cells with a membrane resistance of 30 MΩ, the slope of the F-I curve was ~150 Hz/nA; simulations not shown.) Using 250 Hz/nA for the slope of the F-I curve, we arrive at β ≥ 83.



FIG. 12. Gain curves and an excitatory nullcline from the network simulations. A: excitatory gain curves for Network A of Table 2 with Îmax = 5.0, BE = 1.4, BI = 1.0, and ΔĝK-Ca = 0 (no adaptation). To construct these curves we expanded the recurrent network into a 2-layer, feed-forward network, with the feed-forward connections from the input to the output layer identical to the recurrent connections in the original network. We then computed the mean output firing rate of the excitatory neurons (the gain functions) vs. the input firing rates, assuming Poisson statistics. The horizontal axis corresponds to the excitatory input firing rate; the numbers on each line correspond to the inhibitory input firing rates in Hz. B: the excitatory nullcline for the same parameters as in A. To construct this curve we allowed the excitatory and inhibitory neurons to have different values of the maximum applied depolarizing current, Îmax: we fixed Îmax at 5 for the excitatory neurons, then varied Îmax for the inhibitory neurons to sweep out the curve. Values of the excitatory and inhibitory firing rates at the minimum are ν̄*E = 0.014 Hz and ν̄*I = 0.436 Hz.

The quantity νrh given in Eq. B14a has almost the same factors as β; the only difference is that F′ is replaced by 1/Irh and the expression is inverted. For typical cells, Irh is at most 0.2 nA (McCormick et al. 1985). Combining this value with the above estimate for τcellVEPSPKEE/Rcell, we find that νrh ≈ 0.6 Hz.

Finally, if excitatory and inhibitory neurons have similar PSP sizes and connectivities, ξ ≈ 1. Thus Eq. B13 can be written
\[
\bar{\nu}_E^{*} < 0.01\ \mathrm{Hz} + \frac{\bar{\nu}_I^{*}}{83}
\tag{B16}
\]
where we have replaced the inhibitory firing rate that appeared in previous expressions with its value at the minimum of the excitatory nullcline, denoted ν̄*I. Unless ν̄*I is very large, ν̄*E will be small, almost certainly below 0.1 Hz.

Testing the validity of the above analysis requires that we construct the excitatory nullcline. This can be done in our simulations by slowly increasing the value of Îmax for the inhibitory neurons only, which shifts the inhibitory nullcline up without affecting the excitatory one. By recording the equilibrium firing rates as we shift the inhibitory nullcline, we can sweep out the excitatory nullcline. An excitatory nullcline constructed in this manner is shown in Fig. 12B. Consistent with the bound given in Eq. B16, the minimum of the excitatory nullcline occurs at an excitatory firing rate of 0.014 Hz.
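
A quick arithmetic check of that consistency, using only the rounded estimates quoted above (β is the lower bound from Eq. B15, so the numerical bound is itself approximate):

    # Evaluate the right-hand side of Eq. B16 with the estimates from the text
    # and compare it with the simulated nullcline minimum from Fig. 12B.
    beta = 83.0          # lower-bound estimate of beta from Eq. B15
    nu_i_star = 0.436    # Hz, inhibitory rate at the nullcline minimum (Fig. 12B)

    bound = 0.01 + nu_i_star / beta                     # Eq. B16, in Hz
    print(f"Eq. B16 bound on nu*_E: {bound:.4f} Hz")    # ~0.0153 Hz
    print("simulated minimum:      0.0140 Hz")          # below the bound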


    ACKNOWLEDGMENTS

We thank E. Neale for providing Epon-embedded cultures containing the HRP-injected neurons used to measure axonal arborization and C. Del Negro, J. Rinzel, and M. Wiener for insightful discussions and comments on this manuscript.


    FOOTNOTES

Address for reprint requests: P. E. Latham, Dept. of Neurobiology, UCLA, Box 95-1763, Los Angeles, CA 90095-1763.

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Received 5 March 1999; accepted in final form 30 September 1999.


    REFERENCES
