Modeling Inhibitory Plasticity in the Electrosensory System of Mormyrid Electric Fish

Patrick D. Roberts

Neurological Sciences Institute, OHSU, Portland, Oregon 97209


    ABSTRACT

Roberts, Patrick D. Modeling Inhibitory Plasticity in the Electrosensory System of Mormyrid Electric Fish. J. Neurophysiol. 84: 2035-2047, 2000. Mathematical analyses and computer simulations are used to study the adaptation induced by plasticity at inhibitory synapses in a cerebellum-like structure, the electrosensory lateral line lobe (ELL) of mormyrid electric fish. Single-cell model results are compared with results obtained at the system level in vivo. The model of system level adaptation uses detailed temporal learning rules of plasticity at excitatory and inhibitory synapses onto Purkinje-like neurons. Synaptic plasticity in this system depends on the time difference between pre- and postsynaptic spikes. Adaptation is measured by the ability of the system to cancel a reafferent electrosensory signal by generating a negative image of the predicted signal. The effects of plasticity are tested for the relative temporal correlation between the inhibitory input and the sensory input, the gain of the sensory signal, and the presence of shunting inhibition. The model suggests that the presence of plasticity at inhibitory synapses improves the function of the system if the inhibitory inputs are temporally correlated with a predictable electrosensory signal. The functional improvements include an increased range of adaptability and a higher rate of system level adaptation. However, the presence of shunting inhibition has little effect on the dynamics of the model. The model quantifies the rate of system level adaptation and the accuracy of the negative image. We find that adaptation proceeds at a rate comparable to results obtained from experiments in vivo if the inhibitory input is correlated with electrosensory input. The mathematical analysis and computer simulations support the hypothesis that inhibitory synapses in the molecular layer of the ELL change their efficacy in response to the timing of pre- and postsynaptic spikes. Predictions include the rate of adaptation to sensory stimuli, the range of stimulus amplitudes for which adaptation is possible, the stability of stored negative images, and the timing relations of a temporal learning rule governing the inhibitory synapses. These results may be generalized to other adaptive systems in which plasticity at inhibitory synapses obeys similar learning rules.


    INTRODUCTION

The importance of plasticity at inhibitory synapses has only recently been recognized. The restructuring of the cortical somatosensory maps following lesions or changes in use, for example, has been found to be dependent on the presence of GABA activity. This observation led to the conclusion that the restructuring involves changes in the strength of inhibitory synaptic input (Jacobs and Donoghue 1991; Lane et al. 1997; Mower et al. 1984). Similar results have been found in the inferior colliculus of the barn owl where the sensory maps integrating auditory and visual stimuli converge (Zheng and Knudsen 1999). These studies suggest that the inhibitory pathways are an important component of the adaptation of neuronal responses to changing sensory conditions.

It is not always clear in these system level studies whether the plasticity occurs at excitatory synapses onto inhibitory interneurons or at the inhibitory synapses themselves. Plasticity localized to inhibitory synapses has been demonstrated, however, in a few different systems, including inhibitory synapses in the visual cortex (Komatsu and Iwakiri 1993), inhibitory neurons onto Mauthner cells (Korn et al. 1992), and inhibitory neurons onto Purkinje cells in the cerebellum (Kano et al. 1992). Plasticity at such inhibitory synapses is likely to be important in central processing of sensory information as well as of other types of information. However, few published modeling studies of inhibitory plasticity (Marshall 1990; Nelson and Paulin 1995; Sirosh and Miikkulainen 1994) have elucidated the potential roles of plasticity at inhibitory synapses. Furthermore, the interactions between inhibitory plasticity and plasticity at excitatory synapses are only poorly understood.

The present study uses techniques of mathematical and computer modeling to examine the possible roles and contributions of plasticity at inhibitory synapses in the electrosensory lateral line lobe (ELL) of mormyrid electric fish. This structure, and other cerebellum-like sensory structures in electroreceptive fish, have been shown to be adaptive sensory processors that subtract out predictable features of the sensory inflow following a period of association between centrally originating predictive signals and particular patterns of sensory input (Bell et al. 1997a). Synaptic plasticity at excitatory synapses has been demonstrated experimentally in the ELL (Bastian 1998; Bell et al. 1997c; Bodznick et al. 1999). A previous modeling study has shown how this cellular level plasticity can yield the system level adaptive properties of the ELL (Roberts and Bell 2000). The ELL is rich in inhibitory neurons and inhibitory synapses. Although plasticity at these inhibitory synapses has not yet been demonstrated, an adaptive system level function for the structure as a whole, the well demonstrated plasticity at excitatory synapses, and the presence of extensive inhibitory interactions within the ELL suggest that this region is a good candidate to examine the potential contributions of inhibitory synaptic plasticity.

Mormyrid electrosensory system

Mormyrid electric fish have an electric organ in their tail that generates a brief pulse of electric current, the electric organ discharge (EOD), in response to a centrally originating motor command. Each EOD creates an electric field in the near vicinity of the fish. The fish can navigate without vision by detecting distortions caused by external objects in this self-generated electric field (Bastian 1986).

Mormyrid fish have three classes of electroreceptors that are used for three different purposes: mormyromasts, knollenorgans, and ampullary receptors. Mormyromasts are used for active electrolocation, knollenorgans are used to sense the EODs of other electric fish in electro-communication, and ampullary receptors are used to sense the low-frequency external fields that all animals, electric and nonelectric, generate in the water.

Primary afferent fibers from mormyromast and ampullary electroreceptors terminate in separate regions of the cortex of the electrosensory lateral line lobe. The ELL is a laminar structure, and neurons of interest for this study, the medium ganglion (MG) cells, have their cell bodies in the ganglion cell layer (see Fig. 1). These neurons have a large dendritic tree of apical dendrites that reach into the molecular layer. There they receive synaptic contact from excitatory parallel fibers and inhibitory interneurons, the largest population of which are referred to as stellate cells (Grant et al. 1996; Meek et al. 1996). This study is restricted to the region of the ELL cortex that receives ampullary receptor input.



Fig. 1. Neural organization. Cellular organization of the electrosensory lateral line lobe (ELL) is shown. The medium ganglion (MG) cells, with cell bodies in the ganglion cell layer, have apical dendrites that reach into the molecular layer and basilar dendrites that invade the granular layer. Parallel fibers in the molecular layer respond to corollary discharge signals, proprioceptive signals, and signals from other sensory modalities. The parallel fibers make excitatory synapses onto MG cells and stellate cells. The stellate cells are inhibitory and synapse onto MG cells. Electrosensory information from primary afferents reaches the MG cells through interneurons in the granular layer.

Recordings from the cell bodies of MG cells reveal two types of spikes: a narrow, presumably axonal, spike is evoked by moderate depolarization, and a large, broad spike is evoked at stronger depolarizations. Field recordings suggest that the broad spikes propagate into the apical dendrites of the molecular layer (Grant et al. 1998).

The motor command that initiates an EOD originates in the command nucleus and traverses the spinal cord to the electric organ. Simultaneously, a corollary discharge signal projects to the ELL to intersect with the afferent electrosensory information from the receptors (Bell 1982; Bell et al. 1983). These signals converge on medium ganglion cells, where basilar dendrites receive the afferent inputs via interneurons in the granular layer, and the apical dendrites receive corollary discharge inputs via the parallel fibers and stellate cells in the molecular layer (Bell et al. 1992).

Mormyrid electric fish must sense the subtle external electric fields of interest against the background of their own electric discharges. Because the fish generates the discharge itself, it would be advantageous to develop an adaptive filtering mechanism that eliminates the predicted electrosensory image and thereby emphasizes subtle novelties in the environment.

The MG cell responses adapt to eliminate the predicted electrosensory image within the sensory signal (Bell 1982). Recordings near fibers known to excite granule cells that project into the molecular layer as parallel fibers suggest that their responses to the corollary discharge are not simultaneous, but are distributed in time following the command signal (Bell et al. 1992). Corollary discharge timing information of the electric discharge arrives through parallel fibers, so a likely candidate for adaptation is the synapse between the parallel fibers and the apical dendrites.

The synaptic efficacy of the parallel fibers onto the MG cells has been shown to change depending on the relative timing of the presynaptic volley-evoked excitatory postsynaptic potential (EPSP) and the postsynaptic broad spike (Bell et al. 1997c). The synaptic efficacy is depressed following a pairing period in which the postsynaptic spike follows the beginning of the EPSP within a narrow time window of about 50 ms. This effect is referred to as associative depression. If the postsynaptic broad spike occurs at any other delay, then the synaptic efficacy is enhanced. This increase does not depend on the occurrence of the postsynaptic spike and is referred to as nonassociative enhancement. Modeling studies have shown (Roberts and Bell 2000) that the exact form of the learning rule is critical for the system level adaptive function of the ELL's ampullary region. If the experimentally established learning rule is used in the model, then the parallel fiber inputs adapt to generate a negative image of the previously paired sensory input. Adding this negative predictable component to the actual input eliminates modulation of the MG cell responses to predictable electrosensory signals. If learning rules with other forms of temporal dependence on the relative timing of pre- and postsynaptic events are used, then the model demonstrates that the negative image is not as faithful a copy of the original sensory input because the learning is dynamically unstable (Roberts and Bell 2000).

A similar adaptive filter system has been found in another electrosensory system. In the gymnotid ELL, fibers carrying proprioceptive information adjust their synapses in a way that can cancel the predictable changes in electroreceptor signal intensity due to bending of the fish's body (Bastian 1995). In addition, experiments on synaptic plasticity (Bastian 1998) suggest that some of the adaptation is caused by plasticity at inhibitory synapses.

Slice experiments in the mormyrid ELL have suggested but not yet demonstrated plasticity at inhibitory synapses (Bell et al. 1997b). The associative depression of the EPSP evoked by a parallel fiber stimulus appeared to be accompanied by an increase in the inhibitory postsynaptic potential (IPSP) evoked by the same stimulus, and the nonassociative increase in EPSP size appeared to be accompanied by a decrease in the IPSP (Fig. 2A). These changes appear to be due to plasticity at inhibitory synapses, but could also reflect an IPSP of unchanging size that is masked or unmasked by accompanying changes in the EPSP.



Fig. 2. Synaptic plasticity. A: the change in postsynaptic potential following pairing of a parallel fiber stimulus with postsynaptic broad spikes. Trace a shows the postsynaptic potential following a test stimulus to the parallel fibers before pairing. After 360 pairings in which the broad spikes preceded the parallel fiber stimulus by 20 ms, the test stimulus produced the postsynaptic potential shown by trace b. After another 360 pairings, this time with the broad spikes following the parallel fiber stimulus by 20 ms, the test stimulus produced the postsynaptic potential shown by trace c. Modified from Bell et al. (1997b). B: temporal learning rules used in the simulations. The solid trace shows the change in excitatory synaptic weight induced by the delay between the beginning of the excitatory postsynaptic potential (EPSP) and a broad spike. The dashed trace represents the same for the inhibitory weights.

The present study explores the hypothesis that plasticity is indeed present at inhibitory synapses in the ELL and examines the consequences of such plasticity. The learning rule that is assumed to control inhibitory plasticity follows from the hypothesis that pairing delays between pre- and postsynaptic spikes that cause the EPSP to decrease cause the IPSP to increase, whereas pairings at other delays, which cause the EPSP to increase, cause the IPSP to decrease. The main feature of this form of plasticity is that it follows the same timing as the learning rule for excitatory synapses, but in the opposite direction (see Fig. 2B).


    METHODS

Stochastic model neuron

The model of the MG cell is constructed to represent the simplest observable dynamics in response to externally applied synaptic input. This simplified model allows for analytic results that confirm conclusions drawn from computer simulations for a wide range of parameter settings.

The MG cell is modeled as a single compartment, stochastic threshold device. All synaptic inputs are summed to yield a "noiseless membrane potential" that represents the excitation level of the neuron at each point in time. The total membrane potential is a combination of the noiseless potential with a background noise term representing uncorrelated synaptic activity. If the total membrane potential exceeds a specified threshold, a spike is generated. This modeling approach is similar to the "spike response" model (Gerstner and van Hemmen 1992) that has been used to study the auditory system in barn owls (Gerstner et al. 1996). The MG cell model has two thresholds: a lower threshold that generates a narrow (axonal) spike, and a higher threshold that generates a broad (dendritic) spike. Only the broad spike influences synaptic plasticity.

Two time scales are of importance to the model: a fast scale and a slow scale. The fast scale characterizes the response of the MG cell to the EOD over the course of tens of milliseconds and is limited to the duration of each EOD cycle. The slow scale represents the adaptation of synaptic strengths due to synaptic plasticity over the course of several minutes, lasting many EOD cycles. To represent these processes independently, the two time scales are treated as separate variables. The x-component represents the time in milliseconds following the EOD, and the t-component represents the number of EOD cycles. The x-component is discretized with xn = nΔx for n an integer. In the simulations, Δx = 1 ms, and the number of time steps N = 150. Thus the dynamical variables in the model depend on two temporal variables. For instance, the noiseless membrane potential, denoted by V(xn, t), is a function of both xn and t. The probability of a broad spike during cycle t at time xn is a threshold (sigmoid) function of the noiseless membrane potential. With threshold θ and noise parameter µ, the spike probability is given by the expression
$$f(x_n, t) = \frac{1}{1 + \exp\{-\mu[V(x_n, t) - \theta]\}} \qquad (1)$$
For low membrane potentials, the spike probability saturates near zero, and for high input levels, the spike probability saturates at unity. The instantaneous spike frequency is obtained by multiplying the spike probability by the maximum spike frequency. The model contains no relative refractory period, so the maximum spike frequency is the inverse of the absolute refractory period. The refractory period used in the model is 2 ms for narrow spikes and 30 ms for broad spikes (C. Bell, personal communication).
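As a concrete illustration, a minimal Python sketch of Eq. 1 follows; the threshold, noise parameter, and membrane potential values are placeholders chosen for illustration, not the parameters of Table 1.

```python
import numpy as np

def spike_probability(V, theta, mu):
    """Broad-spike probability per time step (Eq. 1): a sigmoid of the
    noiseless membrane potential V with threshold theta and noise parameter mu."""
    return 1.0 / (1.0 + np.exp(-mu * (V - theta)))

# Illustrative values only (the paper's Table 1 lists the actual parameters).
V = np.linspace(-2.0, 4.0, 7)           # sample noiseless membrane potentials
p_broad = spike_probability(V, theta=1.0, mu=3.0)

# Instantaneous spike frequency = probability x maximum spike frequency,
# where the maximum frequency is the inverse of the absolute refractory
# period (2 ms for narrow spikes, 30 ms for broad spikes).
f_max_broad = 1.0 / 0.030               # maximum broad-spike frequency, spikes/s
broad_rate = p_broad * f_max_broad
```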

Network architecture

The model MG cell receives three inputs (see Fig. 3): parallel fiber and stellate cell postsynaptic potentials representing inputs from the molecular layer, and deep layer inputs that represent the electrosensory image. The electrosensory image, Vel(xn), is based on recordings from the ampullary region of the ELL and is designed to duplicate the MG cell response to an EOD before adaptation takes place (Fig. 3).



Fig. 3. Model organization. The model MG cell is a threshold device with noise. If the combination of the noiseless membrane potential plus Gaussian noise is greater than a threshold, then a spike is generated. The model generates 2 types of spikes: narrow spikes and broad spikes. The broad spikes are the postsynaptic events that are required for associative synaptic plasticity. Two types of inputs are considered: electrosensory inputs that represent the electric organ discharge (EOD) signal, Vel(xn), and adaptable synaptic inputs that represent parallel fiber and stellate cell inputs. Each parallel fiber input contributes an EPSP to the membrane potential, represented by an EPSP waveform weighted by w(xn, t). Each stellate cell input contributes an inhibitory postsynaptic potential (IPSP) to the membrane potential, represented by an IPSP waveform weighted by υ(xm, t). The noiseless membrane potential is the sum of all inputs, sensory and synaptic.

The parallel fiber inputs are modeled as a time-delayed series of excitatory postsynaptic potentials. Each EPSP begins at a specified delay following the beginning of each EOD cycle. The sequence of delayed EPSPs is represented in Fig. 3 as the weighted parallel fiber inputs each beginning at a different x-delay (x1, x2, x3, ...). There is one EPSP beginning at each discretization step xn. The waveform, E(xn), used for all the EPSPs is shown in Fig. 4. The EPSP waveform was obtained from recordings in vitro of MG cells while inhibitory inputs were pharmacologically blocked (Grant et al. 1998). The contribution of each EPSP to the membrane potential is obtained by multiplying the waveform by a synaptic weight, w(xm, t). The slow time scale t-dependence is due to the adaptability of the synaptic weights resulting from the rules of synaptic plasticity. The total contribution of the parallel fiber synapses, Vpf(xn, t), to the membrane potential is the sum of all individual contributions
$$V_{pf}(x_n, t) = \sum_{m=1}^{N} w(x_m, t)\, E(x_n - x_m) \qquad (2)$$
The sum runs over the time interval from the beginning of the EOD cycle to the most delayed EPSP that is correlated with the cycle (N = 150 in the simulations). The EPSP waveform is normalized to have unit area, such that Σn E(xn) = 1.
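A short sketch of how the parallel fiber sum of Eq. 2 can be computed; the EPSP shape, the weight values, and the use of circular indexing (matching the periodic boundary conditions used for edge effects in the simulations) are assumptions made for illustration.

```python
import numpy as np

N = 150                                   # 1-ms time steps per EOD cycle
x = np.arange(N, dtype=float)
E = x * np.exp(-x / 5.0)                  # placeholder EPSP shape (the paper uses a measured waveform)
E /= E.sum()                              # normalized to unit area, sum_n E(x_n) = 1

def v_pf(w, E):
    """Eq. 2: V_pf(x_n, t) = sum_m w(x_m, t) E(x_n - x_m), one EPSP starting
    at each time step, with periodic boundary conditions in x_n."""
    return sum(w[m] * np.roll(E, m) for m in range(len(w)))

w = np.full(N, 0.5)                       # illustrative uniform excitatory weights
V_pf = v_pf(w, E)
```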



Fig. 4. Postsynaptic waveforms. Three waveforms are used to determine the synaptic input in the simulations. The scale for the postsynaptic potentials (PSP) is on the left and the scale for the shunting current is on the right. The PSP units are normalized such that the integral of each function is unity. The EPSP waveform, E(xn), is given by the solid trace (Grant et al. 1998); the IPSP waveform, I(xn), is given by the dashed trace. The waveform representing the shunting current induced by inhibitory synapses in dimensionless units, G(xn) = xn exp(-xn/2), is given by the dotted trace (Otis and Mody 1992).

The stellate cell inputs are modeled similarly to the parallel fiber inputs, but as a series of IPSPs. The contribution of each IPSP is the negative of the product of an IPSP waveform, -I(xn), with a synaptic weight, υ(xm, t). The IPSP waveform is positive [I(xn) ≥ 0 for all xn] and is based on the difference between a postsynaptic potential measured with and without inhibition blocking agents in experiments in vitro (Grant et al. 1998). However, in the analyses and simulations, IPSP initiations are not always delayed by a regular series of intervals correlated with the EOD. Because the timing of IPSP inputs from stellate cells with respect to the EOD has not yet been experimentally confirmed, the model is used to determine the effects of different delay schemes. The contribution of all of the stellate cells, Vst(xn, t), to the MG cell membrane potential is the sum of all of the individual inputs (with N = 150 in the simulations)
$$V_{st}(x_n, t) = -\sum_{m=1}^{N} \upsilon(x_m, t)\, I[x_n - x_m - \delta_m(t)] \qquad (3)$$
where the xn are identical to those of Eq. 2, xn = nΔx (Δx = 1 ms in the simulations). The delay offset term, δm(t), is used to vary the delay of each inhibitory input relative to the beginning of each EOD cycle. If the IPSPs are correlated with the EOD cycle in the same way as the EPSPs, then δm(t) = 0 for all t. However, if there is no correlation, then δm(t) is assigned a random number for each cycle within the range of the other correlated inputs. The IPSP waveform is also normalized to have unit area, Σn I(xn) = 1.
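The stellate sum of Eq. 3 can be sketched the same way; the IPSP shape and weight values are placeholders, and the two delay arrays illustrate the correlated and uncorrelated timing schemes described above.

```python
import numpy as np

N = 150
rng = np.random.default_rng(0)
x = np.arange(N, dtype=float)
I = x * np.exp(-x / 8.0)                  # placeholder IPSP shape (assumption)
I /= I.sum()                              # normalized to unit area, sum_n I(x_n) = 1
v = np.full(N, 0.5)                       # illustrative inhibitory weights

def v_st(v, I, delta):
    """Eq. 3: V_st(x_n, t) = -sum_m v(x_m, t) I[x_n - x_m - delta_m(t)],
    with periodic boundary conditions in x_n."""
    return -sum(v[m] * np.roll(I, m + int(delta[m])) for m in range(len(v)))

delta_locked = np.zeros(N, dtype=int)     # IPSPs correlated with the EOD cycle
delta_random = rng.integers(0, N, N)      # uncorrelated: a fresh delay each cycle
V_st = v_st(v, I, delta_random)
```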

Shunting inhibition

The linear summation of EPSPs and IPSPs given above may not reflect the complete contribution of inhibitory synaptic inputs. Current from the excitatory synaptic inputs can be shunted through inhibitory ion channels, thereby preventing the excitatory current from depolarizing the site of spike initiation (Koch et al. 1983; Rall 1964; Tuckwell 1986). The effects of shunting inhibition are compared with calculations with simple linear inhibition to determine what effects, if any, shunting has on the learning dynamics.

Since the effect of shunting inhibition is to reduce the effective injected current from the excitatory synapses, inhibitory synapses will reduce the overall weight of EPSPs during the time course of the open inhibitory receptors (see Fig. 4). For this purpose we use the open time of GABAA receptors (Otis and Mody 1992) because IPSPs in the ELL are mediated by GABAA. The time course of the IPSP differs from the time course of the normalized conductance, G(xn), due to the electrical properties of the neuron. This representation of the shunting was chosen because it is in the spirit of the spike response model; that is, the model yields the change in the postsynaptic neuron's output due to presynaptic spikes.

The shunted weight, ws(xn, t), is reduced by an amount that depends on the strength of inhibition. Thus the shunting is proportional to the inhibitory weights, υ(xn, t). Since these inhibitory synapses can change in a use-dependent manner, the amount of shunting also changes under synaptic plasticity of inhibitory synapses. A scaling factor, σ, is used in the model to control the maximum amount of shunting by the inhibitory synapses. Excitatory synaptic weights are reduced by shunting over the time course of the GABAA receptors' open time, so that the shunted weights are computed by
$$w_s(x_n, t) = \sum_{m=1}^{N} w(x_n, t)\,[1 - \sigma\,\upsilon(x_m, t)\, G(x_m - x_n)] \qquad (4)$$
There are many excitatory and inhibitory inputs distributed along the dendrites in the molecular layer, and there is no preferred spatial location of the stellate cells throughout the apical dendrites of the MG cells. Thus the spatial component of shunting is ignored in this model. This method of including shunting inhibition in the model has been compared with a two-compartment conductance-based model using the neural modeling software package NEURON (Hines and Carnevale 1994). In addition to Hodgkin-Huxley currents, one compartment included synaptic currents from a kinetic model of alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and GABA receptors (Destexhe et al. 1994). The second compartment was used for membrane potential measurements, and the agreement between the methods was satisfactory.

The noiseless membrane potential is the sum of all inputs (Eqs. 2-4)
$$V(x_n, t) = V_{pf}(x_n, t) + V_{st}(x_n, t) + V_{el}(x_n) = \sum_{m=1}^{N} w_s(x_m, t)\, E(x_n - x_m) - \sum_{m=1}^{N} \upsilon(x_m, t)\, I[x_n - x_m - \delta_m(t)] + V_{el}(x_n) \qquad (5)$$
The nonlinear shunting effects can be removed by setting σ = 0 in the expression for the shunted excitatory synaptic weight (Eq. 4).
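A hedged sketch of Eqs. 4 and 5 follows. The typesetting of Eq. 4 is ambiguous as reproduced here, so the code reads it as a single multiplicative reduction of each excitatory weight by the summed inhibitory conductance active at that weight's onset time; that reading, the causal alignment of G with the IPSP onsets, and the circular indexing are assumptions rather than the paper's stated implementation.

```python
import numpy as np

def shunted_weights(w, v, G, sigma):
    """Eq. 4 (as interpreted above): each excitatory weight is scaled down by
    the total inhibitory conductance active at its onset time; sigma = 0
    removes shunting and recovers purely linear summation."""
    shunt = sum(v[m] * np.roll(G, m) for m in range(len(v)))   # summed open conductance at each x_n
    return w * (1.0 - sigma * shunt)

def noiseless_potential(w, v, E, I, G, delta, V_el, sigma=0.0):
    """Eq. 5: shunted parallel-fiber EPSPs minus weighted stellate IPSPs
    plus the electrosensory image V_el, using the sums of Eqs. 2 and 3."""
    ws = shunted_weights(w, v, G, sigma)
    V = np.array(V_el, dtype=float)
    for m in range(len(w)):
        V += ws[m] * np.roll(E, m) - v[m] * np.roll(I, m + int(delta[m]))
    return V
```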

Temporal learning rules

A previous study of this system characterized the consequences of synaptic plasticity at excitatory parallel fiber synapses (Roberts and Bell 2000). The learning rule governing synaptic plasticity was determined experimentally and depends on the precise timing of the pre- and postsynaptic spikes during repetitive pairings. These experimentally determined temporal learning rules form the basis of the model's implementation of synaptic change. During each EOD cycle, the MG cell is activated by different parallel fibers in a series of delays indexed by xn. The change of the excitatory synaptic weights, Δw(xn, t), is functionally dependent on the time, xb, of a broad spike in the postsynaptic neuron following the beginning of each cycle. During each EOD cycle, there is a nonassociative enhancement of each synapse that is set by the nonassociative learning rate parameter, αw. If a postsynaptic broad spike occurs during a narrow time window following the beginning of the EPSP, the synaptic weight is reduced proportionally to a learning function, Lw(xn), scaled by the associative learning rate, βw
$$\Delta w(x_n, t) = \alpha_w - \beta_w\, L_w(x_b - x_n) \qquad (6)$$
where Lw(xn) is normalized to have a unit area. Thus after each EOD (at time t) the timings of the broad spikes are used to determine the change in synaptic weights. The new magnitudes of the weights are used in the next EOD (at time t + 1) to compute the broad spike probability.

The changes in the inhibitory synaptic weights, Δυ(xn, t), are treated similarly, but with opposite sign and a different learning function, Lυ(xn)
$$\Delta\upsilon(x_n, t) = -\alpha_\upsilon + \beta_\upsilon\, L_\upsilon(x_b - x_n) \qquad (7)$$
Previous studies of the requirements for dynamical stability of the learned sensory image in this system (Roberts and Bell 2000) suggest that the learning functions should be equivalent to the postsynaptic potential waveforms: the EPSP waveform for the excitatory synapses, Lw(xn) = E(xn), and the IPSP waveform for the inhibitory synapses, Lυ(xn) = I(xn). This form of the learning function will be relaxed in the simulations to test for measurable instabilities. The match between the temporal learning rule and the EPSP waveform means that the occurrence of a broad spike during the postsynaptic potential results in an associative weight change for that synapse.
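A sketch of one cycle of the weight updates (Eqs. 6 and 7) is given below. Summing the associative term over all broad spikes of a cycle and clipping the weights at saturation bounds are assumptions of this sketch rather than statements of the paper's exact implementation.

```python
import numpy as np

def update_weights(w, v, broad_spike_times, L_w, L_v,
                   alpha_w, beta_w, alpha_v, beta_v, w_max=1.0):
    """One EOD cycle of the temporal learning rules (Eqs. 6 and 7).
    Every excitatory weight receives the nonassociative term +alpha_w and
    every inhibitory weight -alpha_v; each broad spike at time x_b adds the
    associative terms -beta_w L_w(x_b - x_n) and +beta_v L_v(x_b - x_n)."""
    N = len(w)
    n = np.arange(N)
    dw = np.full(N, alpha_w)
    dv = np.full(N, -alpha_v)
    for xb in broad_spike_times:
        dw -= beta_w * L_w[(xb - n) % N]
        dv += beta_v * L_v[(xb - n) % N]
    # Clip at assumed saturation bounds (the weights saturate at their lowest
    # or highest values when the learning-rate ratios are unequal).
    return np.clip(w + dw, 0.0, w_max), np.clip(v + dv, 0.0, w_max)
```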

To make comparisons with experimental results on the rate of adaptation, it is essential to use realistic values for the parameters of the learning rules. Realistic values for the synaptic learning rates can be obtained from recent data that plot the time course of EPSP enhancement and depression in slice preparations for different delays between the pre- and postsynaptic stimulations (Han and Bell 1999). These data constrain the values of αw and βw, the learning rates for nonassociative enhancement and associative depression.


                              
Table 1. Model parameters used in simulations

The dynamics of the system were investigated by calculating the average weight change in a continuum approximation of the formalism presented above (cf. APPENDIX). The analysis was used to characterize the general dynamics of the system independent of exact parameter choices.

Computer simulations were used to illustrate the dynamics of the system and to test explicit examples of parameter choices. The above formalism was implemented in a custom software package that could generate the relevant variables and display the spike output of the MG cell (the simulation software can be obtained by anonymous FTP from reed.edu/ftp/reed/users/proberts). Edge effects were handled by applying periodic boundary conditions to the xn component. In the simulations, the noiseless membrane potential was computed for each EOD cycle. The weights were randomized in a uniform distribution within 4% of their mean value. The assignment of broad spikes during each time step following the command signal was based on the computed spike probability (Eq. 1) using a pseudo-random number generator. The synaptic weights were updated following each cycle as determined by the timing of the broad spikes and the learning rules, Eqs. 6 and 7.
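The pieces above can be stitched into a schematic simulation loop like the following. It assumes the noiseless_potential and update_weights helpers sketched earlier are in scope, and all waveforms, the stimulus, the learning rates, and the threshold parameters are placeholders rather than the values of Table 1.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 150
x = np.arange(N, dtype=float)
E = x * np.exp(-x / 5.0); E /= E.sum()            # stand-in EPSP waveform (unit area)
I = x * np.exp(-x / 8.0); I /= I.sum()            # stand-in IPSP waveform (unit area)
G = x * np.exp(-x / 2.0)                          # shunting conductance, G(x) = x exp(-x/2)
L_w, L_v = E, I                                   # learning functions matched to the PSPs
V_el = 0.3 * np.exp(-((x - 60.0) / 15.0) ** 2)    # stand-in "electrosensory image"
a_w = a_v = 1e-4                                  # nonassociative rates (illustrative)
b_w = b_v = 5e-3                                  # associative rates (illustrative)
theta, mu = 0.05, 30.0                            # broad-spike threshold and noise parameter

w = 0.5 + 0.02 * (rng.random(N) - 0.5)            # weights randomized near their mean
v = 0.5 + 0.02 * (rng.random(N) - 0.5)

for t in range(600):                              # slow time scale: EOD cycles
    delta = np.zeros(N, dtype=int)                # IPSPs locked to the EOD in this run
    V = noiseless_potential(w, v, E, I, G, delta, V_el, sigma=0.0)
    p = 1.0 / (1.0 + np.exp(-mu * (V - theta)))   # Eq. 1: broad-spike probability per 1-ms bin
    spikes = np.flatnonzero(rng.random(N) < p)    # stochastic broad spikes this cycle
    w, v = update_weights(w, v, spikes, L_w, L_v, a_w, b_w, a_v, b_v)
```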

A measure of the sensory image cancellation was required to compare different conditions and their effects on the system. We used the mean square contingency, χ²(t)/N, to obtain the difference between the membrane potential, V(xn, t), and the time average of V(xn, t) over the cycle length, V̄(t)
$$\frac{\chi^2(t)}{N} = \frac{1}{N} \sum_{n=1}^{N} \frac{[V(x_n, t) - \bar{V}(t)]^2}{\bar{V}(t)} \qquad (8)$$
where V̄(t) = (1/N) Σ_{n=1}^{N} V(xn, t), and N = 150 is the number of time steps in the simulated EOD cycle. Low values of χ²(t)/N indicate an average spike frequency that is nearly constant during the EOD cycle, indicative of sensory image cancellation by a negative image generated by the synaptic inputs of parallel fibers and stellate cells.
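In code, the cancellation measure of Eq. 8 is a one-line transcription; it could be evaluated once per EOD cycle in the loop above (it assumes a positive cycle-average potential, as in Eq. 8 itself).

```python
import numpy as np

def mean_square_contingency(V):
    """Eq. 8: chi^2(t)/N, the mean squared deviation of the noiseless membrane
    potential from its cycle average, normalized by that average; values near
    zero indicate a flat response, i.e., a cancelled sensory image."""
    V_bar = V.mean()
    return np.mean((V - V_bar) ** 2 / V_bar)
```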


    RESULTS

A spike response model (Gerstner and van Hemmen 1992) of a medium ganglion cell was used to determine the adaptive properties of its spike output due to synaptic plasticity. The amplitude of both EPSPs and IPSPs would change depending on the relative timing of pre- and postsynaptic spikes. The learning rules would drive the output broad spike frequency to an equilibrium level that is a function of the synaptic learning rates.

As shown in the APPENDIX, the final broad spike frequency, f̂, of the model neuron after adaptation takes place is given by the sum of the nonassociative learning rates divided by the sum of the associative rates
$$\hat{f} = \frac{\alpha_w + \alpha_\upsilon}{\beta_w + \beta_\upsilon} \qquad (9)$$
If the learning rates for excitatory and inhibitory synapses are equal (α = αw = αυ and β = βw = βυ), then this expression reduces to the ratio of the nonassociative learning rate to the associative learning rate (α/β), as derived previously (Roberts and Bell 2000) for plasticity at only excitatory synapses.

However, if the learning rates differ, then the averages of the weights continue to drift even though the broad spike probability has attained a constant value, f̂. The ensemble average change in the synaptic weights is (see APPENDIX)
$$\langle\Delta w(x_n, t)\rangle = \langle\Delta\upsilon(x_n, t)\rangle = \frac{\alpha_w \beta_\upsilon - \alpha_\upsilon \beta_w}{\beta_w + \beta_\upsilon} \qquad (10)$$
The excitatory synaptic weights drift at a rate equivalent to the drift of the inhibitory weights, confirming that the broad spike probability remains constant. The drift-rate expression implies that the synaptic weights saturate at their highest values if the ratio αw/βw is greater than the ratio αυ/βυ, and the weights saturate at their lowest values if αw/βw is less than αυ/βυ.
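A small numerical illustration of Eqs. 9 and 10, with made-up learning rates chosen so that αw/βw exceeds αυ/βυ:

```python
# Illustrative (not experimentally measured) learning rates.
a_w, b_w = 2e-4, 5e-3      # excitatory nonassociative and associative rates
a_v, b_v = 1e-4, 5e-3      # inhibitory rates; here alpha_w/beta_w > alpha_v/beta_v

f_hat = (a_w + a_v) / (b_w + b_v)                # Eq. 9: equilibrium broad-spike probability
drift = (a_w * b_v - a_v * b_w) / (b_w + b_v)    # Eq. 10: common drift of the mean weights per cycle
print(f_hat, drift)                              # 0.03 and 5e-05
# The positive drift means both weight populations creep upward toward
# saturation at their highest values while the spike probability stays at f_hat.
```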

Uncorrelated inhibitory synaptic input

If the stellate cells in the molecular layer do not fire in response to the parallel fibers that are time locked with the EOD, their input will be uncorrelated with the electric organ cycle. The first simulation investigates whether any measurable effects would result from synaptic plasticity of uncorrelated stellate inputs. The EPSPs arrive in a delayed series of adaptable inputs for up to 150 ms following the onset of the EOD. However, each of the 150 stellate cell-induced IPSPs arrives at a different time during each EOD cycle. The delay, δm(t), assigned to each IPSP is randomly distributed throughout the first 150 ms of each cycle, with one IPSP beginning at each time step. Because this delay is t-dependent, it changes with each EOD cycle, so the IPSPs are not correlated with the EOD.

In this case of randomly timed inhibitory inputs, the plasticity of inhibitory synapses adds no observable dynamics to the system other than contributing to the background noise. The rate of adaptation to changing sensory stimuli is the same, and the range of adaptability is the same.

If the plasticity is only at excitatory synapses, it can be shown that conditions must be imposed on the excitatory learning rule to ensure stability of the negative image. There must be a nonassociative enhancement component to the learning rule, and the associative depression must be close to the form of the EPSP waveform (Roberts 2000; Roberts and Bell 2000). If the inhibitory synaptic inputs arrive at random delays, then the same conditions on the excitatory learning rule apply as without inhibitory plasticity.

An interesting result of inhibitory plasticity with randomly timed inhibitory inputs is that saturation of the weights caused by synaptic drift (Eq. 10) distorts the negative image of the sensory pattern. The noiseless membrane potentials for two simulations are shown in Fig. 5A, where the solid line represents the results of a simulation at t = 600 in which the learning rates of the excitatory (parallel fiber) synapses were set equal to the learning rates of the inhibitory (stellate cell) synapses. The input of the parallel fibers plus stellate cells cancels the sensory input [χ²(t)/N = 1]. The dashed line in Fig. 5A shows the resulting noiseless membrane potential at t = 600, where αwβυ < αυβw. Here the weights have saturated at their lowest values (Fig. 5C) so that the inputs are unable to cancel the highest peak of the sensory image [χ²(t)/N = 63]. Thus synaptic plasticity at inhibitory synapses that have a random delay can be detrimental to the fidelity of the negative image generated by the parallel fiber inputs unless the learning rates are finely tuned.



Fig. 5. Randomly timed IPSPs: membrane potential and weights. A: the noiseless membrane potential generated by simulations following 600 cycles of adaptation to an "electrosensory stimulus" represented by the dotted trace. The solid trace [χ²(t)/N = 1] shows the noiseless membrane potential generated by the weight configuration of B. The dashed trace [χ²(t)/N = 63] is generated by the weight configuration of C. B: weight configuration following 600 cycles of adaptation where the ratios of the learning rates are equal. Excitatory synaptic weights are represented by the dotted trace, and inhibitory weights by the solid trace. The weights are labeled by the presynaptic spike time following the beginning of the EOD cycle. C: weight configuration following adaptation with unequal ratios of the learning rates.

Correlated inhibitory synaptic input

When inhibitory inputs are correlated with the EOD, in contrast to the uncorrelated condition considered in the previous section, plasticity at inhibitory synapses can contribute to the formation of a negative image. In particular, plasticity at inhibitory synapses allows the sum of IPSPs to complement the contribution of the EPSPs when the weights of excitatory inputs are saturated.

The results of two simulations demonstrating this phenomenon are shown in Fig. 6. The noiseless membrane potential is shown by the two horizontal traces in Fig. 6A. In the first simulation (weights shown in B) the ratio of learning rates is αw/βw < αυ/βυ. After 400 cycles, the inhibitory synaptic weights are reduced to their lowest values except for an interval between 60 and 85 ms following the command signal. It is during this interval that the IPSPs contribute to the total membrane potential during the depolarizing sensory input (the peak of the dotted trace in Fig. 6A). This interval of increased inhibitory current subtracts the residue to form a negative image that the excitatory current cannot produce because its weights are saturated at their zero level. During the remainder of the EOD cycle, the excitatory inputs adjust to cancel the sensory image. This effect is independent of the starting conditions for the weights. One possible advantage of this saturation effect would be to minimize the synaptic output required to generate a negative image, thereby reducing the use of synaptic resources, such as neurotransmitters.



Fig. 6. Saturation of synaptic weights. A: in this simulation, the IPSPs are a series of delayed inputs that are correlated with the beginning of each cycle. The dotted trace represents the "electrosensory stimulus" that is paired with the delayed series of EPSPs and IPSPs. The solid trace [χ²(t)/N = 1] shows the noiseless membrane potential generated by the weight configuration of B. The dashed trace [χ²(t)/N = 1] is generated by the weight configuration of C. B: weight configuration following 400 cycles of adaptation with αwβυ < αυβw. The inhibitory weights (---) are reduced to their lowest values except where the excitatory weights (···) are saturated. C: weight configuration following 400 cycles of adaptation with αwβυ > αυβw.

The second simulation shows the result of setting the ratio of inhibitory learning rates less than that of the excitatory rates, αυ/βυ < αw/βw. In this case, the weights saturate near their greatest values (Fig. 6C). The inhibitory synaptic weights are reduced during the interval where they contribute to canceling the hyperpolarizing sensory input. Under these conditions, the system maximizes its use of synaptic resources.

Range of adaptability

Inhibitory plasticity introduces adaptable postsynaptic potentials that can allow the neuron to generate a negative image to cancel a much broader range of sensory input intensity. This is seen analytically in the added term of the summation over IPSPs (Eq. 5). The first two terms on the right-hand side must combine to level the variations of the sensory image, Vel(xn), over xn. Inhibitory plasticity allows the weights of the IPSPs, υ(xn, t), to adapt so that hyperpolarizing regions of Vel(xn) can be canceled for higher peaks in the sensory image. Since there are more inputs to adjust through synaptic plasticity, a greater range of input intensities can be canceled. Although the adaptive range to hyperpolarizing sensory input could be increased with the addition of more excitatory inputs, the increased range of adaptation to depolarizing sensory input requires plasticity at inhibitory synapses.

Two simulations depicted in Fig. 7 demonstrate the increased range of adaptability. The first simulation increased the gain of the sensory input, Vel(xn) (···, Fig. 7A). The system could not adapt to the large stimulus gain. The total membrane potential is seen to deviate from a flat line (- - -, Fig. 7A). Under these conditions, the cancellation of the sensory input by the molecular layer inputs is incomplete. The range of the system's adaptability is limited because the inhibitory weights were constant in t. Saturation of the excitatory synaptic weights can be seen in Fig. 7B.



Fig. 7. Range of adaptability. A: the "electrosensory stimulus" (- - -) has been increased by a factor of 7/4 compared with the previous simulations (···). The curved solid trace [χ²(t)/N = 321] shows the noiseless membrane potential generated by the weight configuration of B. The flat solid trace [χ²(t)/N = 2] is generated by the weight configuration of C. B: weight configuration following 400 cycles of adaptation without plasticity at inhibitory inputs (---). The excitatory weights (···) are saturated. C: weight configuration following 400 cycles of adaptation with inhibitory inputs correlated with the EOD cycle.

The second simulation is run with the larger gain, but the IPSPs are now plastic and correlated with the EOD cycle and EPSPs (Fig. 7C). Here the inhibitory inputs are able to contribute to the formation of a negative image so that the total membrane potential is nearly constant during the EOD cycle (Fig. 7A, solid trace).

Rate of adaptation

The system level rate of adaptation measures the time it takes for an abrupt change in the predictable sensory image to be canceled by the generation of a negative image. As derived in the APPENDIX, the rate at which deviations in the membrane potential flatten is a monotonically increasing function of both excitatory and inhibitory learning rates (Eq. A16). If a is the ratio of inhibitory learning rates to excitatory learning rates (a = αυ/αw = βυ/βw), plasticity at inhibitory synapses increases the rate of adaptation by a factor of (1 + a).
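Written out, the relation between the decay constants implied by this statement (and consistent with the τE = 2τE+I prediction discussed below) is:

```latex
% a = alpha_upsilon / alpha_w = beta_upsilon / beta_w
\tau_{E+I} \;=\; \frac{\tau_{E}}{1 + a},
\qquad \text{so for equal excitatory and inhibitory rates } (a = 1):\quad
\tau_{E} = 2\,\tau_{E+I}.
```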

A simulation was run with the inhibitory learning rates, αυ and βυ, set equal to the excitatory learning rates based on the physiological values for excitatory plasticity (Han and Bell 1999). The value of χ²(t)/N is plotted in Fig. 8A for three conditions of the inhibitory synapses: no plasticity (···), plasticity according to the learning rule in Fig. 2B but with random timing with respect to the EOD (---), and plasticity with the timing in a series of delays following the beginning of the EOD (- - -). The adaptation time course for the serially delayed plastic inhibitory synapses is considerably shorter than for the other two schemes. Fitting an exponential curve [A + B exp(-t/τ), where A is an offset parameter, B is an overall scale factor, and τ is a decay constant] to the plot, we find that the decay constant for the simulation with excitatory plasticity only is τE = 641 cycles. For the serially delayed, inhibitory plasticity simulation the decay constant is τE+I = 168 cycles.
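The exponential fit can be reproduced with a standard least-squares routine; the sketch below uses SciPy's curve_fit on a synthetic trace, since the actual per-cycle χ²(t)/N values come from the simulation runs.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, A, B, tau):
    """A + B exp(-t/tau): offset A, overall scale B, decay constant tau (in EOD cycles)."""
    return A + B * np.exp(-t / tau)

# Synthetic chi^2(t)/N trace for illustration; replace with the values
# recorded once per EOD cycle from a simulation run.
cycles = np.arange(2000, dtype=float)
trace = 2.0 + 80.0 * np.exp(-cycles / 300.0)
(A_fit, B_fit, tau_fit), _ = curve_fit(exp_decay, cycles, trace, p0=(1.0, 50.0, 200.0))
# tau_fit recovers ~300 cycles here; the simulations described above gave
# 641 cycles (excitatory plasticity only) and 168 cycles (with correlated
# inhibitory plasticity).
```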



Fig. 8. Rate of adaptation. A: the mean square contingency, χ²(t)/N, measures the deviation of the noiseless membrane potential from a constant during each cycle, and thus measures the progress of adaptation over many cycles. The solid (noisy) trace shows the progress of adaptation in a simulation where the inhibitory inputs are plastic but randomly timed with respect to the EOD cycle. The dotted trace shows the progress with no inhibitory plasticity. The (faster adapting) dashed trace shows the result of both inhibitory plasticity and IPSPs correlated with the EOD cycle. B: data from 2 experiments in vivo. The solid lines show the response to the command signal measured by taking the average number of spikes between 20 and 60 ms after the EOD command and subtracting the average number of spikes between 60 and 100 ms. The dashed traces are best-fit exponential curves discussed in the text. Figure modified from Bell (1982).

These decay constants can be converted into the rate of adaptation in the ELL by considering that in preparations in vivo, spontaneous electric organ discharges occur at intervals of 150-400 ms. Thus the ranges of decay constant values predicted by our simulations are τE = 1.6-4.3 min and τE+I = 0.4-1.2 min. The adaptation rate measured by the difference in spike rate between the pause and burst phases of the electric organ cycle is plotted in Fig. 8B. Although this is not the same method of measuring the deviation from a constant spike rate as our χ²(t)/N analysis, the rates are comparable because the two measures differ only in an overall scale factor and offset parameter. Fitting these graphs to an exponential curve yields the decay constants τexp1 = 0.9 min and τexp2 = 0.5 min. Only the simulation with a series of delayed synapses and inhibitory plasticity has a range of τ values that is consistent with these data.
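The conversion from cycles to minutes is simple arithmetic, shown here for the two fitted decay constants and the 150-400 ms interval range:

```python
# Convert fitted decay constants from EOD cycles to minutes, using the
# 150-400 ms inter-discharge intervals quoted for preparations in vivo.
for label, tau_cycles in (("excitatory only", 641.0), ("excitatory + inhibitory", 168.0)):
    lo_min = tau_cycles * 0.150 / 60.0    # minutes at 150-ms intervals
    hi_min = tau_cycles * 0.400 / 60.0    # minutes at 400-ms intervals
    print(f"{label}: {lo_min:.1f}-{hi_min:.1f} min")
# -> excitatory only: 1.6-4.3 min; excitatory + inhibitory: 0.4-1.1 min
#    (compare the 1.6-4.3 and 0.4-1.2 min ranges quoted above)
```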

However, there is a discrepancy between how much the decay constant is reduced by inhibitory plasticity as predicted by the analysis presented in the APPENDIX and how much it is reduced in the simulation. The analysis predicts that τE = 2τE+I, but the exponential fit of the simulation yields τE = 3.8τE+I. The reason for this difference is that the analysis linearized the equation for synaptic change by expanding the broad spike probability near the (constant) equilibrium level (Eq. A10). Thus we can only expect the analysis to be accurate when the system is near the constant spike probability. If we fit only the regions of the graph in Fig. 8A where the mean square contingency χ²(t)/N ≤ 40, then we find the relationship between the decay constants to be τE = 2.1τE+I, bringing the analysis into close agreement with the simulation.

Another important result that follows from calculations of the decay constant, τ, is an analysis of instabilities in the learning dynamics. If the associative depression learning function does not closely resemble the postsynaptic potential, then oscillations can develop in the spike activity that interfere with the generation of a negative image (Roberts and Bell 2000). We find this to be true of the learning rules for both the excitatory synapses and the inhibitory synapses. Analytically, instabilities appear if the real part of the inverse decay constant becomes negative [Re(1/τ) < 0]. We have also used the simulations to test several timing relations for the pairing of parallel fiber spikes and postsynaptic broad spikes. Simulations were run for 4,000 EOD cycles, long enough for unstable oscillations to develop. The window of associative depression was shifted to different delays from the beginning of the EPSP for each simulation. Instabilities developed for shifts outside the range from -9 to 12 ms. These simulations have confirmed that very few learning rules are stable. Thus if there is inhibitory plasticity in this system, the model predicts that only a narrow range of learning rules will replicate the results of experiments in vivo.

Shunting inhibition

Since inhibitory synapses are able to shunt depolarizing currents, we used the model to determine whether any new dynamics were introduced by such nonlinear inhibition. Simulations were run for all of the above results with the excitatory weights reduced by the inhibitory shunting as described by Eq. 4. The results indicate that no new dynamics were introduced by the addition of this nonlinear form of inhibition. No instabilities developed, and the rate of adaptation was unchanged.

The main result is that the effective strength of the inhibitory inputs was increased because they not only reduced the membrane potential by subtracting the IPSPs, but also reduced the weight of the EPSPs. When the series of adaptable IPSPs was correlated with the EOD cycle, the system generated a stable negative image to cancel the sensory input (Fig. 9, A and B). Because of the increased strength of the IPSPs due to shunting, the larger depolarizing actions of the sensory image could be effectively canceled.



Fig. 9. Effects of shunting inhibition. A: the noiseless membrane potentials where the weights of the excitatory inputs are diminished by "currents" induced by the inhibitory synapses. The solid trace [χ²(t)/N = 3] shows the noiseless membrane potential generated by the weight configuration of B. The dashed trace [χ²(t)/N = 15] is generated by the weight configuration of C. The dotted trace represents the electrosensory input. B: weight configuration following 400 cycles of adaptation with inhibitory inputs correlated with the EOD cycle (excitatory weights are represented by the dotted trace and inhibitory weights by the solid trace). C: weight configuration following 400 cycles of adaptation with inhibitory inputs randomly timed with respect to the EOD cycle.

If the IPSPs were uncorrelated with the EOD so that each IPSP began at a random delay following the beginning of the cycle, then the excitatory inputs were unable to cancel the sensory stimuli without saturating, as shown in Fig. 9C. Except for the shunting effects, this simulation used the same parameter settings as the run that generated the data for Fig. 5. Thus the shunting inhibition would require a greater range of the excitatory synaptic weights to cancel the same magnitude of sensory stimuli.


    DISCUSSION

Summary of results

The results presented here lend support to the hypothesis that the inhibitory synapses from stellate cells to the medium ganglion cells of the ELL exhibit a form of plasticity that depends on the timing of the pre- and postsynaptic spikes. These results follow only if there are inhibitory inputs that are correlated with the EOD cycle in a series of delays following the discharge. In simulations that include these inhibitory inputs along with experimentally based learning rates for synaptic plasticity, the system level adaptation to a change in sensory stimuli occurs at a rate comparable to the rate measured in experiments in vivo.

The reason that such a simple model can accurately predict the system level rate of adaptation is that the learning dynamics depend primarily on the synaptic learning rates and the timing of broad spikes during each EOD cycle. The complex internal dynamics of MG cells do not contribute prominently to the learning dynamics on the relevant time scale of 10-100 ms, except to ensure that a few broad spikes appear every cycle at a rate that increases with depolarization.

Our results show advantages to having plasticity at both excitatory and inhibitory synapses. Advantages include an increased rate of adaptation and an ability to adapt to a wider range of stimulus intensities. These results of the model could be tested experimentally by blocking inhibition in the ELL and measuring the rate and range of adaptation of MG cells to changing electrosensory stimuli. In addition, the combination of excitatory plus inhibitory plasticity can provide a means of regulating the overall synaptic current injected into the apical dendrites by taking advantage of the "drift" of the synaptic weights when the ratio of the excitatory learning rates (αw/βw) is less than the ratio of the inhibitory learning rates (αυ/βυ). Under these conditions the injected current will be reduced to the lowest level that is still capable of sculpting a negative image to cancel the predictable sensory input. The actual ratios of learning rates have not been measured experimentally for inhibition, but one would not expect the values for excitation and inhibition to match exactly.

The introduction of plasticity at inhibitory synapses increases the number of storage sites for learning. Thus the computational capacity is expanded by allowing temporal patterns to be encoded and stored in the strengths of inhibitory as well as excitatory synapses (Kano 1996). In addition, plasticity at inhibitory synapses provides wider-ranging control of postsynaptic neuronal activity. We have shown this with our model by the expanded range of adaptability acquired using inhibitory plasticity.

The benefits of inhibitory plasticity can only be reaped if the inhibitory inputs are correlated with the EOD in a series of delays. In fact, if the inhibitory inputs are not correlated with the EOD, and the learning rate ratios are not perfectly equal, then inhibitory plasticity would reduce the effectiveness of the sensory image cancellation. In addition, there is no increased rate of adaptation, and the range of adaptation is actually reduced relative to what it would be without inhibitory synaptic plasticity, particularly if shunting inhibition is present in the model.

The study of uncorrelated inhibitory input reveals a situation where there is a gradual decay of inhibitory synaptic strength that is counteracted by randomly distributed broad spikes relative to stellate cell spikes. As seen in Fig. 5, this has the effect of normalizing the inhibitory input.

The treatment of shunting inhibition in this study did not introduce any new dynamics to the model. The shunting inhibition only increased the contribution of inhibitory input relative to that of excitatory input. With shunting inhibition there is not only the linear contribution of the weighted sum of IPSPs, but also a divisive effect (Carandini and Heeger 1994) due to the reduction of the EPSPs by a multiplicative factor. However, because the stellate cells are distributed diffusely throughout the molecular layer, they are excited by parallel fibers that also excite the medium ganglion cells. Thus in this model the nonlinearity of the shunting is proportional to the linear effects of inhibition, and no marked change in system dynamics is observed.

Further research

The present model represents the activity and adaptation of a single MG cell in the ELL. However, the ELL is a cortical structure with complicated interconnections between the resident neurons. Physiological and morphological studies (Grant et al. 1998; Han et al. 1999) have suggested a basic modular structure with excitatory (E) modules and inhibitory (I) modules. The efferent neurons of the E-modules are excited by electrosensory stimuli in the center of their receptive fields, and the efferent neurons of the I-modules are inhibited by stimuli in the center of their receptive fields. Recent anatomical studies (Han et al. 1999) suggest that the MG cells of each of these modules are synaptically interconnected, thus inhibiting each other. One limitation of the present model is that such mutual inhibition and other circuitry features have not been included.

One possible explanation of the data on inhibitory plasticity in the ELL that has not been addressed in this model is that plasticity at the synapse from parallel fibers onto stellate cells could be responsible for the apparent plasticity of IPSPs. This type of plasticity has been observed in the hippocampus (Fortunato et al. 1996; Gupta et al. 2000). Slice experiments could be used in the ELL to isolate the inhibitory plasticity to the synapse from stellate cells onto MG cells. A pairing paradigm similar to that used for excitatory plasticity (Bell et al. 1997c) could measure the change of IPSPs with glutamate blockers in the bath. The presynaptic stimulation in the molecular layer would have to be strong enough to elicit an IPSP in the MG cell. This type of experiment could show whether inhibitory plasticity exists in this system. However, it would be very difficult to eliminate the possibility that there is also plasticity at the synapse from parallel fibers onto stellate cells in vivo.

There is a theoretical argument against the relevance of this latter type of plasticity to the effect investigated here: the adaptation of MG cell responses to changes in predictable electrosensory stimuli. The learning rules investigated here are triggered by the timing of broad spikes in MG cells, and the broad spikes are the carriers of information about the electrosensory stimuli. For the synapses from parallel fibers onto stellate cells to change in concert with the synapses onto MG cells, one would need to hypothesize another information pathway to signal the stellate synapse about the predictable aspects of electrosensory stimuli. Although it is possible that such a pathway exists, it would not lead to a parsimonious description of the system dynamics given the known anatomy.

Another possible limitation of the present model is the assumption that the stellate cells fire only once per EOD cycle. No recordings of identified stellate cells have been made to support this assumption. As shown in RESULTS, completely uncorrelated synapses tend to adjust to a level that contributes proportionally to the equilibrium broad spike frequency. If there were several uncorrelated stellate cell spikes per cycle, but one spike per cycle occurred consistently at the same delay following the EOD, then that one spike would be able to drive the synaptic input to cancel the predicted sensory pattern. The present model tested two extreme conditions: stellate cells fire either perfectly correlated with the EOD or completely uncorrelated. The true timing of stellate cells with respect to the EOD most likely lies somewhere between these extremes.

The relevance of the spike timing of stellate cells becomes more apparent when one considers the responsiveness of stellate cells to parallel fiber spikes compared with that of MG cells. Although data are not available for the mormyrid ELL, some indication appears in the gymnotiform ELL (Berman and Maler 1998) and the mammalian cerebellum (Barbour 1989) that stellate cells in the molecular layer are much more sensitive to parallel fiber spikes than the principal neurons they inhibit (pyramidal cells in the gymnotiform ELL and Purkinje cells in the cerebellum). The difference in responsiveness could result from stellate cells being more electrotonically compact than the principal cells, a condition that would generalize to the mormyrid ELL. Granule cells that give rise to parallel fibers receive input from many sources besides corollary discharge signals, so it is likely that parallel fibers, and therefore stellate cells, are active even when there is no EOD. The present model restricts the activity of stellate cells to the first 150 ms following the EOD. In the absence of MG cell broad spikes, the stellate cell synapses onto MG cells would be depressed to their lowest possible levels by these asynchronous stellate spikes. However, the MG cells that are driven by ampullary electroreceptor afferents would respond with randomly timed broad spikes. In analogy with the uncorrelated stellate cell spikes (Fig. 5), the learning rule acts to normalize the inhibitory inputs to a constant broad spike output.

An extension of the model that was studied for excitatory plasticity (Roberts and Bell 2000) is the effect of different temporal learning rules on the dynamics of sensory adaptation. In contrast to the excitatory learning rule, the learning rule used for the inhibitory synapses was not measured experimentally but was hypothesized to be the inverse of the temporal learning rule governing excitatory synapses. The same conclusion reached in the previous modeling study of excitatory synaptic plasticity alone (Roberts and Bell 2000) applies here: only a near match between the postsynaptic potential and the learning function can lead to a stable negative image. That is, if the postsynaptic potential is excitatory, then the associative component of the learning rule must depress the synapse and have a time course that closely matches the EPSP. This is the theoretical reason for our choice of an associative component for the inhibitory learning rule that enhances the inhibitory synapses by an amount proportional to the IPSP.

In simulations where other temporal learning rules were used at either the excitatory or inhibitory synapses, oscillations developed that prevented the cancellation of the predictable sensory signal. In addition, to generate a negative image, the nonassociative component must have the same sign as the contribution of the postsynaptic potential: enhancement for the excitatory synapses and depression for the inhibitory synapses. Thus the present modeling study not only suggests the existence of synaptic plasticity at inhibitory synapses from stellate cells onto MG cells, but also predicts the temporal form of the learning rule that changes the synaptic efficacy depending on the exact timing between the pre- and postsynaptic spikes.
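A minimal sketch of the hypothesized temporal learning rules is given below; the time constants, amplitudes, and alpha-function shape are illustrative assumptions (see the APPENDIX), with the sign conventions predicted above: associative depression matching the EPSP at excitatory synapses and associative enhancement matching the IPSP at inhibitory synapses, paired with non-associative changes of the opposite sign.

```python
import numpy as np

def alpha_fn(t, rate):
    """Normalized alpha function, rate**2 * t * exp(-rate * t), for t >= 0."""
    return np.where(t >= 0.0, rate**2 * t * np.exp(-rate * t), 0.0)

# Illustrative parameters (not measured values).
E_rate, I_rate = 0.1, 0.1        # EPSP and IPSP decay parameters (1/ms)
alpha_w, beta_w = 0.001, 0.020   # excitatory non-associative, associative rates
alpha_v, beta_v = 0.001, 0.020   # inhibitory non-associative, associative rates

# Delay of the postsynaptic broad spike after the presynaptic spike (ms).
t = np.linspace(0.0, 150.0, 1501)

# Associative components of the temporal learning rules: excitatory synapses
# are depressed with the time course of the EPSP; inhibitory synapses are
# enhanced with the time course of the IPSP.
dw_associative = -beta_w * alpha_fn(t, E_rate)
dv_associative = +beta_v * alpha_fn(t, I_rate)

# Non-associative components applied at every presynaptic spike:
# enhancement for excitatory synapses, depression for inhibitory synapses.
dw_nonassociative = +alpha_w
dv_nonassociative = -alpha_v
```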


    APPENDIX

In this APPENDIX we derive the analytic results reported in RESULTS. The equilibrium spike probability can be calculated by considering the situation in which the noiseless membrane potential is constant in t, so that $\Delta V(x_n, t) = 0$. From the definition of the membrane potential (Eq. 5), the only quantities that change as a function of t are the synaptic weights. Thus the membrane potential is stationary when
\[
\sum_{m=1}^{N} \langle\Delta w(x_m, t)\rangle\, E(x_n - x_m) \;-\; \sum_{m=1}^{N} \langle\Delta\upsilon(x_m, t)\rangle\, I(x_n - x_m) \;=\; 0 \tag{A1}
\]
The ensemble average (denoted by the angle brackets, $\langle\,\rangle$) of the weight change is found by averaging over the probability of the occurrence of a broad spike, $f(x_p, t)$, at each time following the EOD
\[
\langle\Delta w(x_n, t)\rangle \;=\; \alpha_w \;-\; \beta_w \sum_{p=1}^{N} L_w(x_p - x_n)\, f(x_p, t) \tag{A2}
\]
and similarly for the inhibitory synapses
\[
\langle\Delta\upsilon(x_n, t)\rangle \;=\; -\alpha_\upsilon \;+\; \beta_\upsilon \sum_{p=1}^{N} L_\upsilon(x_p - x_n)\, f(x_p, t) \tag{A3}
\]
where the broad spike probability $f(x_p, t)$ is defined in Eq. 1. The learning rates, $\alpha_w$, $\beta_w$, $\alpha_\upsilon$, and $\beta_\upsilon$, are nonnegative real numbers, and the learning functions, $L_w(x_n)$ and $L_\upsilon(x_n)$, represent the amount of associative change for different delays between the pre- and postsynaptic spikes (see Eqs. 6 and 7). In the analysis of this study, the learning functions $L_w(x_n)$ and $L_\upsilon(x_n)$ are equivalent to the EPSP and IPSP waveforms, respectively.

As has been shown previously (Roberts and Bell 2000), the stationary membrane potential implies that the broad spike probability is constant in both x and t, $f(x, t) = \hat f$. Substituting into Eq. A1 the expressions for the average change in synaptic weights per cycle (Eqs. A2 and A3), we arrive at the condition
\[
\sum_{m=1}^{N} \left[\alpha_w - \beta_w \hat f \sum_{p=1}^{N} L_w(x_p - x_n)\right] E(x_n - x_m) \;-\; \sum_{m=1}^{N} \left[-\alpha_\upsilon + \beta_\upsilon \hat f \sum_{p=1}^{N} L_\upsilon(x_p - x_n)\right] I(x_n - x_m) \;=\; 0 \tag{A4}
\]
Since the learning functions and the postsynaptic potential waveforms have been normalized to unity, the summations drop out and we arrive at
\[
\alpha_w - \beta_w \hat f + \alpha_\upsilon - \beta_\upsilon \hat f = 0 \tag{A5}
\]
which can be solved for the constant broad spike probability
\[
\hat f = \frac{\alpha_w + \alpha_\upsilon}{\beta_w + \beta_\upsilon} \tag{A6}
\]
If there is no plasticity at inhibitory synapses, $\alpha_\upsilon = \beta_\upsilon = 0$, then the result obtained for excitatory plasticity alone is recovered, $\hat f = \alpha_w/\beta_w$. This equilibrium broad spike probability is also obtained if the ratio of the non-associative learning rate to the associative learning rate is the same for the excitatory as for the inhibitory synapses, $\alpha_w/\beta_w = \alpha_\upsilon/\beta_\upsilon$.
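A quick numerical check of Eq. A6, using hypothetical learning-rate values with equal non-associative/associative ratios:

```python
# Hypothetical learning rates with equal non-associative/associative ratios.
alpha_w, beta_w = 0.001, 0.020
alpha_v, beta_v = 0.002, 0.040   # same ratio as the excitatory synapses

f_hat = (alpha_w + alpha_v) / (beta_w + beta_v)   # Eq. A6
f_hat_excitatory_only = alpha_w / beta_w          # alpha_v = beta_v = 0

assert abs(f_hat - f_hat_excitatory_only) < 1e-12
print(f"equilibrium broad spike probability: {f_hat:.3f}")
```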

If these ratios are not equal, then by substituting $\hat f$ into the expressions for the average weight changes (Eqs. A2 and A3), we find
\[
\langle\Delta w(x_m, t)\rangle = \langle\Delta\upsilon(x_m, t)\rangle = \frac{\alpha_w \beta_\upsilon - \alpha_\upsilon \beta_w}{\beta_w + \beta_\upsilon} \tag{A7}
\]
This expression implies that the excitatory and inhibitory weights change at the same rate while the spike frequency remains constant. The rate of this "drift" is determined by the difference between the ratios of the learning rates.

The rate of adaptation for the system is measured by the time it takes to approach the equilibrium broad spike frequency. The time constant, $\tau$, associated with this rate can be calculated from the change in the membrane potential per cycle
\[
\Delta V(x_n, t) = \sum_{m=1}^{N} \langle\Delta w(x_m, t)\rangle\, E(x_n - x_m) \;-\; \sum_{m=1}^{N} \langle\Delta\upsilon(x_m, t)\rangle\, I(x_n - x_m) \tag{A8}
\]
This expression follows from the definition of the membrane potential (Eq. 5), using the fact that only the t-dependent factors change from cycle to cycle. Substituting the expressions for the average change in synaptic weights, we arrive at
\[
\Delta V(x_n, t) = \alpha_w + \alpha_\upsilon - \beta_w \sum_{m=1}^{N}\sum_{p=1}^{N} f(x_p, t)\, E(x_p - x_m)\, E(x_n - x_m) \;-\; \beta_\upsilon \sum_{m=1}^{N}\sum_{p=1}^{N} f(x_p, t)\, I(x_p - x_m)\, I(x_n - x_m) \tag{A9}
\]
Note that the nonassociative learning rate for the inhibitory synapses increases the membrane potential because the rate represents a decrease of the inhibitory input per cycle.

The rate of change of the membrane potential can be calculated by expanding the broad spike probability function, $f(x_n, t)$, about the equilibrium value, $\hat f$. At the equilibrium value, the noiseless membrane potential is defined to be $\bar U$, so that $\hat f = \{1 + \exp[-\mu(\bar U - \theta)]\}^{-1}$. The first two terms of the Taylor expansion of the broad spike probability function near equilibrium are
\[
f(x_n, t) = \hat f + \left.\frac{\partial f(x_n, t)}{\partial V(x_n, t)}\right|_{V = \bar U} [V(x_n, t) - \bar U] + \cdots
= \hat f + \mu(\hat f - \hat f^2)[V(x_n, t) - \bar U] + \cdots \tag{A10}
\]
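The second line follows from the identity $\partial f/\partial V = \mu(f - f^2)$ for the logistic spike probability; a quick finite-difference check (with arbitrary parameter values) is:

```python
import math

mu, theta = 0.5, 1.0    # arbitrary slope and threshold, for illustration only
f = lambda V: 1.0 / (1.0 + math.exp(-mu * (V - theta)))

U_bar, h = 1.3, 1e-6
numerical = (f(U_bar + h) - f(U_bar - h)) / (2.0 * h)   # central difference
analytic = mu * (f(U_bar) - f(U_bar) ** 2)              # mu * (f - f^2)

assert abs(numerical - analytic) < 1e-8
```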
Substituting the first two terms into Eq. A9 yields
\[
\Delta V(x_n, t) = -\mu(\hat f - \hat f^2)\left[\beta_w \sum_{m=1}^{N}\sum_{p=1}^{N} V(x_p, t)\, E(x_p - x_m)\, E(x_n - x_m) + \beta_\upsilon \sum_{m=1}^{N}\sum_{p=1}^{N} V(x_p, t)\, I(x_p - x_m)\, I(x_n - x_m) - (\beta_w + \beta_\upsilon)\bar U\right] \tag{A11}
\]
Next we approximate the membrane potential by its deviation from the equilibrium level and choose solutions of the difference equation in the form of decaying oscillations in the x-component
\[
V(x_n, t) = \bar U + e^{ikx_n} e^{-t/\tau} \tag{A12}
\]
In the continuum limit of the t-component, the change of the membrane potential becomes a differential
\[
\Delta V(x_n, t) \;\rightarrow\; \frac{d}{dt} V(x_n, t) = -\frac{1}{\tau}\, e^{ikx_n} e^{-t/\tau} \tag{A13}
\]
Approximating the sums in Eq. A11 by integrals and changing variables, $y = x_p - x_m$ and $z = x_p - x_n$, we arrive at an expression for the rate of adaptation
\[
\frac{1}{\tau} = \mu(\hat f - \hat f^2)\left[\beta_w \int_0^\infty dy \int_0^y dz\, e^{ikz}\, E(y)\, E(y - z) + \beta_\upsilon \int_0^\infty dy \int_0^y dz\, e^{ikz}\, I(y)\, I(y - z)\right] \tag{A14}
\]
Let the EPSP and IPSP waveforms be defined by normalized alpha functions, $E(x) = E^2 x\, e^{-Ex}$ and $I(x) = I^2 x\, e^{-Ix}$, where E and I also denote the respective decay parameters.
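As a quick check of the normalization (with an arbitrary decay parameter), the alpha function integrates to unity:

```python
import numpy as np

E_rate = 0.1                                    # arbitrary decay parameter (1/ms)
x = np.linspace(0.0, 400.0, 40001)              # grid long enough to capture the tail
epsp = E_rate**2 * x * np.exp(-E_rate * x)      # E(x) = E^2 x exp(-Ex)

dx = x[1] - x[0]
print(np.sum(epsp) * dx)                        # ~1.0, confirming unit normalization
```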

If the real part of the decay rate, $1/\tau$, is positive, then the oscillatory modes will not grow and the membrane potential will reach an equilibrium value that is constant in x. On evaluating the integrals in Eq. A14, we find that the real part of the decay rate is
\[
\mathrm{Re}\!\left(\frac{1}{\tau}\right) = \mu(\hat f - \hat f^2)\left[\frac{\beta_w E^4}{2(E^2 + k^2)^2} + \frac{\beta_\upsilon I^4}{2(I^2 + k^2)^2}\right] \tag{A15}
\]
This expression may be simplified by allowing the IPSP time constant to equal the EPSP time constant, I = E. Substituting in the solution for the equilibrium broad spike probability, we have
\[
\mathrm{Re}\!\left(\frac{1}{\tau}\right) = \mu\, \frac{(\alpha_w + \alpha_\upsilon)(\beta_w + \beta_\upsilon) - (\alpha_w + \alpha_\upsilon)^2}{\beta_w + \beta_\upsilon}\left[\frac{E^4}{2(E^2 + k^2)^2}\right] \tag{A16}
\]
This expression is positive for all $\alpha_w < \beta_w$ and $\alpha_\upsilon < \beta_\upsilon$, and it is a monotonically increasing function of both $\alpha_\upsilon$ and $\beta_\upsilon$ for $2(\alpha_w + \alpha_\upsilon) < (\beta_w + \beta_\upsilon)$. Typically, $\alpha_w \ll \beta_w$ (Han and Bell 1999). If we assume a similar proportionality between $\alpha_\upsilon$ and $\beta_\upsilon$, then plasticity at inhibitory synapses increases the rate of adaptation by decreasing the real part of the time constant $\tau$.

To calculate how much the rate of adaptation is increased by plasticity at inhibitory synapses, let $\alpha = \alpha_w = \alpha_\upsilon/a$ and $\beta = \beta_w = \beta_\upsilon/a$, where a is a constant real number. Using these values, we simplify the expression for the decay rate (Eq. A16). If E is the decay parameter for the EPSP waveform, and k is the frequency of the decaying mode, we find that
\[
\mathrm{Re}\!\left(\frac{1}{\tau}\right) = \mu(1 + a)\, \frac{\alpha(\beta - \alpha)}{\beta}\left[\frac{E^4}{2(E^2 + k^2)^2}\right] \tag{A17}
\]
where $\mathrm{Re}(1/\tau)$ is the real part of $1/\tau$. Thus the time constant, $\tau$, for adaptation decreases by a factor of $(1 + a)^{-1}$ with the addition of plasticity at inhibitory synapses.
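For example, evaluating Eq. A17 numerically with and without inhibitory plasticity (all parameter values below are hypothetical) shows the expected speedup:

```python
mu = 0.5                     # slope of the spike probability function (arbitrary)
alpha, beta = 0.001, 0.020   # hypothetical excitatory learning rates
E_rate, k = 0.1, 0.05        # EPSP decay parameter and mode frequency (arbitrary)

def adaptation_rate(a):
    """Re(1/tau) from Eq. A17; a scales the inhibitory learning rates."""
    waveform_factor = E_rate**4 / (2.0 * (E_rate**2 + k**2) ** 2)
    return mu * (1.0 + a) * alpha * (beta - alpha) / beta * waveform_factor

rate_excitatory_only = adaptation_rate(a=0.0)   # no inhibitory plasticity
rate_both = adaptation_rate(a=1.0)              # inhibitory rates equal to excitatory rates

# With a = 1 the adaptation rate doubles, so the time constant tau is halved.
print(rate_both / rate_excitatory_only)         # -> 2.0
```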


    ACKNOWLEDGMENTS

The author thanks G. McCollum, G. Magnus, V. Han, and C. Bell for discussions and helpful suggestions on the manuscript.

This research was supported in part by National Science Foundation Grant IBN 98-08887.


    FOOTNOTES

Address for reprint requests: Neurological Sciences Institute, OHSU, 1120 N.W. 20th Ave., Portland, OR 97209 (E-mail: proberts{at}reed.edu).

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Received 13 March 2000; accepted in final form 20 July 2000.


    REFERENCES
