Ringo, Doty, Demeter and Simard, Cerebral Cortex 1994;4:331-343: A Proof of the Need for the Spatial Clustering of Interneuronal Connections to Enhance Cortical Computation

Britt Anderson

Department of Neurology, University of Alabama at Birmingham, Birmingham, AL 35233-7340, USA


    Abstract
It has been argued that an important principle driving the organization of the cerebral cortex towards local processing has been the need to decrease time lost to interneuronal conduction delay. In this paper, I show for a simplified model of the cerebral cortex, using analytical means, that if interneuronal conduction time increases proportional to interneuronal distance, then the only way to increase the number of synaptic events occurring in a fixed finite time period is to spatially cluster interneuronal connections.


    Discussion and Proof
 
In Cerebral Cortex and an earlier work, Ringo has argued that an important principle driving the organization of the brain, and especially the cerebral cortex, towards local processing has been the need to decrease time lost to interneuronal conduction delay (Ringo, 1991; Ringo et al., 1994). Basically, Ringo states that if, in the extreme, all neurons were connected to each other, the brain would have to grow so large to accommodate the extra axonal volume that the time it would take for one neuron to conduct to another would be too long for the increased number of computing units to be of practical value. Therefore, some degree of more limited local connectivity evolved to get around this restriction.

While the general principle enunciated by Ringo has intuitive appeal, the supportive demonstrations are complex and not entirely perspicuous. In the first paper, Ringo (1991) computes an efficiency statistic: the increase in connections obtained by holding interneuronal connectedness constant while increasing neuronal number, divided by the resultant increase in conduction time and brain volume. When plotted against the volume of the brain consumed by connections, this `efficiency' decreases as the volume devoted to connections increases (Ringo, 1991, Fig. 4, p. 4). Unfortunately, the graph predicts maximal efficiency when no neurons in the brain are connected, and no argument is made for why, in evolution, any particular less efficient brain size might be optimal. In the later paper in Cerebral Cortex, the argument rests on showing that a neural network trained on an input–output matching task is less disturbed by the removal of specific interunit connections if those connections are `slow' during the training phase (Ringo et al., 1994). This demonstration seems rather indirect. The intent of this commentary is to show that Ringo's claim can be established more directly by a simple mathematical argument.

In order to focus solely on connectivity's effect on brain size let us imagine an idealized cerebrum that is constructed as a sphere (spherebrum?). The neurons are arrayed as dimensionless points evenly spaced on the surface of the sphere so that the volume of the sphere can be solely identified with the volume of axons traveling to and from neurons.

The volume of our spherical cerebrum (4πr³/3) is composed entirely of axons. Therefore, it is also equal to the number of axons multiplied by their volume. The axons can be treated as cylinders, each with volume equal to its length l multiplied by its cross-sectional area a. l can be estimated by assuming that neurons project randomly to all other neurons; the average length will then be the radius multiplied by √2. The number of axons is determined by the total number of neurons and the number of neurons to which each connects. To represent this all symbolically we can say that:

bvol = n · c · a · r√2
where bvol is the volume of our spherical brain, n is the number of neurons, c is the number of connections per neuron, a is the cross-sectional area of a typical axon, and r√2 is the mean axonal length. Recalling that bvol = 4πr³/3, we have:

4πr³/3 = n · c · a · r√2
which reduces to:

r² = 3√2 · n · c · a / (4π)
Increasing either the number of neurons or the number of neurons to which each connects will increase the total number of connections. The growth in the number of connections outpaces the growth of the radius and hence of the mean axonal length (r√2). For example, if n = 10, c = 5 and a = 1, then r ≈ 4.11, but if c = 10, then r ≈ 5.81. The number of connections increases by 100%, but the radius by only ~40%. This simple analysis would seem to suggest that, even if interneuronal conduction time increases linearly with interneuronal distance, there should be pressure on the brain to grow without bounds. This conclusion would be incorrect. What is important for an organism is not just a relative ratio of computations to time (a computation rate), but the absolute number of computations it can execute in a finite time epoch defined by its environment, including predators and competitors.
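As a numerical check on the reduced formula, the short script below (illustrative only; the function name is my own) computes r from n, c and a and reproduces the figures above:

```python
import math

def brain_radius(n, c, a):
    """Radius of the spherical brain whose volume 4*pi*r^3/3 equals
    the total axonal volume n*c*a*(r*sqrt(2)), solved for r."""
    return math.sqrt(3 * math.sqrt(2) * n * c * a / (4 * math.pi))

r1 = brain_radius(10, 5, 1)    # ≈ 4.11
r2 = brain_radius(10, 10, 1)   # ≈ 5.81
# Connections double (+100%), but the radius grows only by √2 ≈ 1.41.
```

Because r scales with the square root of the product n·c, any m-fold increase in total connections yields only a √m-fold increase in radius.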

Let us define the time it takes for the neurons in the first scenario to fire and conduct to the neuron furthermost away as t (this would be the time to cover the distance of 2r, from one end of the sphere to the other). Further, let us assume that the number of neuronal computations executed per t is sufficient to achieve equilibrium with competitors. The competitively desirable option would be to increase the number of neuronal computations that the organism could perform in that same time period t. Let us also denote the radius for the first brain sphere in the preceding example as r1. Now, how many connections could a neuron `reach' in time t if the number of connections between neurons were doubled to 10/neuron and the time of conduction increased linearly with axon length? The answer is . . . exactly the same.

Doubling the number of connections increases the radius of the second sphere (r2) by a factor of √2. A chord of length 2r1 would reach exactly half the sphere of radius r2, so that if there were a random distribution of connections the number of connections `active' would be the total available divided by 2, which would return us to our original number.

The demonstration can be made more general. The surface area of a sphere is 4πr². The surface area of a spherical cap is πp², where p is the length of the chord extending from the apex of the cap to its rim. If m is the factor by which the product of n and c increases, then r2 = √m·r1 and the surface area of the second brain sphere is 4mπr1². A cap of chord 2r1 (the longest interneuronal distance in the first sphere, its diameter, and hence the longest conduction completed in time t) would have a surface area of 4πr1². The proportion of the sphere covered by the cap is 4πr1²/(4mπr1²), or 1/m. Increasing connections, through increases in neuronal number or in connectivity, by a factor of m decreases the proportion of all connections reachable from a neuron to 1/m, so that the total number of connections available during time t stays constant.
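The cap-area argument can also be checked numerically. The Monte Carlo sketch below (my construction, not from the original papers) places neurons uniformly on a sphere whose radius has grown by √m and measures the fraction lying within chord distance 2·r1 of a fixed neuron at the pole:

```python
import math
import random

def reachable_fraction(m, trials=200_000, seed=1):
    """Fraction of uniformly placed surface neurons on a sphere of
    radius sqrt(m)*r1 that lie within chord distance 2*r1 of a neuron
    at the pole; the cap-area argument predicts this approaches 1/m."""
    r1 = 1.0
    r2 = math.sqrt(m) * r1
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # For a uniform point on a sphere, the z coordinate is uniform.
        z = rng.uniform(-r2, r2)
        # Chord length from the pole (0, 0, r2): chord^2 = 2*r2*(r2 - z).
        chord = math.sqrt(2 * r2 * (r2 - z))
        if chord <= 2 * r1:
            hits += 1
    return hits / trials
```

For m = 2 the fraction comes out near 1/2, and for m = 4 near 1/4, matching the 1/m result above.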

While the total number of connections increases much faster than the radius of our spherical cerebrum, the number of connections actually available for use in a given time t remains invariant no matter how we increase the number of neurons or of randomly arrayed connections. One way to overcome this roadblock is to increase connections by a non-random method such as spatial clustering, which will increase the number of connections operating during time t. To achieve a spatial clustering of connections would require neurons to know how long to extend their axons. Alternatively, neurons in the brain might use a `clock' to time incoming signals, pruning those from distant targets.
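To illustrate the contrast, a small simulation (again my own sketch, with arbitrary parameter choices) compares random wiring against nearest-neighbour, i.e. spatially clustered, wiring after total connectivity grows by a factor m. Random wiring leaves only about c·m/m = c connections usable within time t, whereas clustered wiring keeps all c·m usable:

```python
import math
import random

def sphere_points(n, radius, rng):
    """n points drawn uniformly on the surface of a sphere."""
    pts = []
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        pts.append((radius * s * math.cos(theta),
                    radius * s * math.sin(theta),
                    radius * z))
    return pts

def usable_connections(n=4000, c=5, m=4, seed=0):
    """Connections per neuron usable within time t (chord <= 2*r1)
    after connectivity grows by m: random vs clustered wiring."""
    rng = random.Random(seed)
    r1 = 1.0
    pts = sphere_points(n, math.sqrt(m) * r1, rng)
    origin = pts[0]
    dists = sorted(math.dist(origin, p) for p in pts[1:])
    k = c * m  # connections per neuron after the m-fold growth
    in_reach = sum(1 for d in dists if d <= 2 * r1)
    # Random wiring: expected usable = k * (fraction of sphere in reach).
    random_usable = k * in_reach / len(dists)
    # Clustered wiring: connect to the k nearest neighbours instead.
    clustered_usable = sum(1 for d in dists[:k] if d <= 2 * r1)
    return random_usable, clustered_usable

# clustered wiring keeps all k connections usable; random keeps ≈ k/m
```

The nearest-neighbour rule here stands in for any mechanism that biases connections toward spatially proximate neurons.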

As the time constants relevant for different behaviors may differ, we can infer a hierarchy of processing modules, with small localized modules performing processes that require a quick response. These modules, wired together, may perform computations that require longer processing cycles, or there may be a mosaic arrangement in which neurons in the localized modules also participate in larger metamodules to produce more complex, but less time-critical, behaviors. Different organisms compete for different niches, so the relevant time constants for distinct behaviors may vary and exert distinct evolutionary pressures on brain development, as was considered, but discounted, by Ringo et al. (1994).

In summary, Ringo's thesis can be conclusively demonstrated in an idealized randomized system. It can be shown definitively for such a system that any increase in the absolute number of connections is exactly offset by the increased spatial separation between elements, so that the number of `communications' that can occur in a given fixed time interval remains unchanged. The only way this disadvantage can be overcome is to rearrange the increased connections such that spatially proximate neurons have a greater chance of connecting.


    Notes
 
Address correspondence to B. Anderson, Department of Neurology, University of Alabama at Birmingham, Birmingham, AL 35233–7340, USA. Email: BrittUAB@aol.com.


    References
 
Ringo JL (1991) Neuronal interconnection as a function of brain size. Brain Behav Evol 38:1–6.

Ringo JL, Doty RW, Demeter S, Simard PY (1994) Time is of the essence: a conjecture that hemispheric specialization arises from interhemispheric conduction delay. Cereb Cortex 4:331–343.




