Macaque Supplementary Eye Field Neurons Encode Object-Centered Locations Relative to Both Continuous and Discontinuous Objects

Carl R. Olson and Léon Tremblay

Center for the Neural Basis of Cognition, Mellon Institute, Pittsburgh, Pennsylvania 15213-2683


    ABSTRACT

Olson, Carl R. and Léon Tremblay. Macaque Supplementary Eye Field Neurons Encode Object-Centered Locations Relative to Both Continuous and Discontinuous Objects. J. Neurophysiol. 83: 2392-2411, 2000. Many neurons in the supplementary eye field (SEF) of the macaque monkey fire at different rates before eye movements to the right or the left end of a horizontal bar regardless of the bar's location in the visual field. We refer to such neurons as carrying object-centered directional signals. The aim of the present study was to throw light on the nature of object-centered direction selectivity by determining whether it depends on the reference image's physical continuity. To address this issue, we recorded from 143 neurons in two monkeys. All of these neurons were located in a region coincident with the SEF as mapped out in previous electrical stimulation studies and many exhibited task-related activity in a standard saccade task. In each neuron, we compared neuronal activity across trials in which the monkey made eye movements to the right or left end of a reference image. On interleaved trials, the reference image might be either a horizontal bar or a pair of discrete dots in a horizontal array. The dominant effect revealed by this experiment was that neurons selectively active before eye movements to the right (or left) end of a bar were also selectively active before eye movements to the right (or left) dot in a horizontal array. An additional minor effect, present in around a quarter of the sample, took the form of a difference in firing rate between bar and dot trials, with the greater level of activity most commonly associated with dot trials. These phenomena could not be accounted for by minor intertrial differences in the physical directions of eye movements. In summary, SEF neurons carry object-centered signals and carry these signals regardless of whether the reference image is physically continuous or disjunct.


    INTRODUCTION

The supplementary eye field (SEF), an area located on the dorsomedial shoulder of the frontal lobe in macaque monkeys, has been thought since its discovery 10 years ago to serve oculomotor functions (Schlag and Schlag-Rey 1985, 1987). This view has been supported by studies demonstrating that electrical stimulation of the SEF at reasonably low currents (<50 µA) evokes saccadic eye movements (Chen and Wise 1995b; Fujii et al. 1995; Lee and Tehovnik 1995; Mann et al. 1988; Mitz and Godschalk 1989; Russo and Bruce 1993; Tehovnik and Lee 1993; Tehovnik and Sommer 1997; Tehovnik et al. 1994; Tian and Lynch 1995) and that neurons in the SEF fire during the preparation and execution of saccades, exhibiting selectivity for particular saccade directions (Bon and Lucchetti 1992; Chen and Wise 1995a,b, 1996, 1997; Hanes et al. 1995; Mann et al. 1988; Mushiake et al. 1996; Russo and Bruce 1996; Schall 1991a,b; Schlag and Schlag-Rey 1985, 1987; Schlag-Rey et al. 1997). However, the contributions of the SEF to oculomotor control probably are not as straightforward as those of the other major frontal oculomotor area, the frontal eye field (FEF). In the SEF, more frequently than in the FEF, neuronal activity varies across the course of learning as monkeys acquire arbitrary associations between visual patterns and eye-movement directions (Chen and Wise 1995b). Further, around half of SEF neurons, unlike neurons in the FEF, fire differentially during combined movements of the arm and eye as compared with eye movements alone (Mushiake et al. 1996). Finally, higher levels of electrical current must be delivered to the SEF than to the FEF to elicit saccades (Russo and Bruce 1993; Tehovnik and Sommer 1997). These observations suggest that the SEF is removed farther than the FEF from processes occurring at the oculomotor periphery and that its functions, while encompassing oculomotor control, may not be restricted to it.

A potentially valuable approach to understanding the functions of the SEF is to characterize the spatial reference frames with respect to which it operates. This requires answering the question: insofar as specific sites or neurons in the SEF represent particular eye-movement directions, with respect to what reference frame are these directions specified? Studies carried out to date have yielded evidence for three forms of spatial sensitivity in the SEF: eye-centered, head-centered, and object-centered. 1) Evidence that the SEF encodes directions relative to an oculocentric reference frame arose from studies based on both electrical-stimulation and single-neuron recording. Electrical-stimulation studies demonstrated that fixed vector saccades (saccades having a particular size and direction regardless of the eyes' starting point in the orbit) could be elicited from certain sites in the SEF (Bon and Lucchetti 1992; Mitz and Godschalk 1989; Russo and Bruce 1993; Schlag and Schlag-Rey 1987). Likewise, some SEF neurons were shown to fire in conjunction with saccades in preferred directions regardless of the eyes' starting point (Mitz and Godschalk 1989; Russo and Bruce 1996; Schlag and Schlag-Rey 1987). 2) There are also signs that the SEF encodes directions relative to a craniocentric frame. The fact that electrical stimulation at some sites in the SEF seems to elicit goal-directed saccades, driving the eyes to a certain angle in the orbit regardless of initial direction, has been taken by some as evidence for craniocentric encoding (Tehovnik 1995; Tehovnik and Lee 1993; Tehovnik et al. 1994), although others have interpreted this phenomenon as arising from failure of SEF stimulation to engage cerebellar mechanisms that correct for variations in ocular mechanics across orbital position (Russo and Bruce 1993). At the level of single-neuron recording, some SEF neurons have been shown to possess craniocentric gaze fields, firing as a function of the angle of the eyes in the head during motivated fixation of external targets (Bon and Lucchetti 1990, 1992; Lee and Tehovnik 1995; Schlag et al. 1992). 3) Finally, studies carried out in our laboratory during the last several years have indicated that some SEF neurons are sensitive to the allocentric directions of eye movements---directions as defined with respect to objects in the external world. In monkeys planning and executing eye movements to the left or right end of a horizontal bar, around half of SEF neurons fire differentially on bar-left and bar-right trials even when the location of the bar on the screen is manipulated so as to keep the location of the target on the screen the same (Olson and Gettner 1995, 1999). The object-centered spatial selectivity of these neurons suggests that they are involved in eye-movement control at the level of target specification rather than of motor programming.

The aim of the experiment described here was to extend our understanding of object-centered direction selectivity in the SEF by answering the question: does this phenomenon depend on the nature of the reference image and, in particular, on its physical continuity? In previous studies, monkeys were required to make eye movements to the left or right end of only a single image, a physically continuous horizontal bar. Here we trained monkeys to perform a task in which, on interleaved trials, they had to make eye movements to the right or left end of a bar, as in the previous experiments, or, alternatively, to the right or left element in an array consisting of two horizontally separated dots. We recorded from SEF neurons during performance of this task to determine whether firing was different under bar and dot conditions. We found only subtle differences in neuronal activity across the two conditions. This result suggests that SEF neurons carry comparatively pure object-centered spatial signals---signals that reflect the location of the target with respect to the selected reference image but are not influenced to a major degree by the reference image's intrinsic properties.


    METHODS

Subjects

Two adult male rhesus monkeys were used (Macaca mulatta; laboratory designations Ju and Po). Experimental procedures were approved by the Carnegie Mellon University Animal Care and Use Committee and were in compliance with the guidelines set forth in the United States Public Health Service Guide for the Care and Use of Laboratory Animals.

Preparatory surgery

At the outset of the training period, each monkey underwent sterile surgery under general anesthesia maintained with isoflurane inhalation. The top of the skull was exposed, bone screws were inserted around the perimeter of the exposed area, a continuous cap of rapidly hardening acrylic was laid down so as to cover the skull and embed the heads of the screws, a head-restraint bar was embedded in the cap, and scleral search coils were implanted on the eyes with the leads directed subcutaneously to plugs on the acrylic cap (Remmel 1984; Robinson 1963). After initial training, a 2-cm-diam disk of acrylic and skull, centered on the midline of the brain approximately at anterior 23 mm (Horsley-Clarke coordinates), was removed, and a cylindrical recording chamber was cemented into the hole with its base just above the exposed dural membrane.

Single-neuron recording

At the beginning of each day's session, a varnish-coated tungsten microelectrode with an initial impedance of several megohms at 1 kHz (Frederick Haer and Company, Bowdoinham, ME) was advanced vertically through the dura into the immediately underlying cortex. The electrode could be placed reproducibly at points forming a square grid with 1 mm spacing (Crist et al. 1988). The action potentials of a single neuron were isolated from the multineuronal trace by means of an on-line spike-sorting system using a template matching algorithm (Signal Processing Systems, Prospect, Australia). The spike-sorting system, on detection of an action potential, generated a pulse the time of which was stored with 1-ms resolution.

Behavioral apparatus

All aspects of the behavioral experiment, including presentation of stimuli, monitoring of eye movements, monitoring of neuronal activity, and delivery of reward, were under the control of a 486- or Pentium-based computer running Cortex software provided by R. Desimone, Laboratory of Neuropsychology, National Institute of Mental Health. Eye position was monitored by means of a scleral search coil system (Remmel Labs, Ashland, MA, or Riverbend Instruments, Birmingham, AL) and the x and y coordinates of eye position were stored with 10-ms resolution. Stimuli generated by an active matrix LCD projector (Sharp, XG H4OU) were rear-projected on a frontoparallel screen 25 cm from the monkey's eyes. Reward in the form of ~0.1 ml of water or juice was delivered through a spigot under control of a solenoid valve on successful completion of each trial.

ANOVA and t-test analysis of data from individual neurons

Details of statistical analysis are provided in the text. The general approach was to analyze results obtained with a given behavioral paradigm by applying a set of identical procedures to data collected from each neuron. The trial epoch under consideration was defined as the period between two identifiable events. The mean firing rate during the epoch was computed for each trial completed successfully during recording from the neuron. Then an ANOVA or t-test was carried out to determine whether firing rate varied significantly across the trials as a function of the conditions by which trials differed from each other.
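
For concreteness, the analysis just described can be sketched as follows. This is a minimal illustration in Python, assuming a hypothetical per-trial data structure (spike times in milliseconds and named epoch boundaries); it is not the software actually used.

```python
# Minimal sketch of the per-neuron epoch analysis; the trial data structure
# (spike times in ms plus epoch boundary times) is hypothetical.
import numpy as np
from scipy import stats

def epoch_rate(spike_times_ms, t_start_ms, t_end_ms):
    """Mean firing rate (spikes/s) within one epoch of one trial."""
    n_spikes = np.sum((spike_times_ms >= t_start_ms) & (spike_times_ms < t_end_ms))
    return 1000.0 * n_spikes / (t_end_ms - t_start_ms)

def test_across_conditions(trials, epoch):
    """trials: list of dicts with 'spikes', epoch boundary times, and a
    'condition' label for the factor under test (assumed fields)."""
    groups = {}
    for tr in trials:
        rate = epoch_rate(np.asarray(tr["spikes"]),
                          tr[epoch + "_start"], tr[epoch + "_end"])
        groups.setdefault(tr["condition"], []).append(rate)
    values = list(groups.values())
    if len(values) == 2:
        return stats.ttest_ind(values[0], values[1])  # 2 conditions: t-test
    return stats.f_oneway(*values)                    # >2 conditions: 1-way ANOVA
```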

χ2 analysis of population data

A population of neurons might exhibit trait a or b in one context and trait x or y in another context. For example, among neurons significantly selective for horizontal direction in two tasks, each neuron might prefer right (a) or left (b) in the first task and right (x) or left (y) in the second task. In such cases, to test whether the distribution of neurons with respect to a and b was correlated with the distribution with respect to x and y, we employed the following procedure. We took as observed values the four counts Oax, Oay, Obx, and Oby, where Oax was the number of neurons observed to express trait a in the first context and trait x in the second context, and so on for Oay, Obx, and Oby. We then computed the sum of the counts, S = Oax + Oay + Obx + Oby, and the four frequencies, Fa = (Oax + Oay)/S, Fb = (Obx + Oby)/S, Fx = (Oax + Obx)/S, and Fy = (Oay + Oby)/S. Then, on the assumption that the distribution of neurons with respect to a and b was uncorrelated with the distribution with respect to x and y, we computed the four expected counts: Eax = Fa*Fx*S, Eay = Fa*Fy*S, Ebx = Fb*Fx*S, and Eby = Fb*Fy*S. Finally, we used a χ2 test with 1 df to determine the level of significance of the deviation of the observed values (Oax, Oay, Obx, and Oby) from the expected values (Eax, Eay, Ebx, and Eby).
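
The procedure amounts to a 2 × 2 test of independence and can be written out directly from the formulas above. The sketch below is illustrative only; SciPy is used solely to convert the χ2 statistic (1 df) into a P value.

```python
# Sketch of the chi-square test of independence described above (1 df).
import numpy as np
from scipy.stats import chi2

def chi2_population_test(o_ax, o_ay, o_bx, o_by):
    observed = np.array([o_ax, o_ay, o_bx, o_by], dtype=float)
    s = observed.sum()                                  # S = Oax + Oay + Obx + Oby
    f_a, f_b = (o_ax + o_ay) / s, (o_bx + o_by) / s     # Fa, Fb
    f_x, f_y = (o_ax + o_bx) / s, (o_ay + o_by) / s     # Fx, Fy
    expected = s * np.array([f_a * f_x, f_a * f_y,      # Eax, Eay, Ebx, Eby
                             f_b * f_x, f_b * f_y])
    statistic = np.sum((observed - expected) ** 2 / expected)
    return statistic, chi2.sf(statistic, df=1)          # P value with 1 df
```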

Localization of recording sites

In each monkey, recording was carried out in a pair of regions, each a few mm in extent, disposed approximately symmetrically across the interhemispheric midline. One of the monkeys (Po) is still under study in behavioral experiments. In the other monkey (Ju), the brain was photographed after the animal was killed with an overdose of pentobarbital sodium and perfused transcardially with 10% formalin. Marks indicating the location of the recording chamber were compared with gross anatomic landmarks including the hemispheric midline and the arcuate and principal sulci. On the basis of the grid coordinates at which the electrode had been placed, recording sites then were projected onto the image of the cortical surface.

Bar-dot task

Both monkeys were trained to perform a task requiring them to make eye movements to one end or the other of a reference image which could be physically continuous or discontinuous. Essential features of the task are summarized in Fig. 1, A and B. At the beginning of each trial, while the monkey was fixating a central spot, a sample was presented, either in the form of a solid horizontal bar (Fig. 1A2) or in the form of a pair of dots corresponding to the ends of a virtual horizontal bar (Fig. 1B2). Then one end of the sample was cued (3). After a delay, a target appeared, identical to the sample in form but not necessarily in location (5). After a second delay, extinction of the central fixation spot (7) signaled the monkey to make an eye movement (8). If the monkey made a saccade directly to the end of the target corresponding to the cued end of the sample, then 100 ms after target-attainment, a white spot came on at the now-fixated target location, thus providing positive feedback. However, the monkey was required to maintain fixation on the target for an additional variable period (300-450 ms). Only at the end of this period was the display extinguished and reward delivered. These postattainment steps were introduced to prevent the monkey from following up the first saccade with a second one to some other part of the display. Any such behavior would have led to interpretational difficulties inasmuch as activity around the time of the first saccade might have been related to programming of the second one. To perform this task successfully, monkeys had to perceive and remember the location of the cue relative to a reference frame centered on the sample and had to do so regardless of whether the sample was a continuous horizontal bar or a pair of dots forming a horizontal array.
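
The reward contingency of the task can be summarized schematically as follows. The timing values (100-ms feedback delay; 300- to 450-ms hold) are those stated above; the function and variable names are purely illustrative.

```python
# Illustrative sketch of the reward contingency of the bar-dot task;
# names are hypothetical, timing values are those given in the text.
import random

def evaluate_bar_dot_response(cued_end, landed_end):
    """cued_end, landed_end: 'left' or 'right' end of the sample/target image."""
    if landed_end != cued_end:
        return {"reward": False}                # wrong end: no reward
    return {
        "reward": True,
        "feedback_delay_ms": 100,               # white feedback spot 100 ms after target attainment
        "hold_ms": random.uniform(300, 450),    # variable fixation period before reward delivery
    }
```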



Fig. 1. Bar-dot task. A, 1-8: screen in front of the monkey during successive epochs of a single representative trial in which the reference image was a continuous bar. Center of each hatched circle indicates the monkey's direction of gaze during the corresponding trial epoch and the arrow indicates the direction of the eye movement. All other items are patterns visible to the monkey. 1: a white fixation spot appeared at the center of the screen and the monkey achieved foveal fixation. 2: a horizontal sample bar appeared in the superior visual field. 3: a white cue flashed on 1 end of the sample bar. 4: during an ensuing delay period of variable length, the monkey maintained central fixation. 5: the target bar appeared. 6: during a 2nd delay period, the monkey continued to maintain central fixation. 7: offset of the fixation spot signaled the monkey to initiate an eye movement. 8: monkey was required to respond by making an eye movement directly to the end of the target bar corresponding to the cued end of the sample bar. After foveation of the target, the following events occurred (not shown): 100 ms elapsed; then a 0.8 × 0.8° white feedback spot came on at the target location, to provide positive feedback; then a random interval in the range 300-450 ms elapsed; then the entire display was extinguished and reward was delivered. B: equivalent display for a trial in which the reference image was a pair of dots marking the ends of a virtual bar. C: factors varying across bar trials included the location of the 3.1 × 0.2° blue sample bar (a or b), the location of the 0.8 × 0.8° white cue (c, d, or e), the location of the 3.1 × 0.2° blue target bar (f, g, or h), and the direction of the required eye movement (1, 2, 3, or 4). Dot trials were identical with the exception that 2-dot arrays were presented at the locations where bars would have been. Each array consisted of two 0.4 × 0.4° white dots separated horizontally by a 3.1° center-to-center offset. All stimuli were presented at the same elevation (5.8°) relative to the fixation spot (F). Vertical offset of a-e from f-h in this figure is simply a convention adopted for clarity. D: these tables summarize the features defining 12 bar conditions and 12 corresponding dot conditions.

Individual trials differed not only with respect to the nature of the reference image (bar or dots), but also with respect to the location of the sample (Fig. 1C: a or b; each bar indicates the location at which a bar or array could appear), the cued end of the sample (right or left), and the location of the target (Fig. 1C: f, g, or h; each bar indicates the location at which a bar or array could appear). Systematic variation in these factors gave rise to 24 conditions summarized in Fig. 1D. Trials corresponding to these 24 conditions were interleaved pseudorandomly according to the rule that one trial of each type had to be completed successfully before initiation of the next block. An essential feature of this design was the dissociation of relative location (the right or left end of the bar or array) from certain other factors that might influence neuronal activity in the SEF, notably the location of the cue on the screen (and thus the location of its image on the retina) and the screen location of the target. A cue at one screen location (Fig. 1C: d) could mark either the right end of a left-displaced sample (Fig. 1C: b) or the left end of a right-displaced sample (Fig. 1C: a). Similarly a target at one screen location (Fig. 1C: 3) might be either the right side of a left-displaced target (Fig. 1C: h) or the left side of a right-displaced target (Fig. 1C: g).
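
The factorial structure of the design, and the way it dissociates object-centered direction from the screen location of the cue, can be made explicit in a short sketch. The mapping of cue locations follows the description above (location d is shared by the two samples); the assignment of c and e to the remaining ends is inferred, and the condition ordering is illustrative rather than that of Fig. 1D.

```python
# Sketch of the 24-condition design: image type x sample location x cued end x
# target location (Fig. 1, C and D).
from itertools import product

image_types      = ["bar", "dot"]
sample_locations = ["a", "b"]          # right- or left-displaced sample (Fig. 1C)
cued_ends        = ["left", "right"]   # object-centered direction
target_locations = ["f", "g", "h"]     # screen location of the target (Fig. 1C)

conditions = list(product(image_types, sample_locations, cued_ends, target_locations))
assert len(conditions) == 24

# Screen location of the cue (c, d, or e in Fig. 1C) follows from sample
# location and cued end; location d is shared by (a, left) and (b, right),
# which dissociates object-centered direction from retinal cue location.
cue_screen_location = {
    ("a", "left"): "d", ("a", "right"): "e",
    ("b", "left"): "c", ("b", "right"): "d",
}
```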

It may be noted that the location of the sample image was different in this task from that in the task employed in previous studies (Olson and Gettner 1995, 1999). In those studies, it was placed to one side of fixation. Here, in contrast, it was presented in the upper visual field, where it appeared to the left or right of the midline on interleaved trials. This change was instituted so as to eliminate the asymmetry with respect to the visual field midline inherent in the earlier design. Results obtained with the design used here can be interpreted identically regardless of the recording hemisphere because the task is perfectly symmetric with respect to the visual field and thus with respect to hemispheric representation of visual space.

Memory-guided saccade task

We also recorded neuronal activity during performance of a standard oculomotor test requiring the monkey to make eye movements to targets in the form of small dots located at 9.6° eccentricity above, below, to the right of, and to the left of the fixation point. The main stages of a single representative trial lasting ~1.5 s are summarized in Fig. 13, A-F. The staggered panels in this figure represent the display on the screen in front of the monkey during successive stages of the trial. In each panel, a circle indicates the monkey's direction of gaze. While the monkey maintained fixation on a central spot (A), four potential targets were presented (B) and one of the targets was cued (C). The monkey then was required to maintain central fixation during a delay period (D) at the end of which the fixation spot was extinguished (E), whereupon the monkey had to make an eye movement rapidly and directly to the previously cued target (F). If the monkey made a saccade directly to the target, then 100 ms after target-attainment, the now fixated target increased in size, thus providing positive feedback. However, the monkey was required to maintain fixation on the target for an additional variable period (300-450 ms) before reward was delivered. Trials were presented in pseudorandom sequence according to the rule that the monkey had to complete one trial in each direction successfully before moving on to the next block. Data collection continued until ~16 successful trials conforming to each of the four conditions had been completed.


    RESULTS

Task performance

Both monkeys learned to perform the bar-dot task at a level well above chance, and each experienced moderately greater difficulty on dot than on bar trials. Monkey Ju scored 99.9% on bar trials as compared with 99.6% on dot trials (averages computed across all neuronal data collection sessions; consideration restricted to trials in which the monkey made an eye movement to one end or the other of the target). The difference between the two percent-correct scores, although only a fraction of a percent, was significant (2-tailed paired t-test, P = 0.03). Monkey Po scored 95.9 and 89.4% on bar and dot trials, respectively; these values differed at a high level of significance (P < 0.0001).

The behavioral reaction time (the interval between offset of the fixation spot and initiation of the saccadic eye movement) also was measured as a function of cue condition in each monkey. In monkey Ju, there was a minor but significant (2-tailed paired t-test, P = 0.004) tendency for reaction times to be longer on bar than on dot trials (150 vs. 148 ms). The same tendency was present and significant in monkey Po (164 vs. 160 ms; P = 0.008). Decision time was not a factor in this effect because a long delay intervened between the instructional cue and the imperative signal. Perhaps it was related to subtle differences in the eye movements executed on bar and dot trials, as described in the following text.

Recording sites

Our approach was to begin recording at the approximate location of the SEF, as estimated on the basis of stereotaxic coordinates, and, having identified sites at which there was robust eye-movement-related activity, to record from these sites and then move outward to adjacent sites over successive recording sessions. At each site, we recorded from neurons located in the superficial cortex, remaining within the initially encountered gray matter and never passing through white matter into buried cortex. The mean recording depth (as measured relative to the level at which neural activity first was detected) was 875 ± 448 µm (mean ± SD; minimum = 178 µm, maximum = 1,988 µm) in monkey Ju and 781 ± 681 µm (minimum = 0 µm; maximum = 2,724 µm) in monkey Po. In the context of the bar-dot task, we characterized a total of 77 neurons from monkey Ju (17 and 60 in the left and right hemispheres, respectively) and 66 neurons from monkey Po (29 and 37 in the left and right hemispheres, respectively). The tangential distribution of bar-dot recording sites in monkey Ju is shown in Fig. 2A, where each dot represents one site and the size of the dot indicates how many neurons at that site contributed data to the present paper. Monkey Po is still under behavioral study; therefore, it is not possible to describe precisely the relation of its recording sites to gross morphological landmarks.



Fig. 2. A: recording sites superimposed on a dorsal view of the cerebral hemispheres of monkey Ju. Each dot indicates a recording site. Area of each dot is proportional to the number of neurons at that site contributing data to the present paper. Largest dot, in the right hemisphere, represents 14 neurons, whereas the smallest dots represent one neuron each. B: extent of the supplementary eye field (SEF) as defined by mapping with intracortical microstimulation in studies from 10 laboratories listed in Table 1 of Tehovnik (1995). Tehovnik brought results from different laboratories into register by use of two landmarks: for mediolateral register, the hemispheric midline and, for anteroposterior register, the genu of the arcuate sulcus, as marked by the horizontal line spanning A and B in this figure. For each study listed by Tehovnik, we generated a rectangle encompassing, to the nearest mm, the anterior, posterior, lateral and medial limits of the region in which electrical stimulation elicited eye movements. Then, on each point in a 1 × 1 mm grid spanning the cortex, we superimposed a dot the area of which was proportional to the number of rectangles including that point. Number of times a site was counted could range from 0 (not marked on this figure: sites not implicated by any study) through 1 (smallest dots visible in this figure: sites implicated by just 1 study) to 10 (largest dots visible in this figure: sites implicated by all 10 studies). Note that sites in which recording was carried out in this study (dots in A) overlap the relatively anterior region in which electrical stimulation in previous studies most consistently elicited eye movements (largest dots in B). as, arcuate sulcus; as, genu, genu of the arcuate sulcus; cs, central sulcus; ps, principal sulcus.

Because we almost immediately located sites of oculomotor activity in each hemisphere of each monkey, we had no occasion to carry out extensive mapping, defining the borders of the SEF or identifying adjacent regions such as the supplementary motor area. Accordingly, it is reasonable to ask whether the sites from which we recorded were indeed in the SEF. To answer this question, we compared our recording sites to maps of the SEF generated in previous studies as summarized by Tehovnik (1995). Table 1 of Tehovnik's review summarizes the results of 10 studies in which electrical stimulation was used to map out the SEF, indicating, for each study, the area's mediolateral extent (ML, defined relative to the interhemispheric midline) and anterior-posterior extent (AP, defined relative to the genu of the arcuate sulcus). These results are translated, in Fig. 2B, into a graph in which the area of each dot corresponds to the fraction of the 10 studies in which electrical stimulation at the dot's location elicited eye movements (the dots in Fig. 2B range in area from 1---only one study reported elicitation of eye movements by stimulation at that location--- to 10---all 10 studies reported a positive result). Loci at which electrical stimulation elicited eye movements in a large number of studies are marked by a cluster of large dots extending 3-7 mm anterior to the level of the genu of the arcuate sulcus. We may now compare the recording sites in monkey Ju to sites of electrical stimulation in these studies. Recording sites in monkey Ju extended 4-9 mm anterior to the genu of the arcuate sulcus (Fig. 2A: as, genu), with an average of ~6 mm. We conclude that recording sites in monkey Ju were toward the front of the cortical territory in which electrical stimulation has been reported to elicit eye movements and that they overlapped the part of this territory in which electrically induced eye movements have been obtained with greatest frequency. Recording sites in monkey Ju also overlapped the SEF as identified by electrical stimulation in later studies not considered by Tehovnik. Chen and Wise (1995b, Fig. 8A) show sites positive for elicitation of eye movements as extending 2-6 mm anterior to the genu, whereas Fujii et al. (1995, Fig. 1) show such sites at levels 1-8 mm anterior to the genu. Finally, it should be noted that recording sites in monkey Ju do not overlap the zone rostral to the SEF in which Bon and Lucchetti (1994) have described electrical stimulation as eliciting ear movements. This zone extends ~10-14 mm anterior to the genu (Bon and Lucchetti 1994, Fig. 2A). This set of comparisons, although not as conclusive as electrical stimulation mapping carried out in the same monkey and although limited by the accuracy with which gross morphological landmarks can be identified in published figures, nevertheless suggests strongly that recording sites in this study were confined to the SEF.

Object-centered direction selectivity

We will refer to a neuron as exhibiting object-centered direction selectivity if it fired at different rates on trials requiring an eye movement to the left versus the right end of a reference image even when the retinal location of the cue and the location of the target on the screen were held constant across trials. An example of a neuron exhibiting strong object-centered direction selectivity under both bar and dot conditions is shown in Fig. 3. During delay 1, the period between presentation of the cue and onset of the target bar, this neuron's rate of firing was markedly higher on trials in which the right side of the image had been cued (Fig. 3, C and D) than on those in which the left side had been cued (Fig. 3, A and B) regardless of whether the image was a bar (Fig. 3, B and D) or a pair of dots (Fig. 3, A and C). This difference in level of activity cannot have resulted from any difference in the retinal location of the cue because, under all four illustrated conditions, the cue was at the same location, directly above fixation.



Fig. 3. Data from a neuron selective, during delay 1, for those conditions in which the right end of the sample image had been cued. Each histogram, with accompanying raster display, represents neuronal activity under a single set of conditions defined with respect to type of sample image (bar or dot), location of sample image (right or left on screen), and location of cue (right or left on sample). Juxtaposed to each histogram is a panel depicting the sample image and cue in relation to the fixation spot (black dot). Numbers in each panel indicate the trial conditions under which the stimuli were in this configuration (cf. Fig. 1D). Data from successive successfully completed trials were aligned on target-onset. Times of cue onset and trigger onset are indicated as a range because delay 1 and delay 2 were variable. Ticks on the horizontal axis mark 250-ms intervals. A: dot trials with cue presented on the left end of the right-displaced reference image. B: dot trials with cue presented on the right end of the left-displaced reference image. C: bar trials with cue presented on the left end of the right-displaced reference image. D: bar trials with cue presented on the right end of the left-displaced reference image. Note that the location of the cue on the screen is the same across all sets of conditions.

To obtain an objective estimate of the frequency with which neurons exhibited object-centered direction selectivity, we carried out analyses of variance on data collected from each neuron during three trial epochs: delay 1 (from cue onset until target onset), delay 2 (from target-onset until fix-spot-offset) and the movement period (from the initiation of the saccade until 100 ms after its completion). There was a solid rationale for using these epochs, insofar as object-centered signals, if they waxed and waned during a trial, generally did so in the vicinity of the epoch boundaries. Nevertheless the divisions should be viewed as essentially heuristic with full appreciation of the fact that continuous activity might be parsed into multiple epochs (e.g., in the case shown in Fig. 3, where object-centered signals carried over from delay 1 to delay 2). In each analysis, there was one dependent variable (firing rate) and there were two factors: object-centered direction (right or left) and image type (bar or dot). Consideration during delay 1 was restricted to a subset of conditions in which the screen location of the cue was balanced across the two factors (conditions 2, 4, 6, 7, 9, 11, 14, 16, 18, 19, 21, and 23 in Fig. 1D). Consideration during delay 2 and the movement period was restricted to a subset of conditions in which the screen location of the target was balanced across the two factors (conditions 2, 3, 4, 5, 8, 9, 10, 11, 14, 15, 16, 17, 20, 21, 22, and 23 in Fig. 1D). A significance criterion of P < 0.05 was employed. The results, summarized in Table 1, indicate that around half of the tested neurons showed a main effect of object-centered direction during each epoch (65/143 during delay 1, 89/143 during delay 2, and 57/143 during the movement period).
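
For illustration, the two-factor ANOVA applied to each neuron and epoch can be sketched as follows, assuming a hypothetical trial table with one row per successfully completed trial; the original analysis was not performed with this software.

```python
# Two-factor ANOVA (object-centered direction x image type) for delay 1,
# restricted to the cue-balanced subset of conditions listed in the text.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

BALANCED_DELAY1 = {2, 4, 6, 7, 9, 11, 14, 16, 18, 19, 21, 23}  # Fig. 1D conditions

def anova_delay1(trials: pd.DataFrame) -> pd.DataFrame:
    """trials columns (assumed): 'condition' (Fig. 1D number), 'direction'
    ('left'/'right'), 'image_type' ('bar'/'dot'), 'rate' (spikes/s in delay 1)."""
    subset = trials[trials["condition"].isin(BALANCED_DELAY1)]
    model = ols("rate ~ C(direction) * C(image_type)", data=subset).fit()
    return sm.stats.anova_lm(model, typ=2)  # main effects and interaction; P < 0.05 criterion
```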


                              
Table 1. Neurons with significant dependence on task variables

To determine whether neurons exhibiting object-centered direction selectivity were arranged within the recording zone according to any clear global pattern, we computed for each recording site the frequency with which neurons at that site yielded a significant main effect for object-centered direction. Three tests had been carried out on each neuron, assessing activity during delay 1, delay 2, and the movement period. Thus at a cortical site where n neurons had been studied, 3*n tests were carried out. The results of these tests are summarized for each recording site in Fig. 4, A and B. In this figure, the size of each circle indicates the percentage of tests revealing significant selectivity for object-centered direction. Although there was some variation from site to site in the proportion of tests yielding a significant result, there was no clear mediolateral or anterior-posterior trend in the arrangement of sites with a high yield.
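
The per-site tally underlying this map can be expressed as in the following sketch, assuming a hypothetical list of test outcomes with three entries (one per epoch) for each neuron.

```python
# Sketch of the per-site yield plotted in Fig. 4: percentage of the 3*n tests
# at each site (n neurons x 3 epochs) that showed significant object-centered
# direction selectivity.
from collections import defaultdict

def site_yield(test_outcomes):
    """test_outcomes: iterable of (site, is_significant) pairs."""
    totals, hits = defaultdict(int), defaultdict(int)
    for site, significant in test_outcomes:
        totals[site] += 1
        hits[site] += int(significant)
    return {site: 100.0 * hits[site] / totals[site] for site in totals}
```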



Fig. 4. Cortical distribution of neurons exhibiting significant object-centered direction selectivity in the bar-dot task for monkey Ju (A) and monkey Po (B). Coordinates are with respect to the center of the recording grid (0,0). Each site at which recording was carried out during performance of the bar-dot task is marked by a circle. The area of each dark circle is proportional to the percentage of tests revealing significant object-centered direction selectivity at the corresponding site. Where n neurons were studied, 3*n tests were carried out (direction selectivity was assessed independently during delay 1, delay 2, and the movement period for each neuron). The largest circles represent cases in which 100% of tests yielded a significant result. Sites at which 0% of tests yielded a significant result are indicated by small open circles. Note that sites marked in A correspond one-to-one to recording sites projected onto the dorsal view of the frontal lobe in Fig. 2A.

It was obvious on casual inspection of the data that neurons exhibiting object-centered direction selectivity under bar conditions also did so under dot conditions (Fig. 3). To assess this effect systematically, we carried out an additional step of analysis. For each recorded neuron during each trial epoch, we computed the directional signal (firing rate on left-side-cued trials minus firing rate on right-side-cued trials) independently for bar and dot conditions, restricting consideration to conditions in which the retinal location of the cue and the screen-location of the target were balanced across object-centered direction. The results are summarized in the graphs of Fig. 5, which plot the directional signal for dot trials, on the vertical axis, against the directional signal for bar trials, on the horizontal axis, with each neuron represented as a single point. The clear positive correlation between directional signals recorded during dot and bar trials (significant at P < 0.0001 for each monkey during each epoch) indicates that neurons firing more strongly during left-side-cued (or right-side-cued) trials under dot conditions tended to display the same pattern under bar conditions. In monkey Ju, the R² values for delay 1, delay 2, and the movement period were 0.574, 0.584, and 0.405, respectively. In monkey Po, the corresponding values were 0.810, 0.449, and 0.324.
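
The computation can be sketched as follows, assuming per-neuron trial tables with the same hypothetical columns as in the ANOVA sketch above and restricted to the balanced subset of conditions; the square of Pearson's r corresponds to the R² values quoted here.

```python
# Directional signal (left-cued minus right-cued firing rate) computed
# separately for bar and dot trials, then correlated across neurons (Fig. 5).
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def directional_signal(trials: pd.DataFrame, image_type: str) -> float:
    sub = trials[trials["image_type"] == image_type]
    return (sub.loc[sub["direction"] == "left", "rate"].mean()
            - sub.loc[sub["direction"] == "right", "rate"].mean())

def bar_vs_dot_correlation(per_neuron_trials):
    """per_neuron_trials: list of per-neuron DataFrames (balanced subset only)."""
    bar = np.array([directional_signal(t, "bar") for t in per_neuron_trials])
    dot = np.array([directional_signal(t, "dot") for t in per_neuron_trials])
    r, p = pearsonr(bar, dot)
    return r ** 2, p
```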



Fig. 5. Scatter plots of object-centered directional activity on dot trials (vertical axis) vs. bar trials (horizontal axis) for all neurons in each monkey. Each point represents 1 neuron. Directional signal was computed by subtracting the mean firing rate on trials when the right end of the reference image had been cued from the mean firing rate on trials when the left end had been cued, with consideration restricted to successfully completed trials within a subset of conditions selected so that the retinal location of the cue and the screen-location of the target were fully balanced across object-centered direction.

A few neurons, although exhibiting object-centered direction selectivity under both bar and dot conditions, nevertheless appeared to fire at different rates under the two conditions or appeared to carry object-centered signals of different strength. In the neuron of Fig. 6, firing during delay 1 was stronger on trials in which the right end of the image had been cued (Fig. 6: C and D vs. A and B). In addition, activity was stronger under bar conditions than under corresponding dot conditions (Fig. 6: B and D vs. A and C). The converse was true of the neuron shown in Fig. 7. During delay 2, after onset of the target and before the signal to respond, this neuron fired more strongly on trials when the right end of the reference image was the target (Fig. 7: E-H vs. A-D). However, its activity differed across dot and bar trials. On dot trials, it fired more strongly and showed an enhancement of the object-centered directional signal (the difference in firing rate between conditions in which the left or right end of the reference image had been cued). The strength of the directional signal can be estimated in Fig. 7 by comparing horizontally juxtaposed histograms (A vs. E; B vs. F; C vs. G; D vs. H). In each pair, the left histogram represents activity on trials in which the left end of the reference image had been cued and the right histogram represents activity on trials in which the right end had been cued, with other factors---the retinal location of the cue and the location of the target on the screen---held constant.



Fig. 6. Data from a neuron influenced, during delay 1, both by the cued end of the reference image (stronger firing for right) and by the type of reference image (stronger firing for bar). All conventions as for Fig. 3.



Fig. 7. Data from a neuron selective, during delay 2, both for the selected end of the target image (stronger firing for right) and for the type of target image (stronger firing for dot). Each histogram, with accompanying raster display, represents neuronal activity under a condition defined with respect to type of target image (bar or dot), location of target image (right, middle or left), and location of selected spot on target image (right or left). Juxtaposed to each histogram is a panel depicting the target image and the direction of the impending eye movement. The number in each panel indicates the corresponding trial condition (cf. Fig. 1D). A and E: conditions differing with respect to the selected end of the target array (A = left; E = right) but identical with respect to the location of the cue and the target on the screen. Three other pairs (B and F; C and G; D and H) stand in the same relation to each other. Other conventions as for Fig. 3.

The frequency with which neurons differentiated between bar and dot conditions is indicated by results summarized in Table 1. On the basis of the frequency with which main effects and interaction effects involving image type occurred, we draw the following conclusions. 1) Around a quarter of tested neurons showed a main effect of image type (bar vs. dot) during each epoch (31/143 during delay 1, 44/143 during delay 2, and 36/143 during the movement period). Each of these proportions is greater than expected by chance (P < 0.0001, χ2 test). 2) Among neurons in which there was a main effect of image type, those firing more strongly under dot conditions were markedly preponderant during later epochs (34/44 during delay 2 and 30/36 during the movement period). Each of these proportions is greater than expected by chance (P < 0.001, χ2 test). 3) In a minority of neurons, there was an interaction between object-centered direction and image type (17/143 during delay 1, 39/143 during delay 2, and 22/143 during the movement period). Each of these proportions is greater than expected by chance (P < 0.001, χ2 test). 4) Among neurons exhibiting a significant interaction between object-centered direction and image type, the preponderant pattern during later epochs was for the directional signal (the difference in firing rate between left-side-cued and right-side-cued conditions) to be stronger under the dot condition (26/39 during delay 2 and 16/22 during the movement period). Each of these proportions is greater than expected by chance (P < 0.05, χ2 test). The general conclusion arising from these observations is that neuronal activity (both net activity and differential activity dependent on object-centered direction) tended to be stronger under the dot condition but that the effect was weak.

Finally, we assessed whether the tendency of neurons to fire differently on bar and dot trials was related to their cortical location. For each neuron during each of three trial epochs---delay 1, delay 2, and the movement period---an ANOVA had been carried out indicating whether or not firing rate was significantly (P < 0.05) dependent on two factors (object-centered direction and image type) or their interaction. Thus at a location in the cortex where n neurons had been recorded, there were 3*n tests that might reveal a main effect of image type and 3*n tests that might reveal an interaction effect involving image type. For each cortical location, we counted the number of significant outcomes in each of four categories: main effect (firing greater under dot conditions), main effect (firing greater under bar conditions), interaction effect (difference in firing rate between left-on-image and right-on-image trials greater under dot conditions), and interaction effect (difference in firing rate between left-on-image and right-on-image trials greater under bar conditions). The results are shown in Fig. 8, A-H, where the size of each circle indicates the number of tests on data from that site yielding the indicated outcome. The figure reveals no clear trend toward segregation of sites exhibiting different patterns of dependence on image type.



Fig. 8. Cortical distribution of neurons exhibiting significant dependence on image type (bar vs. dot) in the bar-dot task. Left: monkey Ju. Right: monkey Po. Coordinates are with respect to the recording grid: (0,0) was at the center of the chamber. In each box, the area of each dark circle is proportional to the number of tests carried out on neurons at that site which yielded a particular form of significant dependence on image type. At a site where n neurons were studied, 3*n tests were carried out (direction selectivity was assessed independently during delay 1, delay 2, and the movement period for each neuron). The largest circle (in A) corresponds to a count of 14. A and B: main effect of image type, with the mean firing rate greater in dot trials. C and D: interaction effect between object-centered direction and image type, with the directional signal greater on dot trials. E and F: main effect of image type, with the mean firing rate greater in bar trials. G and H: interaction effect between object-centered direction and image type, with the directional signal greater in bar trials. Directional signal = the absolute value of difference in firing rate between trials on which the left vs. the right end of the image was cued.

In summary, our main finding is that most SEF neurons exhibiting object-centered direction selectivity under the standard condition used in our previous experiments (horizontal bar as reference image) also did so under a new condition (a pair of dots in a horizontal array as reference image). During each trial epoch, the firing of around a quarter of the neuronal sample was significantly affected by the type of image (bar or dot) either in the form of a main effect or in the form of an interaction with object-centered direction. Even in these cases, however, the preferred object-centered direction was the same under both conditions.

Possible influence of variations in ocular landing position

It is important to ask whether the signals interpreted in the preceding section as being object-centered possibly could have arisen from minor variations in the physical trajectory of the eyes. Accordingly, we analyzed saccades executed on bar and dot trials. We found that the trajectory of the eyes did vary slightly as a function of whether the target was the left or right end of a reference image. Especially in the case of a bar, the eyes did not land precisely on the end of the image but rather deviated inward toward its center. This is illustrated in Fig. 9, which shows eye-movement data from a single data-collection session. The symbols represent eye position over a period extending from 100 ms before to 100 ms after the instant of peak eye velocity for ~12 eye movements under each of 12 conditions. The reference image could be at any of three locations (left = L, middle = M, and right = R), and the target could be either the right end (r) or the left end (l) of the image. Thus there were six conditions in which the reference image was a bar and six in which it was a pair of dots. Among the six bar conditions (Fig. 9A), there were two pairs in which the targets were at the same location on the screen but at opposite ends of a bar (Lr vs. Ml and Mr vs. Rl). It is clear that the terminal direction of gaze was offset to the left by around half a degree on trials when the target was the right end of a bar (Lr and Mr) as compared with corresponding trials when the target was the left end of a bar (Ml and Rl, respectively). In contrast, under conditions in which the target was an array of dots (Fig. 9B), this tendency was vanishingly small.



Fig. 9. Superimposed eye movement traces for 12 conditions (12 trials each) during a single data collection session. Each bundle of traces is identified by an uppercase letter indicating the location of the reference image on the screen (L, left; M, middle; and R, right) and the locus of the target on the reference image (l = left end, r = right end). Traces collected during eye movements to the left versus right end of a reference image are represented by red crosses versus green circles. The consistent difference between endpoints of corresponding saccades executed under bar conditions (A) and dot conditions (B) indicates that, in bar conditions, the saccade was directed not to the extreme end of the bar but, rather, to a point a few tenths of a degree in from the end. Eye position was stored every 10 ms. On each trial, the instant of maximal eye velocity was identified. Trace for that trial consisted of 10 readings before and 10 readings after this instant, spanning a total period of 190 ms. All targets were at an elevation of 5.8°. The outermost targets (Ll and Rr) were 4.65° to the right and left of the center of the screen, respectively. Conditions shown here correspond to conditions 2, 4, 6, 7, 9, 11, 14, 16, 18, 19, 21, and 23 of Fig. 1D.

To determine how consistent this pattern was, we computed the mean landing point of the eyes (the location to which gaze was directed 70-100 ms after the instant of peak velocity) under each of four spatial conditions (Lr, Ml, Mr, and Rl) for both bar and dot reference images. The results are summarized in Fig. 10, which shows the mean, across all data collection sessions, of the ocular landing position associated with each condition (all SDs were between 0.05 and 0.25°). In each of eight comparisons (2 screen locations × 2 image types × 2 monkeys), the eyes landed at significantly different loci when the targets were at the same location on the screen but at opposite ends of their respective reference images (paired t-test, P < 0.05). In seven of eight comparisons, the eyes deviated toward the center of the reference image so as to land farther to the left when the target was the right end of a reference image and farther to the right when it was the left end. The sole exception arose in monkey Ju on comparison of eye movements to the right end of the middle dot array (Mr) and the left end of the right dot array (Rl). Across both monkeys and all four target locations, the mean horizontal displacement of the landing position on image-right as compared with image-left trials was 0.85° under the bar condition and 0.10° under the dot condition. This pattern of deviation is similar to the one observed by Edelman and Keller (1998) in monkeys trained to make eye movements to single target spots and exposed to occasional trials in which two spots came on simultaneously at radial directions 45° apart. On those trials, the eyes tended to land between the two targets; indeed, when the saccades were of express latency (<90 ms), they landed close to the middle of the array. The effect was mild in our study because, unlike Edelman and Keller, we trained monkeys to select one end or the other of the distributed pattern, withholding reward if they landed outside a target window centered on the correct end.
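
The landing-position measure used here (mean gaze location 70-100 ms after the instant of peak saccadic velocity, with eye position sampled every 10 ms) can be computed along the lines of the following sketch; the array layout and names are assumptions rather than the recorded data format.

```python
# Sketch of the ocular landing-position measure: mean gaze location during the
# window 70-100 ms after the instant of maximal saccadic velocity.
import numpy as np

def landing_position(x_deg, y_deg, sample_interval_ms=10):
    """x_deg, y_deg: horizontal and vertical eye position (deg), one sample
    every sample_interval_ms, spanning the saccade."""
    x_deg = np.asarray(x_deg, dtype=float)
    y_deg = np.asarray(y_deg, dtype=float)
    vx = np.gradient(x_deg) / sample_interval_ms
    vy = np.gradient(y_deg) / sample_interval_ms
    i_peak = int(np.argmax(np.hypot(vx, vy)))        # instant of peak velocity
    lo = i_peak + 70 // sample_interval_ms           # 70 ms after peak ...
    hi = i_peak + 100 // sample_interval_ms + 1      # ... through 100 ms after
    return float(np.mean(x_deg[lo:hi])), float(np.mean(y_deg[lo:hi]))
```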



Fig. 10. Mean landing positions of the eyes under 8 critical trial conditions (conditions used for statistical analysis of object-centered direction selectivity) in each of 2 monkeys. Positions were measured during the period from 70 to 100 ms after attainment of maximal saccadic velocity. Each mean was computed by finding the average horizontal and vertical positions of the eyes for each data collection session and then averaging the averages across all sessions. Filled and open symbols represent landing positions on bar and dot trials, respectively. Lr: target = right end of reference image at left location (conditions 11 and 23 in Fig. 1D). Ml: target = left end of reference image at middle location (conditions 4 and 16 in Fig. 1D). Mr: target = right end of reference image at middle location (conditions 9 and 21 in Fig. 1D). Rl: target = left end of right reference image (conditions 2 and 14 in Fig. 1D). In cases Lr and Ml, the target was located at superior 5.8°, left 4.65°, relative to presaccadic fixation. In cases Mr and Rl, the target was located at superior 5.8°, right 4.65°. Degrees on the graphs are positive upward and to the right.

Given that the orbital directions of the eye movements varied subtly but systematically between image-right and image-left trials, we considered the possibility that neurons exhibiting apparent object-centered direction selectivity were simply selective for the orbital directions of eye movements. To assess this possibility, we carried out a test summarized in Fig. 11. The test was applied to each neuron for which the ANOVA had revealed a significant main effect of object-centered direction. For each such neuron, the test was applied independently to each epoch in which a significant effect had been present. For each such epoch, it was applied to each image type. Each test focused on those four trial conditions in which the target was the right end of an image at the left location (Lr), the left end of an image at the middle location (Ml), the right end of an image at the middle location (Mr), or the left end of an image at the right location (Rl). For each of these conditions, we computed the mean horizontal coordinate of the eyes' landing position: X(Lr), X(Ml), X(Mr), and X(Rl). We also computed the mean observed firing rate: O(Lr), O(Ml), O(Mr), and O(Rl). We next fitted a line to the four points representing O as a function of X. Then for each condition, we computed the firing rates predicted on the assumption that firing rate was a linear function of X: P(Lr), P(Ml), P(Mr), and P(Rl). Finally, we computed two object-centered directional signals: the one actually observed---0.5 * [O(Mr) - O(Rl) + O(Lr) - O(Ml)]---and the one predicted from the linear function---0.5 * [P(Mr) - P(Rl) + P(Lr) - P(Ml)]. Figure 11 shows the results of applying this procedure to a single case---neuron ju152a41, delay 2, dot conditions---for which eye-position data are shown in Fig. 9B and firing rate data in Fig. 7, A, C, E, and G. The observed object-centered directional signal was 9.6 spikes/s, in marked contrast to the object-centered directional signal predicted on the basis of linear dependence on horizontal landing position (-0.23 spikes/s). Given the fact that the predicted signal was 42 times smaller in amplitude than the observed signal, not to mention opposite in sign, we conclude that orbital direction selectivity cannot explain this neuron's object-centered direction selectivity.
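
The control analysis illustrated in Fig. 11 reduces to a linear fit and two weighted sums, sketched below under the assumption that the four condition means (horizontal landing position and firing rate) have already been computed; the directional-signal formula is the one given above.

```python
# Sketch of the landing-position control (Fig. 11): fit firing rate as a linear
# function of horizontal landing position across the 4 critical conditions,
# then compare observed and predicted object-centered directional signals.
import numpy as np

CONDITIONS = ["Lr", "Ml", "Mr", "Rl"]

def observed_vs_predicted_signal(x_land, rate):
    """x_land, rate: dicts keyed by condition giving mean horizontal landing
    position (deg) and mean firing rate (spikes/s)."""
    x = np.array([x_land[c] for c in CONDITIONS])
    o = np.array([rate[c] for c in CONDITIONS])
    slope, intercept = np.polyfit(x, o, 1)        # best-fit line O = slope*X + intercept
    p = slope * x + intercept                     # rates predicted from landing position alone
    def signal(values):
        d = dict(zip(CONDITIONS, values))
        return 0.5 * (d["Mr"] - d["Rl"] + d["Lr"] - d["Ml"])
    return signal(o), signal(p)                   # observed vs. predicted directional signal
```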



Fig. 11. Method for assessing whether measured values of object-centered direction selectivity could arise spuriously from systematic variation of ocular landing position across task conditions. Method is illustrated with data collected from neuron ju152a41 during the 2nd delay period under dot conditions. Mean neuronal activity under conditions Rl, Ml, Mr, and Lr can be judged by inspection of the histograms of Fig. 7, A, C, E, and G, respectively. Mean ocular landing position under the same conditions can be judged by inspection of Fig. 9B. Four points in this graph are at locations corresponding to the combinations of observed firing rate (vertical axis: O) and horizontal ocular landing position (horizontal axis: x) observed under the 4 conditions. Line (y = 15.57 + 1.27x) is the linear function relating firing rate (spikes/s) to the horizontal coordinate of the ocular landing position (deg) that provides the best fit to these points. Positive slope reflects an overall tendency for firing to be stronger on trials in which the eye movement had a rightward component. Observed index of object-centered direction selectivity---as based on observed firing rate values and computed according to the formula 0.5 * [O(Lr) - O(Ml) + O(Mr) - O(Rl)]---was positive and large (9.6 spikes/s), reflecting the tendency of the neuron to fire more strongly on trials when the right end of the reference image was the target. To determine whether the observed pattern of object-centered direction selectivity could arise spuriously through a mechanism based solely on neuronal sensitivity to ocular landing position, we noted the firing rates at which the best-fit line (representing neuronal sensitivity to ocular landing position) intersected the observed landing positions. These firing rates [P(Lr), P(Ml), P(Mr), and P(Rl)] were the ones predicted on the basis of neuronal sensitivity to ocular landing position alone. Predicted index of object-centered direction selectivity---as based on these predicted firing rates and computed according to the formula 0.5 * [P(Lr) - P(Ml) + P(Mr) - P(Rl)]---was negative and small (-0.23 spikes/s). From the fact that the predicted index was forty times smaller than the observed index, and, incidentally, of opposite sign, we conclude that the appearance of object-centered direction selectivity in this neuron could not have arisen simply as a secondary manifestation of sensitivity to the ocular landing position. Method illustrated here was used to generate the measures of observed and predicted object-centered direction selectivity which are plotted against each other in Fig. 12.

The results for all neurons and epochs are summarized in Fig. 12, where, in each panel, the observed object-centered signal is plotted on the horizontal axis and the object-centered signal predicted on the basis of the landing-position hypothesis is plotted on the vertical axis. The range of predicted object-centered signals is minuscule as compared with the range of observed object-centered signals. In monkey Ju, the standard deviation of the observed values was greater than the standard deviation of the predicted values by factors of 9.7 and 38.1 under bar and dot conditions, respectively. In monkey Po, the corresponding factors were 5.3 and 56.1. In summary, if we assume that neuronal activity is related only to the eyes' landing position, form the best estimate of the linear function relating firing rate to landing position, take into account the differences in landing position across conditions, and compute the spurious "object-centered" directional signal predicted on the basis of those differences, then the predicted spurious signals are extremely small as compared with the signals actually observed in the experiment. We conclude that object-centered direction selectivity is not an artifact arising from subtle variations of the eyes' landing position across conditions.
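
At the population level the comparison reduces to a ratio of standard deviations. A minimal sketch, assuming the per-epoch observed and predicted indices have been collected into NumPy arrays (the values below are placeholders, not the recorded data):

    import numpy as np

    observed = np.array([9.6, -7.1, 12.3, -4.8, 6.0])      # illustrative indices (spikes/s)
    predicted = np.array([-0.23, 0.40, -0.90, 0.15, 0.60])

    ratio = observed.std(ddof=1) / predicted.std(ddof=1)
    print(f"SD(observed) / SD(predicted) = {ratio:.1f}")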



Fig. 12. Assessment of whether measured values of object-centered direction selectivity could arise simply from systematic variation of ocular landing position across task conditions. Each point represents data obtained from 1 neuron during a single epoch (delay 1, delay 2, or movement period) during use of a single type of reference image (bar or dots). Consideration was restricted to epochs in which statistical analysis had revealed significant object-centered direction selectivity. For each epoch, 2 indices were computed: the observed index of object-centered direction selectivity---as computed without taking into account the systematic variation of ocular landing position across conditions---and a predicted index---as computed by taking systematic variations into account and assuming that neuronal activity was solely a linear function of the horizontal landing position of the eyes. See legend to Fig. 11 for further details. Large measured values of object-centered direction selectivity (horizontal axis) are incommensurate with the small values predicted on the basis of sensitivity to ocular landing position alone (vertical axis). Interpretation that object-centered direction selectivity is an artifact of sensitivity to ocular landing position therefore is rejected.

Relation to selectivity for saccade direction in the memory-guided saccade task

Even if our recording sites were within the SEF as defined on morphological grounds, which we believe to have been the case, nevertheless neurons exhibiting object-centered direction selectivity in the bar-dot task might constitute a population distinct from intermingled neurons exhibiting selectivity for saccade direction in standard oculomotor tasks as described by previous authors. To cast light on this issue, we compared results obtained in the bar-dot task (Fig. 1) with those obtained in a memory-guided saccade task (Fig. 13). The latter task required monkeys to make eye movements to four targets at rightward, upward, leftward, and downward locations relative to fixation. The use of four targets at directions 90° apart, common in studies of the SEF (Chen and Wise 1995a,b, 1996, 1997; Schall 1991a,b), is warranted because SEF neurons are very broadly tuned for saccade direction and amplitude (Russo and Bruce 1996). In the context of the memory-guided saccade task, we studied a total of 125 neurons from monkey Ju (56 and 69 in the left and right hemispheres, respectively) and 156 neurons from monkey Po (79 and 77 in the left and right hemispheres, respectively). Many but not all of these neurons also were studied in the bar-dot task (62 in monkey Ju and 52 in monkey Po).



Fig. 13. Memory-guided saccade task. A-F: screen in front of the monkey during successive epochs of a single representative trial. Center of each circle indicates the monkey's direction of gaze during the corresponding trial epoch and the arrow indicates the direction of the eye movement. All other items are patterns visible to the monkey. A: white fixation spot appeared at the center of the screen and the monkey achieved foveal fixation. B: 4 potential targets (white dots 0.4° in diameter) appeared at locations 9.6° rightward, leftward, upward, and downward relative to fixation. C: white cue (1.2° in diameter) flashed on 1 of the targets. D: during an ensuing delay period, the monkey maintained central fixation. E: extinction of the central fixation spot signaled the monkey to initiate an eye movement. F: monkey made a saccade directly to the previously cued target.

First, we asked whether there was any systematic pattern in the topographic distribution of neurons exhibiting saccade-direction selectivity. We based this analysis on all neurons studied in the memory-guided saccade task, regardless of whether they were also studied in the bar-dot task. The significance (P < 0.05) of each neuron's selectivity for eye-movement direction was assessed by means of an ANOVA with direction (right, up, left, or down) as the single factor and firing rate as the dependent variable. This was done independently for the delay period (from onset of the cue to offset of the fixation spot) and the movement period (from offset of the fixation spot to 100 ms after completion of the saccade). Thus at a cortical location where n neurons had been recorded, 2*n tests of significance were carried out. For each cortical location, we computed the percentage of tests that yielded a significant outcome. The results are shown in Fig. 14, A and B, where the size of each circle indicates the percentage of tests on data from that site that indicated significant selectivity for eye-movement direction. Inspection of this figure reveals that orbital direction selectivity was comparatively widespread across the recording sites sampled in each monkey. Further, the patterns of regional arrangement showed no clear trends of a form consistent across hemispheres or monkeys. Finally, comparison of sites yielding significant selectivity for eye-movement direction (Fig. 14, A and B) with sites yielding significant selectivity for object-centered direction (Fig. 4, A and B) reveals that the two sets were largely overlapping.
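
The per-site tally can be reproduced with one one-way ANOVA per neuron per epoch. The sketch below assumes SciPy's f_oneway and a simple data layout in which each neuron is a dictionary mapping epoch to per-direction arrays of firing rates; the layout and variable names are illustrative, not the actual analysis code.

    import numpy as np
    from scipy.stats import f_oneway

    DIRECTIONS = ("right", "up", "left", "down")
    EPOCHS = ("delay", "movement")

    def percent_significant(site_neurons, alpha=0.05):
        # 2 * n tests for a site with n neurons: one ANOVA (factor: direction)
        # per neuron per epoch; return the percentage of significant outcomes.
        n_tests = n_sig = 0
        for neuron in site_neurons:
            for epoch in EPOCHS:
                groups = [neuron[epoch][d] for d in DIRECTIONS]
                _, p = f_oneway(*groups)
                n_tests += 1
                n_sig += p < alpha
        return 100.0 * n_sig / n_tests

    # Toy example: one neuron with random firing rates (12 trials per direction).
    rng = np.random.default_rng(0)
    neuron = {e: {d: rng.poisson(20, 12).astype(float) for d in DIRECTIONS}
              for e in EPOCHS}
    print(percent_significant([neuron]))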



Fig. 14. Cortical distribution of neurons exhibiting significant eye-movement direction selectivity in the memory-guided saccade task for monkey Ju (A) and monkey Po (B). Coordinates are with respect to the center of the recording grid (0,0). Each site at which recording was carried out during performance of this task is marked by a circle. Area of each dark circle is proportional to the percentage of tests revealing significant eye-movement direction selectivity at the corresponding site. Where n neurons were studied, 2*n tests were carried out (direction selectivity was assessed independently during the delay period and the movement period for each neuron). Largest circles represent cases in which 100% of tests yielded a significant result. Sites at which 0% of tests yielded a significant result are indicated by small open circles.

Next we carried out a set of comparisons intended to reveal whether the presence or sign of direction selectivity in the memory-guided saccade task was correlated with the presence or sign of object-centered direction selectivity in the bar-dot task. This analysis was restricted to 114 neurons studied in both tasks. Data collected from each neuron during the memory-guided saccade task were assessed to determine whether the firing rate was significantly affected by vertical direction (upward vs. downward trials) or horizontal direction (rightward vs. leftward trials). Each comparison was carried out on data from the delay period (cue onset to fix-spot offset) and the movement period (fix-spot offset to 100 ms after completion of the saccade). Four t-tests indicated whether the neuron was significantly (P < 0.05) selective for direction as defined with respect to the horizontal and vertical axes during the delay and movement epochs. Neurons were tested for object-centered direction selectivity by means of an ANOVA as described in an earlier section.
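
A sketch of the four per-neuron tests, assuming SciPy's two-sample t-test and per-trial firing rates grouped by cued direction (the names and data layout are illustrative):

    import numpy as np
    from scipy.stats import ttest_ind

    def axis_selectivity(rates_by_direction, alpha=0.05):
        # rates_by_direction: dict mapping "up", "down", "right", "left" to arrays
        # of per-trial firing rates for one epoch. Two t-tests per epoch; applying
        # this to the delay and movement epochs yields the 4 tests per neuron.
        _, p_vertical = ttest_ind(rates_by_direction["up"], rates_by_direction["down"])
        _, p_horizontal = ttest_ind(rates_by_direction["right"], rates_by_direction["left"])
        return {"vertical": p_vertical < alpha, "horizontal": p_horizontal < alpha}

    # Toy usage with random data.
    rng = np.random.default_rng(1)
    delay_rates = {d: rng.poisson(15, 10).astype(float)
                   for d in ("up", "down", "right", "left")}
    print(axis_selectivity(delay_rates))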

We first asked whether selectivity for vertical direction, as observed in the memory-guided saccade task, was related to object-centered direction selectivity, as observed in the bar-dot task. It was reasonable to pose this question because all eye movements required in the bar-dot task were in an upward direction. We calculated the numbers of neurons exhibiting various combinations of selectivity during three pairs of epochs: delay 1 in the bar-dot task versus delay in the memory-guided saccade task; delay 2 in the bar-dot task versus delay in the memory-guided saccade task; and movement period in the bar-dot task versus movement period in the memory-guided saccade task. The results are presented in Fig. 15, A and B. In each panel, the rows contain counts of cells exhibiting (S) or not exhibiting (N) significant object-centered direction selectivity in the bar-dot task. Likewise, the columns contain counts of cells not exhibiting vertical direction selectivity (N) or significantly favoring downward (D) or upward (U) movements in the memory-guided saccade task. We carried out two chi-square tests on these counts. The first test, applied to all neurons, assessed whether the presence of selectivity for eye-movement direction as defined with respect to the vertical axis (in the memory-guided saccade task) was correlated with the presence of selectivity for object-centered direction (in the bar-dot task). Overall, across all pairs of epochs in both monkeys, there was a slight trend for object-centered direction selectivity to be more common among neurons selective for vertical direction than among those not so selective (56 vs. 46%). However, this tendency did not achieve significance (P < 0.05) for any pair of epochs in either monkey. The second test, applied only to neurons exhibiting selectivity for direction as defined with respect to the vertical axis, assessed whether the preferred vertical eye-movement direction (upward or downward) was correlated with the presence of selectivity for object-centered direction. Overall, across all pairs of epochs in both monkeys, there was a slight trend for object-centered direction selectivity to be more common in neurons selective for upward than in those selective for downward movement (58 vs. 41%). However, this tendency did not achieve significance (P < 0.05) for any pair of epochs in either monkey.
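
Both tests are contingency-table tests on counts of the kind plotted in Fig. 15. A minimal sketch using SciPy's chi2_contingency, with purely illustrative counts in place of the real ones:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: object-centered selectivity in the bar-dot task (absent, present).
    # Columns: vertical direction selectivity in the memory-guided saccade task
    # (absent, prefers downward, prefers upward). Counts are illustrative.
    counts = np.array([[30, 8, 12],
                       [25, 6, 16]])

    # Test 1 (all neurons): is the presence of vertical direction selectivity
    # associated with the presence of object-centered selectivity?
    presence = np.column_stack([counts[:, 0], counts[:, 1] + counts[:, 2]])
    chi2_1, p_1, _, _ = chi2_contingency(presence)

    # Test 2 (vertically selective neurons only): is the preferred vertical
    # direction (D vs. U) associated with object-centered selectivity?
    chi2_2, p_2, _, _ = chi2_contingency(counts[:, 1:])

    print(p_1, p_2)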



Fig. 15. Counts of neurons exhibiting various combinations of object-centered direction selectivity (in the bar-dot task) and eye-movement direction selectivity (in the memory-guided saccade task). A and B: object-centered direction selectivity vs. selectivity for vertical eye-movement direction in monkeys Ju and Po, respectively. Within each box, rows represent neurons exhibiting specific patterns of object-centered direction selectivity (N, no significant selectivity; S, significant selectivity), whereas columns represent neurons exhibiting specific patterns of vertical eye-movement direction selectivity (N, no significant vertical selectivity; D, significant vertical selectivity favoring downward movements; U, significant vertical selectivity favoring upward movements). Boxes at left, middle, and right compare results in delay 1, delay 2, and the movement period of the bar-dot task to results in the delay period, the delay period, and the movement period of the memory-guided saccade task. C and D: object-centered direction selectivity vs. selectivity for horizontal eye-movement direction in monkeys Ju and Po, respectively. Within each box, rows represent neurons exhibiting specific patterns of object-centered direction selectivity (N, no significant selectivity; I, significant object-centered selectivity favoring the ipsilateral end of the image; C, significant object-centered selectivity favoring the contralateral end of the image), whereas columns represent neurons exhibiting specific patterns of horizontal eye-movement direction selectivity (N, no significant horizontal selectivity; I, significant horizontal selectivity favoring ipsiversive movements; C, significant horizontal selectivity favoring contraversive movements). Boxes at left, middle, and right compare results in delay 1, delay 2, and the movement period of the bar-dot task to results in the delay period, the delay period, and the movement period of the memory-guided saccade task.

We next asked whether selectivity for horizontal direction, as observed in the memory-guided saccade task, was related to object-centered direction selectivity, as observed in the bar-dot task. We wished to determine whether object-centered direction selectivity was especially common among neurons exhibiting selectivity for horizontal eye-movement direction. Further, in cases where selectivity was present in both tasks, we wished to determine whether the preferred directions matched. We compared results between three pairs of epochs as described in the preceding paragraph. Counts of neurons exhibiting various combinations of significant selectivity during these epochs are presented in Fig. 15, C and D. In each panel, the rows contain counts of neurons not exhibiting object-centered direction selectivity (N) or significantly favoring the image's ipsilateral (I) or contralateral (C) end in the bar-dot task. Likewise, the columns contain counts of neurons not exhibiting horizontal direction selectivity (N) or significantly favoring ipsiversive (I) or contraversive (C) eye movements in the memory-guided saccade task. We carried out two chi-square tests on these counts. The first test, applied to all neurons, assessed whether the presence of selectivity for eye-movement direction as defined with respect to the horizontal axis (in the memory-guided saccade task) was correlated with the presence of selectivity for object-centered direction (in the bar-dot task). Overall, across all pairs of epochs in both monkeys, there was a slight trend for object-centered direction selectivity to occur more frequently in cases where horizontal eye-movement direction selectivity was present than in cases where it was absent (56 vs. 44%). However, this tendency did not achieve significance (P < 0.05) for any task epoch in either monkey. The second test, applied only to neurons exhibiting direction selectivity in both tasks, assessed whether the preferred horizontal eye-movement direction (contraversive or ipsiversive) was correlated with the preferred object-centered direction (contralateral-on-image or ipsilateral-on-image). Overall, across all pairs of epochs in both monkeys, neurons with matching preferences for object-centered direction (in the bar-dot task) and eye-movement direction (in the memory-guided saccade task) outnumbered those with nonmatching preferences (70 vs. 21%). When delay 1 in the bar-dot task was compared with the delay period in the memory-guided saccade task, the trend toward matching directional preferences was significant in monkey Po (P = 0.007) and approached significance in monkey Ju (P = 0.099). When delay 2 in the bar-dot task was compared with the delay period in the memory-guided saccade task, the trend toward matching directional preferences achieved significance in monkey Po (P = 0.020) but not in monkey Ju (P = 0.35). When the movement period in the bar-dot task was compared with the movement period in the memory-guided saccade task, the trend toward matching directional preferences achieved significance in monkey Ju (P = 0.038) and approached significance in monkey Po (P = 0.098). We conclude that the strongest correlation between results obtained in the bar-dot task and the memory-guided saccade task concerns preferred direction. Neurons preferring contraversive (or ipsiversive) eye movements in the memory-guided saccade task tend to prefer the contralateral (or ipsilateral) end of the image in the bar-dot task.
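
The preference-matching test amounts to a 2 x 2 contingency table restricted to neurons selective in both tasks. A sketch under the same assumptions as above; the counts are illustrative, and with tallies this small scipy.stats.fisher_exact would be a reasonable alternative:

    from scipy.stats import chi2_contingency

    # Rows: preferred object-centered side in the bar-dot task
    # (contralateral end, ipsilateral end of the image).
    # Columns: preferred eye-movement direction in the memory-guided saccade task
    # (contraversive, ipsiversive). Counts are illustrative.
    table = [[14, 3],
             [4, 9]]
    chi2, p, _, _ = chi2_contingency(table)
    # Matching preferences lie on the diagonal; a small p indicates that matches
    # occur more often than expected by chance.
    print(p)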

In summary, comparison between neuronal activity in the memory-guided saccade task and the bar-dot task revealed a significant trend for neurons preferring leftward (or rightward) saccades in the memory-guided saccade task to favor the left (or right) end of the bar or dot display. Two other trends were present but did not achieve significance. First, neurons selective for saccade direction with respect to either the vertical or the horizontal axis tended also to be selective for object-centered direction. Second, among neurons selective for vertical saccade direction, those selective for upward saccades (the direction required in the bar-dot task) tended to exhibit object-centered direction selectivity. The significant finding is compatible with the notion that there is a principled relation between the neuronal activity displayed in the two task contexts. The other two trends, if genuine, would provide additional support for this view.


    DISCUSSION

SEF neurons encode locations relative to bars and arrays

The most general finding of this study is that the SEF contains a population of neurons that encode the object-centered location of the target of an eye movement, doing so regardless of whether that location is defined with respect to a continuous image (a horizontal bar) or a discontinuous image (a pair of dots marking the ends of a virtual horizontal bar). Neurons firing preferentially when the monkey has selected the left (or right) end of a bar as a target also fire preferentially when he has selected the leftmost (or rightmost) of two dots in a horizontal array. The primary significance of this finding lies in its showing that the object-centered direction selectivity of SEF neurons (Olson and Gettner 1995, 1999) is robust across large changes in the visual properties of the reference object including ones that affect its physical continuity. The only precedent for such an effect is the finding of Niki (1974) that prefrontal neurons in monkeys performing a delayed alternation task fired at a level determined by the relative location of the previously selected lever (right or left) regardless of the absolute location of the two-lever array.

In light of this finding, one can imagine two general possibilities. 1) If the monkey can discriminate an image from the background and if, doing so, he makes eye movements to particular locations defined relative to the image, then SEF neurons will encode the object-centered directions of those eye movements. In other words, object-centered directional signals in the SEF can be referred to any discriminable object. 2) Some visual attributes of the image other than sheer discriminability do matter, but physical continuity is not one of those attributes. There are manipulations by which one could still further degrade the perceptual coherence of the dot array used in this experiment while leaving it discriminable. For example, the salience of the dot array could be reduced by presenting it against a field of multiple dots from which it is only minimally discriminable. Alternatively, its perceptual unity could be reduced by making the two dots different in color, shape, and motion. It is conceivable that the monkey still would be able to perform the task after these manipulations and yet that object-centered signals in the SEF would be reduced.

There is an apparent discrepancy between our finding of bar-dot equivalence and studies of brain-damaged humans demonstrating that neural pathways for "within-object" and "between-object" spatial vision are at least partially separate (Humphreys and Riddoch 1994, 1995). Humphreys and Riddoch have described bilateral parietal-lobe patients who neglect the left side of an array if treating it as an object (as in reading a printed word) but neglect the right side if treating it as an array (as in spelling out a written word one letter at a time). This result implies that separate populations of neurons in the intact parietal lobe must represent the right or left end of an object as opposed to the right or left element in an array. In contrast, we have found that the same population of SEF neurons encodes the right or left end of a bar and the right or left element in a dot-pair. This apparent discrepancy is potentially resolvable in at least two ways. Perhaps within-object and between-object signals are carried by separate populations of neurons at the level of parietal cortex but converge at the level of the SEF. Or perhaps our monkeys adopted a within-object set toward both the bars and the dot arrays with the result that the same within-object neuronal population was active during both kinds of trials. Observations on parietal neglect patients have suggested that the mode of processing is dependent on both the physical properties of the stimulus and the subject's instructional set. The influence of the stimulus is manifest in the fact that Humphreys and Riddoch's patient JR exhibited within-object and between-object patterns of neglect when carrying out an identical task on groups of elements that respectively suggested or did not suggest a single object. Further, a pair of neglect patients studied by Bisiach et al. (1994) made subtly different bisection errors according to whether the image being bisected was a horizontal line or a pair of dots. The influence of instructional set is manifest in the fact that two instructions ("read" or "spell") induced different modes of processing of the same material (a word) in Humphreys and Riddoch's patients. The implication of this dual set of observations is that presenting a pair of dots as a reference image probably favored but did not necessarily enforce the monkeys' processing it as two objects rather than one.

Some SEF neurons are sensitive to image type

The central finding of this study is that SEF neurons encode object-centered locations regardless of the physical continuity or discontinuity of the reference image; however, a secondary finding is that some neurons do fire more or less strongly according to whether the reference image is a bar or a pair of dots. During each task epoch, beginning with the onset of the reference image and ending with execution of the eye movement, around a quarter of SEF neurons fired at significantly different levels under dot and bar conditions with a majority firing more strongly under the dot condition. Likewise, neurons tended to carry stronger object-centered directional signals under the dot condition. These effects cannot be interpreted unequivocally on the basis of the present data alone. They could reflect neuronal sensitivity to the purely visual properties of the image, the requirement to make within-object versus between-object spatial judgements, or task difficulty. Certain manipulations rendering an oculomotor task more difficult already are known to elicit enhanced activity in the SEF. These include requiring the monkey to select a target by a learned arbitrary association (Olson and Gettner 1996) and requiring him to make an eye movement away from a cue (Gettner and Olson 1996; Schlag et al. 1997). Although there is little face validity to the notion that the dot condition should have been harder than the bar condition, the fact is that both monkeys made more errors under this condition. Regardless of which explanation turns out to be the correct one, this phenomenon does not detract from the main conclusion of this study, namely, that SEF neurons do encode object-centered locations regardless of whether the reference images are physically continuous or discontinuous.

Selectivity for object-centered direction versus saccade direction

A full analysis of the relation between object-centered direction selectivity and selectivity for saccade direction is outside the scope of this paper. However, we will comment briefly on the issue of how these two properties relate to each other. On comparing neuronal activity in the bar-dot task to neuronal activity in the memory-guided saccade task, we discovered one significant trend. Neurons preferring leftward (or rightward) saccades in the memory-guided saccade task tended to favor the left (or right) end of the bar or dot display. This finding seems to imply that the same neuron can carry object-centered signals in one context and eye-centered signals in another; however, it is conceivable that this is not so. For example, in the memory-guided saccade task, the signals that we have regarded as reflecting saccade direction might in fact encode the location of the target relative to a default reference object such as the screen or the array of possible targets. In this case, neuronal activity in both tasks would reflect object-centered direction. Alternatively, in the bar-dot task, on image-right (or -left) trials, the monkey might covertly program saccades to the right (or left) on the false premise that the end of the bar predicts the direction of the saccade. In this case, neuronal activity in both tasks would reflect the direction of the intended saccade. Arguments that there is only one kind of signal falter, however, when confronted with the observation that the same neuron can be influenced simultaneously by both eye- and object-centered direction during the second delay period in the bar-dot task, a period during which the monkey knows both the eye- and the object-centered direction of the impending saccade (Olson and Gettner 1995) (also Fig. 7 of this paper). For this reason, we believe that SEF neurons genuinely carry both eye- and object-centered signals. The confluence of eye- and object-centered signals in the SEF, far from being problematic, would make good sense if the area were at a level in the functional circuitry of the brain transitional between stages at which spatial representations are object and eye centered (Deneve and Pouget 1998) or if SEF neurons embodied a basis set from which representations relative to multiple reference frames including eye-centered and object-centered ones could be extracted (Pouget and Sejnowski 1999). The fact that any given neuron's activity is ambiguous (because it could reflect either eye-centered or object-centered direction) is no more a problem than in area V1 (where a given level of activity in a given neuron may arise from numerous combinations of orientation and contrast). In each case, ambiguity vanishes at the level of activity across a population.

Implications with respect to general functions of the SEF

The long-standing view that the SEF is a premotor area for eye movements (Schall 1997) is supported by two main observations: SEF neurons fire before and during saccades (Bon and Lucchetti 1992; Chen and Wise 1995a,b, 1996, 1997; Hanes et al. 1995; Mann et al. 1988; Mushiake et al. 1996; Russo and Bruce 1996; Schall 1991a,b; Schlag and Schlag-Rey 1985, 1987; Schlag-Rey et al. 1997) and electrical stimulation of the SEF elicits saccades (Chen and Wise 1995b; Fujii et al. 1995; Lee and Tehovnik 1995; Mann et al. 1988; Mitz and Godschalk 1989; Russo and Bruce 1993; Tehovnik and Lee 1993; Tehovnik and Sommer 1997; Tehovnik et al. 1994; Tian and Lynch 1995). Further, both recording and stimulation studies have shown that particular sites in the SEF represent directions with respect to motorically relevant frames of reference: frames centered on the eye (Bon and Lucchetti 1992; Mitz and Godschalk 1989; Russo and Bruce 1993, 1996; Schlag and Schlag-Rey 1987) and on the head (Bon and Lucchetti 1990, 1992; Lee and Tehovnik 1995; Schlag et al. 1992; Tehovnik 1995; Tehovnik and Lee 1993; Tehovnik et al. 1994). These observations, with their strong implication that the SEF is an oculomotor area, seem difficult to reconcile with the finding, reported both here and in our previous publications (Olson and Gettner 1995, 1999), that around half of SEF neurons signal the directions of eye movements as defined relative to an object-centered reference frame. Neurons that carry object-centered signals fire at different levels during physically similar eye movements if those eye movements are to different parts of a reference image. Thus their signals are unyoked from the kinematics and dynamics of the eye movements and, in that sense, are not motor signals. How are we to resolve the apparent contradiction between this finding and the classic view of the SEF as an oculomotor area?

There would be no contradiction between our findings and the classic view of the SEF as a motor area if it was the case that our recording sites were outside the SEF. Then we simply could conclude that neurons in an area adjacent to the SEF encode the object-centered locations of targets while SEF neurons participate in the programming of eye movements. The view that our recording sites were outside the SEF seems implausible, however, in light of the anatomic location of recording sites and the functional properties of neurons. We have measured the location of the recording sites relative to a standard set of morphological landmarks (Tehovnik 1995) and have shown that they lie within the zone demarcated in previous mapping studies based on electrical stimulation (Fig. 2). Within this zone, they are located relatively anteriorly, but they are not outside it. Further, they overlap the subregion of the zone in which eye movements have been elicited most frequently---i.e., in virtually all electrical stimulation studies to date (largest dots in Fig. 2B). Further, we have compared results obtained in the object-centered localization task to results obtained in a standard oculomotor test, the memory-guided saccade task. We have shown that cortical sites where neurons exhibit object-centered direction selectivity (Fig. 4) substantially overlap those sites where neurons exhibit selectivity for saccade direction (Fig. 14). Further, we have shown that a substantial number of neurons exhibits spatial selectivity in the context of both tasks and that, among these neurons, there is a significant tendency for those preferring rightward (or leftward) eye movements in the memory-guided saccade task to prefer right-on-image (or left-on-image) conditions in the object-centered localization task. On these grounds, we consider it very probable that our recording sites are within the confines of the SEF as delimited by other authors.

There would be no contradiction between our findings and the classic view of the SEF as a motor area if the apparently object-centered signals of the neurons in our study were correlated with the physical properties of the monkey's eye movements---properties that happened to covary with object-centered direction. To assess this possibility, we analyzed the directions of eye movements executed under different trial conditions. We found that the landing position of the eyes did deviate slightly away from the target location toward the center of the reference object on bar trials and, to a lesser degree, on dot trials (Figs. 9 and 10). However, it is unlikely that this alone could account for object-centered direction selectivity. Against this interpretation, we have shown that the measured orbital sensitivity of each neuron could account for only a small fraction of its measured object-centered directional sensitivity (Figs. 11 and 12). Further, we have shown that object-centered direction selectivity is at least as strong under dot conditions as under bar conditions (Fig. 5), whereas the deviation of the eyes from the target is smaller under dot than under bar conditions by almost an order of magnitude (Fig. 10). Thus we feel confident that object-centered directional signals in the SEF are not simply an artifact of variations in eye-movement direction. It might be suggested that even though the initial saccades were similar on image-right and image-left trials, they were followed up by second saccades that were in different directions. In particular, having fixated the right end of the reference image, monkeys might execute a saccade to its left end, and vice versa. Human and monkey studies have implicated the SEF in the execution of sequences of saccades; so it would not be surprising if neurons fired differentially at the outset of different sequences (Gaymard et al. 1990, 1993; Sommer and Tehovnik 1999). However, this interpretation is ruled out by the fact that the monkeys in our study were required to maintain fixation on the target for a period of 450-550 ms after foveating it, at which point the display was extinguished and reward delivered, so that there was no opportunity to execute a saccade to the other end of the target object. At the moment of reward delivery, both monkeys generally made large downward saccades; however, because data collection stopped at that point, we are not able to comment on the metrics of these movements.

Given that the neurons in our study are in the SEF and do carry object-centered signals, we are left with an apparent contradiction between the classic view that the SEF is an oculomotor area and the current finding that the activity of some of its neurons is unyoked from the physical parameters of eye movements. The simplest resolution to this contradiction is to suppose that the SEF contributes to oculomotor control at a comparatively early or abstract stage before the final programming of the movements. This view is actually consonant with an already existent body of evidence indicating that the functions of the SEF are comparatively far removed from the oculomotor periphery. In particular, the SEF, as compared with the FEF, exhibits a higher incidence of learning-related activity (Chen and Wise 1995b), a greater frequency of hand-movement-related activity (Mushiake et al. 1996), a higher current-threshold for elicitation of eye movements by electrical stimulation (Russo and Bruce 1993; Tehovnik and Sommer 1997), and a weaker impact of local inactivation on oculomotor performance (Sommer and Tehovnik 1999).

There are several general functions, antecedent to programming the physical parameters of eye movements, to which the SEF might contribute and in terms of which one might try to understand the phenomenon of object-centered direction selectivity. In this section, we consider and provisionally reject three possible interpretations before presenting a fourth that seems, on the basis of current evidence, to be the most plausible. We recognize, however, that this issue is a complex one and that final resolution will not be possible without further study.

1) Representing cues for eye movements. One potential explanation for object-centered direction selectivity is that SEF neurons mediate arbitrary learned associations between cues and eye movements. According to this argument, SEF neurons with object-centered direction selectivity are simply registering the occurrence of a visual event (appearance of the cue on the right or left end of the sample bar) possessing a learned association with a particular eye movement. This argument is fallacious because the sample-cue display is not, in fact, associated with a particular eye movement but rather with a particular rule for selecting the eye movement, given the location of the target bar. Further, this argument ignores the report of Chen and Wise (1995a,b) that SEF neurons, recorded in monkeys performing a pattern-conditional eye movement task, encode eye-movement direction and not cue identity.

2) Representing rules for eye movements. Another potential explanation for object-centered direction selectivity is that SEF neurons represent any arbitrary rule by which the monkey is prepared to select an eye-movement target. This would imply that select populations of SEF neurons should become active when the monkey is prepared to select not only the leftmost but also the reddest or the largest of a group of impending stimuli. We question this interpretation on two grounds. First, some SEF neurons fire differentially before visually guided eye movements to a spot incidentally superimposed on the left or right end of a task-irrelevant bar, thus carrying object-centered signals even when the monkey is not following an object-centered rule (Olson and Gettner 1995). Second, when the monkey is using a color rule (select as target either the red or green dot of a two-dot array), SEF neurons exhibit virtually no difference in activity on trials when the rule is "red" or "green" but differentiate strongly between trials on which the target dot happens to be the right or left end of its array (Olson et al. 1999).

3) Representing corrections of eye movements. A third potential explanation for object-centered signals is that they represent corrections imposed by the SEF on reflexively programmed eye movements. Suppose, as suggested by the results of Edelman and Keller (1998), that the onset of a target configuration automatically induces the programming of a reflexive eye movement that would bring the eye to the configuration's visual center of gravity. Correct performance in the bar-dot task then could be achieved by adding to this reflexive signal (directing the eyes to the center of the bar or array) a corrective signal (corresponding to the offset of the selected target from the center). This corrective signal would appear to be object-centered. This explanation is appealing because it links object-centered direction selectivity to a quite peripheral aspect of oculomotor control. It falters, however, in the face of the observation that object-centered signals are virtually unaffected by a doubling of the size of the target bar (unpublished results). One would expect neuronal activity encoding the corrective signal to change dramatically under these circumstances.

4) Representing locations of targets relative to landmarks. A final potential explanation for object-centered direction selectivity is that the SEF mediates the performance of eye movements under conditions such as those pertaining in this study, in which the location of the target is computed by triangulation from other elements visible in the scene. The ability to make eye movements, under the guidance of visible stimuli, to locations not at the center of gravity of those stimuli may seem like a skill with little use outside the laboratory. However, it is easy to imagine cases in which it would be worthwhile to look at a location where something is expected to appear, even when that point is not currently marked by any local detail, and in which getting to that point would be aided by taking into account elements visible elsewhere in the scene. Humans are able to use indirect spatial cues in this way, as evidenced by the fact that their saccades to a remembered location are more accurate if a visible landmark is present in some part of the scene (Karn et al. 1997). How often monkeys, under natural circumstances, execute a saccade guided by a set of scene elements to a location other than the center of gravity of those elements is an empirical question that remains to be resolved. On any occasion when they do so, the brain can be thought of as carrying out a process in which it combines the perceived eye-centered coordinates of a landmark with the stored landmark-centered coordinates of the target so as to compute the intended eye-centered coordinates of the saccade. We suggest that the SEF is part of a network responsible for this process and that, within that network, it occupies a level before the output level at which representations are purely motoric. This general account leaves many specific questions unanswered. For example, why do different neurons carry object-centered signals during different phases of task performance, and why do some neurons carry object-centered signals even during the movement epoch, after selection of the target has been finalized? Despite these limitations, the idea that the SEF participates in the guidance of eye movements by landmarks seems to provide the most plausible explanation for our results.
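
The computation proposed under interpretation 4 is, at its core, a change of reference frame by vector addition. A toy sketch with hypothetical coordinates (degrees of visual angle), intended only to make the bookkeeping explicit:

    import numpy as np

    # Perceived eye-centered coordinates of the landmark (e.g., the bar's center).
    landmark_in_eye = np.array([4.0, 9.6])
    # Stored landmark-centered coordinates of the selected target (e.g., the right end).
    target_in_landmark = np.array([1.6, 0.0])
    # Intended eye-centered coordinates of the saccade goal: vector addition.
    saccade_goal_in_eye = landmark_in_eye + target_in_landmark
    print(saccade_goal_in_eye)  # [5.6  9.6]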


    ACKNOWLEDGMENTS

We thank K. Rearick for excellent technical assistance.

C. R. Olson received support from the National Eye Institute (Grant RO1 EY-11831), which also provided technical support through Core Grant EY-08098.

Present address of L. Tremblay: INSERM U289, Pavillon Claude Bernard, Hôpital de la Salpêtrière, 47 Bld. de l'Hôpital, 75651 Paris Cedex 13, France.


    FOOTNOTES

Address for reprint requests: C. R. Olson, Center for the Neural Basis of Cognition, Mellon Institute, Room 115, 4400 Fifth Ave., Pittsburgh, PA 15213-2683.

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Received 9 August 1999; accepted in final form 2 November 1999.

