Human Oculomotor System Accounts for 3-D Eye Orientation in the Visual-Motor Transformation for Saccades

Eliana M. Klier1 and J. Douglas Crawford1, 2

Centre for Vision Research and 1 Department of Biology and 2 Department of Psychology, York University, Toronto, Ontario M3J 1P3, Canada

    ABSTRACT

Klier, Eliana M. and J. Douglas Crawford. Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. J. Neurophysiol. 80: 2274-2294, 1998. A recent theoretical investigation has demonstrated that three-dimensional (3-D) eye position dependencies in the geometry of retinal stimulation must be accounted for neurally (i.e., in a visuomotor reference frame transformation) if saccades are to be both accurate and obey Listing's law from all initial eye positions. Our goal was to determine whether the human saccade generator correctly implements this eye-to-head reference frame transformation (RFT), or if it approximates this function with a visuomotor look-up table (LT). Six head-fixed subjects participated in three experiments in complete darkness. We recorded 60° horizontal saccades between five parallel pairs of lights, over a vertical range of ±40° (experiment 1), and 30° radial saccades from a central target, with the head upright or tilted 45° clockwise/counterclockwise to induce torsional ocular counterroll, under both binocular and monocular viewing conditions (experiments 2 and 3). 3-D eye orientation and oculocentric target direction (i.e., retinal error) were computed from search coil signals in the right eye. Experiment 1: as predicted, retinal error was a nontrivial function of both target displacement in space and 3-D eye orientation (e.g., horizontally displaced targets could induce horizontal or oblique retinal errors, depending on eye position). These data were input to a 3-D visuomotor LT model, which implemented Listing's law, but predicted position-dependent errors in final gaze direction of up to 19.8°. Actual saccades obeyed Listing's law but did not show the predicted pattern of inaccuracies in final gaze direction, i.e., the slope of actual error, as a function of predicted error, was only -0.01 ± 0.14 (compared with 0 for RFT model and 1.0 for LT model), suggesting near-perfect compensation for eye position. Experiments 2 and 3: actual directional errors from initial torsional eye positions were only a fraction of those predicted by the LT model (e.g., 32% for clockwise and 33% for counterclockwise counterroll during binocular viewing). Furthermore, any residual errors were immediately reduced when visual feedback was provided during saccades. Thus, other than sporadic miscalibrations for torsion, saccades were accurate from all 3-D eye positions. We conclude that 1) the hypothesis of a visuomotor look-up table for saccades fails to account even for saccades made directly toward visual targets, but rather, 2) the oculomotor system takes 3-D eye orientation into account in a visuomotor reference frame transformation. This transformation is probably implemented physiologically between retinotopically organized saccade centers (in cortex and superior colliculus) and the brain stem burst generator.

    INTRODUCTION

Visual signals must be processed sequentially through several internal stages to generate accurate saccades. For example, visible light is initially coded on several two-dimensional (2-D) retinotopic maps including the retina, primary visual cortex, and the superficial layers of the superior colliculus (Hubel and Wiesel 1979; Sparks 1989). At a later stage, reticular formation burst neurons produce phasic signals, in a 3-D head-fixed coordinate system, that provide the "eye velocity" signal (to the motor neurons) necessary to drive the eyes in a certain direction at a certain speed (Crawford and Vilis 1992; Henn et al. 1989; Luschei and Fuchs 1972). However, it is unclear how the intermediate structures convert 2-D, oculocentric, sensory vectors into the 3-D, headcentric, motor vectors needed to drive the burst generator. In other words, how is retinal error (RE; the retinal distance and direction of the target image from the fovea, or alternatively, desired gaze direction relative to the eye) converted into the motor error (ME) command that drives the burst neurons?

One possibility is that the brain maps RE signals directly onto equivalent ME signals in the neural equivalent to a visuomotor "look-up table" (LT). This idea originated with the foveation hypothesis of Schiller (1972). Here, horizontal and vertical components of 2-D RE are input to a look-up table that simply maps RE onto ME displacements directly, without any comparisons with current eye position. This hypothesis also featured prominently in the displacement-feedback tradition of models founded by Jürgens et al. (1981). This scheme is often associated with a direct mapping between the superficial sensory and deeper motor layers of the superior colliculus (Moschovakis et al. 1988) and has been cited as the classic example of a sensorimotor look-up table (e.g., Churchland and Sejnowski 1992).

The second hypothesis was initially proposed by David Robinson and colleagues (Robinson 1975; Zee et al. 1976). We call this the "reference frame transformation" (RFT) hypothesis because it involves a transformation of eye-centered representations into head-centered representations. To do this, the RFT model uses comparisons between visual input and an internal representation of current eye position. In the first such comparison, information about eye position, derived from the burst neurons' integrated velocity signal, is added onto incoming RE to derive a desired eye position command. This signal is then transformed, via a second subtractive comparison to eye position, into an instantaneous ME command.

To date, experimental evidence has been cited in support of both models. First, retinotopic maps (sufficient for the LT model) are prevalent in the brain in such visuomotor areas as the occipital lobe, the posterior parietal cortex, and the superior colliculus (reviewed in Moschovakis and Highstein 1994). However, information regarding target position relative to the head or body (required for the RFT model) has also been identified in several areas including the thalamus, frontal cortex, and posterior parietal cortex (Andersen et al. 1985; Schlag and Schlag-Rey 1987; Sparks 1989). Second, the RFT model is capable of accounting for the ability to saccade to remembered target locations after intervening saccades (Hallet and Lightstone 1976; Sparks 1989), whereas the original LT model failed to emulate multiple saccades. The latter has been corrected by the addition of a "vector subtraction" mechanism upstream of the visuomotor transformation (Goldberg and Bruce 1990; Moschovakis and Highstein 1994; Waitzman et al. 1988). However, the mechanism for remembering target locations independent of eye movements may differ from the visuomotor transformation for saccades made directly to visual targets (Crawford and Guitton 1997; Henriques et al. 1998), which will be the focus of our experiments. In this context (direct visuomotor execution), the sequential adding and subtracting of eye position in the 1-D RFT model seems redundant. Indeed, a trivial mapping between RE and ME displacement codes seems completely sufficient to determine saccade direction and amplitude in both 1-D and 2-D models (Waitzman et al. 1991).
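
To make this 1-D redundancy concrete, the following minimal sketch (our own illustration with hypothetical function names, not code from any of the cited studies) writes out both schemes in one dimension. The eye-position terms in the RFT path cancel exactly, which is why the two hypotheses only come apart once the full 3-D geometry is considered.

    # Minimal 1-D sketch of the two schemes discussed above (hypothetical names).
    # All quantities are scalar angles in degrees.

    def lt_motor_error(retinal_error):
        """Look-up table (LT): motor error is read directly from retinal error."""
        return retinal_error

    def rft_motor_error(retinal_error, eye_position):
        """Robinson-style RFT: add eye position to obtain desired eye position,
        then subtract current eye position to obtain motor error."""
        desired_eye_position = retinal_error + eye_position   # eye- to head-centered
        return desired_eye_position - eye_position            # back to a motor displacement

    # In 1-D the two outputs are numerically identical:
    assert lt_motor_error(15.0) == rft_motor_error(15.0, eye_position=-30.0)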

Thus the practical difference between these two hypotheses seems ambiguous in abstract 1-D or 2-D models. However, a recent theoretical investigation has suggested that in real 3-D space, saccades cannot obey Listing's law and be accurate from all initial eye positions without an intermediate position-dependent reference frame transformation (Crawford and Guitton 1997). As pointed out by Crawford and Guitton (1997), RE, being eye-fixed, depends on the 3-D orientation of the eye as well as the configuration of the target in space (Fig. 1A). This would not be a problem if saccade axes were also eye-fixed, but Listing's law only allows such axes to rotate by half the angle of eye position (Helmholtz 1867; Tweed and Vilis 1990).


FIG. 1. Basic input-output geometry for saccades showing Listing's law and position-dependent retinal geometry in head coordinates. A: right side view of the right eye and head, with gaze elevated 30°. Eye position vectors fall within Listing's plane, which is viewed edge on. The head-fixed vertical axis for horizontal eye rotation also falls within Listing's plane. In contrast, the shortest-path axis of rotation () for a rightward saccade would be perpendicular to current gaze direction, i.e., eye-fixed. The actual axis of rotation (- - -) allowed by Listing's law is about halfway between the latter 2 axes (Tweed and Vilis 1990). B: behind view of same situation as in A, again, with gaze pointed 30° upward. The "horizontal" meridian of the retina is now tilted with respect to the head. Circle shows the points where light falling on this meridian would intersect with a sphere centered around the right eye (radius = gaze vector), as it would project onto Listing's plane. As targets are displaced further horizontally from current gaze in retinal coordinates, from 30° (□) to 60° (■) to 90° (○) retinal error (RE), the target displacement becomes more and more oblique in headcentric coordinates. Rightward rotation about the eye-fixed axis (shown in A) would cause gaze to sweep around this circle. Rightward rotation about the head-fixed axis would cause gaze to curve away from this circle (→). Rightward rotation compatible with Listing's law would produce an intermediate trajectory (- - -→). C: same situation in eye-fixed coordinates centered around current gaze. In these coordinates, the targets (□, ■, ○) are displaced horizontally, and the REs that would be satisfied by the gaze trajectories in B (- - -) curve obliquely, such that the deviation between these traces increases with eccentricity. (Larger and more complex patterns occur for tertiary and torsional eye positions). Thus a horizontal saccade will not satisfy horizontal RE at these eccentricities. For the saccade generator to acquire these targets, it must map a horizontal RE onto a nonhorizontal saccade. This imposes a position-dependent visuomotor reference frame problem in saccade generation that cannot be solved by any known eye muscle properties. See Crawford and Guitton (1997) for further details.

One possible solution is that the visuomotor transformation ignores the difference between RE and ME and approximates the above transformations with a fixed mapping between any one RE and any one ME (Hepp et al. 1993, 1997; Raphan 1997, 1998). This strategy would only produce minor errors in the peri-primary range (Crawford and Guitton 1997; Hepp et al. 1993, 1997). However, Crawford and Guitton (1997) demonstrated that any 3-D version of the LT model (Fig. 2A) would produce large directional inaccuracies for large saccades between eccentric targets and from initial torsional eye positions. Regardless, it has been suggested that the system would tolerate such errors in favor of simplifying the visuomotor transformation (Hepp et al. 1997). Indeed, Hepp et al. (1997) proposed that the function of Listing's law is to allow for the best approximation for an LT transformation to give reasonably accurate saccades while also providing a fixed torsional component for each gaze direction (i.e., Donders' law).


FIG. 2. Two 3-dimensional (3-D) models of the saccade generator tested in this paper. A: look-up table (LT) model. Components of RE are mapped directly onto components of ΔEi without any considerations or comparisons to the eye's current position in the head (E). We called this mapping a "look-up table." B: reference frame transformation (RFT) model. 2-D RE, or desired gaze direction relative to the eye (Gdeye), is rotated multiplicatively (Pi) by an internal representation of current eye position (E) to produce a desired gaze direction relative to the head (Gdhead). This accomplishes the necessary transformation of data from an oculocentric to a craniotopic reference frame. This command (still 2-D) is then input to a Listing's law operator (LL), described by Tweed and Vilis (1990), to give a 3-D desired eye position command (Ed). Finally, subtracting E from Ed results in ΔEi. For more details see Crawford and Guitton (1997). C: both models share the same downstream saccade generator. Displacement feedback from a resettable integrator is subtracted from initial motor error (ME; ΔEi) to compute instantaneous 3-D ME (ΔE). A rate-of-position-change signal (Ė) is then derived to drive the burst neurons, whose velocity output travels both straight to the motoneurons (MN) that move the eyes, as well as through an integrator that produces an eye position signal (E) with which the eyes maintain their final position. K and R represent the elasticity and viscosity estimates, respectively, used by the brain stem to overcome those found in the plant (the eye and its surrounding tissues and musculature). We have modeled the plant either as having head-fixed muscle pulling directions, requiring an internal implementation (Pi) of the "half-angle" rule (defined in text), or as a "linear plant" that implements the half-angle rule of Listing's law itself (the latter was used exclusively in our simulations of the LT model) (Quaia and Optican 1998).

In contrast, Crawford and Guitton (1997) assumed that the saccade generator does not sacrifice either accuracy or Listing's law within the oculomotor range. To convert 2-D, oculocentric, RE vectors into 3-D, head-fixed, ME vectors, they formulated a model that, in outline, bears a striking resemblance to Robinson's model (Fig. 2B). In this model, incoming 2-D RE was first rotated by an internal measure of current 3-D eye position, providing a measure of desired gaze direction relative to the head. The next step involved a Listing's law operator that performed a 2-D to 3-D transformation, giving rise to a 3-D command encoding desired eye position in Listing's plane. Finally, current eye position was subtracted from desired eye position to produce a 3-D ME signal that drove a feedback loop, containing a resettable displacement integrator, and subsequent burst neurons. It was suggested that these position-dependent transformations may be implemented implicitly (van Opstal and Hepp 1995; Zipser and Andersen 1988), such that only the inputs (RE) and outputs (ME) might be explicitly observed in the brain. In contrast to the 3-D LT model, this model produced accurate and kinematically correct saccades from all initial 3-D eye positions (Crawford and Guitton 1997).
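
For readers who prefer to see the pipeline written out, the following is a minimal numerical sketch of the three steps just described (rotation of RE by eye position, the Listing's law operator, and the final subtraction), under our own assumptions: Hamilton quaternions stored as [w, x, y, z], the torsional (primary gaze) axis along x, and motor error expressed as a difference of eye-position rotation vectors, as in the Fig. 2 caption. It is an illustration, not the authors' implementation.

    import numpy as np

    def q_mult(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def q_conj(q):
        return np.array([q[0], -q[1], -q[2], -q[3]])   # inverse of a unit quaternion

    def rotate(q, v):
        """Rotate vector v from eye coordinates into head coordinates: q v q^-1."""
        p = np.concatenate(([0.0], v))
        return q_mult(q_mult(q, p), q_conj(q))[1:]

    def listing_operator(gd_head, primary=np.array([1.0, 0.0, 0.0])):
        """Unique zero-torsion (Listing) eye position whose gaze points at gd_head."""
        c = np.cross(primary, gd_head)
        s = np.linalg.norm(c)
        if s < 1e-9:
            return np.array([1.0, 0.0, 0.0, 0.0])        # already at primary position
        angle = np.arctan2(s, np.dot(primary, gd_head))
        axis = c / s                                      # perpendicular to primary gaze,
        return np.concatenate(([np.cos(angle / 2.0)],     # hence within Listing's plane
                               np.sin(angle / 2.0) * axis))

    def rot_vec(q):
        """Angle*axis (rad) representation of an eye-position quaternion."""
        angle = 2.0 * np.arccos(np.clip(q[0], -1.0, 1.0))
        n = np.linalg.norm(q[1:])
        return np.zeros(3) if n < 1e-9 else (angle / n) * q[1:]

    def rft_motor_error_3d(re_eye, q_eye):
        """3-D motor error for a retinal error re_eye (unit vector, eye coordinates),
        given the current eye-in-head orientation q_eye."""
        gd_head = rotate(q_eye, re_eye)              # RE rotated by current eye position
        q_desired = listing_operator(gd_head)        # 2-D gaze -> 3-D position in Listing's plane
        return rot_vec(q_desired) - rot_vec(q_eye)   # Ed - E gives the initial motor error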

Surprisingly, no one has yet simultaneously evaluated Listing's law and saccade accuracy over a large enough range to distinguish between the 3-D LT and RFT models experimentally. Furthermore, a rigorous test between these hypotheses would require a geometrically correct computation of RE, which, remarkably, has not yet been done [beyond local measures of "false torsion" at tertiary positions (Helmholtz 1867)]. Finally, these actual measures would have to be input to 3-D versions of the RFT and LT models to compare their predictions against actual saccade trajectories. Our goal was to combine these approaches to determine whether the visuomotor transformation for saccades uses a look-up table to approximately satisfy RE, or whether it makes the proper compensation for eye position.

    THEORETICAL PREDICTIONS

This section describes the simulations and predictions that motivated the specific paradigms used in this study. First, we examined the geometrically unavoidable, yet often ignored prediction that RE depends not only on target displacement in space, but also on eye orientation in Listing's plane. During saccades, 3-D eye position vectors (which describe eye orientation as an axis of rotation) lie in a 2-D plane known as Listing's plane (e.g., Tweed and Vilis 1990). One particular gaze direction, that which is orthogonal to the plane, is referred to as primary position. If this orthogonal direction is defined as the torsional axis, then Listing's plane becomes the plane of zero torsion (Westheimer 1957). It has been demonstrated that, to keep eye position vectors in Listing's plane, axes for horizontal saccades must tilt out of Listing's plane by half the angle of vertical gaze deviation from primary position. We will refer to this as the "half-angle" rule (Tweed and Vilis 1990).
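
Stated numerically, the rule reads as follows (a back-of-envelope restatement with our own helper names, not anything taken from the original papers):

    import numpy as np

    # For a horizontal saccade made with gaze at vertical eccentricity theta_v
    # (deg from primary position), the rotation axis must leave Listing's plane
    # by theta_v / 2 if eye position is to remain in the plane.

    def axis_tilt_out_of_listings_plane(theta_v_deg):
        return theta_v_deg / 2.0

    def torsional_to_horizontal_velocity_ratio(theta_v_deg):
        # Equivalent formulation: the ratio of the torsional to the horizontal
        # angular-velocity component is tan(theta_v / 2).
        return np.tan(np.radians(theta_v_deg) / 2.0)

    print(axis_tilt_out_of_listings_plane(40.0))           # 20.0 deg of axis tilt at 40 deg up
    print(torsional_to_horizontal_velocity_ratio(40.0))    # ~0.36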

Crawford and Guitton (1997) pointed out that even for eye positions in Listing's plane, an error arises if one assumes that a target displaced horizontally, from an initial fixation point in space, produces horizontal RE. For example, they showed that horizontal REs correspond to nonhorizontal lines in space, depending on eye position (Fig. 1B), and conversely, that when the eye is oriented at any vertical or torsional position, targets displaced horizontally in space stimulate nonhorizontal (i.e., oblique) REs (Fig. 1C). (Contrary to common belief, this effect even occurs at secondary eye positions, but it becomes even more complex at the tertiary positions described below.) The challenge for the oculomotor system is then to generate horizontal saccades from these oblique RE signals, or to deal with the consequences of inaccurate target foveation (Crawford and Guitton 1997).

To test this, we first simulated a saccade paradigm, shown in Fig. 3A, similar to the one described in Crawford and Guitton (1997) [for model equations, see the appendix in Crawford and Guitton (1997)]. Data simulations are illustrated in Listing's coordinates (thus the origin corresponds to primary gaze direction). Five leftward fixation lights (■) and their horizontally paired target lights (●) were separated by 60° in space, symmetrically about the ordinate, and ranged in elevation from +40° to -40° at 20° intervals. Subjects foveated one of the five leftward fixation lights until its corresponding rightward target light was briefly flashed, at which time they were required to make a saccade and foveate the target as accurately as possible.


FIG. 3. Accuracy of the 3-D RFT and LT models (illustrated in Fig. 2) for simulated horizontal saccades. A: 5 initial fixation lights (■) and their paired target lights (●) are separated by 60° symmetrically about the ordinate, at 5 different vertical elevations (0°, ±20°, and ±40°). Horizontal and vertical components of gaze are plotted in Listing's coordinates (thus the origin corresponds to primary gaze position). Simulated lines (- - -) of lights that would stimulate the vertical and horizontal meridians of the eye at each initial eye position (■) are also shown. B: REs (●) caused by each target light while the eye foveated the initial light were computed for each of the 5 light pairs. C: LT model (■■■) produces systematic, position-dependent errors in final saccade direction. This error is only absent along the abscissa, but then increases with increased eccentricity from primary position (directional error of 6.1° for targets at ±20° and 13.9° for targets at ±40°). D: both the RFT models, with the standard plant (□□□) and the linear plant (■■■), consistently predict accurate saccade endpoints that coincide with the targets' locations.

Normally, this task would be assumed to evoke exclusively horizontal REs, but this is not correct. For example, Fig. 3A also shows the simulated lines (- - -) of lights that would stimulate the vertical and horizontal meridians of the eye at each initial eye position (■). The rightward horizontal retinal lines (- - -) follow a characteristic curving pattern, first curving centrifugally (related to false torsion), and then curving more strongly in the centripetal direction. By corollary, lines that are straight in these head coordinates should curve in retinal coordinates. Figure 3B shows these simulated REs, calculated when the targets (●) in Fig. 3A were converted into oculocentric coordinates. This was done by rotating the five rightward target directions by the inverse of initial 3-D eye position at the five corresponding leftward lights. It is apparent that, except across primary position, the resultant REs (●) were oblique, in a position-dependent pattern, where the degree of "fanning out" was proportional to the target's initial vertical position. Thus we predict that targets displaced horizontally in space do not simply cause horizontal RE. The true value of RE will depend on both the initial position of the eye-in-head and the relative locations of the targets in space.

How then, would our two alternative models (Fig. 2) handle this pattern of inputs? Figure 3C depicts simulated horizontal saccades, at the same five vertical elevations (0°, ±20°, and ±40°) depicted in Fig. 3A, for the LT model. The eye began each movement positioned 30° to the left, and simulated RE was computed as described above. Although not shown here, the LT model correctly upheld Listing's law in these circumstances (Crawford and Guitton 1997). However, it produced gaze shifts (■) that were only accurate along the abscissa (i.e., across primary position). Otherwise, it predicted position-dependent inaccuracies that increased with increased displacement from primary position. Essentially, this occurred because Listing's law only allowed the eye to rotate about the axis orthogonal to the RE vector across primary position, but then caused these two axes to diverge by half the angle of upward or downward vertical gaze. Notice that, not surprisingly, the erroneous pattern of saccade trajectories closely resembled the pattern of REs calculated in Fig. 3B. This is because the LT model must output ME commands directly from RE input. Errors in final gaze direction (○) of 6.1 and 13.9° were predicted at the ±20 and ±40° elevations, respectively (again, assuming that primary position fell within the middle of the range).

Figure 3D illustrates the predicted outcomes of the RFT model, which takes eye position into account. The two trajectories shown represent the outcomes of two versions of the ocular plant (the eye globe and its surrounding musculature and tissues). With the "standard plant," eye muscle activation relative to the head is independent of eye position and thus requires an internal implementation of the half-angle rule (Crawford 1994), whereas with the "linear plant," the eye muscles tilt by half the angle of eye eccentricity, in line with the pulley hypothesis (Demer et al. 1995; Miller 1989; Miller et al. 1993; Quaia and Optican 1998; Raphan 1998). Both models predicted the same endpoints, but the linear plant (■) predicted straight gaze trajectories independent of the eye's initial eccentricity, whereas the standard plant (□) predicted trajectories that curve as a function of initial eye position. In either case (i.e., independent of plant characteristics), by taking eye position into account, the models made no appreciable errors and led to accurate final foveation (○) of each of the five targets.

In addition to testing our two models with eye positions in Listing's plane, we also tested whether torsional eye positions out of Listing's plane are compensated for in a similar fashion. A well-known method of inducing such ocular torsion in humans is to rotate the head about the occipital-nasal axis. This, in turn, induces an ocular counterroll in a direction opposite to head rotation, and thus the eyes assume a torsional component out of Listing's plane (Crawford and Vilis 1991; Haslwanter et al. 1992). This torsion produces a misalignment between the retina and head that is problematic both for perception (Wade and Curthoys 1997) and saccade generation (Crawford and Guitton 1997). For example, Fig. 4 illustrates saccades made from a central fixation point to eight targets, displaced by 30° from the center, in the four cardinal and four diagonal directions. The simulation in Fig. 4A was programmed such that the head was upright and the eyes had no initial torsional component. This is indicated on the left side by the head caricature, and is evident on the right side where corresponding 3-D eye positions remained in Listing's plane (i.e., the plane of zero torsion). This led to simulations, as shown by the eye-in-head trajectories on the left side, in which both models correctly foveated all eight targets. Figure 4B included a counterclockwise (CCW) eye torsion of 10°. The RFT (□) model took this deviation into account and thus produced accurate eye movements, whereas the LT (···○) model, which directly maps RE onto ME, output consistently inaccurate gaze trajectories in a clockwise (CW) pattern of errors (i.e., in the direction opposite to that of the eye torsion) for each of the eight targets. Figure 4C was almost identical to B, except that the 10° eye torsion was now present in the CW direction. Again, similar errors were made in a direction opposite to that of eye torsion (i.e., CCW) for all eight targets (or, as a rule of thumb, the trajectories are tilted incorrectly in the same direction as the head).


FIG. 4. Simulations of the 3-D RFT (□) and LT (···○) models for radial saccades of 30°. A: with the head upright (0° ocular torsion). Left column: behind view of gaze positions shows that both models predict accurate foveation of all 8 targets. Right column: side view shows that 3-D eye positions remain in Listing's plane (i.e., along the ordinate). B: with the head rotated about the line of sight 45° clockwise [CW; 10° counterclockwise (CCW) ocular counterroll]. Left column: behind view shows that the RFT model remains accurate, but the LT model misses the targets in the direction of head rotation (CW). Right column: side view shows how the eye counterrolls in a direction opposite to that of head rotation, and thus eye positions lie out of Listing's plane by 10° in the CCW direction. C: with the head rotated about the line of sight 45° CCW (10° CW ocular counterroll). Left column: behind view shows again that the RFT model accurately reaches each target, while the LT model errs in the direction of head rotation (CCW). Right column: side view shows how the eye counterrolls in the CW direction and eye positions remain out of Listing's plane throughout the simulation.

These latter predictions can be explained intuitively as follows. When the eye is rotated 10° CCW, what was once the top of the eye has now been twisted 10° CCW, so that the uppermost target (in space coordinates) now causes a RE that, relative to the eye, is up and to the right. This, by definition of the LT model, causes a ME indicating "move the eyes up and to the right," and such a movement misses the final target. This error occurs consistently for all eight targets, but note that the directional errors are only approximately half the angle of the 10° ocular counterroll. This is because the simulated linear plant rotates the axes of eye rotations by half the angle of eye position.
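
A rough arithmetic version of this intuition is given below, under our own sign conventions (CCW counterroll positive, directions measured CCW from rightward in head coordinates); it is an illustration, not the simulation code used for Fig. 4.

    def lt_saccade_direction(target_dir_deg, counterroll_deg):
        """Approximate head-coordinate direction of the LT model's saccade.
        The retina is twisted by the full counterroll, so the RE direction is
        misrotated by -counterroll; the linear plant's 50% position dependence
        cancels roughly half of that misrotation."""
        return target_dir_deg - counterroll_deg / 2.0

    # Example: uppermost target (90 deg) with 10 deg CCW counterroll -> the LT
    # saccade heads toward ~85 deg, i.e., up and to the right of the target,
    # a CW error of roughly half the counterroll, as described above.
    print(lt_saccade_direction(90.0, 10.0))   # 85.0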

For this reason it is necessary to point out that the plants used to simulate the LT predictions in RESULTS assume that the pulling directions of the muscles, in the horizontal and vertical directions, tilt 50% with current eye position (Quaia and Optican 1998). This model also assumes a similar dependence of axes on torsional position, which is supported by mechanical simulations of orbital "pulleys" (Miller et al. 1997). Note that without such a mechanical position dependence, the errors predicted by the LT model below would essentially double, and Listing's law would be violated. [Conversely, a 100% position dependency would provide accurate saccades but would also result in gross violations of Listing's law (Crawford and Guitton 1997)].

    METHODS

Subjects

A total of seven human subjects (4 male; 3 female), ranging in age between 23 and 33, participated in our study. Six participated in the first experiment and continued on to perform the second. Five of those also completed the third experiment, but one was unable to participate and was replaced with a seventh subject. None of the participants had any known neuromuscular deficits, and only one required corrective lenses during the third experiment. Subjects signed informed consent forms before inclusion, and the study was preapproved by the York University Human Participants Review Subcommittee.

Apparatus

Each subject was seated in an earth-fixed chair fitted with a personalized bite bar for head stabilization. When the subject was seated in the chair, the right eye was located 1 m off the ground and 1.1 m away from a flat, 2.14-m² tangent screen holding 19 3-mm light-emitting diodes (LEDs; each 0.17° in visual diameter, with a luminance of 2.0 mcd). In addition, the subject's right eye was in the exact center of three mutually perpendicular magnetic fields (90, 125, and 250 kHz), generated by Helmholtz coils 2 m in diameter. Movement of the right eye was recorded using Skalar 3-D scleral search coils, while head orientations were measured using a homemade 3-D coil taped securely in place on the center of the forehead (~5 cm from the center of the fields). In calibration tests, measured quaternions were accurate to ≤0.58% (magnitude)/≤0.9° (direction) with coils at the center of the fields, and ≤2% (magnitude)/≤2.05° (direction) with coils at ±10 cm from center. Data from the search coils were monitored on-line, on an oscilloscope in an adjacent room, and simultaneously sampled at 200 Hz. These signals were collected onto a PC for analysis along with feedback signals from the LEDs.

The magnetic field signals were precalibrated by rotating a gimbal-mounted coil 360° in the horizontal, vertical and torsional directions, by the method described in Tweed et al. (1990). At the end of each experimental session, we instructed each subject to freely rotate their eyes and head simultaneously in large horizontal (yaw), vertical (pitch), and torsional (roll) semicircles. The gains and biases of the corresponding coil signals, recorded during the latter procedure, were then further adjusted off-line so that the 3-D coil "vectors" described spheres centered about the origin (Tweed et al. 1990). This was done to eliminate any in vivo distortions of the coil signals. Torsional calibrations of the eye and head coils were also double checked at the end of the second and third experiments to ensure that computations of eye-in-head torsion during counterroll were correct.

Procedure

All experiments were performed in complete darkness. At the beginning of each paradigm, subjects were required to fixate the central target light for 5 s to obtain a reference position and check for coil slipping. At either the beginning or end of each experimental session, subjects were asked to perform pseudorandom self-generated saccades for 100 s (still in complete darkness). We made certain that subjects covered their entire visual field by viewing on-line measurements of their eye movements and encouraging them verbally to explore their full range. This allowed for the measurement and visualization of gaze, 3-D eye positions, and especially Listing's plane, over the entire oculomotor range. In addition, we performed the following evaluations of saccade accuracy.

EXPERIMENT 1. In the first experiment, subjects were required to make horizontal saccades between five parallel pairs of lights, each pair arranged symmetrically across the midline such that the rightward (target) light was displaced 60° horizontally (angle of gaze projected onto the horizontal plane), in space coordinates, from the leftward (initial) light (similar to the simulation in Fig. 3A). One pair of lights was situated at the subjects' eye level (i.e., 1 m above the ground), and subsequent pairs were placed at both 20 and 40° (angle of gaze projected onto the sagittal plane) above and below the center pair (Fig. 3A). Subjects were instructed to stare at the leftward member of each pair (light duration varied randomly from 1,000 to 2,000 ms) until it disappeared and the rightward light (visible only monocularly to the right eye) was briefly flashed (150 ms). The random timing of the initial lights was chosen to eliminate any anticipatory effects, while the timing of the target lights corresponded to typical saccade latencies and thus avoided the possibility of subjects using visual feedback during the experiment. Subjects then made a saccade rightward, to the target light. Horizontal eye traces for five consecutive trials, plotted as a function of time, are shown in Fig. 5 for one subject. Note that the saccades were initiated slightly after (78.70 ± 10.44 ms, mean ± SE averaged across all saccades and then across all subjects) the target light was extinguished, so that there was no visual feedback, but not long enough after to evoke memory effects (Gnadt et al. 1991; White et al. 1994). Thus these were visually triggered saccades based solely on initial RE. This paradigm was designed to emulate the theoretical test shown in Fig. 3A, where saccade trajectories were programmed based on initial RE and eye position. This sequence was repeated 20 times for each of the 5 pairs of lights.


FIG. 5. Five consecutive saccade gaze trajectories for the right eye are plotted against time. Subjects foveated the initial light (30° to the left) until its paired target light (30° to the right) was briefly flashed. Subjects made a saccade to the target light only after it had been extinguished.

At the end of this experiment, a visual calibration task was performed in which subjects were instructed to foveate the illuminated targets as accurately as possible. This was done five times for each of the LEDs described above. In this case, the target LEDs were illuminated for 2 s, allowing ample time for visually guided corrective saccades. This was used as a measure of the subjects' "desired" gaze direction for each light, and these values were later used as reference positions to determine the endpoint errors of the saccades. This measure of desired gaze direction (as opposed to our geometric measures) was used because 1) it is conceivable that there could be subjective variations in target foveation and 2) this would automatically cancel out minute errors in eye-coil signals so that they would not be misconstrued as inaccuracies. Note that this paradigm also allowed us to evaluate saccade accuracy in the presence of visual feedback, as a further control.

EXPERIMENT 2. Eight binocularly viewed target LEDs were arranged in a radial pattern, in the four cardinal and four diagonal directions, at an eccentric distance of 30° from the center light. With the head upright, subjects stared at the center light (duration varied randomly between 1,000 and 2,000 ms) until one of the peripheral lights flashed (150 ms), after which they made a saccade toward it. In this experiment, the light sequence began with the uppermost target (i.e., at 12:00), and saccades to all eight target lights were repeated five times, in a clockwise sequence. This was followed by a desired gaze calibration task similar to the one described above.

We then repeated the test and calibration paradigms with the head tilted torsionally to induce ocular counterroll. First, the head was rotated 45° CW, along with the bite bar apparatus, to induce CCW ocular counterroll. A 45° perturbation of the head in this manner has been found to induce ~5-10° of ocular torsion in the opposite direction (Crawford and Vilis 1991; Haslwanter et al. 1992). Next, the head was rotated upright again, and the calibration procedure was repeated. This was done to check the torsional stability of the eye coil and to minimize any cross-training effects. Finally, we rotated the head 45° CCW, and the procedure was repeated. As described below, the head-fixed coil was used to measure precise head orientation and to compute 3-D eye position relative to the head.

EXPERIMENT 3. After performing the second experiment, we concluded that, conceivably, binocular visual inputs could be used to infer ocular torsion indirectly (Howard and Zacher 1991), and that there may have been order effects in the radial saccade task. Furthermore, we wanted to compute the geometrically correct RE for the right eye as the unique measure of visual input, as we had done in experiment 1. Therefore we repeated experiment 2, but with the left, nonrecorded eye patched, and with a randomized order of target lights.

Data analysis

QUANTIFICATION OF COIL SIGNALS. The coil signals recorded while subjects fixated the central target were used as the initial reference positions for eye positions in space coordinates. Coil signals were first used to compute quaternions (Tweed et al. 1990), to visualize Listing's plane, and to perform the mathematical transformations described below. Quaternions were then used to compute unit vectors aligned with gaze direction (Tweed et al. 1990). (Our 2-D figures show the vertical and horizontal components of these gaze vectors as they project onto the plane of the tangent screen or Listing's plane.) In addition, quaternions were transformed into linear angular measures of 3-D eye position (Crawford and Guitton 1997) for statistical analysis. In this way, any final eye orientation could be described as a rotation vector from an initial reference eye position (this can be visualized with the right-hand rule). The torsional thickness (quantified as the standard deviation) of this data was computed using the algorithm described in Tweed et al. (1990). Finally, we were also able to compute angular velocities from the quaternions when required.
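
For illustration, a least-squares version of such a thickness measure might look as follows (our own sketch; the published Tweed et al. (1990) algorithm may differ in detail, and the conversion from quaternion components to degrees uses a small-angle approximation).

    import numpy as np

    def torsional_thickness_deg(quats):
        """SD (deg) of torsional residuals about the best-fit plane x = a + b*y + c*z,
        for quaternions stored as [w, x, y, z] with x the torsional component."""
        q = np.asarray(quats, dtype=float)
        A = np.column_stack([np.ones(len(q)), q[:, 2], q[:, 3]])  # constant + in-plane components
        coeffs, *_ = np.linalg.lstsq(A, q[:, 1], rcond=None)      # regress the torsional component
        residuals = q[:, 1] - A @ coeffs
        # Small-angle approximation: a quaternion component r corresponds to a
        # rotation of roughly 2*r radians.
        return np.degrees(2.0 * residuals.std())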

COORDINATE TRANSFORMATIONS AND COMPUTING RETINAL ERROR. Three different coordinate systems were used in the computation and subsequent analysis of the data. The raw eye coil data were in an earth-fixed orthogonal coordinate system defined by the magnetic fields that we called "space" coordinates. Eye position quaternions were subsequently rotated into (eye-in-) head coordinates by dividing them by the head position quaternion (Glen and Vilis 1992). This was particularly important during the head tilts in experiments 2 and 3, but was also useful to account for minute tilts of head posture against the bite bar. To put the data into Listing's coordinates, primary eye position was computed and used as the reference position, while the coordinates were rotated to align with Listing's plane (Tweed et al. 1990). Finally, 2-D target directions and 3-D eye positions in Listing's coordinates were used to compute target directions in eye coordinates (i.e., Teye), as follows.

With the data in Listing's coordinates, we first obtained the subjects' final eye positions at the target lights. These data were selected visually from the most stable traces of horizontal, vertical, and torsional eye positions, for each of five trials per target light in the calibration task, and then averaged. These points were then converted into gaze directions to produce a measure of each target's direction relative to the head (Thead). They were thus considered the ideal desired target directions in Listing's coordinates.

For each saccade, we also found the eye's initial position quaternion (q) at the initial fixation light. These points were chosen automatically by a computer algorithm restricted to certain selection criteria (described below), and their inverses (q-1) were computed (Tweed and Vilis 1987). Thead was then rotated into eye coordinates by the following formula (Crawford and Guitton 1997)
Teye = q⁻¹ Thead q
This final unit vector (relative to the eye) was graphed in retinal coordinates, where the origin of the coordinate system represents a unit vector emanating from the fovea through the center of rotation of the eye. We defined the "horizontal" and "vertical" meridians of the eye as the arc intersections of the retina with the vertical and horizontal planes in Listing's coordinates, with the eye at primary position. Thus this particular target direction, in eye coordinates, specifies the unique point of retinal stimulation relative to the fovea (i.e., RE) (Crawford and Guitton 1997).
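
A compact numerical rendering of this computation, using the same Hamilton [w, x, y, z] quaternion convention as the sketch in the INTRODUCTION (helper and function names are our own), is:

    import numpy as np

    def q_mult(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def retinal_error(q_eye, t_head):
        """Teye = q^-1 Thead q, for a unit eye-position quaternion q_eye and a
        unit target-direction vector t_head (head/Listing's coordinates)."""
        q_inv = np.array([q_eye[0], -q_eye[1], -q_eye[2], -q_eye[3]])
        t = np.concatenate(([0.0], t_head))          # embed the vector as a pure quaternion
        return q_mult(q_mult(q_inv, t), q_eye)[1:]   # vector part = target direction in eye coordinates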

QUANTIFICATION OF PREDICTED AND ACTUAL SACCADE ERRORS. Actual saccades were selected according to the following criteria. Only the initial saccades were analyzed (corrective saccades, if any, were not). Occasional saccades that began before the target light was extinguished were rejected. Saccade starting points were selected as the points just before eye velocity reached 100°/s, and endpoints as the points just after velocity fell back to 20°/s. Trials that did not adhere to our set criteria were not quantified.
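
An illustrative marker-picking routine in this spirit is sketched below (our own code; the thresholds are those stated above, while the exact sample-handling conventions are assumptions).

    import numpy as np

    def saccade_bounds(speed, onset_thresh=100.0, offset_thresh=20.0):
        """Return (start, end) sample indices of the first saccade in a trial,
        where `speed` is eye speed in deg/s sampled at a fixed rate, or None."""
        above = np.flatnonzero(speed >= onset_thresh)
        if above.size == 0:
            return None
        start = max(above[0] - 1, 0)                           # sample just before onset
        below = np.flatnonzero(speed[above[0]:] <= offset_thresh)
        if below.size == 0:
            return None
        end = above[0] + below[0]                              # first sample back at <= 20 deg/s
        return start, end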

Before analyzing any of our data, we simulated the outcomes of the LT model using the experimental paradigms of this paper and formulas found in Crawford and Guitton (1997). We quantified the predicted errors of the LT model by inputting subjects' actual initial 3-D eye position data and computed RE into a simulation algorithm, and then allowing the computer to generate the predicted outcomes in Listing's coordinates. These results were then compared, along with the subjects' actual saccade endpoints, to the desired gaze directions obtained in the calibration trials to judge their relative accuracy.

VISUAL FEEDBACK. It has been assumed for many years that during saccades our vision is suppressed, and thus we cannot make use of any visual stimuli we may encounter midflight (Carpenter 1977). Some studies have suggested that stimuli presented during a saccade may be used to guide subsequent eye movements (Hallet and Lightstone 1976). However, the processing time required for vision is generally thought to be too lengthy to influence the current movement (Carpenter 1977). Our experiments afforded several conditions in which relatively inaccurate saccades (particularly in the monocular, torsional, radial task) were made in the absence of visual feedback (experimental trials) and were then also made with visual feedback (calibration trials). While analyzing these data, we observed a trend in which saccades made with visual feedback appeared to be more accurate than those without. We therefore included these data in our RESULTS, as described quantitatively below.

    RESULTS

Listing's law

The theoretical arguments of Crawford and Guitton (1997) assume that Listing's law is obeyed, within reasonable limits, even for large, eccentric saccades. Similarly, the models that we tested take initial 3-D eye position into account, but then assume that the half-angle rule for Listing's law holds. In contrast, some models have made the contrary assumption that Listing's law only holds for small saccades in the peri-primary range (e.g., Schnabolk and Raphan 1994). This is relevant to saccade accuracy because rotation of the eye about a head-fixed torsional axis will contribute to gaze direction at peripheral targets [~sin(gaze eccentricity) × torsional angle]. Therefore, before testing between the RFT and LT models, we first confirmed the adherence of the large saccades used in our study to Listing's law.
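
To make the bracketed relation concrete (a back-of-envelope calculation, consistent with the 45°/2° example quoted below):

    import numpy as np

    # Gaze-direction change produced by a small head-fixed torsional rotation
    # when gaze is ecc_deg away from the torsional axis: ~sin(ecc) * torsion.
    def gaze_shift_from_torsion(ecc_deg, torsion_deg):
        return np.sin(np.radians(ecc_deg)) * torsion_deg

    print(gaze_shift_from_torsion(45.0, 2.0))   # ~1.4 deg, the value cited below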

The full 3-D range of eye-in-head positions in the random saccade task is depicted in Fig. 6A. Subjects were asked to make saccades throughout their oculomotor range in complete darkness. Eye position vectors (□) during fixation (i.e., with velocities of <1°/s) are plotted, for one subject. Horizontal and vertical components of eye positions are shown from a behind perspective (indicated by the head caricature). Note that these are actually the tips of vectors emanating from the origin. The direction and magnitude of each position vector gives the axis and magnitude of the eye's relative rotation from primary position. This can be visualized by using the right-hand rule. For example, a downward pointing vector (direction of thumb) represents a rightward position (fingers curl to the right). These positions are plotted relative to the computed primary position, which was not generally at the center of the eye position range. The subjects typically obtained a wide range of vertical and horizontal eye positions in this task. For example, the subject shown here spanned 80° vertically and 90° horizontally.


FIG. 6. Large saccades obey Listing's law. Quaternions derived from random saccades made throughout the oculomotor range, for one subject, are plotted in Listing's coordinates from behind (A) and the side (B). Coordinate axes are defined according to the right-hand rule. Only those points with velocities of <1°/s are shown to emphasize their compliance with Listing's law. From the side view, eye positions are clearly restricted to a flat plane of approximate thickness ±4° (torsional SD). Eye positions collected during one cycle of the calibration task are plotted in Listing's coordinates for the same subject. The entire range of eye movements covered in this experiment is plotted from behind (C); from the side (D), eye positions both during and at the end of each saccade are clearly seen to lie in Listing's plane. E: the half-angle rule for saccade axes. Velocity trajectories for 5 consecutive saccades are plotted along with their respective gaze position in Listing's coordinates, for one subject, for the 5 light elevations (1-5) used in our experiment. Eye velocity vectors tilt torsionally out of Listing's plane by half the angle of eye eccentricity from primary gaze position (→). Thus the angle between gaze and velocity becomes more acute as the eye increases its eccentricity from primary position.

Figure 6B shows the same data, but now viewed from a perspective to the right side of the head. The abscissa corresponds to the head-fixed torsional axis and the ordinate to the vertical axis, where rotation about the torsional axis causes the eye to move CW/CCW and rotation about the vertical axis causes horizontal eye displacements. From this view, the subject's eye position vectors appear flattened into a plane centered at 0° torsion. As further quantified below, this confirmed the classic observation that eye position vectors are confined to a plane (i.e., Listing's plane) during head-fixed saccades.

Next, we examined the large saccades between our visual targets to see how well they conformed to Listing's law. To illustrate this, we have shown 3-D eye positions recorded while the same subject made saccades between all 10 targets in the calibration task (Fig. 6C). The rightward saccades (in the direction indicated by the arrows and labeled 1-5) correspond to the particular movements we will study. As indicated in this behind view, this forced the subjects to use a very large distribution of the complete horizontal/vertical range, even going beyond the randomly selected range in some instances. However, the side view (Fig. 6D) suggests that these eye positions conformed to Listing's plane, both during and particularly in between the large horizontal saccades, over the range that we studied.

These observations are quantified for all subjects in Table 1. The columns indicate the thickness of Listing's plane (i.e., degrees torsional SD) for 1) saccades made in the absence of visual feedback at each light elevation (40° up, 20° up, 0°, 20° down, and 40° down), 2) during calibrations with visual feedback, and 3) for fixations between random saccades in the dark. Values for the first five columns were calculated relative to a plane fit to the calibration data in column 6. On average, the standard deviations for the random saccade paradigm (3.45 ± 0.55°) were relatively high compared with previously reported repetitive saccades in the central range (Tweed and Vilis 1990), presumably due to the larger excursions in our range, the randomness of the saccade directions, and the complete absence of visual stimuli. In comparison, the torsional ranges during fixations with visual feedback during the calibration task were considerably lower (2.40 ± 0.45°). Most importantly, the torsional range for the five sets of experimental saccades, to be quantified for accuracy below, was minute compared with their ~60° horizontal excursions. Averages for each of the five elevations (40° up, 20° up, 0°, 20° down, and 40° down) were 2.31 ± 0.47°, 1.58 ± 0.17°, 2.99 ± 0.93°, 2.43 ± 0.33°, and 2.70 ± 0.52°, respectively. This confirmed the assumption that these large saccades still obeyed Listing's law with only small random deviations. Since, for example, 2° of headcentric torsion at an eccentricity of 45° would rotate gaze direction by only 1.4°, such minute torsional deviations would not significantly alter the predictions cited below.

 
TABLE 1. Standard deviations of torsional eye positions from an ideal Listing's plane for the test saccades to all five light elevations, the calibration task, and the random saccades paradigm

Crawford and Guitton (1997) argued that Listing's law poses a problem for saccade accuracy because it precludes the use of eye-fixed axes for saccades in favor of the half-angle rule. This strategy is illustrated for the saccades in our study in Fig. 6E. Five graphs, each one depicting five horizontal saccade velocity traces as well as five gaze trajectories, from the side, at each of the five elevations labeled 1-5 (40° up, 20° up, 0°, 20° down, and 40° down, in space coordinates) are shown. Notice how the angle between gaze and velocity becomes more acute as gaze moves downward from primary gaze position (→). This occurs because in each case, the velocity traces tilt by approximately half the amount of gaze eccentricity from primary position. Because this effect is kinematically equivalent to the results shown in Table 1, we will henceforth focus on eye positions and gaze accuracy. However, Fig. 6E graphically demonstrates the key observation that Listing's law precludes eye-fixed RE from being mapped trivially onto an eye-fixed rotation (Crawford and Guitton 1997). It also demonstrates that, because the deviations between the eye-fixed and actual axes grow relative to primary position, the predicted errors should also be measured relative to Listing's primary position.

Computing retinal error

This section describes the procedure used to compute a geometrically correct measure of RE and tests the prediction that horizontally displaced targets may not elicit horizontal RE. Figure 7A shows gaze directions from a behind view while subjects stared repeatedly at the 10 target lights, plotted in space coordinates. These data were recorded during calibration trials in which visual feedback allowed subjects to correctly foveate the desired targets, and therefore these points were taken to represent the desired gaze directions. In these coordinates, the five pairs of lights were indeed displaced horizontally with respect to each other. For reference, Fig. 7B shows 3-D eye position vectors from a side view as subjects made saccades between the same targets. This shows that eye positions fall into a planar range that does not align perfectly with arbitrary space coordinates.


FIG. 7. Calculating the Geye (or Teye), produced by each of the 5 target lights while the eye fixates on the paired initial lights, as a geometrically correct measure of RE (Crawford and Guitton 1997). The horizontal saccade calibration task depicted in space coordinates. A: behind view of gaze vectors during 5 repetitions of foveating each light for 1.5 s. Note that the scale of this vector projection system does not correspond exactly to the azimuth/elevation angles (described in METHODS) used to place the targets. Thus, at tertiary positions, 40° elevation appears to be less eccentric. B: side view of 3-D eye position vectors during the same task. The same data are replotted in Listing's coordinates. C: behind view of gaze vectors in Listing's coordinates, where the origin of the coordinate axes corresponds to primary position. Clusters of target direction data are slightly more spread out in head coordinates due to slight shifts in head position against the bite bar. This is accounted for in the space-to-head transformations. D: side view depicts Listing's plane. E: REs, for each of the 5 light pairs (1-5), computed by rotating a vector representing the target direction around the inverse of a vector representing eye position at the initial light. The origin of this oculocentric coordinate system corresponds to the fovea.

These same points were then replotted in headcentric, Listing's coordinates by recomputing gaze directions relative to primary position (Fig. 7, C and D) using a method described previously (Tweed et al. 1990). Figure 7D illustrates how this resulted in an improved alignment of the eye position vectors, in Listing's plane, with the coordinates (this improved alignment was often more dramatic than shown for this particular subject). However, other than a slight shift relative to the newly computed primary position, the overall pattern of target directions (Fig. 7C) remained unchanged.

Finally, we rotated the target direction vectors (at the rightward member of each horizontal pair) by the inverse of 3-D eye position at foveation (at the leftward member of each pair), both in Listing's coordinates, to obtain the "rightward" REs, in eye coordinates, of the rightward targets (see METHODS). As Fig. 7E shows, varying the eye's initial 3-D eye orientation, even within Listing's plane, changed the RE produced by a purely horizontally (in space or head coordinates) displaced target light. The further the subjects' eyes were displaced from primary position, the greater the vertical and (to a lesser extent) the horizontal components of RE deviated from the displacement of the target in space coordinates. This confirmed the predictions of Crawford and Guitton (1997) and shows the importance of taking 3-D eye orientation into account when computing RE. Similar procedures were used to compute RE in all of the examples below.

Large horizontal saccades

The previous figure gives rise to certain testable predictions. If a model of saccade generation changes RE into ME directly, as the LT model proposes, the oblique REs in Fig. 7E should lead to oblique movements of the eye. This would result in a poor oculomotor response, because oblique eye movements could not correctly foveate horizontally displaced targets. To rigorously quantify these predictions, we input real RE and 3-D eye position data from initial light foveation for each individual saccade into our LT model, as described in METHODS. The predicted results, for one subject, are illustrated in Listing's coordinates in Fig. 8A. The subject's initial gaze positions (■, to the left), the associated final gaze positions as predicted by our LT model algorithm (□, to the right), and average desired gaze points (○; from the calibration data) at each elevation are shown.


FIG. 8. A: predicted directional accuracy of the LT model vs. actual saccades, for 1 subject. ■ on the left, the eye's initial gaze positions; □ on the right, final gaze positions predicted by the LT model of saccade generation; ○ to the right, average final calibration positions. B: actual gaze trajectories for the same subject as in A for 5 consecutive saccades at each light elevation. This subject appears to foveate each target accurately, both vertically and horizontally. C: similar data for another subject who consistently undershot the targets. D: data for a 3rd subject who consistently overshot the targets.

The endpoints predicted by the LT model (Fig. 8A) miss their targets in a systematic position-dependent pattern consistent with the outwardly "fanning" pattern of RE shown in Fig. 7E (which used data from the same subject). This is not surprising because, by definition, the LT model maps RE directly onto ME. In this scheme, the direction specified on the retina would simply be reproduced as the trajectory of the eye relative to the head. Such a plan would lead to errors that increase systematically with increasing eye deviation from primary position. In contrast, the RFT model always predicted accurate foveation of each target (○), because it accounts for initial eye orientation (Crawford and Guitton 1997).

After computing the predicted errors of the LT transformation for each individual saccade in each subject, we compared these results to our actual recordings of saccade trajectories. Figure 8B depicts five consecutive saccade gaze trajectories, at each elevation, for the same subject used to generate the predictions in Fig. 8A. Note that these saccades were made in complete darkness and without any visual feedback. Figure 8B illustrates the path of the eyes from the initial, leftward light toward the final, rightward target (○). However, in contrast to Fig. 8A, the actual saccade endpoints in Fig. 8B were relatively accurate (as quantified below), and even the slight errors that they did show did not systematically follow the position-dependent pattern of errors predicted by the LT model. Indeed, this subject was relatively accurate in both saccade direction and magnitude.

Figure 8, C and D, illustrates saccade trajectories for two more subjects, one of whom consistently undershot the targets, whereas the other consistently overshot them. Because all of the subjects who undershot the final targets had no difficulty acquiring them in the visually guided calibration task, we concluded that these final gaze positions were not limited mechanically. All three subjects displayed somewhat curved gaze trajectories whose curvature increased with eccentricity from primary position, similar to the "standard plant" simulations of the RFT model (Fig. 3D). However, again, even with the variance found in the horizontal component of final eye positions, these subjects did not show the pattern of directional errors predicted by the LT model (Fig. 8A).

Figure 9, A and B, quantifies the observed directional and magnitude errors as a function of initial eye position in Listing's coordinates. Figure 9A shows horizontal under/overshooting, whereas Fig. 9B shows vertical upward/downward error, measured for each subject relative to their own calibration data. It is evident from Fig. 9A that considerable horizontal endpoint variability existed between subjects at each elevation. The variation in final horizontal position was 4.86° (SD averaged across subjects). One subject completely overshot all five targets, three consistently undershot, and two showed variable under/overshooting, depending on the position of the lights. In general, more overshooting was observed at the center target, whereas more undershooting occurred at the peripheral targets. As mentioned previously, this does not seem to be because subjects were physically unable to attain the targets, because during the calibration trials all five lights were easily foveated. In contrast to the horizontal errors, the vertical errors (average SD = 2.00°), shown in Fig. 9B, were significantly less variable, t(4) = 5.13 (P < 0.01). In other words, saccade directions were more accurate than saccade magnitudes.
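As a hypothetical illustration of the statistical comparison reported here (not the original data or analysis), the horizontal versus vertical endpoint variability can be compared with a paired t-test across the 5 light elevations, which yields the 4 degrees of freedom quoted in the text; all numbers below are placeholders.

```python
import numpy as np
from scipy import stats

# Placeholder per-elevation SDs (deg) of final gaze position, one value per light elevation.
sd_horizontal = np.array([5.2, 4.6, 4.3, 5.0, 5.2])   # horizontal (magnitude) errors
sd_vertical   = np.array([2.2, 1.9, 1.8, 2.0, 2.1])   # vertical (directional) errors

# Pairing across the 5 elevations gives a t statistic with df = 4, as in t(4) = 5.13.
t, p = stats.ttest_rel(sd_horizontal, sd_vertical)
print(f"t(4) = {t:.2f}, P = {p:.4f}")
```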


FIG. 9. A and B: quantification of horizontal (i.e., magnitude) and vertical (i.e., directional) errors of final gaze position at all 5 light elevations for all subjects. A: horizontal overshoots and undershoots made by the 6 subjects (■, □, ●, ○, ▲, △) to all light elevations analyzed. B: vertical upward and downward directional errors for each light elevation for all 6 subjects as listed above. C and D: quantitative plots of actual vs. predicted (by the LT model) final vertical directional errors. The RFT model predicts a slope of 0, whereas the LT model predicts a slope of 1 (dashed line). C: data plotted for 1 subject (refer to Fig. 3B; 5 of 20 saccades quantified here) for all 5 light elevations: 40° up (■), 20° up (×), 0° (●), 20° down (◆), and 40° down (▲) (in space coordinates). Solid line: regression line for this subject. D: regression lines for all 6 subjects. Average slope was -0.01 ± 0.14.

The preceding observations suggest that, contrary to the predictions of the LT model, the saccade generator compensated for eye orientation effects when deriving saccade commands from RE. To rigorously quantify the degree to which these subjects compensated for eye position, we plotted actual versus predicted (by the LT model) final vertical gaze errors for each subject. The RFT model predicted a slope of 0 (indicating complete eye position compensation) because no actual errors were anticipated, whereas the LT model predicted a slope of 1.0 (indicating no eye position compensation). Figure 9C shows final actual versus predicted errors for one subject, the slope fit to these data (solid line), and the slope predicted by the LT model (dashed line). The data are shifted leftward on this graph (downward in real life) because primary position was relatively high in this subject's range. The predicted errors grew with increasing displacement from primary position, to a maximum of 19.90 ± 1.02° at the lowest, most eccentric lights. In contrast, the actual errors remained small, such that the slope of best fit was only -0.04. Finally, similar results were found when we fit regression lines to the corresponding data for all six subjects (Fig. 9D). Their average slope was -0.01 ± 0.14 (mean ± SD between subjects), indicating near-perfect compensation for 3-D eye orientation.
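This compensation index is simply the least-squares slope of actual against predicted errors. A minimal sketch of the calculation (with made-up numbers, not the subjects' data) follows.

```python
import numpy as np

# Hypothetical paired values (deg) for one subject: the vertical error predicted by the
# LT model for each saccade vs. the vertical error actually observed.
predicted = np.array([ 1.8,  5.1,  9.7, 14.3, 19.9, -2.6, -7.4, -12.0])
actual    = np.array([ 0.3, -0.5,  0.8, -1.1,  0.2,  0.4, -0.6,   0.1])

# A slope near 1 would support the LT model (no eye position compensation);
# a slope near 0 supports the RFT model (full compensation).
slope, intercept = np.polyfit(predicted, actual, 1)
print(f"slope = {slope:.2f}")
```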

Radial saccades

The binocular and monocular radial saccade paradigms were designed to test the simulated predictions in Fig. 4. There, saccades from a central target were made to each of eight radially displaced targets with the head in three different orientations (upright, 45° CW and 45° CCW). Recall that in the latter two conditions, where the eyes counterrolled torsionally out of Listing's plane, the simulated saccades missed their targets unless 3-D eye orientations were taken into account. We will now show the actual performance of real subjects in an identical task.

Figure 10 shows five consecutive gaze trajectories to each target (left side) and 3-D eye positions corresponding to the same saccades (right side) for one typical subject, using the same conventions as in Fig. 4. With the head upright, eye positions clustered around 0° on the abscissa (i.e., eye positions lay in Listing's plane; Fig. 10A, right). Gaze trajectories were relatively straight for purely horizontal and vertical saccades, whereas the oblique eye movements were curved in a systematic manner (Fig. 10A, left), as previously described (Smit et al. 1990; Smit and van Gisbergen 1990). More importantly, note that with the head upright, final gaze positions appeared to be quite accurate in direction (quantified below) relative to the targets (an exception being the purely rightward saccades in this specific example).


FIG. 10. Actual saccade trajectories (solid lines) to targets (○) and 3-D eye position data (■■■) are shown for 1 subject. Five gaze trajectories, recorded continuously during each radial paradigm, are plotted along with eye position quaternions from 1 cycle of each paradigm. A: head upright (0° ocular torsion). Left column (behind view): gaze trajectories are straight along the axes and curve slightly in the diagonal directions; their endpoints appear to be accurate. Right column (side view): average eye positions lie at 0.42 ± 0.45° CW torsion, i.e., essentially in Listing's plane. B: head rotated 45° CW. Left column (behind view): saccade trajectories are somewhat less accurate and more variable. Again, straighter saccades are seen along the cardinal directions relative to the head, whereas curved paths are followed in the oblique directions. Right column (side view): quaternions indicate that the eye rotates in the CCW direction, on average by 12.55 ± 0.49°, and remains there while the subject saccades to every target from center. C: head rotated 45° CCW. Left column (behind view): similar results to those in B. Right column (side view): the eye rotates in the CW direction, on average by 11.50 ± 1.76°, and again remains there throughout the trial.

Rotation of the head about the torsional axis caused a compensatory torsional counterroll of the eyes. On turning the head 45° CW, the torsional component of eye position deviated in the CCW direction (Fig. 10B, right). In this case, average eye-in-head CCW counterroll was found to be 8.71 ± 2.61° (mean ± SD, across subjects) during binocular viewing and 7.32 ± 5.56° during monocular viewing. Conversely, with the head rotated 45° CCW, the subjects' right eyes rotated in the CW direction (Fig. 10C, right). The average CW counterroll was 10.26 ± 2.23° during binocular viewing, and 7.52 ± 2.41° during monocular viewing. The eye positions seen in the latter two conditions were clearly shifted away from the normal Listing's plane, and, from these new eye positions, saccades resulted in eye position trajectories that remained out of the normal Listing's plane (Fig. 10, B and C, right).
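For readers unfamiliar with quaternion representations of eye position, the counterroll angles reported here can be estimated from the torsional component of the eye-position quaternion. The sketch below assumes quaternions in [w, x, y, z] form with the torsional (line-of-sight) axis as the x component and an arbitrary sign convention; both are assumptions rather than the paper's stated conventions, and the quaternions are invented.

```python
import numpy as np

def torsion_deg(q, torsional_index=1):
    """Torsional angle (deg) of a unit eye-position quaternion q = [w, x, y, z],
    assuming component `torsional_index` lies along the line of sight.
    Exact for pure torsional rotations, approximate otherwise."""
    return np.rad2deg(2.0 * np.arctan2(q[torsional_index], q[0]))

# Hypothetical fixation quaternions recorded with the head tilted 45 deg CW:
quats = np.array([[0.994, -0.105, 0.02, -0.01],
                  [0.993, -0.112, 0.04,  0.03],
                  [0.994, -0.108, 0.01,  0.02]])

angles = np.array([torsion_deg(q) for q in quats])
print(f"mean counterroll = {angles.mean():.2f} deg, SD = {angles.std(ddof=1):.2f} deg")
```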

To keep the location of the target lights consistent irrespective of head orientation (indicated by the caricatures in Fig. 10), gaze was plotted in space coordinates in Fig. 10, A-C. Taking this and the head tilts into account, gaze trajectories continued to be relatively straight for horizontal and vertical saccades made relative to the head, whereas saccades made obliquely relative to the head were curved, as mentioned previously (Fig. 10, B and C, left). The trajectories of these oblique saccades were initially too horizontal (re: head), but then changed course about two-thirds of the way to the target, despite the lack of any visual feedback. This could be accounted for by the relative strengths of the horizontal rectus muscles and probably does not reflect the visuomotor transformation.

Our main data analysis therefore focused on the endpoint accuracies of the initial saccades. In general, larger errors in final gaze direction were seen with the head tilted than with the head upright (quantified below). Examples include the down-left and down-right (re: space) saccades in Fig. 10B, and the down-right and up-left (re: space) saccades in Fig. 10C. However, these errors did not qualitatively seem to follow the pattern predicted by the LT model (Fig. 4), i.e., the trajectories were not consistently tilted in the direction of head rotation (by the rule illustrated in Fig. 4) in either of the two head-tilted orientations. In any one condition, we found great variability in the direction of observed errors, both between and within target directions (e.g., the upward and rightward saccades in Fig. 10B, and the upward and leftward saccades in Fig. 10C). Thus a more rigorous quantification was necessary.

Figure 11 summarizes the endpoint accuracies of the subjects in the six tasks. Figure 11, A, C, and E, shows binocular data, Fig. 11, B, D, and F, depicts monocular data, and the caricatures next to each row indicate the position of the head-in-space. (Note that the head-fixed coordinates inside each caricature imply that the actual data were now plotted relative to the head to take the location of primary position into account.) We plotted average desired gaze points (×), average actual final gaze positions (dashed lines, □), and average predicted final gaze positions based on the LT model algorithm (dotted lines, ◇), along with the average location of the center light (●). To find these average values, we first averaged data from the five saccades made by each subject to each of the eight targets in each of the three conditions, and subsequently averaged the resultant numbers across all six subjects. This figure illustrates the means by which we computed directional errors for both the LT model predictions and the actual saccades. We defined the actual directional error as the angle (CW or CCW) between the solid and dashed lines, and the predicted directional error as the angle between the solid and dotted lines.
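A minimal sketch of this error definition (not the authors' analysis code): the directional error is the signed angle between the line from the center light to the desired gaze point and the line from the center light to the final gaze point, with the sign convention below chosen arbitrarily and all coordinates invented.

```python
import numpy as np

def directional_error_deg(center, desired, final):
    """Signed angle (deg) between the desired and actual saccade directions,
    both measured from the center light in a 2-D (horizontal, vertical) plot.
    Positive = CCW, negative = CW under this (assumed) sign convention."""
    d = np.subtract(desired, center)
    a = np.subtract(final, center)
    return np.rad2deg(np.arctan2(d[0]*a[1] - d[1]*a[0], np.dot(d, a)))

# Hypothetical averaged points (deg) in Listing's coordinates:
center  = (0.0, 0.0)
desired = (0.0, 28.0)     # average desired gaze point for the "up" target
final   = (3.5, 26.0)     # average actual final gaze position
print(f"actual directional error = {directional_error_deg(center, desired, final):.1f} deg")
```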


FIG. 11. Calibration points (×), actual final gaze positions (dashed lines, □), final predicted gaze positions as per the LT model (dotted lines, ◇), and center lights (●) are averaged across all subjects, in Listing's coordinates, and plotted for both the binocular (A, head upright; C, head CW; E, head CCW) and monocular (B, head upright; D, head CW; F, head CCW) radial tasks.

The origin of the axes gives a rough indication of the average location of primary position. Because average primary position was up and to the right of the center light, Listing's plane of the right eye tended to be tilted upward and rightward in the head (relative to our original, arbitrary coordinate system). On average, subjects tended to undershoot most of the targets, particularly in the downward and leftward directions (although there was considerable variability between subjects, as shown in Fig. 9). This occurred in both the binocular and monocular conditions (although subjects could easily foveate each target during the calibration trials).

With the head upright, both the predicted (◇) and actual (□) average saccade direction errors were relatively small. This was observed in both the binocular (Fig. 11A) and monocular tasks (Fig. 11B). Thus, as expected, this task did not clearly test between the two models. This is more evident in Fig. 12, A and B. These bar graphs compare average actual final errors in saccade direction across subjects (black bars) with those predicted by the LT model (white bars). As with the horizontal saccades, predicted values were obtained by inputting actual initial eye position and RE data into the LT model, which then predicted the corresponding final endpoints. Thus in the six conditions shown (binocular task: A, C, and E; monocular task: B, D, and F), each black bar represents the average actual directional error, and each white bar represents the average predicted directional error, at each of the eight target locations (represented by the saccade directions up, right, down, and left), in head coordinates.


FIG. 12. Average actual (black) vs. predicted (white) errors in final gaze direction, in both the binocular (A, head upright; C, head CW; E, head CCW) and the monocular (B, head upright; D, head CW; F, head CCW) experiments, are quantified for each target light (up, down, right, and left) relative to the head. Notice that the actual errors do not coincide with those predicted by the LT model.

Figure 12 (A and B) shows that both the predicted and actual errors in final endpoint direction were indeed quite small (i.e., none exceeded 5°) in both the binocular (Fig. 12A) and monocular (Fig. 12B) experiments. Interestingly, a cyclical pattern of errors was found in both the monocular and especially the binocular condition (one can approximately fit a sine wave to the actual and predicted errors, especially in A). This effect is attributed to the fact that the average primary position did not correspond precisely to the location of the center light. As a result, the radial saccades were not traveling precisely to or from primary position, and thus small errors were predicted (for the reasons described in the INTRODUCTION). Quantitatively, the saccade data in Fig. 12A seemed to follow a pattern opposite to that predicted by the LT model, perhaps suggesting an overcompensation for small position effects. However, no significant differences in directional error could be found between the actual and predicted data, when averaging across all target lights, in either the binocular or the monocular condition (P > 0.20). Again, this was expected in the head-upright condition, where both models anticipated similar results (see simulation in Fig. 4A).

In contrast, the models differed in their predictions when eye-in-head position was rolled torsionally out of Listing's plane. Although the RFT model continues to predict accurate endpoints in this situation (Crawford and Guitton 1997), the LT model consistently predicts errors in target foveation in a direction opposite to eye rotation (simulations in Fig. 4). These averaged simulated saccade endpoints (◇), here based on real REs and initial CCW torsional eye positions (Fig. 11, C and D), predict final gaze directions that consistently miss the actual targets in the CW direction (i.e., not in the sense of torsional deviation from Listing's plane, but rather in the sense that the dotted lines are always tilted CW relative to the solid lines). The same applies for saccades that began with initial CW components (Fig. 11, E and F): the final predicted gaze directions miss the desired gaze points in the opposite, CCW direction (dotted lines are always tilted CCW of the solid lines). (Again, the rule of thumb is that the predicted trajectories tilt in the same direction as head tilt.) In the actual average data (□), certain inaccuracies in certain directions were observed (dashed lines). These were sometimes in the direction predicted by the LT model, but they did not seem to consistently follow the predicted pattern in either direction or magnitude. Note again that these are averaged data, i.e., individuals sporadically showed larger errors in certain directions that were not consistent (e.g., Fig. 10). However, both the predictions and the data were probably confounded by small errors already present in the controls (Fig. 11, A and B). The direct contribution of initial ocular torsion to the systematic inaccuracies in these saccades was clearer when quantified as shown in Fig. 12.

Because we only wanted to compare the errors caused by induced ocular counterroll, we first subtracted the baseline errors made with the head upright (Fig. 12, A and B) from those made with the head turned torsionally (within each subject, in headcentric coordinates, before averaging the results; Fig. 12, C-F). (The general trend could still be seen without this subtraction, but was confounded by the baseline effect.) This left remaining errors attributable to the torsional deviation in initial eye position alone. Thus, with the eye rotated, the LT model now consistently predicted final directional errors opposite to head rotation (white bars; i.e., in the CW direction in Fig. 12, C and D, and in the CCW direction in Fig. 12, E and F). The subjects' actual endpoint error bars (black) often did seem to lie, on average, in the same direction as the predicted bars (white); in general, however, they were not nearly as large. With binocular viewing, 32.72% of the error predicted by the LT model was realized with the head tilted CW, and 32.02% with the head tilted CCW (averaged across all target directions). Similarly, with monocular viewing, 45.85% of the predicted error was realized with the head CW, and 68.58% with the head CCW. These differences were significant during binocular viewing with CCW (P < 0.001) and CW (P < 0.05) counterroll. They were also significant during monocular viewing with CCW counterroll (P < 0.05), but not during monocular CW counterroll (P > 0.05), probably owing to the large variance in errors observed in that task. Thus, overall, actual saccade endpoints originating from, and terminating in, torsional eye positions out of Listing's plane were significantly more accurate than those predicted by the LT model.
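The baseline subtraction and the "percentage of predicted error realized" can be illustrated with a short sketch; all values below are placeholders, not the reported data.

```python
import numpy as np

# Hypothetical per-subject directional errors (deg) for one target direction.
actual_upright    = np.array([ 1.0, -0.5,  0.8,  0.2, -1.1,  0.6])   # head-upright baseline
actual_tilted     = np.array([-3.1, -1.9, -2.6, -0.8, -2.4, -1.7])   # head 45 deg CW
predicted_upright = np.array([ 0.9,  0.4,  0.6, -0.2,  0.3,  0.5])
predicted_tilted  = np.array([-7.8, -6.9, -8.3, -7.1, -7.5, -8.0])

# Subtract the head-upright baseline within each subject before averaging,
# isolating the error attributable to the induced ocular counterroll.
actual_residual    = (actual_tilted    - actual_upright).mean()
predicted_residual = (predicted_tilted - predicted_upright).mean()

print(f"fraction of LT-predicted error realized = {actual_residual / predicted_residual:.1%}")
```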

To further summarize these trends, we plotted actual (□) and predicted (●; by the LT model) directional errors as a function of initial eye torsion, averaged across all saccade directions, for each subject (Fig. 13). The regression fit to the predicted data provides a predicted slope for the LT model, which simulates zero neural compensation and 50% muscular compensation for eye position. Conversely, the RFT model predicts a slope of 0, because any errors predicted by the LT model would not be realized in the actual data. We found, in both the binocular (Fig. 13A) and monocular (Fig. 13B) conditions, that the predicted slopes of the LT model were steeper than the actual slopes. For the binocular task, the actual slope was -0.2 (r = -0.82), compared with the predicted slope of -0.57 (r = -0.99). The ratio between these slopes was 0.35, suggesting 65% neural compensation for torsion. For the monocular task, the predicted slope was -0.63 (r = -0.99), whereas the actual slope was -0.28 (r = -0.78). The ratio between these slopes was 0.41, suggesting 59% neural compensation for torsion. Note again that, without the assumption of 50% muscular compensation, these estimates of neural compensation would become significantly higher.
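The arithmetic behind these compensation estimates is restated below, using the binocular slopes quoted in the text and the LT model's built-in assumption of 50% muscular compensation; treat this as a worked restatement of the calculation rather than new analysis.

```python
# Binocular task: slopes of directional error vs. initial ocular torsion (Fig. 13A).
predicted_slope = -0.57   # LT model (0% neural, 50% muscular compensation)
actual_slope    = -0.20   # observed

slope_ratio = actual_slope / predicted_slope       # ~0.35
neural_compensation = 1.0 - slope_ratio            # ~0.65, i.e., ~65% neural compensation
print(f"ratio = {slope_ratio:.2f}, neural compensation = {neural_compensation:.0%}")
```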


FIG. 13. Actual (□) vs. predicted (●) directional errors, averaged across direction for each subject, are plotted as a function of initial ocular torsion for the binocular (A) and monocular (B) conditions. In both cases, the predicted slopes are steeper than the actual slopes, indicating that the large errors in saccade direction predicted by the LT model were not realized in the actual data.

Effects of visual feedback

The relatively large directional errors observed in the radial torsion tasks offered an opportunity to examine the effects of visual feedback on saccade accuracy. In general, the addition of visual feedback during saccades (present during the calibrations, but not available in the experimental data above) led to a dramatic increase in saccade accuracy. Figure 14A shows examples of selected saccades along the four cardinal directions that were particularly inaccurate in the absence of visual feedback (note that these individual errors do not reflect the general trend described above). Several consecutive saccades are shown for each target. Figure 14B depicts performance on the same tasks when visual feedback was provided. Note the striking difference between these trajectories. For example, saccades made without visual feedback toward the uppermost light appeared to begin in the wrong direction and continued along this path until the end. The corresponding saccades made with visual feedback also began in the wrong direction, but at various points in their trajectories they began to curve toward the correct target location, eventually reaching the appropriate target. The remaining three examples show similar effects, with "light on" saccades curving in much closer to the target than "light off" saccades.


FIG. 14. Saccades without and with visual feedback. A: 5 saccades to each of 4 target lights without visual feedback. The 4 sets of data were taken from various subjects during the monocular, radial experiment. B: 5 saccades collected from the corresponding calibration trials of those in A. The presence of visual feedback caused changes in final saccade endpoints.

The temporal evolution of these selected saccades is illustrated in Fig. 15, which plots the vertical (thick lines) and horizontal (thin lines) components of eye velocity as a function of time. Velocity is plotted relative to space to be consistent with the previous figure, and corresponding saccades are numbered in the same order. In each case, the minor component of eye velocity should ideally remain at zero. In actuality, the minor components of the illustrated in-dark saccades (left column) usually deviated to one side of zero, corresponding to the directional errors shown in the previous figure. Occasionally, the minor velocity component reversed direction toward the end of the in-dark saccades (e.g., Fig. 15A), giving the appearance of feedback correction. However, those "corrective" reversals were much stronger and more consistent when visual feedback was available (right column). There appeared to be no single stereotyped mechanism for this across subjects. For example, in subject JZ (Fig. 15A), the reversal was accomplished without lengthening or disrupting the major velocity component. In contrast, subject XF (Fig. 15, C and D) showed a lengthening of the saccade profile, with several reaccelerations in a lingering, low-velocity "tail." Subject DC (Fig. 15B) showed an intermediate strategy.


FIG. 15. Vertical (thick lines) and horizontal (thin lines) velocity profiles of saccades without (left column) and with (right column) visual feedback. Data are computed in space coordinates, and numbers correspond to the saccade trials in the previous figure. A: upward saccades, subject JZ. B: rightward saccades, subject DC. C: downward saccades, subject XF. D: leftward saccades, subject XF. Note that these highly inaccurate saccades were selected for illustrative purposes and are not representative of the bulk of the data analyzed in previous sections.

We quantified the overall effect of visual feedback on accuracy for the monocular, radial saccade experiment, where the largest variations in directional errors were observed. Note that the individual saccades in both conditions (with and without feedback) were selected using the same velocity criteria, as graphically illustrated in the previous figure. Despite the "lengthening" effect observed occasionally in some visually guided saccades (Fig. 15, B-D), the average durations of the saccades in the two conditions [with feedback (150.37 ms) and without feedback (168.72 ms)] were not significantly different (P = 0.08). Finally, we quantified both overall gaze error (the angle between the actual and desired final gaze directions) and directional error (as defined above) independently. These were computed for saccades toward each of the eight target lights, made both with and without visual feedback (Fig. 16). Figure 16, left column, compares overall gaze errors (due to both magnitude and direction), whereas the right column compares overall directional errors. Each row represents data collected at a different head orientation (A and B: head upright; C and D: head CW; E and F: head CCW, as indicated by the head caricatures). The black bars represent subjects' performance without visual feedback, and the white bars indicate performance with visual feedback. In each of the six conditions, and in all saccade directions, the errors observed in saccades made with visual feedback were significantly smaller than those made without visual feedback (i.e., the white bars are consistently smaller than the black bars).
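The overall gaze error used here is simply the angle between two unit gaze vectors; a minimal sketch with hypothetical vectors (not the recorded data) is:

```python
import numpy as np

def gaze_error_deg(g_actual, g_desired):
    """Unsigned angle (deg) between actual and desired final gaze directions,
    each supplied as a 3-D unit vector."""
    c = np.clip(np.dot(g_actual, g_desired), -1.0, 1.0)
    return np.rad2deg(np.arccos(c))

# Hypothetical gaze vectors in head coordinates:
g_desired = np.array([0.866, 0.0, 0.5])        # desired final gaze direction (unit vector)
g_actual  = np.array([0.891, 0.05, 0.45])
g_actual /= np.linalg.norm(g_actual)           # normalize to a unit vector

print(f"gaze error = {gaze_error_deg(g_actual, g_desired):.2f} deg")
```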


FIG. 16. Gaze (left column) and directional (right column) errors toward each of the 8 radial targets during the monocular experimental/no visual feedback (black) and calibration/visual feedback (white) trials. Data are shown for the head upright (A and B), the head CW (C and D), and the head CCW (E and F). Note that here we averaged the absolute values of the errors, which is why they appear larger than in the previous figures.

One possibility is that this increased accuracy was a learning effect. If so, saccades toward a given target should have initially been inaccurate, but then should have progressively grown more accurate with each repetition. We quantitatively tested this hypothesis in Fig. 17 for the monocular calibration task, where subjects were exposed to visual feedback from each target five times in an unpredictable order. The left column depicts gaze errors, the right column shows directional errors, and each row represents a different head orientation (i.e., A and B: head upright; C and D: head CW; E and F: head CCW). Again, the black bars represent subjects' performances without visual feedback, and the white bars indicate performances with visual feedback. The numbers 1-5 represent the initial through final saccades, respectively, made to each of the eight targets. Again, a marked difference between saccades made with and without visual feedback is obvious. However, there was no statistically significant improvement in saccade accuracy over time. Indeed, visual feedback immediately improved saccade accuracy but had no further noticeable effect over subsequent trials.
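The learning-effect test amounts to asking whether error declines systematically across the 5 repetitions; a placeholder sketch of such a test (invented numbers, not the reported data) is:

```python
import numpy as np
from scipy import stats

# Placeholder mean absolute gaze errors (deg) on repetitions 1-5 with visual feedback,
# averaged across targets and subjects.
trial = np.array([1, 2, 3, 4, 5])
error = np.array([1.9, 2.1, 1.8, 2.0, 1.9])

# A learning effect would show up as a significantly negative slope across repetitions.
fit = stats.linregress(trial, error)
print(f"slope = {fit.slope:.3f} deg/trial, P = {fit.pvalue:.3f}")
```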


FIG. 17. Gaze (left column) and directional (right column) errors for each of the 5 trials to each target light. Monocular experimental trials without visual feedback (black) and calibration trials with visual feedback (white) are shown for data with the head upright (A and B), the head CW (C and D) and the head CCW (E and F). Again, note that here we averaged the absolute values of the errors, which is why they appear larger than in the previous figures.

    DISCUSSION

General findings

The most important conclusion to be drawn from this study is that the oculomotor system accounts for both RE and 3-D eye orientation in the visuomotor transformation. This conclusion is necessary given Listing's law, the nontrivial dependency of RE on both eye orientation and objective target direction, and the degree of saccade accuracy that we have described above. More precisely, knowledge of eye position is necessary to transform oculocentric, retinal displacement signals into headcentric, motor displacement commands (Crawford and Guitton 1997). Our results show that the saccade generator does indeed perform such a reference frame transformation, because eye positions out of Listing's plane were partially compensated for and eye positions in Listing's plane were almost fully compensated for. This finding thus contradicts the idea that the visuomotor transformation for saccades in Listing's plane amounts to a simple stimulus-response look-up table (Hepp et al. 1997; Jürgens et al. 1981; Raphan 1997, 1998). The 3-D LT model was simply unable to account for the level of saccade accuracy observed in our data because it lacks a position-dependent reference frame transformation.

Visuomotor transformation versus visual representation

Before discussing these results further, we feel that it is important to clarify our view of the reference frame transformation in our model. Because our RFT model includes an internal representation of targets relative to the head, it is tempting to think that this is used to remember target direction during eye movements (Brotchie et al. 1995; Howard 1982; Zipser and Andersen 1988). However, this is likely a separate issue (Crawford and Guitton 1997). For example, Henriques et al. (1998) recently demonstrated that visual targets are probably perceived and remembered in an oculocentric reference frame (Duhamel et al. 1992). This result may initially seem to contradict our current findings, but the two views can be easily reconciled. As proposed by Henriques et al. (1998), we suggest that the mechanism of space constancy across saccades may well involve oculocentric remapping at the level of RE (Duhamel et al. 1992), but the reference frame transformation for saccades (discussed in this paper) takes place downstream as part of a separate process, i.e., the visuomotor transformation. Thus we believe that the internal eye-to-head reference frame transformation described in our experiments pertains, not to the mechanism of space constancy across saccades, but rather to visuomotor execution and perhaps applies similarly to perceptual interpretation at the current eye orientation.

Retinal error and eye orientation

By examining Fig. 7, it is clear that the site of retinal stimulation is a product of both target displacement in space and 3-D eye position. As the targets became more vertically eccentric, they produced progressively more oblique REs. Thus, when making saccades between visual targets, even with eye position in Listing's plane, it is insufficient to assume, as many previous researchers have done, that the site of retinal stimulation can be determined geometrically from information derived from the target display alone. As Crawford and Guitton (1997) have shown theoretically, and as we now show experimentally, such an assumption can lead to gross misestimations of the pattern of retinal stimulation. The correct pattern of retinal stimulation can only be determined by measuring the eye's 3-D orientation relative to the head. Our work suggests that, whenever the eyes attain any position other than primary position, this pattern changes, posing a nontrivial problem both for external measures of visual geometry (Fig. 7) and for the brain's interpretation of these retinal patterns.

Therefore these basic geometric problems pertain not only to the saccadic eye movements investigated in this study, but also to the egocentric perception of spatial relationships between objects in general. For example, this poses a direct problem for perceiving the objective orientation of lines in space when the eye is at eccentric (i.e., secondary, tertiary, and torsional) positions (Figs. 1, 3A, and 4). Furthermore, in a recent theoretical paper (Tweed 1997), the same basic problem was described as it pertains to binocular vision, where it was again concluded that eye orientation must be taken into account. Finally, considering the complexity of the relationship between RE and eye position that we have demonstrated here in normal subjects, a similar quantification is probably important in cases of eye muscle pathology. For example, patients with strabismus will have unusual 3-D eye orientations that vary with gaze direction. These position dependencies will likely produce complex and unexpected monocular and binocular visual deficits that can only be predicted if 3-D eye orientation is measured and accounted for by the means that we have demonstrated.

Saccade properties

Crawford and Guitton (1997) simulated RE in the context of Listing's law because this behavioral constraint determines both the initial eye orientations where RE is induced and the axes of eye rotation used to satisfy these REs. Although our model predictions took initial eye positions into account, they assumed that the latter constraint held during the generation of saccades. We therefore had to first evaluate Listing's law over the large ranges used in our experiments.

Our measurements confirmed that Listing's law held in all head-upright conditions, but with varying degrees of precision. In particular, the torsional ranges obtained during random saccades were quite large. This may have been due to the very large range that subjects were encouraged to explore, as well as the fact that these eye positions were recorded in complete darkness. Furthermore, it has recently been shown that Listing's plane is thicker for multidirectional tasks (such as our random paradigm), because its thickness is a composite of the thicknesses for several unidirectional tasks (Desouza et al. 1997). In comparison, the calibration planes (in which visual feedback was provided and there was less variability in saccade direction) were narrower. Most importantly, the horizontal saccades employed in our analysis of saccade accuracy obeyed Listing's law with a good degree of precision, despite their large size (~60°). As demonstrated above, this is kinematically equivalent to saying that these saccades obeyed the half-angle rule of Listing's law. This is important in this context because it shows that these saccades cannot satisfy eye-fixed RE by mapping it onto an eye-fixed axis of rotation, regardless of plant properties (Crawford and Guitton 1997).

On a related theme, the horizontal gaze trajectories for most subjects (e.g., Fig. 8, B-D) appeared to curve outward at all five light elevations, and the curvature seemed to increase with increased distance from primary position. This observation was more consistent with the saccades produced by the standard plant version (half-angle rule applied in the brain stem) of the RFT model, as opposed to the linear plant version (half-angle rule applied in the ocular plant; Fig. 3D). The standard plant version produced arcing gaze trajectories because it generates fixed-axis saccades, whereas the linear plant version predicted straight gaze trajectories only because it produces nonconstant axes during saccades (Crawford and Guitton 1997). However, we do not regard this to be important evidence for the neural model of the half-angle rule because there may be slight modifications of the linear plant that do produce fixed-axis saccades, and regardless, neural operations can potentially compensate for or "undo" the characteristics of any arbitrary plant (e.g., Smith and Crawford 1997).

Finally, it has been suggested that saccades over 10° normally undershoot their targets by ~10%, and that the amount of undershooting increases with saccade amplitude (Becker 1972). Moreover, it has been hypothesized that undershooting is actually a strategy employed by the saccadic system for the sake of efficiency and economy (Howard 1982). In our experiments, the endpoints of the subjects' saccades were variable but did tend to undershoot the final target (Fig. 9A). One-half of the subjects (3 of 6) completely undershot all five horizontal targets, whereas two of the remaining three undershot at least one target, and only one subject consistently overshot all the targets. Furthermore, the six subjects consistently undershot the eight radial targets, particularly in the down-left direction. It is conceivable that the relative location of primary position (up-right of the center light) may have contributed to this latter effect (i.e., that the saccade generator is best calibrated near primary position).

Testing between two models of the visuomotor transformation

Because 1) RE is fixed with respect to the eye and thus varies with eye position, and 2) Listing's law precludes the use of eye-fixed saccade axes, RE cannot be correctly mapped in a fixed way onto ME. If these factors are ignored, as in our LT model, a pattern of errors emerges in which saccades from eye positions in Listing's plane miss their targets centrifugally, by amounts that grow as the eyes' orientation deviates from primary position, and saccades from torsional eye positions miss their targets in a direction tilted opposite to the direction of induced torsion. In contrast, we found that the saccade generator correctly converts oblique REs (Fig. 7E, REs 1, 2, 4, and 5) into horizontal saccades for targets lying at various eccentricities from primary position, and partially corrects for torsional eye positions out of Listing's plane.

In the horizontal saccade task, where eye positions remained in Listing's plane, we found that, although final horizontal eye positions varied greatly between subjects (Fig. 9A), the final vertical distribution of these eye movements was quite compact and accurate (Fig. 9B). Moreover, the directional errors of these large horizontal saccades were much smaller than the receptive fields of individual ganglion cells in the peripheral portion of the retina (Perry and Cowey 1985). Similar cases of such visual "hyperacuity" have been observed previously for vernier line judgments (Westheimer 1981) and are probably mediated by overlapping receptive fields in a population code (Sparks 1989). Figure 9D illustrates that the slopes for four of the six subjects, as well as the average slope, were negative. We hypothesize that this may indicate a slight overcompensation for eye position. A much larger exaggeration might account for the position-dependent pattern of directional errors observed in saccades to remembered targets (White et al. 1994).

In contrast, our comparisons between simulations of the LT model and actual saccades suggest that the saccade generator did not compensate completely for torsional eye positions. More precisely, ocular counterroll had different effects depending on the subject and the direction of the saccade, perhaps due to variation in the torsional dependencies of individual muscles or variable visuomotor calibration. Note that humans and monkeys normally hold their heads upright when orienting themselves (Glen and Vilis 1992). Therefore it should not be surprising that the oculomotor system is well calibrated for saccades from a head-upright position (Melis and van Gisbergen 1995; Optican and Miles 1985). Conversely, because we rarely make saccades from torsional eye-in-head positions, the system should be poorly calibrated for counterroll. We therefore hypothesize that subjects could learn to make more accurate saccades given proper training. However, it should be emphasized that the systematic torsion-related errors that we observed were still only a small percentage of those predicted by the LT model.

In comparison, little or no compensation for torsional eye positions was found in perception when subjects were asked to match a visual line to gravitational horizontal (Wade and Curthoys 1997). This may be because perception is even more poorly calibrated for torsion than saccades are, and because muscular pulleys reduce the effect of torsion on saccades but have no effect on perception. Finally, it is also possible that otolith signal interference contributed to some of the errors in saccade direction that were observed with the head tilted. Unfortunately, very little is known about the effect of tonic vestibular signals on saccade accuracy (Henn et al. 1997), and therefore we could not control for this in our analysis. However, it would be possible to eliminate this variable by performing our experiments on upright nonhuman animals and rotating the eyes torsionally, by stimulating the appropriate torsional brain stem neurons (Crawford and Vilis 1992), just before visually eliciting saccades.

Thus, in our experiments, eye positions in Listing's plane were fully compensated for, whereas positions out of Listing's plane were at least partially compensated for. Therefore we conclude that the brain takes 3-D eye position into account when reading the retinal code to generate accurate saccades. The sort of 3-D input/output analysis that we have used here could be important clinically for evaluating visual and oculomotor function in patients with eye muscle deficiencies (which may be associated with both unusual initial eye orientations and problems with certain saccade trajectories).

Visual feedback

The preceding discussion pertains only to saccades with no visual feedback. Although these saccades were found to be relatively accurate on average, saccades toward certain targets in some subjects were sometimes quite inaccurate in direction, particularly with the head tilted (Figs. 14 and 15). However, we found that allowing visual feedback during saccades dramatically and consistently reduced these residual errors in both direction and magnitude.

The pattern that we observed suggests that vision (perhaps early in a saccade) is either being used for in-flight corrections toward the end of the saccade, or at the least provides an extra "drive" (even if only through attention/arousal effects) that helps to get the saccade on target before the "latch mechanism" shuts it off. The drawn-out corrective "tails" observed in the velocity profiles of some of the least accurate saccades (Fig. 15) certainly give the appearance of a feedback-driven mechanism. If these corrections are driven by visual feedback, they would seem to occur too quickly for inputs routed through primary visual cortex (Carpenter 1977). However, a faster response might be mediated by direct retinotectal projections to the superficial layers of the superior colliculus (Hoffmann 1973), which in turn have some direct projections to the deeper motor layers of the colliculus. Indeed, this could be an important function of such a path in higher mammals, as opposed to an anatomic basis for the LT hypothesis (Moschovakis et al. 1988). In any case, it would seem that the widespread assumption that vision during saccades has no immediate impact on saccade accuracy needs to be reexamined.

Biological implications for the visuomotor transformation

Having determined that the brain does perform the equivalent of a position-dependent reference frame transformation, the next experimental task is to determine where and how this occurs. As argued above, our findings preclude the use of a simple input/output look-up table, or any similar scheme, because eye orientation must also be taken into account. A reference frame transformation could be accomplished by a more complex look-up table with entries for each different combination of RE and eye position. However, the brain does not seem to employ such space-intensive mechanisms. For example, eye position signals seem to interact multiplicatively with RE in the quasi-retinotopic maps of posterior parietal cortex and superior colliculus (Andersen 1989; van Opstal 1993). Alternatively, RE and eye position could interact in an even more compact fashion, perhaps within a circumscribed brain stem nucleus, if this were performed past the stage of retinotopic mapping. Finally, the demonstrated role of the cerebellum in saccade-related eye position dependencies (e.g., Vilis and Hore 1981) may implicate it in this process, or at least in the calibration of the reference frame transformation.

Before determining how this transformation occurs, it will probably be easier to determine where it occurs, by observing the input-output signals of neural structures. Crawford and Guitton (1997) predicted that, given results such as ours, there must be a specific internal switch in representation between 2-D, oculocentric RE vectors and 3-D, headcentric ME vectors. For example, if one were to evoke large gaze shifts (≥60°) by stimulating the colliculus (Freedman et al. 1996), two different results could occur. If the superior colliculus encodes motor gaze shifts (relative to the head or body), then one would expect to find fixed saccade vectors independent of initial eye position. This would imply a purely motor code, and thus the transformation would have to occur upstream from that point. However, if the superior colliculus encodes RE, then one would expect to see a position-dependent, semiconverging pattern of eye movements, suggesting that the reference frame transformation exists somewhere downstream.1 The observation that the deeper layers of the colliculus encode a shift in visual gaze, independent of downstream compensations for eye position, seems more consistent with the latter hypothesis (Freedman et al. 1996; Stanford and Sparks 1994). Moreover, Russo and Bruce (1993) quantified a pattern of mildly convergent saccades (evoked by stimulation of the frontal and supplementary eye fields) very similar to that predicted here. However, contrary to previous assumptions, this may mean that these sites encode true RE, with the correct position-dependent transformation implemented downstream, perhaps via cerebellar inputs to the brain stem (Russo and Bruce 1993; Vilis and Hore 1981). In any case, our results suggest that the equivalent of a 3-D, position-dependent RFT must occur between the 2-D, retinotopic maps of the brain (Goldberg and Bruce 1990; Moschovakis and Highstein 1994; Waitzman et al. 1991) and the 3-D, headcentric burst neuron coordinate system (Crawford and Vilis 1992; Henn et al. 1989).

    ACKNOWLEDGEMENTS

  We thank Dr. T. Vilis for critical comments on this manuscript.

  This work was supported by Canadian Medical Research Council and Natural Sciences and Engineering Research Council grants to J. D. Crawford and by the Sloan Foundation. J. D. Crawford is a Canadian Medical Research Council Scholar and an Alfred P. Sloan Fellow.

    FOOTNOTES

1   There may be other contributing factors to convergence in electrically evoked saccades. For example, the data of Freedman et al. (1996) suggest that some of the strongly converging patterns observed in head-fixed animals may be an artifact of removing head movement from the control system.

  Address for reprint requests: J. D. Crawford, Dept. of Psychology, York University, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada.

  Received 27 February 1998; accepted in final form 20 July 1998.

    REFERENCES
