Centre for Vision Research, Canadian Institutes of Health Research Group for Action and Perception, Department of Psychology, and Department of Biology, York University, Toronto, Ontario M3J 1P3, Canada
ABSTRACT
Smith, Michael A. and J. Douglas Crawford. Implications of Ocular Kinematics for the Internal Updating of Visual Space. J. Neurophysiol. 86: 2112-2117, 2001. Recent studies have suggested that during saccades cortical and subcortical representations of visual targets are remapped in retinal coordinates. If this is correct, then the remapping process must incorporate the noncommutativity of rotations. For example, our three-dimensional (3-D) simulations of the commutative vector-subtraction model of retinocentric remapping predicted centripetal errors in saccade trajectories between "remembered" eccentric targets, whereas our noncommutative model predicted accurate saccades. We tested between these two models in five head-fixed human subjects. Typically, a central fixation light appeared and two peripheral targets were flashed. With all targets extinguished, subjects were required to saccade to the remembered location of one of the peripheral targets and then to saccade between the two remembered locations. Subjects showed minor misestimations of the spatial locations of targets but failed to show the cumulative pattern of errors predicted by the commutative model. This experiment indicates that if targets are remapped in a retinal frame, then the remapping process takes the noncommutativity of 3-D eye rotations into account. Unlike other noncommutative aspects of eye rotations that may have mechanical explanations, the noncommutative aspects of this process must be entirely internal.
INTRODUCTION
For visual information to be useful for more than a fraction of a second, its spatial content must be stored and updated across saccades. Recent neurophysiological studies have suggested that the spatial targets for eye and arm movements are updated by remapping their internal representations within retinal coordinates during each saccade (Batista et al. 1999; Duhamel et al. 1992; Gnadt and Andersen 1988; Henriques et al. 1998; Walker et al. 1995). For example, suppose that the receptive field of a visually responsive neuron is currently encoding target A and that an intended eye movement will cause target B to fall within its receptive field. Concomitant with the eye movement, the neuron will stop responding to target A and begin responding to target B even though target B is not yet within its receptive field (Walker et al. 1995). Such neural events have been modeled by subtracting a vector representing the saccade from other vectors representing visual locations on a retinotopic map (Goldberg and Bruce 1990; Moschovakis and Highstein 1994).
However, an important property of three-dimensional (3-D) eye rotations is their noncommutativity. By noncommutativity we mean that, unlike vector addition, applying rotations about the three axes in different orders leaves the eye in different final orientations (Tweed and Vilis 1987). As a result, vector subtraction (i.e., addition of the negative of a vector) does not properly represent the physical rotations of the eye and may not be the appropriate mechanism for retinocentric remapping (Henriques et al. 1998). The role of noncommutativity in oculomotor control has been controversial (Crawford and Guitton 1997; Demer et al. 2000; Quaia and Optican 1998; Raphan 1998; Tweed and Vilis 1987), but its implications for higher-level processes like visuospatial remapping remain largely unexplored.
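To make the distinction concrete, here is a minimal sketch (not from the original study; the axes, angles, and right-handed conventions are illustrative assumptions) showing that two 90° rotations applied in opposite orders leave a gaze vector pointing in different directions, whereas the corresponding 2-D displacement vectors sum to the same value in either order.

```python
import numpy as np

def rot(axis, angle_deg):
    """Rotation matrix about a unit axis by angle_deg, via Rodrigues' formula."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

gaze = np.array([0.0, 0.0, 1.0])          # line of sight, straight ahead
horizontal = rot([0, 1, 0], 90)           # 90 deg rotation about the vertical axis
vertical = rot([1, 0, 0], 90)             # 90 deg rotation about the horizontal axis

print(vertical @ horizontal @ gaze)       # horizontal rotation first -> one final gaze direction
print(horizontal @ vertical @ gaze)       # vertical rotation first  -> a different gaze direction
# In contrast, the 2-D displacement "vectors" commute: (90, 0) + (0, 90) == (0, 90) + (90, 0).
```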
Recently, Henriques et al. (1998) suggested a model for the intersaccadic remapping process that would incorporate the noncommutativity of 3-D rotations. In particular, the authors proposed that the brain rotates its retinocentric representations by the 3-D inverse of each eye rotation, which is the rotary analog of vector subtraction. In theory, this would be a more correct mechanism, but it is not yet clear how important this is for behavior or whether the real system takes the difference between these approaches into account. The purpose of this study was to generate simulations that would provide a behavioral test between the commutative and noncommutative models of the remapping process and to test these simulations experimentally.
THEORY
The cortical structures involved in remapping [e.g., lateral intraparietal cortex (LIP), frontal cortex] seem to encode saccade targets in a visual frame (Colby and Goldberg 1999; Colby et al. 1995). Moreover, Klier et al. (2001) have shown that the superior colliculus also encodes movements in an eye-centered visual frame. In light of these findings, the process of visuospatial remapping must be modeled in a visual coordinate frame. In this study, two models of visuospatial remapping were tested: the vector-subtraction model of Goldberg and Bruce (1990) and the noncommutative remapping model of Henriques et al. (1998). The details of these models are shown in Fig. 1. Briefly stated, the vector-subtraction model remaps the visual target in eye coordinates by subtracting the displacement vector (E) of the intervening saccade. In contrast, the noncommutative model rotates the old coordinates of the target by the inverse of the 3-D eye rotation, as suggested by Henriques et al. (1998). An additional feature overlooked by Henriques et al. (1998) is that an efferent copy of the headcentric eye rotation must first be put into eye coordinates so that it matches the frame of the sensory representation. In practice, these operations collapse into a single multiplicative comparison between current and desired position. To be fair, the output of both models was used to drive the same model of the saccade generator (Crawford and Guitton 1997), which converts targets in a visual frame into a motor displacement command for saccades in Listing's plane (Crawford et al. 1997; Hepp et al. 1997). This saccade generator model has previously been shown to produce realistic saccades that are accurate and obey Listing's law (Klier and Crawford 1998).
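The two remapping rules can be summarized with a short sketch (a simplified reading of Fig. 1, not the authors' simulation code; the quaternion conventions and function names are our assumptions): the commutative rule subtracts the 2-D saccade vector from the stored target location, whereas the noncommutative rule rotates the stored 3-D target direction by the inverse of the saccadic eye rotation.

```python
import numpy as np

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion [w, x, y, z] for a rotation about 'axis' by 'angle_deg'."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_mul(q, r):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_rotate(q, v):
    """Rotate 3-vector v by quaternion q (computes q v q*)."""
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

# Commutative rule (vector subtraction, Goldberg and Bruce 1990): the new retinal
# location is the old 2-D location minus the 2-D saccade displacement vector.
def remap_commutative(target_2d, saccade_2d):
    return np.asarray(target_2d, float) - np.asarray(saccade_2d, float)

# Noncommutative rule (Henriques et al. 1998): the old target direction, stored in
# eye coordinates, is rotated by the inverse of the 3-D eye rotation of the saccade.
def remap_noncommutative(target_dir_eye, saccade_quat_eye):
    inverse = saccade_quat_eye * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_rotate(inverse, target_dir_eye)
```

In both cases the inputs are the stored target and an efference copy of the intended saccade; in the simulations described next, each rule supplied the goal for the same downstream saccade generator.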
Simulations
Saccades were simulated with both models using various target configurations. The configuration that most clearly distinguishes between the two models is shown in Fig. 2. This arrangement of peripheral targets formed the basis of our test paradigm, in which a central fixation light-emitting diode (LED) was illuminated while two of the corner targets flashed. The task required three saccades between the remembered locations of the corner targets in the dark (see Experimental paradigms for details). Figure 2, A and B, shows simulated gaze trajectories for the test paradigm using the noncommutative model (Fig. 2A) and the commutative vector-subtraction model (Fig. 2B). Note that the noncommutative model showed no errors either in acquiring the initial target or in the saccades between the remembered locations of the peripheral targets. The commutative model was able to acquire the first target accurately because this did not require remapping. However, the commutative model predicted a cumulative pattern of centripetal errors during subsequent saccades between the remembered locations of the peripheral targets. Further simulations suggested that this pattern of saccades provided the clearest test between the two models, so our experiment was designed to emulate it.
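As a rough, self-contained illustration of why the predictions diverge (our own simplified geometry, not the actual simulation; the 30° target arrangement follows Fig. 2, but the shortest-path rotation and the azimuth/elevation angles are assumptions), consider just the first remapping step of the test paradigm: after the eye acquires the up-right target, where does each model place the remembered up-left target?

```python
import numpy as np

def direction(azim_deg, elev_deg):
    """Unit gaze direction for an azimuth/elevation pair (straight ahead = +z)."""
    az, el = np.radians([azim_deg, elev_deg])
    return np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])

def to_angles(v):
    """Inverse of direction(): (azimuth, elevation) in degrees."""
    return np.degrees([np.arctan2(v[0], v[2]), np.arcsin(v[1] / np.linalg.norm(v))])

def rotation_between(a, b):
    """Shortest-path rotation matrix taking unit vector a onto unit vector b."""
    axis = np.cross(a, b)
    s, c = np.linalg.norm(axis), np.dot(a, b)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]]) / s
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

fovea = np.array([0.0, 0.0, 1.0])
t_ul, t_ur = direction(-30, 30), direction(30, 30)   # the two flashed corner targets
R_saccade = rotation_between(fovea, t_ur)             # eye rotation onto the up-right target

true_retinal = to_angles(R_saccade.T @ t_ul)          # where the up-left target really is now
remap_rotation = to_angles(R_saccade.T @ t_ul)        # rotation-based remap (exact by construction)
remap_vector = np.array([-30.0, 30.0]) - np.array([30.0, 30.0])   # vector-subtraction remap

print(true_retinal, remap_rotation, remap_vector)
# The vector-subtraction estimate (-60, 0) misses the true retinal location (about -51, +6)
# by several degrees; in the full simulation this error compounds with each subsequent
# saccade between the two remembered targets, as in Fig. 2B.
```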
METHODS
Five head-fixed human subjects, aged 22-43 years, participated in three experimental paradigms. In each paradigm, subjects sat in the dark facing a black tangent screen 110 cm distant. The tangent screen held an arrangement of five LEDs: a central fixation LED located directly in front of the subject and four peripheral target LEDs, each 30° from the central fixation point, at the corners of an imaginary upright square.
3-D eye position information was collected using the scleral search coil technique in three alternating magnetic fields (Klier and Crawford 1998; Tweed et al. 1990). Data were digitized and analyzed offline using in-house software. The experiment and methods were approved by the York Human Participants Review Subcommittee.
Experimental paradigms
In our test paradigm, subjects were required to saccade repeatedly between adjacent corners of a virtual square outlined by the peripheral target LEDs. Using the upper right and upper left LEDs as an example (see Fig. 2B for a simulation), the subject fixated the illuminated central LED while the upper right and upper left corner targets flashed. After 400 ms, one of the two corner targets (chosen at random) was flashed again to signal which target was to be fixated first, and after 250 ms the central LED was extinguished. An audio tone cued the subject to make a saccade to the remembered location of the first target and then to make three successive saccades back and forth between the two remembered corner locations. The subject's successive saccades were also paced with an audio tone to ensure that each remembered location was fixated for a consistent amount of time. At the completion of the three successive saccades, a higher tone instructed the subject to again fixate the re-illuminated central target. This entire task was repeated for a total of 10 trials with each of the four adjacent-corner pairs of LEDs, with the initial corner target selected at random. This was the main test between the models.
We also conducted a visual control paradigm. This paradigm was identical to the test paradigm, except that the LEDs were illuminated sequentially such that only one LED was illuminated at any one time, each for 1,500 ms. Subjects fixated each LED in turn for the entire duration of its illumination. The audio pacing tones were maintained for consistency. This paradigm served as an extra calibration of the ideal gaze directions at the targets (see Analysis).
Finally, as an additional control, we conducted a memory control paradigm (subjects saccaded between the remembered locations of peripheral LEDs five times, with one of the pair randomly chosen to serve as the initial fixation point instead of the central target). This was done to quantify position-dependent memory errors (Gnadt et al. 1991; White et al. 1994) independent of remapping from the center. However, because these controls did not prove to be necessary for analysis of the main test paradigm, and because Henriques and Crawford (2001) have subsequently shown that position-dependent errors in human "memory saccades" are minimal in this task, we do not include these data here.
Analysis
Perceived peripheral target locations were determined for each subject by computing the centroid of the cloud of fixations around each target during the visual control trials. Theoretical error predictions for each subject were then generated by feeding the commutative model the initial fixation positions (measured with the search coils) for each task, together with the order and locations of the peripheral target presentations. These data were first rotated into alignment with Listing's plane coordinates (Tweed et al. 1990), because this was the coordinate system used by the models. This step was necessary because the primary position in Listing's coordinates does not generally align with the central position (Tweed and Vilis 1990), and conversely, subjects' eye positions at the center target were not generally aligned with primary position. In this way we could generate errors like those shown in Fig. 2B while accounting for individual differences in fixation positions within Listing's coordinates. Again, the noncommutative model always predicted zero error.
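For concreteness, the following sketch (assumed data layout and variable names, not the authors' analysis code) illustrates two of the steps described above: taking the centroid of a fixation cloud as the perceived target location, and fitting Listing's plane to the eye-position quaternions as a first step toward rotating the data into Listing's coordinates (Tweed et al. 1990).

```python
import numpy as np

def target_centroid(fixations_2d):
    """Perceived target location: centroid of a 2-D cloud of fixation directions (deg)."""
    return np.mean(np.asarray(fixations_2d, float), axis=0)

def fit_listings_plane(quats):
    """Least-squares fit of Listing's plane to eye-position quaternions.

    quats: N x 4 array [q0, q1, q2, q3], with q1 assumed to be the torsional
    component.  The plane q1 = a + b*q2 + c*q3 is fitted; its offset and tilts
    describe how the plane deviates from q1 = 0, from which a rotation into
    Listing's coordinates can be derived.
    """
    q = np.asarray(quats, float)
    design = np.column_stack([np.ones(len(q)), q[:, 2], q[:, 3]])
    coeffs, *_ = np.linalg.lstsq(design, q[:, 1], rcond=None)
    return coeffs   # (a, b, c): offset and tilts of the fitted plane
```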
RESULTS
Figure 2, C and D, shows the gaze trajectories of a typical subject performing the test paradigm using the upper-left/upper-right and lower-right/lower-left target pairs (Fig. 2C) and the upper-left/lower-left and lower-right/upper-right target pairs (Fig. 2D). The subject was able to perform the basic elements of the task with some localization errors for each target. In Fig. 2C this subject tended to misjudge the locations of all of the targets to the left, whereas in Fig. 2D a more skewed pattern of errors was seen. However, note that the subject consistently saccaded between the same two incorrect positions, which we call positional error, and did not show the sequentially cumulative pattern predicted by the commutative model.
On average, subjects showed a raw positional error of 2.77° (SD 1.41°; all subjects, all tasks). To eliminate these positional errors, which simply added noise unrelated to remapping, and thus isolate any errors due to an internal commutative approximation, we subtracted each subject's mean error in acquiring the initial target (where no remapping is required) from the errors made during saccades returning to that target.
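A minimal sketch of this error-isolation step (array layout assumed):

```python
import numpy as np

def isolate_remapping_errors(initial_errors, return_errors):
    """Subtract the mean initial-acquisition error (no remapping involved) from the
    errors of later saccades returning to that same remembered target.

    Both arguments are N x 2 arrays of (horizontal, vertical) errors in degrees."""
    positional_bias = np.mean(np.asarray(initial_errors, float), axis=0)
    return np.asarray(return_errors, float) - positional_bias
```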
Figure 3, A and B, shows scatter plots of the remapping errors for one subject (vertical axes) against those predicted by the commutative model for the test paradigm across all targets (horizontal axes). Note that the main component of the predicted error was always orthogonal to the saccades (Fig. 2B). We therefore plotted the vertical error component for horizontal saccades in Fig. 3A (see Fig. 2C for the targets involved) and the horizontal error component for vertical saccades in Fig. 3B (see Fig. 2D for the targets involved). Note also that the commutative model predicted a slope of 1, whereas the noncommutative model predicted a slope of 0. It is clear from Fig. 3 that, although there is considerable stochastic scatter in the data, the regression line fit to the actual errors follows the prediction of the noncommutative model much more closely than that of the commutative model.
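The slope analysis of Fig. 3 can be sketched as a simple linear regression (variable names are our assumptions): a fitted slope near 1 would support the commutative model, and a slope near 0 the noncommutative model.

```python
import numpy as np

def remapping_error_slope(predicted_errors, observed_errors):
    """Slope of a least-squares line (with intercept) relating the observed orthogonal
    error component to the error predicted by the commutative model (both in degrees)."""
    slope, intercept = np.polyfit(np.asarray(predicted_errors, float),
                                  np.asarray(observed_errors, float), 1)
    return slope
```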
To determine whether this result was consistent, we performed the same analysis across subjects. Figure 3 shows the slopes of all subjects for the vertical error component of horizontal saccades (Fig. 3C) and for the horizontal error component of vertical saccades (Fig. 3D). In the vertical saccades task, one subject's slope was high enough that the commutative model's prediction fell within the associated confidence intervals (confidence intervals not shown). This subject's high slope could reflect a partial failure of the noncommutative mechanism in this particular task. However, in all other cases the slopes more closely followed the prediction of the noncommutative model. Indeed, the average slopes for the horizontal and vertical tasks were only 0.026 (SE 0.036) and 0.038 (SE 0.038), respectively. Further, a t-test across the slopes of all subjects (in both the horizontal and vertical tasks) showed that, as a population, the subjects' slopes were significantly different from the slope predicted by the commutative model (P < 0.05) and, conversely, were not significantly different from the slope predicted by the noncommutative model (P > 0.7 for the horizontal task and P > 0.3 for the vertical task).
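The population test can be sketched in the same spirit (SciPy assumed available): one-sample t-tests of the per-subject slopes against the value each model predicts.

```python
from scipy import stats

def test_against_models(subject_slopes):
    """One-sample t-tests of the per-subject regression slopes against the slope
    predicted by the commutative model (1) and by the noncommutative model (0)."""
    p_commutative = stats.ttest_1samp(subject_slopes, popmean=1.0).pvalue
    p_noncommutative = stats.ttest_1samp(subject_slopes, popmean=0.0).pvalue
    return p_commutative, p_noncommutative
```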
DISCUSSION
Several recent studies have suggested that trans-saccadic remapping in retinal coordinates is an important mechanism in visuomotor space constancy (Colby and Goldberg 1999; Duhamel et al. 1992; Goldberg and Bruce 1990; Henriques et al. 1998; Moschovakis and Highstein 1994). For this remapping process to be accurate, we have seen that the noncommutativity of 3-D rotations must be taken into account or systematic errors will be made (Fig. 2B). Because subjects do not make the systematic errors predicted by the commutative vector-subtraction model, we conclude that real behavior does take noncommutativity into account.
This is not to claim that subjects performed perfectly in our task. We saw two types of errors: a constant "positional error" (Fig. 2, C and D) and a randomly distributed error (Fig. 3). Presumably, the systematic error was related to an initial "misperception" of target location, because it occurred even in the initial saccade (before any remapping) and was not corrected. But this is not related to commutativity, and neither model can explain these errors.
The issue of noncommutativity in oculomotor control was first raised in the context of Listing's law (Ferman et al. 1987a,b; Straumann et al. 1991; Tweed and Vilis 1987, 1990). Some have argued that the control of 3-D eye rotations requires a neural solution (Crawford and Guitton 1997; Klier and Crawford 1998; Tweed and Vilis 1987, 1990). Others have suggested that a mechanical solution may be available (Demer et al. 1995, 2000) and that a commutative controller is sufficient to control 3-D rotations of the eye (Quaia and Optican 1998; Schnabolk and Raphan 1994). However, there can be no mechanical solution to the problem of visuospatial updating, since the visual cortex must update the internal representation of the retinal image in correspondence with the physical rotation of the eye in space, irrespective of mechanical considerations.
Moreover, the problem of noncommutativity is not unique to this particular mechanism: it has already been shown that an alternative mechanism for saccadic space constancy (rotating oculocentric vectors into head coordinates) also requires a noncommutative solution (Crawford and Guitton 1997). This is not to say that such noncommutative operations must take the form of the quaternion operations shown in our model (Fig. 1B). We have recently shown that artificial neural networks can implement such transformations more realistically as position modulations on vectorial visuomotor commands (Smith and Crawford 2000).
Physiologically speaking, our findings suggest that the signal driving the remapping process must take the form of a 3-D rotation of 2-D retinal representations, which in turn requires information about the intended 3-D saccade vector and initial eye orientation (Fig. 1B). The most likely source for such signals is not the cortex (see Footnote 1) but the brain stem oculomotor system (Crawford 1994; Henn et al. 1989; Van Opstal et al. 1991; Waitzman et al. 1991), although these signals are probably relayed to the frontal cortex via the thalamus (Lynch et al. 1994). If this is correct, however, these brain stem signals must first be put into retinal coordinates (see Footnote 2), as in our model, before they can act correctly on the retinocentric maps of the cortex and superior colliculus (Andersen et al. 1985; Cynader and Berman 1972; Munoz et al. 1990; Robinson 1972; Schall et al. 1995). Thus this model makes specific predictions about the anatomy and physiology of the internal updating mechanism for saccades.
ACKNOWLEDGMENTS
J. D. Crawford holds a Canada Research Chair.
This work was supported by a Canadian Natural Sciences and Engineering Research Council (NSERC) grant to J. D. Crawford. M. A. Smith was supported by an NSERC scholarship.
FOOTNOTES
Address for reprint requests: M. A. Smith, Dept. of Psychology, York University, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada (E-mail: mas@yorku.ca).
1 According to our simulations, a model which used visual signals (available in the cortex) to update the spatial map produced qualitatively similar, but larger, errors than the commutative model tested here.
2 Simulations which lacked this step (not shown) failed to provide accurate remapping.
Received 8 November 2000; accepted in final form 5 July 2001.