Context Compensation in the Vestibuloocular Reflex During Active Head Rotations

W. P. Medendorp, J.A.M. Van Gisbergen, S. Van Pelt, and C.C.A.M. Gielen

Department of Medical Physics and Biophysics, University of Nijmegen, NL 6525 EZ Nijmegen, The Netherlands


    ABSTRACT

Medendorp, W. P., J.A.M. Van Gisbergen, S. Van Pelt, and C.C.A.M. Gielen. Context Compensation in the Vestibuloocular Reflex During Active Head Rotations. J. Neurophysiol. 84: 2904-2917, 2000. The vestibuloocular reflex (VOR) needs to modulate its gain depending on target distance to prevent retinal slip during head movements. We investigated gain modulation (context compensation) for binocular gaze stabilization in human subjects during voluntary yaw and pitch head rotations. Movements of each eye were recorded, both when attempting to maintain gaze on a small visual target at straight-ahead in a darkened room and after its disappearance (remembered target). In the analysis, we relied on a binocular coordinate system yielding a version and a vergence component. We examined how frequency and target distance, approached here by using vergence angle, affected the gain and phase of the version component of the VOR and compared the results to the requirements for ideal performance. Linear regression analysis on the version gain-vergence relationship yielded a slope representing the influence of target proximity and an intercept corresponding to the response at zero vergence ("default gain"). The slope of the fitted relationship, divided by the geometrically required slope, provided a measure for the quality of version context compensation ("context gain"). In both yaw and pitch experiments, we found default version gains close to one even for the remembered target condition, indicating that the active VOR for far targets is already close to ideal without visual support. In near target experiments, the presence of visual feedback yielded near unity context gains, indicating close to optimal performance (retinal slip <4°/s). For remembered targets, the context gain deteriorated but was still superior to performance in corresponding passive studies reported in the literature. In general, context compensation in the remembered target paradigm was better for vertical than for horizontal head rotations. The phase delay of version eye velocity relative to head velocity was small (~2°) for both horizontal and vertical head movements. Analysis of the vergence data from the near target experiments showed that context compensation took into account that the two eyes require slightly different VORs. In the DISCUSSION, comparison of the present default VOR gains and context gains with data from earlier passive studies has led us to propose a limited role for efference copies during self-generated movements. We also discuss how our analysis can provide a framework for evaluating two different hypotheses for the generation of binocular VOR eye movements.


    INTRODUCTION

The vestibuloocular reflex (VOR) stabilizes retinal images against rotational and translational movements of the head in space. The main contribution to gaze stabilization by the VOR comes from the vestibular organs. The semicircular canals convey information about the angular acceleration of the head, while the otoliths sense its linear acceleration. Because of interposed filters and time delays, it is understandable that the gain and phase delay of the VOR may be different (and below optimal) at various frequencies of head movements. In the light, the availability of visual feedback allows additional input from the smooth pursuit and the optokinetic system. However, these systems have low-pass characteristics and considerable time delays, which make them effective only at low frequencies (below 1 Hz) or for periodic (e.g., sinusoidal) movements where predictive mechanisms can compensate for delays in optokinetic and VOR pathways.

It has long been thought that the system accounting for the compensatory eye movements during head rotations is little more than a simple reflex. However, studies in the last few decades have made abundantly clear that the system needs considerable sophistication to meet the objective of keeping the image still on the retina whenever the head moves (for a review, see Collewijn 1989). While a default unity gain (i.e., compensatory angular eye velocity divided by angular head velocity) is appropriate for a far target, ideal binocular fixation in near vision requires context-dependent gain adjustments dictated by viewing distance, target eccentricity, and the location of each eye relative to the head rotation axis. That these factors are actually taken into account, at least partially, has been established by a number of studies, mostly on passively rotated subjects (Blakemore and Donaghy 1980; Crane et al. 1997; Paige et al. 1998; Viirre et al. 1986). Studies of VOR responses to sudden passive head rotations have strongly suggested that the system has a default gain setting that is subject to several context-dependent corrections. As these corrections become more sophisticated, they involve longer and longer delays (range 20-100 ms) (see Collewijn and Smeets 2000; Crane and Demer 1998; Crane et al. 1997; Snyder and King 1992; Viirre and Demer 1996). These results suggest that several parallel pathways are involved in the adjustment of VOR gain.

In head movements during near vision, eye translations due to an off-centric head rotation axis have to be taken into account, and it is generally assumed that this information can be derived from otolith input. Accordingly, models of the VOR invariably contain a canal-driven path and an otolith-driven path to implement the various context-dependent gain adjustments. Opinions differ on the mode of cooperation of the two branches: both multiplicative interactions (Snyder and King 1992) and simple additive models (Crane and Demer 1999; Telford et al. 1996, 1998) have been suggested.

The experimental data, which have been collected to validate these models, almost invariably concern gaze stabilization in near vision under passive head movements. As a consequence, there is a lack of data about the performance of humans in active movement situations. The working hypothesis in this study is that the situation may be rather different when head movements are generated actively by the subject, rather than imposed by the experimenter. Therefore the aim of the study is to examine to what extent compensatory eye movements can benefit from additional signals available in the context of voluntary head movements. During self-generated movements, additional information sources about the movement (e.g., efference copies) have a potential to supplement the vestibular information, thereby optimizing stabilization.

The very few studies that have investigated human gaze stabilization on near targets during self-generated head rotations have yielded conflicting conclusions. Active head movements always induce both a rotation and a translation of the eyes due to an off-centric location of the head rotation axis relative to the eyes (see Medendorp et al. 1998). Consequently, to correctly stabilize gaze in near vision during natural head movements, both the rotational and translational component of the head movement must be taken into account (Medendorp et al. 1999). Crane and Demer (1997) found no gain dependence on target distance in darkness for targets at distances of 100, 150, and 500 cm during both active sinusoidal pitch and yaw head movements. Hine and Thorn (1987) and Hine (1991), using electrooculographic (EOG) recordings, observed a gain increase with target proximity, both in darkness and in the light. Thus there is little consensus on whether the VOR during active movements is superior to that in passive conditions. In fact, Crane and Demer (1997, 1999) have emphasized that the VOR during active head movements is even inferior to its performance during passive head rotations.

An adequate study on the human VOR in near vision during self-generated head movements at various frequencies, requiring technically demanding recordings of eye-head kinematics, has not yet been done. The present study has tried to fill this gap by investigating binocular gaze stabilization of human subjects making voluntary horizontal and vertical head rotations in yaw and pitch at various frequencies in a darkened room. Subjects were instructed to maintain gaze on a small target straight ahead at several distances, which disappeared after a fixed time interval. We examined the effects of target distance and movement frequency on the gain of the VOR to test the hypothesis that gaze stability is better for active movements in comparison with results of similar studies for passive rotations.


    METHODS

Subjects

Seven male subjects, between 20 and 55 yr of age, gave informed consent to participate in the experiments. Six subjects were tested while making horizontal head rotations, and five subjects took part in the vertical head rotation experiment. In each experiment, four subjects were naive as to its purpose. All subjects were free of any known sensory, perceptual, or motor disorders.

Experimental procedure

When describing eye and head movements, the present paper makes a strict distinction between the terms "orientation" and "location." The term "orientation" denotes angular rotations (in deg) of the eye or head relative to a reference angle, whereas "location" refers to a location (in cm) in a three-dimensional (3-D) Cartesian coordinate system.

OPTOTRAK MEASUREMENTS. Head location and orientation as well as the locations of the ears and eyes in space were recorded using an OPTOTRAK 3020 digitizing and motion analysis system (Northern Digital). This device operates by tracking infrared emitting diodes (IREDs), attached to the moving object, through a precalibrated space by means of three lens systems mounted on a fixed frame. To determine head location and head orientation, we constructed a helmet with four IREDs mounted on top and two IREDs mounted at the back side. The total weight of the helmet, which was firmly fixed to the head throughout the entire experiment, was <0.25 kg. Since the OPTOTRAK system looked down from behind the subject, at a distance of approximately 3 m, the IREDs were visible within a work space of about 1.5 m³. The locations of the eyes and ears were calibrated with respect to the IREDs on the helmet as follows. Before the actual experiment began, the subject faced the OPTOTRAK camera while wearing the helmet with four additional temporary IREDs, one near each auditory meatus and one on each closed eyelid. The 3-D locations of these IREDs, which uniquely defined the location of the ears and eyes relative to the helmet, were recorded together with the others for 1 s. The temporary IREDs were then removed, with great care taken to ensure that the helmet remained stable on the head during the entire experiment. The recording system provided on-line information about the 3-D location of the IREDs with an accuracy better than 0.2 mm. During the experiment, data were collected using a sampling frequency of 100 Hz and stored on hard disk for off-line analysis.

The coordinates of the IREDs were transformed to a right-handed body-fixed coordinate system whose X-Y plane was aligned with the subject's horizontal plane (see Medendorp et al. 1998). The positive X-axis pointed forward, and the positive Y-axis was directed to the left (i.e., along the shoulder line), seen from the subject. The Z-axis was orthogonal to this plane and pointed upward according to the conventions of a right-handed orthogonal coordinate system. The origin of the coordinate frame coincided with the center of the inter-aural axis when the subject was looking straight ahead. From the helmet data the locations of the ears and eyes in space could be computed for each instantaneous head posture by using the previously collected eye and ear calibration data. The center of rotation of the eyes, which defines their real location, was assumed to be 1.3 cm behind the cornea. The orientation and location of the head were determined with respect to the head-reference posture adopted when the subject was fixating straight ahead by calculating the transformation between the IRED locations at the reference position and the IRED locations at the current head position using a least-squares algorithm (Veldpaus et al. 1988). Although three IREDs would have been sufficient to determine the orientation of the head, additional visible IREDs improved the accuracy of the algorithm. This consideration provided an additional reason for mounting more than three IREDs on the helmet, rather than for visibility purposes alone. With these precautions, head orientations could be determined with an accuracy better than 0.2°.
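The Veldpaus et al. (1988) algorithm itself is not spelled out in the paper. As an illustration only, the following Python sketch (our own naming; the original analysis was done in MATLAB) shows the standard SVD-based least-squares solution for the rotation and translation between a reference and a current IRED marker cloud:

```python
import numpy as np

def rigid_transform(ref, cur):
    """Least-squares rotation R and translation t mapping reference IRED
    locations (N x 3 array, rows = markers) onto current locations,
    cur_i ~ R @ ref_i + t; SVD-based, in the spirit of Veldpaus et al. (1988)."""
    ref_c = ref - ref.mean(axis=0)              # center both marker clouds
    cur_c = cur - cur.mean(axis=0)
    U, _, Vt = np.linalg.svd(cur_c.T @ ref_c)   # 3 x 3 cross-covariance
    # guard against an improper rotation (reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = cur.mean(axis=0) - R @ ref.mean(axis=0)
    return R, t
```

Head yaw relative to the reference posture could then, for example, be read from R as atan2(R[1,0], R[0,0]) in the body-fixed frame defined above.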

COIL MEASUREMENTS. Binocular horizontal and vertical eye-in-space orientations (i.e., gaze) were measured using the scleral search coil technique (Collewijn et al. 1975) in a large magnetic field system (Remmel Labs). The system consisted of a cubic frame of welded aluminum with a side length of 3.3 m, which produced three orthogonal magnetic fields at frequencies of 48, 60, and 80 kHz. The center of the cube was near eye level for subjects standing at the center of its floor. By using a large field system, we ensured that the eye coils moved only within a nearly homogeneous region of the magnetic fields (30 × 30 × 30 cm). In this way, the orientation of each eye could be measured accurately, unperturbed by eye-coil translations. After demodulation, the signals from the eye coils were amplified, low-pass filtered (150 Hz), and then sampled at 500 Hz per channel. Data were stored on hard disk for off-line analysis.

To calibrate the eye coils, subjects adopted a straight-ahead head posture and fixated a series of randomly presented red light-emitting diodes (LEDs, n = 37), attached to a screen in front of the subject. The LEDs were arranged in three concentric circles at different eccentricities (10, 20, and 30°) with respect to the middle of the subject's eyes. By combining the locations of the stimulus and the reconstructed locations of both eyes (using the helmet calibration data), we were able to compute the direction of the LEDs with respect to the subject's eyes. In this way, both eye-coil signals could be matched to the corresponding vertical and horizontal LED locations. This procedure yielded eye orientation in space (gaze). In this setup, calibration errors were typically <0.5° on average; resolution was <0.04°.
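For the calibration step, the direction of each LED relative to an eye follows directly from the reconstructed 3-D locations. A minimal sketch, using the body-fixed axes defined above (function and variable names are ours):

```python
import numpy as np

def led_direction_angles(led_xyz, eye_xyz):
    """Horizontal and vertical direction (deg) of a calibration LED seen
    from one eye, in the body-fixed frame of METHODS (X forward, Y left,
    Z up); positive azimuth corresponds to a leftward gaze direction."""
    d = np.asarray(led_xyz) - np.asarray(eye_xyz)
    azimuth = np.degrees(np.arctan2(d[1], d[0]))
    elevation = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return azimuth, elevation
```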

Two PCs in a master-slave arrangement controlled the experiment. The master PC was equipped with hardware for data acquisition of the search-coil measurements and visual stimulus control. The slave PC contained the hardware and software from the OPTOTRAK system and collected the IRED data, synchronized with the eye-coil sampling.

Experimental paradigm

All experiments were performed in a completely dark room. Subjects were standing and were instructed to generate either horizontal or vertical sinusoidal head rotations, in separate sessions, with an amplitude of about 10°. A metronome guided subjects to obtain head rotations at four different frequencies (0.25, 0.5, 1.0, and 1.5 Hz). Additionally, two subjects performed 3-Hz vertical head rotations. Before the actual measurement started, the subject practiced several trials to become familiar with the movement task. To ensure the correct movement frequency and amplitude, the subject received feedback about his head movement during this practice period. No feedback was given during the data collection.

During the head rotations, subjects were instructed to maintain gaze on an earth-fixed target. The target was at eye level in front of the cyclopean eye when the head was in the straight-ahead position. It consisted of a horizontal/vertical array of four small LEDs. The subject was informed that, after a fixed time interval during the data collection for each condition, the target would disappear and that he had to continue to fixate the remembered target for the same time period. This time interval was 9 s for the 0.25-Hz movement frequency and 7 s for the other frequencies. Thus a 0.25-Hz trial lasted 18 s in total; the other trials took 14 s. Halfway through the remembered target period, either the two horizontal or the two vertical LEDs of the target were relit for 20 ms. At the end of each trial, the subject had to report the observed orientation of the two LEDs, which he was only able to do correctly when he had properly fixated the remembered target. In this way, we encouraged subjects to keep fixating the remembered target. The target was presented at four different viewing distances of approximately 1.5, 1, 0.5, and 0.2 m. Since the subject inevitably moved in space during a trial, the actual target distance varied by a few centimeters. During each trial, the movement of the head and the responses of both eyes were continuously measured, together with the location of the target. Some rest was provided between trials. The complete experiment consisted of 16 different conditions (4 frequencies × 4 target distances). Each condition was tested more than once in most subjects. The total experiment lasted for about 40 min. Three subjects were tested on two different days to check day-to-day reproducibility.

Data analysis

Data analysis was performed using programs written in MATLAB software (The Mathworks). After low-pass filtering at 25 Hz, the OPTOTRAK data, obtained at 100-Hz sampling rate, were interpolated linearly to match the 500-Hz sampling rate of the search coil system. Calibrated eye-in-space orientation signals were low-pass filtered at 75 Hz (FIR filter, Matlab). Eye-in-head orientation was calculated by subtracting head orientation from eye-in-space orientation. Strictly speaking, since rotations do not commute in three dimensions, this subtraction is only allowed for one-dimensional conditions, which were carefully observed in this study (errors <0.2°).
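In outline, this preprocessing might look as follows (filter orders and the use of zero-phase filtering are our assumptions; the paper specifies only the cutoff frequencies and linear interpolation):

```python
import numpy as np
from scipy import signal

FS_COIL = 500.0   # search-coil sampling rate (Hz)
FS_OPTO = 100.0   # OPTOTRAK sampling rate (Hz)

def lowpass(x, cutoff_hz, fs, ntaps=101):
    """Zero-phase FIR low-pass; filtfilt avoids adding a phase lag that
    would contaminate the VOR phase estimates."""
    b = signal.firwin(ntaps, cutoff_hz, fs=fs)
    return signal.filtfilt(b, [1.0], x)

def eye_in_head(gaze_deg, head_deg, t_coil, t_opto):
    """Filter both streams, upsample the 100-Hz head orientation to the
    500-Hz coil time base by linear interpolation, then subtract (valid
    here because the rotations are effectively one-dimensional)."""
    head_500 = np.interp(t_coil, t_opto, lowpass(head_deg, 25.0, FS_OPTO))
    return lowpass(gaze_deg, 75.0, FS_COIL) - head_500
```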

Saccade detection was performed on the eye-in-space orientation signals on the basis of separate velocity and acceleration/deceleration criteria for saccade onset and offset, respectively. All detection markings were visually checked by the experimenter and could be interactively changed, if deemed necessary.
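The exact detection thresholds are not reported; the sketch below illustrates the general scheme of a velocity criterion for onset combined with an acceleration criterion for offset (threshold values are placeholders, not the study's values):

```python
import numpy as np

def detect_saccades(eye_pos_deg, fs, vel_on=40.0, acc_off=1500.0):
    """Boolean mask of saccadic samples: onset by a velocity criterion,
    offset by an acceleration/deceleration criterion."""
    vel = np.gradient(eye_pos_deg) * fs      # deg/s
    acc = np.gradient(vel) * fs              # deg/s^2
    mask = np.abs(vel) > vel_on
    # extend each detection forward until |acceleration| drops below
    # the offset criterion, capturing the deceleration tail
    for i in np.flatnonzero(mask):
        j = i
        while j + 1 < len(mask) and abs(acc[j + 1]) > acc_off:
            mask[j + 1] = True
            j += 1
    return mask
```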

The signals required for perfect gaze stabilization were determined using the instantaneous locations of both eyes with respect to the target (using OPTOTRAK data), from which the ideal eye-in-space orientations could be calculated. Likewise, the ideal eye-in-head orientation signals were also computed from the OPTOTRAK data.

In the present study, we describe binocular gaze stabilization in a coordinate system that distinguishes eye movements in direction (version) from eye movements in depth (vergence). Version angle (conjugate part) was computed from left (L) and right (R) eye-in-head orientation data as (L + R)/2; vergence angle was calculated as (L - R). The vergence signal, the angle between the gaze directions of the two eyes intersecting at the target, provided a measure of the fixation distance of the subject. Vergence was expressed in m⁻¹, or meter-angles (MAs), the reciprocal of fixation distance (Telford et al. 1997). For example, 4 MA would be required for fixating a target at 25 cm. Ideal version and vergence angles were computed based on the OPTOTRAK data, which provided information about the location of the head, the eyes, and the target.
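These definitions translate directly into code; a minimal version (names are ours):

```python
import numpy as np

def binocular_coordinates(left_deg, right_deg):
    """Version (conjugate) and vergence (disconjugate) components from
    left- and right-eye-in-head orientations, following METHODS."""
    version = (left_deg + right_deg) / 2.0
    vergence = left_deg - right_deg
    return version, vergence

def distance_to_MA(fixation_distance_m):
    """Vergence demand in meter-angles: the reciprocal of fixation
    distance, e.g., 4 MA for a target at 0.25 m."""
    return 1.0 / fixation_distance_m
```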

For the binocular analysis, eye and head orientation signals were digitally differentiated using a two-point differentiation technique to obtain velocity signals. Using the velocity signals, we performed the same analysis as described in Paige et al. (1998). Individual cycles in the response traces were identified on the basis of the zero-crossings in the angular head velocity signal. We excluded cycles starting in the first second of the trial and those ending in the last second of the trial. Cycles that started within 250 ms after target switch-off were also discarded from further analysis. In the remaining cycles, saccadic epochs were identified in the eye velocity data before the data were subjected to harmonic analysis. A least-squares sinusoidal fit to the fundamental frequency, performed on each cycle excluding the saccades, served as the basis to obtain the response parameters gain and phase. Response gain was defined as the ratio of peak conjugate eye velocity to peak angular head velocity (both in deg/s). Response phase was taken as the phase of conjugate eye velocity relative to the phase of the head velocity. These response parameters of the recorded signals were compared with the ideal response parameters to obtain a measure of performance for each subject. Furthermore, we investigated the effect of fixation distance by relating the vergence angle (in MA) to the response parameters for each cycle.
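A per-cycle harmonic analysis of this kind can be sketched as follows (cycle segmentation at head-velocity zero-crossings is omitted; the sign convention assumes compensatory eye velocity opposes head velocity):

```python
import numpy as np

def fit_fundamental(t, x, f0, keep):
    """Least-squares fit of a + b*cos(2*pi*f0*t) + c*sin(2*pi*f0*t) to the
    non-saccadic samples (boolean mask keep); returns amplitude and the
    phase (rad) of an equivalent cos(2*pi*f0*t + phase)."""
    w = 2.0 * np.pi * f0
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(A[keep], x[keep], rcond=None)
    return np.hypot(coef[1], coef[2]), np.arctan2(-coef[2], coef[1])

def cycle_gain_phase(t, version_vel, head_vel, f0, keep):
    """Gain: fitted amplitude of (negative) version velocity over that of
    head velocity; phase: their phase difference in deg, wrapped."""
    a_eye, p_eye = fit_fundamental(t, -version_vel, f0, keep)
    a_head, p_head = fit_fundamental(t, head_vel, f0, keep)
    dphi = np.angle(np.exp(1j * (p_eye - p_head)))
    return a_eye / a_head, np.degrees(dphi)
```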

Finally, to calculate the amount of retinal slip during each trial, we subtracted the actual gaze velocity signals from the required gaze velocity signal. The resulting signal was low-pass filtered at 10 Hz (FIR filter, Matlab) before taking the root-mean-square (RMS) velocity of image slip (Crane and Demer 1997).
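A compact rendering of this computation (the FIR order is our assumption):

```python
import numpy as np
from scipy import signal

def retinal_slip_rms(gaze_vel, ideal_gaze_vel, fs=500.0, ntaps=101):
    """RMS retinal-slip velocity: the difference between the gaze velocity
    required for perfect stabilization and the measured gaze velocity,
    low-passed at 10 Hz before taking the RMS (cf. METHODS)."""
    slip = ideal_gaze_vel - gaze_vel
    b = signal.firwin(ntaps, 10.0, fs=fs)
    slip_lp = signal.filtfilt(b, [1.0], slip)
    return np.sqrt(np.mean(slip_lp ** 2))
```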

Kinematic specification of ideal VOR responses

For the exceptional case that the rotation axes of the head and eye coincide, the ideal VOR gain for fixating a space-fixed target equals one, i.e., negative eye velocity equals head velocity. When the eyes are located eccentrically from the head rotation axis, the compensatory eye movements have to account for the rotation of the head as well as for the corresponding eye translations relative to the target. Of course, when the target is far away, eye-translation effects are negligible, and the ideal VOR gain would be one. When the target comes closer, it can be shown that the magnitude of the required compensatory eye movement has a nonlinear relation with the distance of the eye to the rotation axis (ocular eccentricity), as well as with target distance and target eccentricity (Hine and Thorn 1987; Viirre et al. 1986).

For small head rotations and for a target at straight ahead, the required VOR gain can be approximated by G = 1.0 + r/D, with r the distance of the eyes to the rotation axis and D the distance of the target to be fixated (Paige et al. 1998; Telford et al. 1998). Because in the present study the deviations from linearity remained small (<2% for target distances beyond 20 cm), we analyzed our data by fitting a straight line to characterize both the ideal and measured gain-vergence relationships, shown in Fig. 4. In this linear relationship, the slope defines the additional compensation needed for the eye to incorporate the context aspect, whereas the intercept denotes the compensation for head rotation (Telford et al. 1998).
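Since vergence expressed in MA is the reciprocal of target distance, this approximation makes the required gain linear in vergence, with slope r and unity intercept:

```latex
G \,=\, 1 + \frac{r}{D} \,=\, 1 + rV,
\qquad V = \frac{1}{D} \;\; \text{(vergence in MA)}
```

For illustration, with the eyes r = 0.1 m in front of the rotation axis (a value of our choosing) and a target at D = 0.2 m (V = 5 MA), the required gain would be G = 1.5.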


    RESULTS

We first describe the quality of gaze stabilization during voluntary horizontal head rotations. To introduce the topic, we examine the monocular signals by comparing the actual responses of each eye to the signals required for perfect stabilization. Subsequently, we express the data in terms of a binocular coordinate system, using version and vergence, to quantify the subject's responses.

Qualitative observations

MONOCULAR SIGNALS. Figure 1 presents the results of a subject making roughly sinusoidal horizontal head rotations of about 1 Hz while fixating a near target at about 20 cm. The data were collected during the 6-s fixation period of a visual target, followed by another 6 s when the subject was attempting to look at the remembered target location, i.e., after the target was switched off. The top panel depicts the orientation of the head as a function of time. The second panel shows the measured orientations of the two eyes relative to the head during the movement (bold traces) superimposed on the signals required for perfect gaze stabilization (thin traces), which were computed on the basis of the OPTOTRAK data, as explained in METHODS. During the visual target period, the measured and required eye orientation signals match so closely that they are hardly distinguishable. Performance becomes clearly worse during the remembered target period as indicated by the intrusion of small saccades or quick phases, which are most prominent in the eye angular velocity traces (see 4th panel). The bold traces in the third panel illustrate the recorded gaze signals of each eye (eye in space) together with the optimal gaze signals (thin traces). The fact that optimal performance necessitates gaze changes as a function of time, although the target is fixed in space, is due to eye translations with respect to the near target. The eye translations result from the fact that the head rotation axis is well behind the eyes during natural head rotations (Medendorp et al. 1998). In the visual target period the measured gaze signals almost perfectly match the optimal gaze signals by compensating for eye translation. Although performance becomes less ideal after target offset, the gaze excursions still achieve a considerable fraction of what is required to maintain fixation of each eye on the near target. The fact that the VOR gain is larger than 1 is illustrated in the fourth panel, which shows that the amplitude of the angular velocity of the eyes exceeds that of the head. The bottom panel shows the linear velocities of the two eyes along the (horizontal, body-fixed) Y-axis direction together with the linear velocity of the interaural center (i.e., the location midway between the 2 ears). The linear velocity of the eyes exceeds the velocity of the interaural center, simply as a consequence of the greater distance of the eyes from the head rotation axis. Linear velocities of the eyes and interaural center are in phase because the rotation axis is located slightly (about 0.5-1 cm) behind the interaural center.



Fig. 1. Gaze stabilization during active sinusoidal horizontal head movements (1 Hz) while fixating a target at approximately 20 cm. The 1st 6 s show data for the visual target condition; the last 6 s present data after target offset. Top panel: the head orientation (in deg) during the movement. Second panel: the measured eye orientation signal of the 2 eyes (bold) superimposed on the ideal eye orientation signals (thin, based on OPTOTRAK measurements). When the target is visible, measured and required eye orientation signals match almost perfectly. During the remembered target period, performance becomes less ideal and several saccades can be identified. Third panel: the measured gaze (eye in space) excursions of the right (R) and left (L) eye together with the corresponding ideal signals. Although performance becomes worse after target offset, considerable modulations in gaze can still be observed. In the 3 top panels, positive orientations indicate leftward rotations. In panels 2 and 3, the zero baselines correspond to the straight-ahead orientation of the eye in the head and of the eye in space, respectively. Fourth panel: the negative velocity of each eye exceeds head velocity, thereby partially compensating for translation of the eyes due to the head movement. Bottom panel: the linear velocities of the eyes and interaural center are in phase, indicating that the rotation axis must have been located behind the interaural center. Subject SP.

At first glance, the fourth panel suggests that the angular velocities of the two eyes are virtually the same. However, on closer inspection, subtle but consistent differences become apparent, as can be seen in Fig. 2. The top panels illustrate that ideally the angular velocity signals of both eyes must be slightly different, which of course is valid for both the visual target and the remembered target condition. This difference is required due to the different locations of the two eyes relative to the target. Closer scrutiny of the actual data shows similar differences between the left and right eye traces. This effect becomes more clearly visible when the angular velocity of the left eye is plotted against that of the right eye. Small figure-eight patterns can be discerned in the data for both the visual target and the remembered target condition, mimicking the patterns (gray lines) required for optimal gaze stabilization (see Fig. 2, bottom). In this respect, the human data are in correspondence with the passive monkey data reported by Viirre et al. (1986).



Fig. 2. Top: due to geometry requirements, the 2 eyes have to move differently to keep them fixed on the target (left: visual target; right: remembered target). Same data as in Fig. 1. The measured eye signals follow the required pattern (middle), both in the visual and remembered target condition. When plotting measured left-eye velocity vs. measured right-eye velocity, small figure-8 loops appear, which follow the patterns (gray lines) required for optimal performance (bottom).

BINOCULAR SIGNALS. To view the data from a different perspective, we expressed the eye orientation data in a binocular coordinate system by making a distinction between the conjugate part (version) and the disconjugate part (vergence) in the movements of the two eyes (see METHODS). Figure 3 shows the data of Fig. 1 in this coordinate system. The top panel shows the actual version component of the eyes, which is superimposed on the ideal conjugate response, computed from the OPTOTRAK data. There is an almost perfect match between measured and ideal version signals in the visual target condition, but performance deteriorates after target offset. The second panel illustrates the disconjugate or vergence state of the two eyes. Measured vergence shows modulations that correspond nicely to required vergence in the visual target condition. This panel shows a less ideal behavior and a clear vergence drift after target offset. Note also that there is a striking double-frequency modulation of the vergence component, compared with the corresponding 1-Hz version modulation. Due to the rotation of the head about a posterior axis, the eyes trace a circular path in space, causing a variation in target distance during both the leftward and the rightward phases of the head rotation, which necessitates a double-frequency modulation of the vergence component. The third panel shows modulation of conjugate gaze direction, which is roughly in line with the optimal gaze during vision of the target, but decays considerably in the remembered target condition. The fourth panel shows that measured version velocity exceeds angular head velocity both for a visible target and after target offset. The bottom panel shows that vergence velocity follows more or less the required velocity, albeit with a small time delay, again indicating that the left and right eye move differently.



Fig. 3. Binocular signals. Same data as Fig. 1 presented in a binocular coordinate system. The top panel shows actual version (conjugate part) superimposed on required version, computed using the OPTOTRAK data. The modulations in the measured vergence signal (disconjugate part) nicely match the required modulation (2nd panel). The third panel shows that the conjugate movement of the eyes in space (i.e., conjugate gaze) is in close correspondence to the required signal when the target is visible, but deteriorates after target offset. The fourth and fifth panels show the velocity signals of version and vergence superimposed on the required velocity signals. The fact that vergence velocity follows required vergence velocity indicates that the two eyes move differently.

Quantitative analysis of VOR performance

To characterize the stabilization performance of the version component quantitatively, we parsed the data by stimulus condition, movement frequency, and vergence angle, and calculated gain (peak version velocity divided by peak head velocity) and phase (phase difference between version velocity and head velocity). We first examined the gain and phase of version as a function of fixation distance. As a measure of fixation distance, vergence was expressed in MA (see METHODS) (see also Telford et al. 1997). Figure 4 shows gain and phase as a function of vergence for all movement frequencies tested in subject SP. Open circles represent visual target data, whereas filled circles correspond to data after target offset. For all movement frequencies, the gain data scatter along a straight line with a positive slope, implying a larger gain for more nearby targets as required for perfect gaze stabilization. In the visual target condition (open circles) the gain of the measured version eye movements is in close correspondence to the ideal gain (dashed line; reconstructed from OPTOTRAK data). Stabilization performance for the remembered target condition (filled circles) shows obvious context compensation (slope is positive), but the amount of compensation is clearly less. Also, the scatter of the data for the remembered target condition is larger compared with that in the visual target condition. The bottom panels show the phase of version eye velocity with respect to head velocity, demonstrating that version eye velocity lags head velocity only by a small amount in both target conditions (about 2°), which remains more or less constant across all movement frequencies and vergence angles.



Fig. 4. Gain (top) and phase (bottom) during voluntary horizontal head rotations at various frequencies, plotted as a function of vergence angle for both visual (open circle) and remembered (filled circle) target fixations. Subject SP. The gain data scatter along a straight fit line with a positive slope, indicating a larger gain for nearer targets. The ideal required gain-vergence relationship is given by the dashed line. Note that these lines were based neither on an assumed nor on a directly measured distance of the eyes relative to the rotation axis. Using the OPTOTRAK system, it was possible to determine the locations of the subject's eyes in space with respect to the location of the target. Accordingly the ideal eye-in-space orientation was computed, and, in combination with head orientation, the ideal vestibuloocular reflex (VOR) gain for that particular fixation distance could be determined. For visual targets, the gains closely match the ideal gain. Although performance becomes worse after target offset, there is still a clear amount of context compensation (slope of the fitted line positive). With regard to the phase behavior, the data demonstrate that version eye velocity lags head velocity by only a limited amount for all movement frequencies.

REGRESSION ANALYSIS. We performed a linear regression analysis to quantify the gain-vergence relationship (see METHODS) (see also Paige et al. 1998). Since this analysis was done on a cycle-to-cycle basis, the number of data points in the regression is dependent on the movement frequency, with higher frequencies yielding more data points during the same recording time. As a consequence, the smaller data set for the 0.25-Hz movement frequency renders the computation of the gain-vergence relationship for this frequency less reliable. The solid lines in Fig. 4 (top) represent the gain-vergence regressions for this subject. The slope of the fitted line defines the gain change as a function of vergence and characterizes the VOR compensation for the fact that the eye translates with respect to the target. The intercept determines the gain at zero vergence. The dashed lines (partly hidden) show ideal performance. For all frequencies, the slope of the fitted line is steeper for fixation of visual targets than after target offset, indicating better compensation for eye translation in the visual target condition than in the remembered target condition. Since subjects had slightly different distances of their eyes relative to the head rotation axis (due to a different head geometry), the slope corresponding to optimal performance revealed small variations among subjects. To normalize the amount of translation compensation, we computed the ratio of measured slope and ideal slope (based on the OPTOTRAK data) for both target conditions. This ratio shall be termed context compensation gain (context gain for short) to express how actual performance compares with ideal performance. Accordingly, perfect performance corresponds to a context gain of 1.0. In the data of Fig. 4, the context gain for visual targets varies between 1.0 and 1.1 across frequencies. The context gain for remembered targets was well below unity, ranging from 0.4 to 0.6.
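In code, the two measures reduce to a first-order polynomial fit per condition; a minimal sketch (names are ours):

```python
import numpy as np

def context_and_default_gain(vergence_MA, gain, ideal_slope):
    """Per-condition linear regression of cycle gains on vergence (MA).
    Returns (context gain, default gain): the fitted slope normalized by
    the geometrically ideal slope, and the intercept at zero vergence."""
    slope, intercept = np.polyfit(vergence_MA, gain, 1)
    return slope / ideal_slope, intercept
```

Applied to the Fig. 4 data, such a fit would return context gains near 1 for visual targets and 0.4-0.6 for remembered targets, with default gains (intercepts) near 1 in both conditions.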

The intercept of the gain-vergence relationship corresponds to the gain for a target at infinity. It represents the gain for rotation compensation only, and will be called default gain. This default gain is closely related to the traditionally measured VOR gain and should be equal to one in the ideal case on the basis of geometric considerations (see METHODS). As shown by the intercept of the regression lines at vergence zero in Fig. 4, the default gain in subject SP is indeed close to 1, ranging between 0.98 and 1.00 without a clear hint of dependence on either target condition (visual vs. remembered target) or movement frequency.

We also examined the relationship between VOR phase and vergence by performing a regression analysis. Note that ideally, the phase angle should be zero. For subject SP, as shown in Fig. 4 (bottom), the mean phase delay was about 2° across all frequencies, and the correlation between phase and vergence in the visual target condition was only significant for 1.0- and 1.5-Hz movement frequencies (1.0 Hz: r = -0.38, P < 0.05, n = 122 and 1.5 Hz: r = -0.40, P < 0.05, n = 178). In the remembered target condition, we found a small but significant correlation for the 1.5-Hz movement frequency (r = -0.21, P < 0.05, n = 158). Unlike the gain, the phase was not significantly related to vergence (except in 1 subject) and will therefore be expressed by the average across vergence for each frequency and stimulus condition (target on/off).

Comparison of subject performance

Figure 5 shows the context gain (with standard deviation) for all subjects and all movement frequencies. The white bars indicate performance in the visual target condition; the black bars represent the performance after target offset. The first six column pairs show the data from individual subjects, whereas the population mean is given in the seventh column. In the visual target condition, the figure shows considerable variability among subjects for the lower frequencies, which is probably related to the small number of data points available for the regression analysis at lower frequencies (see above). This is also reflected in the larger standard deviation in the data for 0.25 Hz. On average, however, the context gain is near one for all frequencies tested, indicating near-ideal gaze stabilization on visual targets in this frequency range. It ranges from 0.98 ± 0.22 (mean ± SD) at 0.25 Hz to 0.91 ± 0.12 at 1.5 Hz, with an overall mean value of 0.95 ± 0.03. An ANOVA did not reveal a significant relationship between context gain and movement frequency [F(3,15) = 0.59, P = 0.63]. For the remembered target condition, the mean context gain (across all frequencies) is 0.57 ± 0.04. This value is far from ideal, but still indicates the existence of a clear amount of context compensation. As in the visual target condition, there is no significant relation between context gain and movement frequency for the remembered target condition [ANOVA F(3,15) = 0.22, P = 0.88]. The differences between the data in the visual target and remembered target condition appeared to be highly significant [ANOVA F(1,23) = 46.1, P < 0.001].



Fig. 5. Context gains of all 6 subjects for horizontal head rotations. White bars: visual target condition; black bars: remembered target condition. The average context gain across all subjects is given in the 7th column. Error bars depict SD. Ideal context gain (dashed line) equals 1.0.

Figure 6 summarizes all three response parameters. The top panel shows the mean context gain, and the middle panel plots the default gain as a function of movement frequency for both visual and remembered target conditions. In the case of ideal performance, we would expect a default gain of one (see METHODS). The measured default gains appear to be close to one in all cases, indicating that the VOR had close to ideal default settings for targets at infinity, both for a visual and a remembered target. The default gain had a mean value of 0.98 ± 0.01 in the visual target condition and 0.97 ± 0.01 in the remembered target condition when averaged across subjects and movement frequencies. The difference between the visual and remembered target condition was not significant [ANOVA F(1,23) = 1.87, P = 0.19]. Furthermore, an ANOVA revealed no significant relationship between default gain and movement frequency in either target condition [visual targets: F(3,5) = 0.39, P = 0.76; remembered targets: F(3,15) = 0.77, P = 0.52].



Fig. 6. Gaze stabilization performance for voluntary horizontal head rotations. Mean context gain, default gain, and phase (averaged across 6 subjects) plotted as a function of movement frequency for both visual and remembered target conditions. Error bars indicate SD. Ideal performance would require context and default gains of 1.0, and a phase angle of 0°.

In the bottom panel of Fig. 6, we show the phase values (means ± SD), averaged across all subjects, of version velocity relative to head velocity for all movement frequencies. In virtually all subjects, we found a phase lag of version eye velocity relative to head velocity, but the phase lag was small in all conditions, with a maximum value of 4° in subject AB. We did not find a significant relation between phase and movement frequency, neither in the visual target condition [F(3,15) = 2.34, P = 0.11] nor in the remembered target condition [F(3,15) = 0.99, P = 0.42]. The differences between the phase in the visual and remembered target condition are not significant [ANOVA F(1,23) = 1.61, P = 0.22].

Retinal slip

Up to this point, we analyzed the subjects' stabilization performance using eye-in-head and head velocity signals. As shown in Fig. 6, we found small phase differences between negative eye angular velocity and head angular velocity in both context conditions. These phase differences may not be negligible since the phase differences shown in Figs. 1 and 3 (3rd panels) between measured gaze and optimal gaze seem consistent and significant. In principle, such a phase difference between measured gaze and ideal gaze could induce considerable retinal slip (even with optimal gains). This could affect visual perception of the target since visual acuity is already compromised by a limited amount of retinal slip (>2-4°/s) (see Barnes and Smith 1981). To explore this possibility, we computed the amount of retinal slip on the basis of the required and measured gaze velocity signals (see METHODS). Figure 7 presents retinal slip velocity (RMS value over the range 0-10 Hz) as a function of vergence for each movement frequency, pooling data from all subjects. The data reveal a positive correlation between retinal slip velocity and vergence for both the visual target and remembered target condition. The correlation varied between 0.6 and 0.71 for visual targets and ranged from 0.36 to 0.63 after target offset, and was highly significant in all conditions (P < 0.01), yet significantly smaller than 1. In the presence of visual feedback, retinal slip appeared to be smaller than approximately 4°/s. Without visual feedback, virtual retinal slip velocity is larger, reaching values that would clearly blur vision.



Fig. 7. Retinal slip velocity [root-mean-square (RMS) value] of the version component as a function of vergence angle for various movement frequencies. Note that vergence signals were left out of consideration. Pooled data from all subjects. Left: visual target condition. Right: remembered target condition. For visual targets, retinal slip velocity remains below 4°/s (dotted line), whereas for remembered targets it reaches values that would clearly blur vision. Fitted lines reveal a significant positive correlation between retinal slip velocity and vergence angle.

VOR performance during vertical head rotations

We also tested five subjects making vertical head rotations. We applied the same analytic approach as above by using the binocular signals and quantifying the gaze stabilization performance on the basis of the context gain, the default gain, and the mean phase as a function of movement frequency. In short, the context gain quantifies how well the subject compensates for translation of the eyes relative to the target, and the default gain represents the gain at zero vergence, thus compensating for the rotational aspect of the movement. Additionally, two subjects performed vertical head rotations at a movement frequency of 3 Hz. The mean results are given in Fig. 8.



Fig. 8. Gaze stabilization performance for voluntary vertical head rotations. Results from 5 subjects in the same format as Fig. 6. Data at 3-Hz movement frequency are from just 2 subjects. Ideal performance would require context and default gains of 1.0, and a phase angle of 0°.

The top panel depicts the context gain as a function of movement frequency. In the visual target condition, the context gain is not significantly different from one for movements ≤1.5 Hz (t-test, P > 0.05), indicating a near ideal stabilization performance. In the two subjects who performed 3-Hz movements, the results indicate a clear decrease in context gain to a value near 0.75. The context gains in the remembered target condition are significantly lower than those in the visual target condition [ANOVA F(1,4) = 21.7, P < 0.01]. In the dark, for frequencies ≤1 Hz the context gain scatters around a value of about 0.8 and decreases to about 0.5 for the 3-Hz movement frequency. So, just as for horizontal head rotations, there is context compensation for vertical head rotations both with and without visual feedback of the target. In the visual target condition, the differences in context compensation between vertical and horizontal head rotations are not significant [ANOVA F(1,41) = 1.47, P = 0.47]. Strikingly, context compensation in the dark is significantly better for vertical than for horizontal head rotations [ANOVA F(1,41) = 14.3, P < 0.001].

The default gain remains close to one, as shown in the middle panel of Fig. 8. In both target conditions (visual and remembered), the default gain (averaged across subjects) is not significantly different from one, for any of the frequencies tested (t-test, P > 0.05). Across frequencies, however, an ANOVA revealed significant differences between the default gains in the visual and remembered target condition [ANOVA F(1,4) = 7.8, P < 0.05]. Furthermore, with respect to the results of the horizontal head rotation experiment, the differences in default gain for horizontal and vertical head movements appeared to be significant for the visual target condition [F(1,41) = 8.2, P = 0.01], but not for the remembered target condition [F(1,41) = 2.1, P = 0.32].

Finally, the bottom panel shows the phase of version eye velocity relative to head velocity, indicating a phase lag between 1 and 4° in all conditions tested. In general, the differences in phase between visual and remembered target conditions are not significant [ANOVA F(1,4) = 0.96, P = 0.38]. In comparison to the results of the horizontal head rotation experiments, there are no systematic differences in phase either during visual target fixation [F(1,41) = 3.8, P = 0.12], or in the remembered target condition [F(1,41) = 4.1, P = 0.10].

DAY-TO-DAY REPRODUCIBILITY. To check day-to-day reproducibility, we repeated the entire vertical head rotation experiment in two subjects (PM and AB). Although some small intrasubject differences in context gain and phase could be observed, the data collected on day 2 were generally similar to those of day 1 in both subjects, showing the same tendencies as in the first experiment.


    DISCUSSION

Recapitulation of main results

In this study we investigated human gaze stabilization in near vision during self-generated head rotations in pitch and yaw to characterize the quality of context compensation in the active VOR. Our approach was to accurately measure both the binocular gaze signals and the head angular velocity to determine the gain and phase of the VOR, using a binocular coordinate system. Since we also measured the translation of the eyes relative to the target, the requirements for ideal VOR performance could be established. In this fashion, we were able to quantify the effects of target distance and the availability of visual feedback on the gain and phase of the VOR for various movement frequencies.

The results demonstrate that, for frequencies up to 1.5 Hz, the VOR for both pitch and yaw movements behaves nearly perfectly in the presence of visual feedback. In its absence, VOR behavior is clearly less perfect, indicating the loss of a visual component. However, a significant influence of target proximity could be clearly established, even in the absence of visual feedback.

As expected on theoretical grounds, the gain of the VOR appears to be linearly related to target distance, as derived from vergence angle. A linear regression on the gain-vergence relationship yielded a positive slope, indicating that VOR gain increased with target proximity. Relating this slope to the slope required for ideal stabilization provided a measure for the amount of context compensation (context gain) in the VOR. The intercept obtained from the regression yielded the VOR response at zero vergence (default gain) and specified the gain of the VOR for fixation on targets at infinity.

The context gain was near one under visual feedback, and significantly smaller in darkness. In darkness, context gains were higher for vertical (near 0.8) than for horizontal head rotations (about 0.6). For both movement situations and target conditions, the default gain was close to one, indicating that the active VOR has almost ideal default settings for targets at infinity, even in the absence of visual feedback. In the next sections we relate our results to previous active and passive studies, review possible mechanisms for context compensation in the active VOR, and discuss how our analysis may provide a framework to explore current hypotheses concerning monocular/binocular VOR control.

Relation to previous studies on voluntary head movements

The first question to be faced is how our findings relate to the results of previous active studies. As we noted in the INTRODUCTION, the existing literature is divided about the question of whether the human active VOR shows context compensation. The gain increase observed by Hine and Thorn (1987) for both visual and remembered near-target viewing during natural horizontal head rotations is consistent with the present findings in the sense that both studies found context compensation. However, their measurements, based on monocular EOG recordings, without documenting the complete kinematics of the eye and head motion, did not allow them to relate the actual gain to ideal VOR gain. In other words, the precise value of context gain in their study remains unknown. In the present study we collected all relevant eye-head kinematics data, which allowed us to specify the requirements of the VOR on a moment-to-moment basis for each eye separately.

Crane and Demer (1997), measuring the gain of the VOR during both active sinusoidal pitch and yaw movements for targets at distances of 500, 150, and 100 cm, did not find a relationship between VOR gain and fixation distance in darkness. At first glance, this may seem to contradict the results from our study, but closer inspection of the data resolves this discrepancy. With the smallest target distance of 1 m used in the study by Crane and Demer, compensation effects are just too small to reveal a systematic relationship. This is illustrated in Fig. 4, where data from targets beyond 1 m (vergence values <1 MA), considered in isolation, would not support the claim of an overall systematic gain-vergence relationship due to scatter of the data. We conclude that the results of the present study, establishing context compensation in the active VOR, clarify the confusing picture emerging from the previous active VOR studies.

Relation to passive studies

Another obvious question that needs to be addressed is whether the VOR is better for active head movements than for passive head movements and, if so, in which respect. We are aware that comparing our results with those of corresponding passive studies should be done with caution, because the answer obtained may well depend on the nontrivial question of how the proper control experiment should be designed. In fact, the perfect passive VOR control experiment for our study may even be hardly feasible. For example, passive studies involving whole-body rotations avoid proprioceptive inputs from neck muscles and motor efference copies to the VOR, which might be available during active head rotations. Unfortunately, most passive studies in which the head was rotated on the trunk, either manually (Thurtell et al. 1999) or by a helmet motor (Tabak and Collewijn 1994; Tabak et al. 1997), did not focus on near vision aspects in the VOR. The recent head-on-trunk rotation study by Collewijn and Smeets (2000) is a rare exception. A further point concerns the location of the passive rotation axis, which is unlikely to match the natural location of the head rotation axis unless special precautions are taken (e.g., Paige et al. 1998). Medendorp et al. (1998) showed that for horizontal head rotations (yaw) the location of the head rotation axis remains fixed at a point about 1 cm behind the center of the interaural axis. For vertical head movements the situation is even more complex since the head rotation axis, located below the interaural center, does not stay fixed, but moves up and down along the neck by an amount dependent on movement amplitude.

VOR STUDIES IN FAR VISION. Despite these reservations, to properly relate our results to the results of passive studies, we should compare all three measures: context gain, default gain, and phase. Studies on the gain and phase of the VOR in far vision, where context compensation is not needed, can be related to our default gain and phase results. The general picture emerging from these studies is that for both horizontal (Tabak and Collewijn 1994) and vertical (Demer 1992; Demer et al. 1993) head rotations the passive default VOR gain is close to one in vision, closely resembling the active default gains. In contrast, the passive default VOR gain is usually below unity in darkness, while the active default gain remains nearly fully compensatory. The fact that the active default gain was close to one in all conditions tested in this study indicates that it is not dependent on visual mechanisms.

With regard to the phase behavior, most passive studies reported rather small phase lags (at least for the movement frequencies in our study) for both directions of head movement, which do not deviate from the results of the present study. In other words, the phase results of our active study are not better than those in passive studies. For a more extensive discussion with regard to the interpretation of the phase delays, see section entitled Binocular control of the VOR.

HORIZONTAL VOR STUDIES IN NEAR VISION. Natural head rotations always involve eye translations, which require the ideal VOR gain to vary with the location of the axis of rotation and viewing distance. Recalling our findings of a clear amount of context compensation, default gains near one, and small phase delays both in vision and in darkness, we continue our comparison by concentrating on the VOR in near vision starting with horizontal movements. Several passive studies have shown that target distance significantly influences the gain of the VOR during eccentric head rotations (Crane et al. 1997; Snyder and King 1992; Viirre et al. 1986). These results are qualitatively in correspondence with the present results. Although a thorough quantitative comparison is difficult, we made an effort to roughly estimate the values of the context and default gain during eccentric head rotations (i.e., rotation axis 20 cm behind the eyes, which is about 12 cm behind the otoliths) using the VOR data of Crane et al. (1997). In that study, both gains are near 1 for vision, whereas in the dark the gains clearly decreased to values of 0.14 for context gain and 0.97 for the default gain (estimated at 1.2-Hz movement frequency). Paige et al. (1998) took precautions to rotate their subjects passively about the natural yaw rotation axis, taken to be a few millimeters behind the otoliths. They reported that VOR performance was nearly perfect with visual feedback, consistent with the present results. In the dark, however, these authors found no evidence for context compensation in the passive VOR (context gain was actually negative) for low frequencies. In darkness, we found a clear amount of context compensation, possibly suggesting that during self-generated head rotations in yaw additional nonvestibular signals may come into play in human gaze stabilization.

VERTICAL VOR STUDIES IN NEAR VISION. So far, passive studies on the VOR in near vision allowing a direct comparison with our natural pitch rotations are not available. Even the study performed by Viirre and Demer (1996) does not resemble natural pitch movements, since they measured the VOR gain for near targets during passive vertical rotations around axes along the anterior-posterior axis, whereas during natural pitch rotations these axes are located in a vertical plane in the neck region (Medendorp et al. 1998). In the light, their context and default gains were about 0.7 and 0.9, respectively, which tend to be lower than the values found in our study. In the dark, the two studies differ more strikingly. Although Viirre and Demer (1996) reported a slight dependence of the VOR gain on target distance, overall performance was poor, with gains remaining well below 1.0. We estimate their context gain at about 0.2 and their default gain at about 0.73 (at 1.2-Hz movement frequency). By contrast, we found near-unity default gains and evident context compensation, even stronger than the compensation for horizontal head rotations, which clearly indicates that the active VOR performs better for vertical head movements than the passive VOR.

In conclusion, both the active and passive VOR behave similarly in the presence of visual feedback. In the dark, the active VOR clearly performs better both with regard to the default gain and the context gain. In the next section we will address possible mechanisms that allow the active VOR to achieve its superior performance.

Mechanisms for context compensation in the VOR

An important aspect in comparing performance of the active and passive VOR concerns context compensation. We now briefly discuss three different mechanisms for VOR context compensation during active head rotations.

VISUAL CONTRIBUTION. Our experiments with a remembered target have revealed a sizeable amount of context compensation. Yet, the finding that context compensation is better for visual targets than for remembered targets implies the involvement of visual contributions during near visual target fixations (Barnes 1993; Barnes et al. 1978; Koenig et al. 1986; Tomlinson et al. 1980). A limitation of visual tracking systems like the optokinetic system and the smooth pursuit system is that their gain is not perfect and that they are slow. A nonunity smooth pursuit gain (Barnes 1993) might explain the small retinal image slip observed in this study, especially in near vision where retinal target velocity increases (Fig. 7). Considering that smooth pursuit cannot be maintained for more than 1 s in darkness, other mechanisms must have been responsible for maintaining a fair amount of context compensation in the remembered target experiments.

CANAL-OTOLITH INTERACTION. Earlier studies have already extensively discussed how purely vestibular mechanisms can increase the gain of the VOR for near targets. During natural head rotations, eye translations occur simultaneously with otolith translations (see bottom panel in Fig. 1), since the rotation axis is located eccentrically from both the eyes and the otoliths. Several studies have suggested that otolith input is scaled with inverse target distance to account for the observed VOR responses in near target viewing (Crane and Demer 1999; Crane et al. 1997; Paige et al. 1998; Telford et al. 1997; Viirre and Demer 1996; Viirre et al. 1986). Telford et al. (1996, 1998) investigated the VOR responses in monkeys during angular, linear, and combined stimulation, and concluded that the canal and otolith contributions add linearly (see also Crane and Demer 1999). The various passive studies suggest that VOR context compensation is only a very modest fraction of what would be needed in ideal circumstances, implying that the canal-otolith contribution is limited.
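A minimal way to write such a linear canal-otolith summation (an illustrative sketch in our own notation, not a fitted model from the studies cited) is

$$\dot{E} = -\,g_c\,\dot{H} \;-\; g_o\,\frac{\dot{x}}{D}, \qquad \dot{x} = r\,\dot{H},$$

where $\dot{H}$ is the angular head velocity sensed by the canals, $\dot{x}$ is the linear velocity of an eye rotating about an axis a distance $r$ behind it (sensed, in scaled form, via the otoliths), and $D$ is target distance. Ideal performance requires $g_c \approx g_o \approx 1$; the passive results in darkness (default gain somewhat below 1, context gain near 0.2) would correspond to $g_c$ slightly below 1 and $g_o$ far below 1.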

EFFERENCE COPY SIGNALS. The fact that active VOR performance in the dark is much better than in passive studies suggests that nonvestibular signals may help the VOR to increase its performance (Barr et al. 1976). Nonvestibular signals such as efference copy signals may explain why the context gain and the default gain are higher for active than for passive head movements. Several studies interpreted the role of efference copy signals during active eye-head coordination as predictive signals, which may remain available in darkness. These studies showed that predictability (or efference copy) improves the compensatory eye movements in healthy subjects (Schwarz and Tomlinson 1979), in patients with vestibular pathology (Dichgans et al. 1973), and in monkeys (Dichgans et al. 1974). Finally, the effect of proprioceptive input from the neck on gaze stabilization in humans is rather weak and may not be helpful for vestibularly controlled eye movements (Dichgans et al. 1973; Mergner et al. 1998).

SUMMARY DIAGRAM. To conclude our comparison of VOR performance in active and passive movement conditions, we have added a graphical sketch to highlight possible mechanisms underlying the active/passive differences in vision and in darkness (see Fig. 9). In general, both passive and active studies show the same results for the VOR in vision (see top panels of Fig. 9). That is, both the default and context gains have values near 1, graphically depicted as lines with equal positive (ideal) slopes and intercepts at 1.0. In darkness, differences between the active and the passive VOR become apparent. The active default gain retains its high value (near 1), whereas the passive default gain is now generally found to be somewhat lower (see bottom panels). In the scheme, this difference is attributed to a modest contribution of efference copy signals in the active VOR (bottom left). The main portion of the default gain, however, is due to a contribution from the canals. As the schemes emphasize, there are also active-passive differences in context compensation. The active context gains in darkness are about three times larger (near 0.6) than the corresponding passive context gains (near 0.2). The different slopes for the active and passive VOR portray this difference in context compensation performance. The passive context gains in the dark may reflect a contribution from canal-otolith interaction (indicated by "otoliths" in the diagram). The increase in active context gain in the dark is credited again to a contribution of efference copy signals. The remaining difference between VOR performance in the dark and in the light, we suggest, is due to visual tracking systems (indicated by "visual"). The scheme implies that this visual contribution is clearly larger during the passive VOR in near vision, where the putative efference copy contribution is lacking.



Fig. 9. Summary diagram of signals contributing to the VOR in active and passive movement conditions. In vision, the active and passive VOR behave similarly and are nearly perfect (top). The active and passive VOR diverge in darkness (bottom). The fact that the active default gain is larger than the passive default gain is attributed to a small contribution by efference copy signals ("efference") to the rotational VOR. The main default gain comes from the canal contribution to the VOR (indicated by "canals"). The small value of the passive context gain is attributed to canal-otolith interactions ("otoliths") whose contribution is insufficient for perfect performance. The active context gain in our dark experiments, found to have a higher but still imperfect gain, is thought to express an additional contribution by efference copy signals (efference). Finally, visual tracking mechanisms are invoked to explain the differences between VOR performance in the light and in darkness (indicated by "visual"). The different factors need not simply add linearly.

Binocular control of the VOR

Our presentation of the experimental data has relied mainly on a description in terms of a binocular coordinate system yielding version and vergence signals. This descriptive approach by itself says nothing about the problem of how the VOR in each eye may be controlled by the brain. Before we can address this issue, it is important to establish which version and vergence signals should be expected if the VOR were ideal. We first consider the version component, on which our quantitative analysis has focused so far. This analysis, following the approach used by others before (Paige et al. 1998), has quantified the gain of context compensation in the active VOR but is not very revealing about the latency of the near-vision adjustments. The reason is that the latency of the version signal in conditions with context compensation presumably reflects the latency of two underlying signals: the default VOR component with a very short latency (Collewijn and Smeets 2000), and a signal with a longer latency that incorporates the context compensation. As the amplitude of the former is generally much larger than that of the latter, the time delay of the context-gain component is hard to determine. To get a better picture, it is necessary to look at the dynamic behavior of the context-gain component in isolation.
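The masking effect can be written out as a sketch (the decomposition and the delay symbols $\tau_0$ and $\tau_c$ are illustrative):

$$\dot{E}_{vers}(t) \;\approx\; -\,g_0\,\dot{H}(t-\tau_0)\;-\;k\,\mu\,\dot{H}(t-\tau_c),$$

with a default component ($g_0 \approx 1$, $\tau_0 \approx 8$ ms) and a context component whose amplitude $k\mu$ is much smaller than $g_0$ except at extreme proximity. Since the first term dominates the sum, the overall phase of version velocity mainly reflects $\tau_0$, and $\tau_c$ can only be recovered by isolating the second term.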

To illustrate how this component could be separated, suppose we had only a single eye. When looking at a target at optical infinity, the eye's velocity would have to compensate for head velocity with unity gain for the eye to remain perfectly stabilized in space. In that case, gaze velocity is zero. For a near target, the eye should also compensate for its translation relative to the target, caused by its eccentric location relative to the head rotation axis. As a consequence, eye velocity now has to exceed head velocity by an amount depending on target distance. This increment in eye velocity causes a change of gaze, hence the term gaze velocity, i.e., eye-in-head velocity = -head velocity - gaze velocity.
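In this single-eye sketch the required gaze velocity follows directly from the geometry introduced earlier (up to sign conventions): with the eye a distance $r$ in front of the rotation axis and a target at distance $D$, the ideal eye-in-head velocity is $-(1 + r/D)\,\dot{H}$, so the gaze velocity component has magnitude

$$|\dot{G}| = \frac{r}{D}\,|\dot{H}|,$$

zero at optical infinity and growing with target proximity. For example, with the eye 10 cm in front of the rotation axis and a target at 40 cm, the required boost is 25% of head velocity.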

The picture emerging from Figs. 4, 6, and 8 is that the VOR has a default setting, with a value close to one, which compensates for head velocity in the far-target condition. Near targets require an additional boost in gain, which manifests itself as conjugate gaze velocity (see Fig. 3). In other words, the gaze velocity signal is a direct expression of the context compensation component, and it is the time lag in this signal that can give us a measure of compensation latency.

Earlier passive VOR studies have suggested that context compensation involves delays that are quite considerable compared with the latency of the default VOR, which is about 8 ms (Collewijn and Smeets 2000). The implementation of context-dependent corrections comes in after longer delays (range 20-100 ms) (Collewijn and Smeets 2000; Crane and Demer 1998; Crane et al. 1997; Snyder and King 1992; Viirre and Demer 1996). In the context of the present study, it is interesting to check whether these delays may be shorter in our actively moving subjects.

LATENCY ASPECTS OF CONTEXT COMPENSATION IN THE ACTIVE VOR. We analyzed the time delay of the conjugate gaze signal, both in the presence and in the absence of a visual target, by computing the time shift that yielded optimal correlation with the ideally required response (i.e., the peak of the cross-covariance function). Only trials yielding significant correlations (P < 0.05) were considered in the analysis. For visual targets, the conjugate gaze delay was 22 ± 21 ms. For remembered targets this value remained essentially the same, 28 ± 35 ms. Our value of 28 ms is in the same range as found in previous passive studies (Collewijn and Smeets 2000; Crane and Demer 1998; Crane et al. 1997; Snyder and King 1992; Viirre and Demer 1996).
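This kind of delay estimate can be illustrated with a short numerical sketch. The function below (names and arguments are ours, purely illustrative; this is not the authors' analysis code) locates the peak of the cross-covariance between the ideally required response and the measured signal:

```python
import numpy as np

def response_delay(required, measured, fs, max_lag_s=0.2):
    """Estimate the delay (in s) of `measured` relative to `required`
    as the lag at which their cross-covariance peaks."""
    required = np.asarray(required) - np.mean(required)  # covariance: remove means
    measured = np.asarray(measured) - np.mean(measured)
    n = len(required)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # c(l) = sum_t required[t] * measured[t + l], over the overlapping samples
    cov = [np.dot(required[max(0, -l):n - max(l, 0)],
                  measured[max(0, l):n - max(-l, 0)]) for l in lags]
    return lags[int(np.argmax(cov))] / fs

# Example: a 1.2-Hz response delayed by 24 ms relative to its template.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
template = np.sin(2 * np.pi * 1.2 * t)        # ideally required response
delayed = np.roll(template, 12)               # 12 samples = 24 ms delay
print(response_delay(template, delayed, fs))  # -> 0.024
```

The same routine applies unchanged to the vergence component discussed next.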

The fact that we actually have two eyes at different locations implies that a single signal cannot meet the geometrical requirement for each eye separately: this requires independent VOR modulations in the two eyes that vary on a moment-to-moment basis. These different VOR modulations were indeed observed in the present study. They are expressed in the double-frequency modulations of the vergence signal (Fig. 3) and in the loops shown in Fig. 2, superimposed on the generally much larger common modulations in the version signal. Thus this study has clearly demonstrated the existence of both the common and the disjunctive part of context compensation. We checked the time delay of the vergence component using the same correlation analysis. For visual targets, the average vergence delay was 11 ± 33 ms. This rather short delay indicates that the compensation cannot have been visually mediated. The fact that the vergence modulation persisted at least partially in the dark, with an average delay of 8 ± 26 ms, also argues against a visual contribution. The conjugate gaze delays that we found were somewhat larger than those of vergence, both in vision and in darkness.
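The double-frequency behavior of vergence follows from symmetry (a sketch in our own notation): with the target straight ahead, head orientations $+\theta$ and $-\theta$ produce mirror-image eye positions and hence the same required vergence, so $\mu_{req}(\theta)$ is an even function of $\theta$. For sinusoidal rotation $\theta(t) = \theta_0 \sin \omega t$ this gives, to second order and for some curvature constant $a$,

$$\mu_{req}(t) \;\approx\; \mu_0 + \tfrac{1}{2}\,a\,\theta_0^2\,(1 - \cos 2\omega t),$$

a modulation at twice the head frequency, as seen in Fig. 3.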

MONOCULAR OR BINOCULAR VOR CONTROL: MODELING ASPECTS. So far, the terms version and vergence in the present paper have been used in a merely descriptive sense, carefully avoiding any connotation with underlying neural control mechanisms. If we are to understand how the system actually works, there comes a point where this correspondence question can no longer be postponed. This issue has become the focus of recent debate in which quite different competing hypotheses are at stake. The classical position assumes that the control of binocular eye movements relies on separate control systems directly yielding the version and vergence signals emerging from our descriptive approach. If this classical scheme is correct, our data imply that context compensation in the VOR requires two different mechanisms: a conjugate subsystem responsible for the conjugate gaze signal and a disconjugate subsystem responsible for our vergence compensation component. Since they are seen as neurally distinct, these two subsystems may conceivably have different characteristics, for example in the time domain. In theory, it would even be possible to find evidence for correction at the conjugate level and none for a vergence component. However, our results, showing that both types of compensation exist, rule out this possibility. An alternative scheme, first proposed by Viirre et al. (1986) for the VOR and by others on more general grounds (Snyder and King 1992; Zhou and King 1998), posits that the movements of each eye are generated by separate controllers. If so, context compensation in such a scheme should also operate for each eye separately, in which case our distinction between a version and a vergence component of context compensation would be a descriptive artifact. In other words, if this reasoning applies, both the version and the vergence components of context compensation should show the same behavior, in both context gain and time delay. We checked this by investigating the relationship between the gain and delay of vergence (with respect to the required vergence signal) and the gain and delay of the conjugate gaze signal (with respect to the required conjugate gaze). We never found a positive correlation, indicating that the delays and the gains of vergence and conjugate gaze fluctuate independently. In this sense, this limited analysis suggests that our descriptive distinction between a conjugate and a disconjugate component of context compensation may have a parallel in the real system. The differences between the vergence delay and the conjugate gaze delay can be interpreted in the same fashion. Clearly, more work is necessary to establish conclusive evidence concerning monocular versus binocular VOR control. We suggest that our analytic approach, which allows the crucial signals to be isolated, provides a tool for a more detailed study of this issue.
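The independence check at the end of this argument amounts to a simple per-trial correlation. The sketch below (synthetic data and variable names are ours, for illustration only) makes the logic concrete: a strictly monocular scheme predicts positively correlated trial-to-trial fluctuations in the two components, whereas separate conjugate/disconjugate controllers predict none.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_trials = 40

# Synthetic per-trial delay estimates (s): independent fluctuations
# around the mean values reported in the text, for illustration only.
conj_delay = rng.normal(0.022, 0.021, n_trials)  # conjugate gaze delays
verg_delay = rng.normal(0.011, 0.033, n_trials)  # vergence delays

r, p = pearsonr(conj_delay, verg_delay)
print(f"r = {r:.2f}, p = {p:.3f}")
# A significant positive r would favor a common, eye-by-eye (monocular)
# compensation; its absence favors separate conjugate and disconjugate
# mechanisms, as argued in the text.
```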


    ACKNOWLEDGMENTS

We thank A. Van Beuzekom and D. Henriques for critical comments on an earlier version of the manuscript.


    FOOTNOTES

Address for reprint requests: C.C.A.M. Gielen, Dept. of Medical Physics and Biophysics, University of Nijmegen, Geert Grooteplein 21, NL 6525 EZ Nijmegen, The Netherlands (E-mail: stan@mbfys.kun.nl).

Received 22 June 2000; accepted in final form 22 August 2000.


    REFERENCES
