INTRODUCTION
The problem of the interaction of visual and kinesthetic information during pointing movements has been analyzed by Soechting and Flanders (1989a,b), who showed that subjects make substantial errors while pointing to remembered targets defined visually (see also Berkinblit et al. 1995; Darling and Miller 1993). Soechting and Flanders argued that these errors were not due to poor localization of the target in spatial memory, or to errors in proprioceptive information about the position of the arm, but were the result of faulty integration of the visual and proprioceptive information. The location of a visual target is defined by its coordinates in external space. The location of the arm endpoint in the absence of visual feedback is defined by the orientation of the limb segments. Accurate pointing requires a comparison of the target and arm endpoint locations in one frame of reference, and the transformation from one frame of reference to another may be the main source of pointing errors (Berkinblit et al. 1995; Darling and Miller 1993; Soechting and Flanders 1989b). From this point of view, pointing to a kinesthetically defined target has a clear advantage: both the target and the arm position are defined in the same frame of reference by the arm angles. In this case no transformations are required, and one may expect an increase in accuracy. In pointing to remembered targets, visually defined coordinates of the target stored in visual-spatial memory must be compared with the coordinates of the arm endpoint computed from proprioceptive feedback. Thus, in comparison with sensorimotor transformations for movements to physically present targets, pointing to remembered targets requires one additional stage. However, this additional component does not change the prediction that pointing to a visual target will be less accurate than pointing to a kinesthetic target, because, when pointing to remembered kinesthetic targets, arm angles can be compared within one and the same modality. Soechting and Flanders (1989a) showed that, when the target was presented not only visually but also kinesthetically, by placing the subject's relaxed arm at the target (passive kinesthetic condition), pointing accuracy was higher than it was in the case of visually defined remembered targets. Finally, when the subject saw the target and actively touched it with his arm (active kinesthetic condition), accuracy improved further.
In another study of pointing to remembered targets (Darling and Miller 1993), targets were presented either visually or purely kinesthetically, without visual feedback. Darling and Miller (1993) showed that the accuracy of pointing was higher in the case of kinesthetically presented targets than in the case of visually presented targets. In the Darling and Miller (1993) study, the kinesthetic target presentation was neither completely active nor completely passive. The experimenter brought the subject's relaxed arm to the target (passive presentation) and released the arm. After that the subject actively maintained his arm in this position, and although he did not move actively, he was able to estimate the gravitational moments and muscle torques that were necessary to keep the arm near the target. Then the subject's arm was passively returned to the initial position by the experimenter. Thus the subjects had not only the kinesthetic information about the arm position, but also the memory of the control signals that were necessary to balance the arm near the target. In this case, the higher accuracy may not be due to the fact that both the target and the arm positions were defined in the same frame of reference, but due to this additional control signal information.
In their pioneering studies, Paillard and Brouchon (1968, 1974) demonstrated that subjects pointed more accurately when actively shifting a finger that served as a target as compared with pointing to a passively shifted finger. Paillard and Brouchon (1968, 1974) argued that a calibration of proprioceptive signals takes place when subjects actively place a finger in the target position (see also Feldman and Latash 1982 for a similar framework).
In Soechting and Flanders' (1989a,b) study, both visual and kinesthetic information about the target was available to the subject during the target presentation. In this condition there was higher three-dimensional (3D) pointing accuracy than in pointing to purely visual targets. In the present study, we compare pointing accuracy of movements to purely visual versus purely kinesthetic targets. The first aim of our study is to further test the hypothesis that sensorimotor transformations are the primary source of pointing errors by comparing pointing errors to visual versus kinesthetic targets. We hypothesize that there should be larger errors for visual than for kinesthetic target presentation. An additional feature of the present study involves use of a programmable robot arm for target presentation, so that we are able to present visual and kinesthetic targets in exactly the same spatial locations. Thus we also examine how the relative accuracy of pointing in the kinesthetic versus visual conditions depends on the location of the target in space.
The second aim of this study is to compare pointing accuracy of movements to purely active versus purely passive kinesthetic targets. In this study, the position of the pointing arm [not of the arm-target as in Paillard and Brouchon (1968, 1974)] is actively defined. There are at least two possible sources to consider for any increase in movement accuracy. First, as was shown by Paillard and Brouchon (1968, 1974), the position of the target may be defined more accurately in the active versus the passive condition. Second, the remembered control signals that generated the muscle torques that balanced the arm at the target may allow more accurate planning of the subsequent pointing movement in the active versus the passive condition. We test the hypothesis that during active presentation of kinesthetic targets the memorized control signals will lead to higher pointing accuracy than in the passive condition, and investigate which movement parameters may be improved.
The third aim of our study is to uncover patterns of interjoint coordination and how they might depend on the mode of target presentation. During kinesthetic target presentation, the nervous system may utilize the simplest control strategy: to memorize the arm angles at the time when the arm is near the target, and to reproduce these values in the subsequent movement independently of each other. In this case the movement of each joint can be planned independently. Thus the nervous system would not have to compute sensorimotor transformations, nor would it need to establish any relationships among the joint angles. One may assume that this simplest control strategy might be used in pointing to passive kinesthetic targets. If the same strategy is used in pointing to active kinesthetic targets, there may be more accurate reproduction of the final arm angles due to the possible scaling of the kinesthetic feedback by the information about the control signals (see above). One may also assume that in the active condition, the nervous system introduces specific relationships among arm angles (see, for example, Soechting and Lacquaniti 1981). Indeed, a number of models have been put forward positing that the movement goal is to reproduce the arm configuration or the final posture (Desmurget et al. 1995; Flanders et al. 1992; Rosenbaum et al. 1993, 1995). During pointing to a visually remembered target, the nervous system may control the position of the arm endpoint by establishing strict relationships among different angular degrees of freedom (Berkinblit et al. 1986b; Hinton 1984).
Of course, even in pointing to kinesthetic targets, one could compute the arm endpoint location from the arm angles at the time of target presentation, or, as was suggested for pointing to visual targets, during the movement. Likewise, in the case of visual target presentation, the nervous system theoretically could compute the necessary arm angles from the remembered target coordinates and then utilize an angular control strategy during the movement. However, it seems more likely that an angular control strategy would be observed in pointing to kinesthetic targets, whereas arm endpoint control would be found when pointing to visual targets. Patterns of interjoint coordination can provide a test of these alternative control strategies. If the nervous system utilizes the simplest angular strategy, the arm angles can be controlled independently of each other. For the other strategies, movement control would be produced by changing arm angles in a correlated fashion through establishing angular synergies.
If each angle is controlled independently, the errors in reproduction of different angles will not be correlated. Each angle "plays on its own." In the case of a weak or zero correlation, one can suggest that control signals directly encode the final values of the arm orientation angles. In contrast, if movements are planned in terms of the arm endpoint (or by specifying angular synergies), the variations in the final values of the different arm angles should be correlated. In the case of a control strategy in terms of arm endpoint position, a deviation of the fingertip from the target due to an "error" in one joint will be partially compensated by appropriate changes in the amplitude of the angular displacement for another joint. In the present paper, we distinguish between these two possibilities by directly measuring the degree of correlation among the final angle values for all movements to the given target location.
A comparison of two types of kinesthetic target presentation, active and passive, can provide additional information about the control strategies used. In the active condition, the subject could plan the movement not only by using the kinesthetic information about the arm configuration in the target position that was available in the "passive" condition, but could also use the memorized control signals that provided for the balancing of the arm in the final position. We wanted to find out what aspect of motor performance (if any) would be improved by this additional information available in the active kinesthetic condition. In particular, we wanted to know whether pointing in the active kinesthetic condition would be accompanied by increased accuracy in reproducing either the fingertip position or the arm configuration. If the reproduction accuracy for the memorized final arm angles improves, then it is likely that the control signals for each arm angle were memorized separately during the target presentation. On the other hand, the accuracy of reproduction of the arm endpoint position may increase without any concomitant increase in the accuracy of the arm angles. In this case, it would appear that the control signals memorized by the subject during the target presentation in the active condition do not directly encode final arm angle values. Rather, if this result occurred, we could conclude that the control signals establish fixed relationships between the different angular degrees of freedom, either directly or through the control of the arm endpoint in external space.
METHODS
Subjects
Seven right-handed subjects (3 females and 4 males between 25 and 45 yr of age) participated in the experiments. All subjects gave their informed consent before inclusion in the study. This research received approval by the appropriate ethics committee and therefore was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.
Apparatus and data processing
Each subject was comfortably seated with his or her back resting on the back of a straight-backed chair. In the initial position, the right arm was flexed at the elbow so that the forearm was near vertical. The extended index finger was held close to the right eye while the other fingers were clenched into a fist (Fig. 1). The subjects were facing a programmable robot arm (Hudson Robotics, CRS Plus) that presented the target in 3D space. The tip (6 × 6 × 6 mm) of the robot's arm served as the target. A small light-emitting diode (LED) was attached to the tip of the robot's arm for target presentation in the visual condition. The robot arm was part of a computerized system for 3D graphic analysis of human multiple-joint movements (for details, see Jennings and Poizner 1988; Kothari et al. 1992; Poizner et al. 1986). Two optoelectronic cameras (Northern Digital, Optotrak/2010 System) were used to record the positions of five infrared emitting diodes (IREDs) that were affixed to the subject's limb segments at consistent positions referenced to the following bony landmarks: the acromial process of the scapula (shoulder), the lateral epicondyle of the humerus (elbow), and the ulnar styloid process (wrist), as well as on the nail of the index fingertip and on the robot arm tip. The subjects were asked to fully extend the right index finger and not to move it with respect to the wrist. Two-dimensional (2D) coordinates of the IREDs were monitored by each camera. Data from both cameras were sampled at 100 Hz and stored as 2D binary files. They were then low-pass filtered using a Butterworth filter with a cutoff frequency of 8 Hz, and 3D coordinates were reconstructed. The robot presented five targets randomly in two planes in space (Fig. 1A). Four targets (lowermost P1, leftmost P2, uppermost P4, and rightmost P5) formed a diamond in a frontal plane centered in front of the right shoulder, with the two diagonals ~50 cm long. Target P3 was located in front of the right shoulder, 12 cm farther from the shoulder than this plane, at a distance approximately equal to the length of the subject's arm with clenched fingers (55-70 cm).
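Below is a minimal sketch of this marker low-pass filtering step. The text specifies only the 8-Hz Butterworth cutoff and the 100-Hz sampling rate; the 4th-order filter, the zero-phase forward-backward application (filtfilt), and the function name are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0      # sampling rate, Hz (from the text)
CUTOFF = 8.0    # low-pass cutoff, Hz (from the text)

def lowpass_marker_data(coords, fs=FS, cutoff=CUTOFF, order=4):
    """Low-pass filter an (n_samples, n_dims) array of IRED coordinates.

    The filter order and the zero-phase (forward-backward) application are
    assumptions; only the 8-Hz cutoff and 100-Hz sampling rate are given
    in the text.
    """
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, coords, axis=0)   # forward-backward: no phase lag

# Example: smooth a simulated noisy fingertip trajectory (n_samples x 3)
if __name__ == "__main__":
    t = np.arange(0.0, 1.0, 1.0 / FS)
    raw = np.column_stack([np.sin(2 * np.pi * 1.5 * t)] * 3)
    raw += 0.01 * np.random.randn(*raw.shape)    # simulated measurement noise
    print(lowpass_marker_data(raw).shape)        # (100, 3)
```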

FIG. 1. A: a schematic diagram of the subject's arm in the initial position and the 5 targets in a slightly rotated side view (top). Frontal view of the targets and a schematic illustration of calculation of constant and variable errors (bottom). B: arm angles utilized. Two angles for the upper arm, theta and eta, were calculated as elevation and yaw angles. Theta was measured as the angle between the upper arm and the vertical. It was considered to be equal to zero when the upper arm was vertical with the elbow lower than the shoulder. Eta was measured as the angle between the projection of the upper arm onto the horizontal plane and the anterior direction. It was equal to zero when the upper arm was oriented in the anterior direction; upper arm rotation to the left was considered to be positive. Two more angles defining the position of the forearm were calculated: 1st, phi was calculated as an elbow joint angle (the angle between the upper arm and the forearm, equal to 180° when the arm is fully extended). The 2nd degree of freedom that defines the orientation of the forearm is the rotation of the arm about the upper arm. This angle, which we will call omega, was calculated as the angle between the vertical plane that goes through the upper arm and the plane that goes through the forearm and the upper arm. It was equal to zero when these 2 planes coincided; rotation of the plane of the arm to the left (counterclockwise) was considered to be positive.
Procedure
VISUAL TARGET PRESENTATION.
The target was presented to the subject as a point of light (a green LED attached to the tip of the robot's arm) in a dark room for 2 s. An auditory tone then signaled the subject to close his eyes while the robot arm retracted. After 1.5 s, another auditory tone instructed the subject to begin moving.
KINESTHETIC TARGET PRESENTATION.
In the second condition, the subject's eyes were closed throughout the experiment and the targets were defined kinesthetically: the experimenter brought the subject's relaxed hand to the target, held it for 2 s in this position with the subject's index fingertip touching the target, and brought it back to the initial position. Only then was the subject free to move. We will refer to this condition as "passive."
In the third condition, the subject actively brought his arm to the target with minimal assistance from the experimenter (the experimenter corrected the arm trajectory, slightly pushing the subject's arm with his hand in the necessary direction to provide a relatively smooth trajectory with minimal interference), and then for ~1 s the subject actively maintained the arm in this position without any help from the experimenter. After a signal the subject returned his arm to the initial position, again without any help from the experimenter, and immediately performed the pointing movement ("active" condition).
The interval between successive trials was ~10 s. The interval between blocks was ~5 min. The whole experiment lasted ~1 h. In all experimental conditions, the following additional instructions were given to the subjects: "move toward the target at a comfortable speed, place your index fingertip at the target as accurately as possible, and bring your arm back to the initial position." No feedback on the accuracy of the pointing was given to the subjects throughout the experiment. The robot's arm retracted just before the initiation of the pointing movement, so that the subjects never touched the robot during the pointing phase of the movement. Moreover, the subjects were not told before the experiment that only five target locations were used. We extensively investigated the possible influence of fatigue in this and in similar experiments by changing the order of conditions, by dividing the conditions into two blocks, with one tested at the beginning of the experiment and the other at the end, and by comparing the subjects' behavior in these two blocks of trials. We found no effects of fatigue in these conditions. To decrease the possible effects of training in conditions with kinesthetically presented targets, the movements in the passive and active conditions were performed after 40 movements toward visually defined targets, and were divided into two separate experimental blocks, so that 20 of the 40 trials in the passive condition were performed after the first 20 trials in the active condition; then 20 more active trials and 20 more passive trials were performed. Each of the five targets was presented in random order within each experimental block of trials.
HAND KINEMATICS, POINTING ERRORS, AND ARM ANGLES.
The system for 3D graphic analysis of human motion (Jennings and Poizner 1988; Kothari et al. 1992; Poizner et al. 1986) calculated spatial parameters of the IREDs. The following kinematic parameters were calculated for each arm endpoint trajectory: peak velocity, time-to-peak velocity (acceleration time) normalized by movement duration, cumulative distance along the path, curvature, and planarity. Curvature was computed as the length of the straight line that connects the initial and final working point positions divided by the maximal distance from this line to the trajectory. Higher ratios reflect increasingly linear trajectories. To compute planarity, the best-fit plane was calculated for each trajectory by minimizing the sum of the distances from each point of the trajectory to a plane. The degree of planarity of the movement trajectory was estimated as the length of the line that connects the first and the last points of the trajectory, divided by the standard deviation of the distances from the points on the trajectory to the best-fit plane. Higher ratios reflect trajectories that were increasingly restricted to a single plane. The pointing movement initiation was defined as the time when the arm endpoint tangential velocity exceeded 3% of its peak value. The end of the pointing movement was defined as the time of zero or minimal tangential velocity. Constant and variable radial distance, azimuth, and elevation errors were calculated in a spherical frame of reference with the origin at the shoulder (see Soechting and Flanders 1989b for justifications of a spherical shoulder-centered coordinate system for pointing movements). Radial distance, azimuth, and elevation errors were defined as positive if the final arm position was farther than, to the right of, or higher than the target, respectively. In addition, 3D constant errors (hereafter, constant errors) were calculated as the distance between the target and the mean fingertip position across all trials in the given condition × target location subcondition (see Fig. 1A).
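The following sketch shows one way the trajectory measures just defined could be computed from a sampled fingertip trajectory. The function names are ours, and the SVD-based plane fit minimizes the sum of squared point-to-plane distances, used here as a stand-in for the sum-of-distances criterion mentioned in the text.

```python
import numpy as np

def curvature_ratio(traj):
    """Length of the straight start-end line / maximal distance from that line."""
    p0, p1 = traj[0], traj[-1]
    chord = p1 - p0
    chord_len = np.linalg.norm(chord)
    # perpendicular distance of every sample from the start-end line
    d = np.linalg.norm(np.cross(traj - p0, chord), axis=1) / chord_len
    return chord_len / d.max()

def planarity_ratio(traj):
    """Length of the start-end line / SD of distances to the best-fit plane."""
    centered = traj - traj.mean(axis=0)
    # plane normal = right singular vector with the smallest singular value
    normal = np.linalg.svd(centered)[2][-1]
    dist_to_plane = centered @ normal
    return np.linalg.norm(traj[-1] - traj[0]) / dist_to_plane.std()

def movement_onset_offset(traj, fs=100.0, threshold=0.03):
    """Onset: tangential velocity first exceeds 3% of its peak.
    Offset: minimum of the tangential velocity after the peak."""
    vel = np.linalg.norm(np.gradient(traj, 1.0 / fs, axis=0), axis=1)
    peak = vel.argmax()
    onset = np.argmax(vel > threshold * vel[peak])
    offset = peak + vel[peak:].argmin()
    return onset, offset
```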
Variable azimuth, elevation, and radial distance errors were calculated as the standard deviations of the azimuth, elevation, and radial distance errors, respectively. The 3D variable error was calculated as a global standard deviation, in a Cartesian frame of reference, of the fingertip positions for all trials in a given condition × target location subcondition. The formula used was the following: 3D variable error = sqrt{[SD(dx)]^2 + [SD(dy)]^2 + [SD(dz)]^2}, where SD is the standard deviation, and dx, dy, and dz are the differences in the coordinates of the target and the final finger position in the x direction (anterior/posterior), the y direction (vertical), and the z direction (lateral), respectively. This computation of the 3D variable error gives a measure of the dispersion of the endpoints for a given set of trials. Standard deviations are appropriate measures of dispersion, since recent work (Desmurget et al. 1997) has shown that the distribution of endpoints in unconstrained pointing movements, such as those in the present study, tends to be normal, whereas the distribution of endpoints for movements constrained by an external contact (e.g., a hand-held cursor on a surface) tends to be elliptical and elongated. We checked the normality of our distributions by applying Z scores for skewness and kurtosis. These tests showed that our distributions were not significantly different from a normal one in >99% of all the cases.
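A minimal sketch of these endpoint error measures is given below. The Cartesian frame follows the text (x anterior/posterior, y vertical, z lateral), but the exact axis signs used for the azimuth and elevation errors are an illustrative assumption, and the function names are ours.

```python
import numpy as np

def constant_3d_error(finals, target):
    """Distance between the target and the mean final fingertip position
    (finals: (n_trials, 3) array for one condition x target subcondition)."""
    return np.linalg.norm(finals.mean(axis=0) - target)

def variable_3d_error(finals, target):
    """3D variable error = sqrt(SD(dx)^2 + SD(dy)^2 + SD(dz)^2)."""
    d = finals - target                      # per-trial coordinate differences
    return np.sqrt(np.sum(d.std(axis=0) ** 2))

def spherical_errors(final, target, shoulder):
    """Radial distance, azimuth, and elevation errors in a shoulder-centered
    spherical frame. Whether positive azimuth means "to the right" depends on
    the assumed direction of the z axis (illustrative convention here)."""
    def to_spherical(p):
        v = p - shoulder
        r = np.linalg.norm(v)
        azimuth = np.degrees(np.arctan2(v[2], v[0]))   # about the vertical axis
        elevation = np.degrees(np.arcsin(v[1] / r))    # above the horizontal
        return r, azimuth, elevation
    rf, azf, elf = to_spherical(final)
    rt, azt, elt = to_spherical(target)
    return rf - rt, azf - azt, elf - elt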
Pointing errors and kinematic parameters were subjected to a repeated measures analysis of variance (ANOVA; 3 conditions × 5 target locations). Post hoc analyses were performed with the Newman-Keuls test.
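For reference, a minimal sketch of such a repeated measures ANOVA using statsmodels is shown below, assuming one averaged error value per subject × condition × target cell; the column names are hypothetical, and the Newman-Keuls post hoc step is not included.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rm_anova(df: pd.DataFrame):
    """df: one row per subject x condition x target with an 'error' column
    (column names are hypothetical)."""
    res = AnovaRM(df, depvar="error", subject="subject",
                  within=["condition", "target"]).fit()
    return res.anova_table   # F values, degrees of freedom, and P values
```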
Arm orientation angles were calculated as follows (Fig. 1B) (see also Darling and Miller 1993; Soechting and Flanders 1989a,b; and Soechting and Ross 1984 for comparison). The orientation of each of the limb segments (upper arm and forearm) can be defined by two angular degrees of freedom. Upper arm elevation, theta, was measured as the angle between the upper arm and the vertical; it was considered to be equal to zero when the upper arm was vertical with the elbow lower than the shoulder. Upper arm yaw, eta, was measured as the angle between the projection of the upper arm onto the horizontal plane and the anterior direction; it was equal to zero when the upper arm was oriented in the anterior direction. Upper arm rotation to the left was considered to be positive. Two more angles defining the position of the forearm were calculated: first, elbow flexion and extension, phi, was calculated as the angle between the upper arm and the forearm; it is equal to 180° when the arm is fully extended. The second degree of freedom that defines the orientation of the forearm is the rotation of the arm about the upper arm. This angle, which we will call omega, was calculated as the angle between the vertical plane that goes through the upper arm and the plane that goes through the forearm and the upper arm. It was equal to zero when these two planes coincided; the rotation of the plane of the arm to the left (counterclockwise from the subject's perspective) was considered to be positive. We will refer to the arm angle values measured at the time of contact with the target during the kinesthetic target presentation as the target arm configuration. The analogous measurement at the time just before pointing movement onset will be referred to as the initial configuration, and at the time of pointing movement reversal, as the final configuration.
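A sketch of how these four angles could be obtained from the shoulder, elbow, and wrist marker positions is given below. The axis convention (x anterior, y vertical up, z to the subject's left) and the signs chosen for eta and omega are assumptions intended to match the verbal definitions above; the original implementation may differ in these conventions.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def arm_angles(shoulder, elbow, wrist):
    """theta, eta, phi, omega (degrees) from three marker positions (3-vectors).
    Assumes a nondegenerate posture (upper arm not exactly vertical, elbow not
    fully extended), so the two planes used for omega are well defined."""
    up = np.array([0.0, 1.0, 0.0])
    u = unit(elbow - shoulder)            # upper arm direction
    f = unit(wrist - elbow)               # forearm direction

    # theta: angle between the upper arm and the vertical (0 = arm hanging down)
    theta = np.degrees(np.arccos(np.clip(np.dot(u, -up), -1.0, 1.0)))

    # eta: angle between the horizontal projection of the upper arm and the
    # anterior (x) direction; positive to the left (+z in this convention)
    eta = np.degrees(np.arctan2(u[2], u[0]))

    # phi: elbow angle between the upper arm and forearm (180 deg = extended)
    phi = np.degrees(np.arccos(np.clip(np.dot(-u, f), -1.0, 1.0)))

    # omega: angle between the vertical plane through the upper arm and the
    # upper arm-forearm plane, measured as a rotation about the upper arm
    n_vertical = unit(np.cross(u, up))    # normal of the vertical plane through u
    n_arm = unit(np.cross(u, f))          # normal of the upper arm-forearm plane
    omega = np.degrees(np.arctan2(np.dot(np.cross(n_vertical, n_arm), u),
                                  np.dot(n_vertical, n_arm)))
    return theta, eta, phi, omega
```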
For each target presentation condition, simple linear regressions between the final values (i.e., values measured simultaneously with the pointing errors) of the angles theta, phi, eta, and omega were calculated. The final angular value for each trial was normalized by subtracting the mean angular value for the given presentation mode × target location subcondition. Regressions were applied to a set of 5 targets × 8 trials (40 points per condition).
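The sketch below illustrates this normalization and the angle-angle regression for one presentation mode; variable and function names are ours.

```python
import numpy as np
from scipy.stats import linregress

def normalize_by_target(values, labels):
    """Subtract the per-target mean (labels: target id of each trial)."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    out = np.empty_like(values)
    for target in np.unique(labels):
        idx = labels == target
        out[idx] = values[idx] - values[idx].mean()
    return out

def angle_angle_r2(final_a, final_b, labels):
    """R^2 and slope of the simple linear regression between the normalized
    final values of two angles, pooled over 5 targets x 8 trials = 40 points."""
    x = normalize_by_target(final_a, labels)
    y = normalize_by_target(final_b, labels)
    fit = linregress(x, y)
    return fit.rvalue ** 2, fit.slope
```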
RESULTS
Movement kinematics did not depend on the target presentation mode
Only movements from the initial position toward the target will be analyzed here. The trajectories were usually smooth, without visible correctional submovements near the target, with a bell-shaped velocity profile, although sometimes with minor bumps. The mean peak velocity values were 1.28, 1.22, and 1.13 m/s for the visual, active, and passive conditions, respectively, and did not differ significantly [F(2,12) = 1.84, P = 0.2]. The velocity profiles were highly symmetrical in all subjects, with the overall mean of the ratio of acceleration time to movement duration equal to 0.50 for the active and passive kinesthetic conditions, and 0.47 for the visual condition. No effect of condition or target location on the curvature of the trajectory was found [F(2,12) = 1.54, P = 0.25 and F(4,24) = 1.41, P = 0.26, respectively], with overall mean curvature ratios of 6.6, 6.65, and 7.31 for the active, passive, and visual conditions, respectively. Moreover, the cumulative path distance did not depend on condition, with overall means of 56.9, 55.8, and 53.2 cm for the active, passive, and visual conditions, respectively. Finally, the planarity of the movement trajectory did not depend on condition [F(2,12) = 2.28]. The mean planarity value was equal to 242 in the visual, 246 in the passive, and 274 in the active condition. In other words, the standard deviation of the distances from the trajectory points to the best-fit plane was >240 times smaller than the length of the line that connects the starting and the final positions of the arm endpoint.
Constant and variable pointing errors
CONSTANT POINTING ERRORS WERE INSENSITIVE TO THE TARGET PRESENTATION MODE.
The effect of experimental condition on the constant errors was not significant [F(2,12) = 0.62, P = 0.56]. The same was true for constant elevation [F(2,12) = 1.18, P = 0.34], constant radial distance [F(2,12) = 0.21, P = 0.81], and constant azimuth [F(2,12) = 3.38, P = 0.07] errors. The overall mean constant error values were 6.69 ± 0.42 (SE) cm for the active, 7.36 ± 0.54 cm for the passive, and 7.98 ± 0.76 cm for the visual conditions. On average, the subjects overshot the targets by 4.25 ± 0.53 cm in the active, 3.95 ± 0.59 cm in the passive, and 3.76 ± 0.78 cm in the visual condition. Moreover, on average, subjects pointed below the target by 2.15 ± 0.57° in the active, by 3.27 ± 0.54° in the passive, and by 3.74 ± 0.70° in the visual condition. The sign of the azimuth error did depend on the condition and the target location (see Dependence of pointing errors on target location). The overall mean of the absolute azimuth error was 2.3°.
VARIABLE POINTING ERRORS WERE SIGNIFICANTLY SMALLER IN THE ACTIVE CONDITION.
Figure 2 shows variable radial distance (A), variable elevation (B), variable azimuth (C), and variable 3D (D) errors, averaged across target locations. The effect of target presentation mode on the variable 3D errors was significant [F(2,12) = 8.11, P = 0.006]. The effect was also significant for variable azimuth [F(2,12) = 10.72, P = 0.002], and variable elevation [F(2,12) = 4.26, P = 0.04] errors. However, there was no significant effect for variable radial distance errors [F(2,12) = 2.95, P = 0.09]. Post hoc analysis showed that variable errors in the active condition (overall mean 3.35 cm) were significantly smaller than in the visual (3.96 cm) and the passive (4.61 cm) conditions. The same was true for variable elevation errors (2.07, 2.79, and 2.37°, respectively). Azimuth variable errors in the active (1.70°) and visual (1.75°) conditions were significantly smaller than in the passive (2.52°) condition.

FIG. 2. Mean ± SE radial distance variable (A), elevation variable (B), azimuth variable (C), and 3-dimensional (3D) variable errors (D), pooled across target locations.
It is possible that these differences in the variability of the final finger position were due, at least in part, to different variability in the initial finger position across conditions. However, ANOVA testing revealed no significant differences in the initial finger position variability [F(2,12) = 3.54, P = 0.06]. The initial 3D variability was equal to 2.01 cm in the visual condition, 2.39 cm in the active kinesthetic condition, and 2.53 cm in the passive kinesthetic condition.
Dependence of pointing errors on target location
FINAL ARM ENDPOINT POSITIONS WERE CLOSER TO THE CENTER IN THE VISUAL CONDITION THAN IN THE KINESTHETIC CONDITIONS.
Figure 3A shows constant azimuth errors averaged across subjects for each condition and each target location. A significant interaction effect was observed for azimuth constant errors [F(8,48) = 5.00, P = 0.0002]. The mean final position of the arm endpoint in both of the kinesthetic conditions was to the left of the leftmost target P2 and to the right of the rightmost target P5. By contrast, in the visual condition, the average final position of the arm endpoint was to the right of the leftmost target P2 and to the left of the rightmost target P5. This effect is illustrated for one subject in Fig. 3B. Movement trajectories from the active kinesthetic (gray spheres) and visual (dark spheres) conditions are shown in a rotated frontal view. The trajectories are smooth and slightly curved. Note that the dark spheres are closer to the center than the gray spheres. In contrast, no such "space contraction" or "space expansion" effect was found for the other three targets P1, P3, and P4, which were located directly in front of the shoulder.

FIG. 3. A: mean ± SE azimuth constant errors for each target in the 3 target presentation mode conditions. Inset: schematic representation of a frontal projection of the target space. Negative errors represent a shift to the left of the target. The final arm endpoint position is closer to the center than targets P2 and P5 in the visual condition, and farther from the center for targets P2 and P5 in the kinesthetic conditions. This effect is also illustrated in B, which presents arm endpoints and trajectories (front view), produced by 1 subject. In the visual condition, final fingertip positions are represented as dark spheres, and in the active kinesthetic condition by gray spheres.
RELATIVE POINTING ACCURACY IN THE VISUAL CONDITION VERSUS THE KINESTHETIC CONDITIONS DEPENDED ON TARGET LOCATION.
Another significant target presentation mode × target location interaction effect was observed for constant errors [F(8,48) = 2.20, P = 0.04]. For movements toward the leftmost target P2, the constant errors were smaller in the visual than in either of the kinesthetic conditions. By contrast, for the other four target locations, the constant errors in the visual condition were larger than in either of the kinesthetic conditions. Importantly, target P2 has the smallest eccentricity relative to the subject's eyes. Thus the relative accuracy of pointing to the visual versus the kinesthetic targets was influenced by the location of the target in space.
Arm orientation angles
ANGULAR OVERSHOOTS WERE OBSERVED IN BOTH KINESTHETIC CONDITIONS.
In the active and passive kinesthetic conditions, the values of the arm angles were measured at the time of touching the tip of the robot arm during the target presentation (target values) and at the end of the movement (final values). Figure 4 presents mean values of the differences between the final angle values and the target angle values for angles theta, phi (Fig. 4A), and eta, omega (Fig. 4B). For the upper arm elevation angle (theta) and elbow joint angle (phi), final values were larger than target values. Neither of these angular overshoots depended on target presentation mode [F(1,6) = 0.46 for angle theta and F(1,6) = 5.08 for angle phi]. The overshoots were on average ~6° for angle theta and ~12° for angle phi. In contrast, the differences between the target and final angle values for upper arm yaw angle (eta) and rotation angle (omega) were influenced by the target presentation mode [F(1,6) = 11.8, P = 0.01 for angle eta and F(1,6) = 8.64, P = 0.03 for angle omega]. In the passive condition, the final values of the rotation angle omega were larger than the target arm configuration values (see Fig. 4B). The same was true for the active condition, but in this case the difference was ~4-6° smaller than in the passive condition. The overshoots for upper arm yaw angle (eta) also differed in the active and passive conditions: final eta values were larger than target values by ~1-2° in the active condition, but were smaller than target values by 1-2° in the passive condition. Thus angular overshoots (final angle value minus target angle value) were different in the two kinesthetic conditions for the angles eta and omega, but not for the angles theta and phi. We tried to understand the origin of these differences by comparing the differences in the final arm configurations, the target arm configurations, and the initial arm configurations in the active and passive conditions. The initial arm configuration was measured at the time of the pointing movement initiation.

FIG. 4. Mean angular overshoots and undershoots of final minus target arm orientation angles for the angles theta and phi (A) and eta and omega (B) for each target presentation condition and for each target location. Negative values for the angle eta in the passive condition denote undershoots. Note that the final arm orientation angles in general overshot the angle values from the target arm configuration (measured while the subject was touching the target during the kinesthetic target presentation).
FINAL ARM CONFIGURATION WAS INFLUENCED BY THE TARGET ARM CONFIGURATION.
During the passive target presentation, the subject's arm was relaxed. As a consequence, the arm configuration at the moment of touching the target during the target presentation in the passive condition was different from that in the active condition: there was a more pronounced effect of gravity on the relaxed arm in the passive condition compared with the actively maintained arm posture in the active condition (the plane of the arm in the passive condition was more vertical). Moreover, a similar difference in the arm configurations was observed at the time when the arm returned to the initial starting position after touching the target during the kinesthetic target presentation. We sought to determine which of these differences in arm configurations (target or initial) could influence the final arm configuration. Figure 5 presents the mean differences in the angle values for the angles eta (Fig. 5A) and omega (Fig. 5B) in the active compared with the passive condition. The differences were measured at the time of target presentation (target), at the time of pointing movement initiation (initial), and at the end of the movement (final). For the upper arm yaw angle eta, the target angle values were ~5° less in the active condition than in the passive condition (see Fig. 5A). This difference was almost independent of the target location: it was largest (6°) for the lowermost target P1 and smallest (4°) for the uppermost target P4. Moreover, the final eta angle values also differed in these two conditions, by ~2° (4° for the lowermost target P1). Thus approximately one-half of the difference in final eta values in the active and passive conditions can be accounted for by the difference in the target angle values. Figure 5A also shows the difference in the initial values of angle eta for the two kinesthetic conditions. In contrast to the differences in the target and in the final values, the difference in the initial values depended strongly on the target location; it was <1° for the leftmost target P2 and >11° for the rightmost target P5. The same was true for the angle omega (Fig. 5B). Thus the difference in the final arm configurations was not correlated with the difference in the initial arm configurations, but rather can be considered as a partially preserved difference in the target arm configuration. In other words, the subject presumably took the target arm configuration into account to plan the movement to the target.

FIG. 5. Mean difference values of the angles eta (A) and omega (B) in the active kinesthetic condition minus the values in the passive kinesthetic condition, for the arm in the target position, the initial position, and the final position. Data are shown separately for each target location. Note that the difference in the target angle values in the 2 kinesthetic (active, passive) conditions was partially preserved in the final angle values.
UNLIKE FINAL ENDPOINT VARIABILITY, FINAL ARM ANGLE VARIABILITY DID NOT DEPEND ON TARGET PRESENTATION MODE.
ANOVA showed no significant effects of target presentation mode on the standard deviations of the final arm orientation angles for each of the four arm angles [F(2,12) = 0.58, F(2,12) = 1.05, F(2,12) = 0.55, and F(2,12) = 0.73 for angles theta, phi, eta, and omega, respectively]. The overall mean values of the standard deviations in the upper arm elevation angle theta in the active and passive conditions were 3.43 and 3.51°, respectively; for the elbow joint angle phi, 5.25 and 5.61°; for the upper arm yaw angle eta, 2.89 and 3.05°; and finally, for the rotation angle omega, the mean variability values were 3.97 and 3.86°, respectively. This invariance in the variability of the final arm angles across conditions stands in contrast to the significant difference in final fingertip position variability in the active and passive conditions (see VARIABLE POINTING ERRORS WERE SIGNIFICANTLY SMALLER IN THE ACTIVE CONDITION). To investigate this apparent difference in the control of final arm angles versus final fingertip position, we analyzed the correlations between the final values of the arm joint angles.
DEVIATION OF THE FINAL FINGERTIP POSITION FROM ITS MEAN POSITION DUE TO AN "ERROR" IN ONE ANGLE WAS BETTER COMPENSATED FOR BY CHANGES IN ANOTHER FINAL ANGLE VALUE IN THE ACTIVE CONDITION THAN IN THE PASSIVE CONDITION.
As described in METHODS, the final angle value for each trial was normalized by subtracting the mean angle value for the given condition × target location subcondition. For the two pairs of angles eta-omega and theta-phi, simple linear regressions were applied to the set of 5 targets × 8 trials per condition. An example of the regression lines for one subject for the eta-omega final values correlation is shown in Fig. 6. Deviations of the final eta values from the average for each of the given target locations are plotted versus the respective omega values in the active (left) and passive (right) conditions. While in the active condition 74% of the variance could be accounted for by the correlation between the two angles, in the passive condition no correlation was observed. For each of our subjects, the sign of the correlation coefficient was negative for the eta-omega pair of angles and positive for the theta-phi pair. Table 1 presents the R2 values for each subject. The regression coefficients did not depend on the target presentation mode for the theta-phi pair of angles. In contrast, the degree of correlation for the eta-omega pair of angles was very different in the passive condition, as compared with the active and visual conditions. The overall means of the R2 values were 0.18 for the passive, 0.46 for the active, and 0.44 for the visual condition. For three subjects, there was no significant correlation in the passive condition. Moreover, for each subject, the R2 values were higher for the active condition than for the passive condition.

FIG. 6. Simple linear regressions of the final values of the angles omega against eta, applied to a set of 40 (5 target locations × 8 trials) points for subject S2 (see Table 1). Regressions in the active kinesthetic condition are presented in the top panel, and for the passive kinesthetic condition in the bottom panel. Angle values from each trial were normalized by subtracting the mean angle value, averaged across the 8 trials from the given condition × target location subcondition. Note that the final values of angles eta and omega are correlated in the active kinesthetic condition but not in the passive kinesthetic condition.
TABLE 1. R2 values for simple linear regressions between the final values of eta-omega and theta-phi pairs of angles.
Within-group effects
HIGH WITHIN-GROUP VARIABILITY IN RELATIVE ACCURACY FOR VISUAL VERSUS KINESTHETIC CONDITIONS.
The within-group analysis of constant errors revealed that the seven subjects could be easily divided into two subgroups: one that was more accurate in the visual condition than in either of the kinesthetic conditions, and the other that was more accurate in both of the kinesthetic conditions than in the visual condition. We will refer to the first group as "visual," and to the second group as "kinesthetic." Figure 7 presents constant 3D (A), constant radial distance (B), and constant elevation (C) errors, averaged separately across these two groups of subjects. In one group (the kinesthetic group, 4 subjects), each subject showed significantly larger constant errors in the visual condition (overall mean 11.1 cm) than in the active and passive kinesthetic conditions [6.7 and 7.8 cm, respectively; F(2,6) = 51.67, P = 0.0002]. In contrast, for each subject from the second group (the visual group, 3 subjects), the constant errors showed a tendency to be minimal in the visual condition [F(2,4) = 4.97, P = 0.08]. The overall mean in the visual condition was 3.8 cm, as compared with 6.4 and 6.7 cm in the active and passive conditions, respectively. Moreover, the constant errors in the visual condition for the visual group were even smaller than those for the subjects from the kinesthetic group (Fig. 7A). The same was true for constant radial distance and constant elevation errors (Fig. 7, B and C).

FIG. 7. Mean ± SE constant 3D, radial distance, and elevation errors pooled across target locations, for the "visual" and the "kinesthetic" groups of subjects (see text for details).
The same pattern was observed for variable errors, which depended on target presentation mode in both groups [F(2,6) = 15.29, P = 0.004 and F(2,4) = 7.16, P = 0.048]. For the subjects in the kinesthetic group, the variable errors were significantly smaller in the active condition (3.2 cm) than in either the visual condition (4.81 cm) or the passive condition (4.50 cm). In contrast, for the subjects in the visual group, the variable errors were smallest in the visual condition (3.35 cm), slightly greater in the active condition (3.58 cm), and significantly larger in the passive condition (4.74 cm).
DISCUSSION
Basic experimental findings
Let us summarize our findings. 1) No significant differences across conditions were found in movement kinematics (peak velocity, length, and curvature of the trajectory). 2) No significant differences across conditions were found in constant pointing errors, presumably due to the high within-group variability and to the significant effect of target location on relative pointing accuracy in the visual condition. Subjects on average overshot the targets and pointed beneath them, with relatively small azimuth errors. 3) For the two most lateral targets, the final arm positions were on average closer to the center than the targets in the visual condition and farther from the center than the targets in the kinesthetic conditions. 4) Variable errors were significantly larger in the passive condition than in the active condition. Variability in final arm orientation angles did not depend on the target presentation mode. 5) In addition to radial distance overshoots, we observed angular overshoots: final values for all four arm angles under consideration were generally larger than the arm angles measured when the subject touched the tip of the robot arm during the kinesthetic target presentation.
Thus the prediction that pointing errors would be larger when pointing to visual as opposed to kinesthetic remembered targets was not confirmed. Moreover, the hypothesis that the accuracy of reproduction of arm angles would be higher in the active kinesthetic than in the passive kinesthetic condition also was not confirmed. However, the hypothesis that the active kinesthetic condition would produce higher endpoint accuracy than the passive kinesthetic condition was confirmed. Interestingly, there was kinematic invariance for movements to kinesthetic and visual targets: trajectories were neither more nor less linear, and neither slower nor faster, in the visual versus the kinesthetic conditions. This kinematic invariance implies that the observed differences in arm endpoint errors or in arm angles for different modes of target presentation cannot be explained in terms of biomechanical factors.
Constant pointing errors
On the basis of previous studies, we hypothesized that pointing accuracy would be higher in the kinesthetic conditions than in the visual condition, because in the kinesthetic conditions it is not necessary to compare the arm endpoint and the target locations defined by afferentation from different modalities. Our analysis revealed a more complicated picture. No significant differences were found for the overall mean constant errors. However, a more detailed analysis showed that constant errors across conditions depended on target location and individual subject.
Comparison of pointing errors in visual and kinesthetic conditions: influence of target location
Soechting and Flanders (1989a,b) showed that pointing errors in the visual condition are significantly larger than in conditions in which the subject has both visual and kinesthetic information about the target and the target arm configuration. These error patterns were explained by errors induced by sensorimotor transformations between the visually defined target coordinates and the kinesthetically defined final arm configuration. Our analysis of the influence of target location on pointing accuracy showed that constant errors for movements toward the leftmost target P2 were smaller in the visual condition than in the kinesthetic conditions. In contrast, for movements toward each of the other four targets, the constant errors were larger in the visual condition than in the kinesthetic conditions. Thus there are some locations in space that are good for visual and bad for kinesthetic target definition. Target P2 has the smallest eccentricity relative to the subject's eyes among our five target locations (see Fig. 1); it is located almost in front of the eyes and requires minimal head and eye rotations for aiming. These rotations may introduce additional errors (see, for example, Biguer et al. 1984; Brotchie et al. 1995; Fookson et al. 1994; Gentilucci et al. 1994), because head position signals are used to encode the locations of visual stimuli. Thus the poorer accuracy in the visual condition as compared with the kinesthetic conditions could be explained not only by errors in sensorimotor transformations, but also by difficulties in precisely determining head and eye position relative to body position to program the arm movement.
Within-group variability in accuracy for visual versus kinesthetic target presentation
No significant difference in constant pointing errors was found for visual and kinesthetic target presentation (in contrast to Darling and Miller 1993). This could be explained by the high within-group variability of the pointing accuracy in the visual and kinesthetic conditions in our seven subjects. Our within-group analysis showed that four subjects demonstrated poorer pointing accuracy in the visual condition than in the kinesthetic conditions. Therefore one may assume that these subjects had difficulties in the integration of visual and kinesthetic information; that is, in calculating the necessary arm orientation angles from the coordinates of the target in external space, and in integrating visually defined target coordinates with the position of the arm defined kinesthetically during the movement. These transformations are absent in the kinesthetic conditions, which could presumably lead to greater pointing accuracy. However, three of our subjects demonstrated greater accuracy in the visual condition than in the kinesthetic conditions. Thus, in this subject group, all the previously mentioned steps of sensorimotor transformation could be made successfully. It is worth mentioning here that these three subjects showed the smallest constant errors in the visual condition, even when compared with the behavior of the other subjects in all experimental conditions. We were not able to find any differences among the subjects (level of education, visual acuity, age, gender, etc.) that could explain these differences in pointing accuracy. Three of our subjects had participated in our previous experiments with pointing to remembered targets presented visually in slightly different conditions, and all three of them to a large degree reproduced their behavioral characteristics. The existence of visual and kinesthetic subgroups of subjects raises the problem of classification of normal, control subjects according to the characteristics of their sensorimotor behavior, and can be related to psychological studies showing that some subjects have better visual memory, and others better motor memory.
Overshoots along the movement trajectory as the possible primary source of constant pointing errors
For all of our subjects, the working point trajectories were slightly curved (Fig. 3B), similar to the trajectories of arm movements in a vertical plane observed by Atkeson and Hollerbach (1985). This is in contrast to the previously observed linear trajectories of point-to-point movements in 3D space (Morasso 1981). Generally speaking, the movement direction of the fingertip in the vicinity of the target was downward from above for all the target locations, often even for the uppermost target P4 that was located slightly higher than the finger in its initial position (see Fig. 3B). Therefore, given the vertical initial starting posture of the arm, as a first approximation the constant errors can be described as overshoots of the fingertip along the movement trajectory. Thus the final fingertip location was lower than the target, and farther from the shoulder than the target, in all conditions. This overshoot along the working point trajectory was accompanied by overshoots in the upper arm elevation angle theta and the elbow joint angle phi (Fig. 4A).
In the kinesthetic conditions, the position of the final working point was to the left of the leftmost target P2 and to the right of the rightmost target P5. These errors in the final location of the fingertip were accompanied by overshoots in the arm orientation angles (see Fig. 4). In other words, for the most part, changes in all arm orientation angles during the movement were larger than necessary for the reproduction of the target arm configuration.
Relation between constant pointing errors and final angle values: the "space contraction" along the azimuth axes in the visual condition and the "space expansion" in the kinesthetic conditions
In the initial position, the arm was flexed (see Fig. 1). The angles phi (elbow joint angle) and theta (upper arm elevation angle) increased monotonically during the movement. An overshoot in angle phi leads to a radial distance overshoot of the arm working point with respect to the shoulder. At the same time, an overshoot in phi decreases the arm working point elevation (this occurs because the arm plane was close to vertical in our experiments). We found that the overshoots in phi were accompanied by overshoots in theta, which increase the working point elevation (see Fig. 4). Thus the net result is relatively small elevation errors, due to the mutual compensation of these angles.
In contrast, overshoots in the angles eta and omega mainly resulted in azimuth errors. Let us consider the influence of these angles on the azimuth errors for the leftmost target P2 and the rightmost target P5. For both target locations, the overshoots in these two angles shift the working point in the same direction, and therefore no compensation takes place. The final working point position was therefore to the left of P2 and to the right of P5 (see Fig. 3). Thus overshoots in eta and omega lead to an "expansion" of the external space along the azimuth axes, so that the final working point positions are on average farther from the center than the targets P2 and P5.
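To make this geometric argument concrete, the sketch below perturbs each angle by 5° at a representative near-final posture and reports the resulting vertical and lateral fingertip shifts: the theta and phi overshoots displace the fingertip in opposite vertical directions (partial compensation of elevation), whereas the eta and omega overshoots displace it toward the same side (no compensation of azimuth). The forward model, axis and sign conventions, segment lengths, and the chosen posture are illustrative assumptions consistent with the angle definitions in METHODS, not the computation used in the study.

```python
import numpy as np

L_UPPER, L_FORE = 0.30, 0.35   # assumed segment lengths, m

def fingertip(theta, eta, phi, omega):
    """Fingertip position (shoulder at the origin) for the 4 arm angles, deg.
    Axes: x anterior, y vertical up, z to the subject's left (assumed)."""
    th, et, ph, om = np.radians([theta, eta, phi, omega])
    up = np.array([0.0, 1.0, 0.0])
    # upper arm: theta from the downward vertical, eta about the vertical axis
    u = np.array([np.sin(th) * np.cos(et), -np.cos(th), np.sin(th) * np.sin(et)])
    # unit vector perpendicular to u inside the vertical plane through u
    n0 = up - np.dot(up, u) * u
    n0 /= np.linalg.norm(n0)
    # rotate that plane about u by omega, then open the elbow to the angle phi
    m = np.cos(om) * n0 + np.sin(om) * np.cross(u, n0)
    f = -np.cos(ph) * u + np.sin(ph) * m
    return L_UPPER * u + L_FORE * f

# representative near-final posture: arm plane close to vertical, elbow nearly extended
base = dict(theta=70.0, eta=0.0, phi=150.0, omega=0.0)
p0 = fingertip(**base)
for angle in ("theta", "phi", "eta", "omega"):
    p1 = fingertip(**{**base, angle: base[angle] + 5.0})   # +5 deg overshoot
    _, dy, dz = p1 - p0
    print(f"+5 deg in {angle:5s}: vertical shift {dy:+.3f} m, lateral shift {dz:+.3f} m")
# theta raises and phi lowers the fingertip (opposite vertical signs), whereas
# eta and omega both shift it toward the same lateral side.
```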
The question is why this effect was not observed in the visual condition (Fig. 3). This difference may be due either to differences in the perception of the targets or to differences in motor errors in the visual and kinesthetic conditions. We may hypothesize that the motor errors across conditions are similar, given the observed invariance in movement kinematics. One may hypothesize that visual target presentation leads to a perceptual "range effect" (Poulton 1975), probably due to errors in precisely determining the head and eye positions relative to body position. (In fact, this can be considered as strong evidence in favor of the hypothesis that this range effect is related to the functioning of perceptual, rather than motor, systems.) We recently observed a similar pattern of azimuth errors in pointing from a different initial arm position to the same targets used in the present experiment, presented visually in a lighted room. The average final location of the arm was to the left of the rightmost target P5, and to the right of the leftmost target P2 (Fookson et al. 1994; Smetanin et al. 1994). These errors result in a "contraction" of space along the azimuth axes.
Final endpoint variability in the two kinesthetic conditions
In contrast to constant pointing errors, variable errors did depend significantly on the mode of target presentation. In the passive target presentation condition, the subject's planning strategy was based on the kinesthetic information about the target arm configuration. In the active condition, in addition to this kinesthetic information, information about the memorized control signals was available to the subject.
The variable errors were smaller in the active condition than in the passive condition. This means that the knowledge of the control signals that the subject obtained during the movement associated with the target presentation decreased the variability of the final fingertip location in the subsequent pointing movement. On the other hand, the variability in the final arm orientation angles did not depend on the target presentation mode.
Variable pointing errors depend not only on the variability of the arm orientation angles, but also on the degree of correlation among the angles. An interjoint interaction can result in mutual error compensation (Darling and Miller 1993; Haggard et al. 1995). The correlation between different degrees of freedom has been shown to decrease endpoint variability for a static task (Arutyunyan et al. 1969), for precision grip movements (Cole and Abbs 1986), for the wiping reflex in frogs (Berkinblit et al. 1986a), and for locomotion in cats (Halbertsma 1983). Thus it has been found for different types of movements that the variability of the endpoint is smaller than would be expected had there been random summation of the variability in the separate joints.
Our data show that the degree of correlation between the normalized final values of the elbow joint angle phi and the upper arm elevation angle theta was not significantly different in the active and passive conditions. In our experimental conditions, both angles increased during the movement of the arm to the target. The correlation between these two angles can be explained by positing that the nervous system established a positive relationship between these two angles to simplify the control of the system with several degrees of freedom (see Soechting and Lacquaniti 1981). We will refer to this relationship as a reaching synergy. In a given trial, the deviations from the mean for each of the two angles tended to be of the same sign. Given these unidirectional changes in the two angles, the arm geometry for a movement in a plane close to vertical resulted in a compensation of the endpoint elevation error. The degree of correlation between the deviations of the final values from the mean for the two angles was not different in the two kinesthetic conditions, and therefore this angular synergy may not depend on the control signals memorized in the active condition.
In contrast, the degree of correlation between the final values of the upper arm yaw angle, eta, and of the angle of arm rotation about the humerus, omega, was higher in the active than in the passive target presentation condition. Recall that the variability in the endpoint azimuth differed between the active and passive conditions. The finding of different degrees of angular correlation can explain why an equal variability of the final arm orientation angles can be accompanied by variable azimuth errors that are significantly smaller in the active condition than in the passive condition.
In our experimental conditions, unidirectional changes in the angles eta and omega in the vicinity of a target will result in noncompensated changes in the endpoint azimuth. For example, increasing the angle eta will move the arm endpoint to the left, and increasing the angle omega, which rotates the arm counterclockwise, will also move the arm endpoint to the left. In the case of a positive correlation between the two angles near the target (as was the case for the angles theta and phi), the endpoint variability would therefore increase, not decrease. However, we observed a negative correlation between the deviations of the two angles from their means in the active kinesthetic condition, and this correlation resulted in smaller variable azimuth pointing errors. The negative sign of the correlation did not depend on whether the changes in the two angles were unidirectional during the movement to the target; the changes could be unidirectional or not depending on the target location. For example, during the movement to the leftmost target P2, both eta and omega increased. Thus, in contrast to the correlation between the angles theta and phi, in this case the relationship between the angles is not a consequence of the angular synergy established for producing the movement of the endpoint along the planned trajectory (the reaching synergy). For this stabilizing synergy between eta and omega, the memorized control signals were important, which suggests a mechanism of correlation different from that underlying the theta-phi correlation. The use of such stabilizing synergies by the nervous system may indicate that in these conditions the goal of the movement is defined as a point in external space. Although the reaching synergy plays a major role in producing the motion of the endpoint along the planned trajectory, the stabilizing synergy may be important in decreasing the variability of the trajectory and, in particular, of the final endpoint position. Toni et al. (1996)
propose a similar hypothesis, namely that the specification of the movement path and the localization of the target are two independent processes.
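The following short numerical sketch illustrates this variance-reduction effect for the eta-omega case. The sensitivity coefficients, the angular standard deviation, and the correlation values used here are assumed for illustration only; they are not taken from our measurements.

```python
# Illustrative sketch only (not the analysis used in this study): it shows how a
# negative correlation between the final-angle deviations d_eta and d_omega can
# reduce endpoint azimuth variability when both angles shift the endpoint in the
# same direction. The sensitivities a and b, the angular SD, and the correlation
# values are assumed numbers chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 1.0    # assumed same-sign azimuth sensitivities (cm per degree)
sigma = 2.0        # assumed SD of each final-angle deviation (degrees)

def azimuth_sd(rho, n=100_000):
    """SD of the endpoint azimuth error when the angle deviations have correlation rho."""
    cov = [[sigma**2, rho * sigma**2],
           [rho * sigma**2, sigma**2]]
    d_eta, d_omega = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.std(a * d_eta + b * d_omega)

print(azimuth_sd(0.0))    # uncorrelated angles: about sigma*sqrt(2), i.e., ~2.8
print(azimuth_sd(-0.6))   # negatively correlated angles: about sigma*sqrt(0.8), i.e., ~1.8
```

With both sensitivities of the same sign, only a negative correlation between the angle deviations shrinks the azimuth variability; a positive correlation would inflate it, which is why the sign of the useful correlation differs between the eta-omega pair and the theta-phi pair.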
The observed angular correlations suggest that even in the case of kinesthetic target presentation, the nervous system controls the movement by specifying fixed relationships among the arm angles (angular synergies), or by creating an image of the target in external space that serves as the movement goal. The knowledge of the memorized control signals that the subject obtains during active target presentation thus allows a more precise specification of the correct relationship among the control signals that descend to each separate joint during movement generation. This observation may imply that the movement goal is specified in these control signals not in terms of final angle values, but at a higher level, presumably as coordinates of the target in external space.
Mixed control strategy
The analysis of the coefficients of correlation between different arm orientation angles (in particular, between theta and phi) in the passive condition suggests that in this condition the nervous system also uses combinations of angles and, possibly, calculates the location of the arm endpoint, which is then taken into account during movement generation. However, it does so less successfully than in the active condition because of the lower eta-omega correlations. Does this mean that movements in our experimental conditions are planned exclusively in terms of the endpoint location in external space? The comparison of the arm configurations in the active and passive kinesthetic conditions (see Fig. 5) shows that the target arm configuration differed between the two conditions. For each of the two angles, eta and omega, this difference in target angle values was partially preserved in the final configuration and amounted to 2-4° for each of the five target locations.
The initial arm configurations (measured at the onset of the pointing movement) were also different in the two conditions. However, we hypothesize that, in contrast to the target arm configuration, the initial arm configuration did not influence the final arm configuration. The magnitude of the active-passive difference in the initial arm angles depended strongly on the target location (see Fig. 5), whereas the corresponding differences in the target and final arm angles did not. For example, for the angle eta (Fig. 5A), the difference in the initial angle values was the same as the difference in the target angle values for the leftmost target P2, but was three times larger than the difference in the target angle values for the rightmost target P5.
We conclude that in these conditions the nervous system does not ignore the remembered values of the target arm orientation angles. Rather, it uses all the available information, both at the level of arm endpoint location in external space and at the level of arm angular configuration, to control the goal-directed movement. Thus a sort of "mixed" motor control strategy is used (compare with similar ideas proposed in Cruse and Bruwer 1987
; Dean and Bruwer 1994
; and in Haggard et al. 1995
). How one specific type of information or another is used and how movement planning in external and joint space complement each other are important questions for future experimental and theoretical studies.
Visual versus kinesthetic targets
Let us now consider visual target definition. Given the 3D coordinates of the visual target, the calculation of the necessary joint angles of the arm has no unique solution. This is in contrast to the direct (forward) problem of calculating the coordinates of the fingertip from the known final arm joint angles, which does have a unique solution. In this case, the nervous system has the coordinates of the fingertip and of the target stored in memory. It then has to plan the trajectory of the fingertip from the initial to the final position and to provide mechanisms that decrease deviations from the planned trajectory. If the nervous system actually uses such a strategy of controlling the position of the endpoint in external space, one may expect some correlation between the final values of the arm joint angles. Suppose, for example, that in a given movement the arm was flexed more than necessary at the elbow joint. This could lead to a fingertip position higher than the target position. The nervous system can compensate for this error by decreasing the upper arm elevation or by rotating the arm about the humerus to shift the fingertip down. With such a control strategy, the errors in the arm angles can be larger than the errors in the fingertip position because of mutual compensation of the errors in the arm angles. Darling and Miller (1993)
also found this type of interjoint coordination for pointing movements to remembered targets.
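A minimal planar sketch can make the asymmetry between the two problems explicit. The model below uses only two links with arbitrary segment lengths and angles, so it illustrates the redundancy of the inverse problem rather than modeling the actual four-degree-of-freedom arm, where the set of solutions is even larger.

```python
# Minimal two-link planar sketch (illustration only; the arm studied here has four
# rotational degrees of freedom). Forward kinematics (joint angles -> fingertip
# position) gives a unique answer, whereas the inverse problem (fingertip position ->
# joint angles) does not: an "elbow-flipped" configuration reaches the same point.
# Segment lengths and angles are arbitrary illustrative values.
import numpy as np

L1, L2 = 0.30, 0.35  # assumed upper arm and forearm lengths (m)

def fingertip(shoulder_deg, elbow_deg):
    """Unique fingertip (x, z) position for a given pair of joint angles."""
    s, e = np.radians(shoulder_deg), np.radians(elbow_deg)
    return (L1 * np.cos(s) + L2 * np.cos(s + e),
            L1 * np.sin(s) + L2 * np.sin(s + e))

def flipped_configuration(shoulder_deg, elbow_deg):
    """A different pair of joint angles that reaches exactly the same fingertip point."""
    s, e = np.radians(shoulder_deg), np.radians(elbow_deg)
    offset = np.arctan2(L2 * np.sin(e), L1 + L2 * np.cos(e))
    return np.degrees(s + 2 * offset), np.degrees(-e)

s2, e2 = flipped_configuration(60.0, -40.0)
print(fingertip(60.0, -40.0))   # one arm configuration ...
print(fingertip(s2, e2))        # ... and a different one reaching the same point
```

In this planar sketch an elbow error can be only partially offset by a shoulder adjustment (e.g., restoring the fingertip height); the additional degrees of freedom of the real arm are what allow the endpoint position to be preserved despite errors in individual angles.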
The fact that the visual and active kinesthetic conditions showed a similar degree of correlation of the final angles suggests that the planning strategy might also be similar in these two conditions. This may imply that in the active condition, the memorized arm orientation angles are transformed into the coordinates of the arm working point in external space. These working point coordinates would then be used by the nervous system in planning and generating the movement in a manner similar to that in the visual condition. In the active kinesthetic condition, the subjects have additional information about the memorized arm angles, whereas in the visual condition they do not. This might explain the tendency, albeit nonsignificant, for all components of the variable pointing error to be smaller in the active kinesthetic condition than in the visual condition. On the other hand, movement planning in the passive kinesthetic condition presumably also takes the arm orientation angles into account.
Neural systems underlying sensorimotor transformations
There is evidence that the posterior parietal cortex is an important processing stage for the sensorimotor transformations required for movement planning (Andersen 1994
; Andersen et al. 1997
; Freund 1991
; Soechting and Flanders 1989a
,b
). Neurons in posterior parietal cortex perform a coordinate transformation of visual information from a retinotopic to a craniotopic frame of reference (Andersen and Zipser 1988
). There may be a further transformation to body-centered coordinates (Andersen 1994
) and to arm-referenced coordinates (Soechting et al. 1990
). Furthermore, neurons in posterior parietal cortex participate in the representation of a short-term memory trace for the target location (Constantinidis and Steinmetz 1996
).
Individual neurons not only in posterior parietal cortex, but also in ventral premotor cortex and in the putamen often show both motor and sensory functions (Crutcher and DeLong 1984
; Gentilucci et al. 1988
; Rizzolatti et al. 1988
). Gentilucci et al. (1988)
and Rizzolatti et al. (1988)
found bimodal tactile-visual neurons in premotor cortex that fired when a monkey reached toward a target. Graziano and Gross (1993
, 1996)
and Graziano et al. (1994)
provide evidence that neurons in posterior parietal cortex, ventral premotor cortex, and putamen form a distributed system for representing extrapersonal space that may be used to guide movements. Gross and colleagues found cells that respond to both tactile and visual stimuli in each of these areas. They showed in particular that there are cells that measure the location of a stimulus with respect to the arm as opposed to representing the location with respect to the eyes or the head. Moreover, Caminiti et al. (1990)
found that neurons in the dorsal premotor area that fired during arm movement had motor response fields that were also represented in arm-centered coordinates. Thus posterior parietal cortex, premotor cortex, and putamen form sensorimotor interfaces that could encode the location of an object and generate the motor responses to that location (Graziano and Gross 1993
, 1996
).
We observed a mixed control strategy in which the nervous system took into account both the position of the arm endpoint in external space and the arm configuration. Scott and Kalaska (1997)
have found that some neurons in motor cortex respond differently during movement along similar endpoint trajectories but with different arm configurations. Scott et al. (1997)
observed a similar effect for neurons in premotor and parietal cortex. The activity of these neurons was also influenced both by the spatial parameters of the endpoint trajectory and by the joint angles of the arm. Thus the neurons studied by Kalaska and colleagues may form part of the neural substrate for the mixed control strategy found in the present study.
Patients with Parkinson's disease, in whom dopaminergic projections to the putamen have severely degenerated, undershoot targets in two-dimensional pointing tasks when vision of the moving arm is occluded (Klockgether and Dichgans 1994
; Klockgether et al. 1995
). These undershoots were observed both in slow, active pointing movements and in slow, passively imposed movements. Klockgether and colleagues suggest that patients with Parkinson's disease have a deficit in kinesthesia for slowly executed movements. Another possibility, however, is that the dysfunction of the basal ganglia in Parkinson's disease leads to deficits in sensorimotor transformations when subjects must transform from visual to proprioceptive coordinate frames. In support of this hypothesis, ongoing experiments in our laboratory, in which restricted visual information requires subjects to coordinate visual and proprioceptive information, indicate that patients with Parkinson's disease show poorer absolute 3D accuracy than control subjects (Adamovich et al. 1997
). Experiments currently underway in our laboratory are also investigating the ability of patients with Parkinson's disease to perform 3D pointing in the visual, active-kinesthetic, and passive-kinesthetic conditions of the present paper. Such studies should provide additional information on the possible neural substrates of these motor phenomena.