INTRODUCTION
To make accurate reaching movements with the arm, complex neural transformations from visual and somatosensory inputs through to motor outputs are required. There have been several hypotheses to explain how this is achieved (Jeannerod 1990; Kalaska and Crammond 1992; Soechting and Flanders 1989). A consistent feature is that at least three major neural operations are needed for establishing appropriate motor commands. The first is to transform a retinal map of the target into some form of motor map in which the target position is encoded in body-centered coordinates. The second is to specify the position of the moving limb by means of proprioceptive and/or visual afference. The third operation is error correction to ensure accuracy.
It is widely believed that several brain areas are involved in these neural operations. Ungerleider and Mishkin (1982) distinguished between a "dorsal stream" and a "ventral stream" in the cortical pathway. They proposed that the former, terminating in the posterior parietal region, acts to code spatial information of objects, a "where" system, whereas the latter, reaching the inferotemporal cortex, is a "what" system, which is involved in identifying objects. Later, Goodale and Milner (1992) suggested that the dorsal stream mediates the sensory motor transformations necessary for visually guided movements.
It is known that parietal lesions cause disturbance of visually guided hand movements in man (Perenin and Vighetto 1988) and nonhuman primates (Bates and Ettlinger 1960). Recent neurophysiological studies of monkeys have also identified neuronal activities related to visually guided reaching and hand movements in the posterior parietal cortex (Hyvarinen and Poranen 1974; Mountcastle et al. 1975; Sakata and Taira 1994; Sakata et al. 1995; Taira et al. 1990), and it has been suggested that the parietal areas are involved in visual guidance or control of voluntary hand movement.
At the opposite end of this system, motor plans must be generated, executed, and compared with ongoing movement. It is well known that premotor neurons receive visual information from the parietal cortex (Cavada and Goldman-Rakic 1989b; Kurata 1991; Matelli et al. 1986) and project to the motor cortex (Muakkasa and Strick 1979) and spinal cord (He et al. 1993), and this premotor area (PMA) is now considered to be involved in coding such motor acts as reaching, grasping, and gripping under sensory guidance (Rizzolatti et al. 1988). The supplementary motor area (SMA) has also been shown to receive visual and somatosensory inputs from the parietal cortex directly or through the cingulate cortex, although the SMA is considered less important than the PMA for sensory guidance of movement (Tanji 1994).
Recent positron emission tomography (PET) studies of humans performing visually guided finger or hand movements have revealed significant increases in regional cerebral blood flow (rCBF) in many areas, including the PMA, the SMA, the superior parietal lobule, the cingulate cortex, the occipital areas, the superior frontal area, the basal ganglia, the thalamus, and the cerebellum (Grafton et al. 1992, 1996; Kawashima et al. 1994, 1995; Matsumura et al. 1996). None of the previous brain imaging studies, however, specifically attempted to relate rCBF changes to the neural operations that use visual information about hand movement to execute accurate reaching movements, despite the important role of visual feedback of the hand in accurate reaching that has been described in many psychophysical studies (Jeannerod 1988; Prablanc et al. 1979, 1986).
The present investigation was therefore designed to examine where in the human brain visual feedback of hand movement is processed and utilized to allow accurate pointing. To do this, we measured changes in rCBF using 15O-labeled water (H2 15O) and PET during performance of pointing tasks with and without visual feedback of the moving hand.
METHODS
Subjects
Nine right-handed normal volunteers (19-26 yr old) participated in this study. Handedness was assessed by the H. N. Handedness Inventory (Hatta and Nakatsuka 1975). Written informed consent was obtained from each subject in accordance with the guidelines approved by Tohoku University and the Declaration of Helsinki (1975). A high-resolution magnetic resonance imaging (MRI) scan (0.5 T) was also performed on a separate occasion.
Task procedure
Before the PET experiments, each subject had a catheter placed into the left brachial vein for tracer administration. The experimental setup is displayed schematically in Fig. 1A. Each subject wore an individual stereotaxic fixation helmet and a head-mounted display (HMD; i-glass; Virtual I/O, Seattle, WA) during the PET measurements. A blackboard bearing six red light-emitting diodes (LEDs), arranged on a circle of 10-cm radius and spaced 10 cm apart, with a green LED at the center, was set 50 cm in front of the subjects. A charge-coupled device (CCD) camera (XC-999; SONY, Tokyo, Japan) was fixed on the helmet so that the LEDs could be captured at the center of its field of view. These LEDs and the subject's hand movements were monitored using the CCD camera. In all tasks, the center green LED and one of the six red target LEDs were alternately switched on and off. The durations of lighting of the green LED (1 to 3 s) and the red LEDs (0.75 to 1.25 s) and the order of lighting of the target LEDs were randomly controlled by a personal computer. Visual stimuli were presented to the subjects on the HMD.
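As an illustration of the stimulus schedule just described, the sketch below alternates the green fixation LED and a randomly chosen red target LED with the randomized durations given above. It is only a schematic reconstruction: the timing values come from the text, whereas the use of Python, the function names, and the print-based LED switch are our own illustrative assumptions, not the original control program.

```python
import random
import time

GREEN = "center"                           # central green LED
REDS = [f"target_{i}" for i in range(6)]   # 6 red target LEDs on a 10-cm-radius circle

def set_led(led, on):
    """Placeholder for the hardware call that switches an LED (hypothetical)."""
    print(f"{time.monotonic():7.2f} s  {led}: {'ON' if on else 'OFF'}")

def run_block(n_trials=20, seed=None):
    """Alternate the green LED and a randomly chosen red target LED with the
    randomized durations reported in METHODS."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        set_led(GREEN, True)
        time.sleep(rng.uniform(1.0, 3.0))    # green LED lit 1-3 s
        set_led(GREEN, False)

        target = rng.choice(REDS)            # target order randomized
        set_led(target, True)
        time.sleep(rng.uniform(0.75, 1.25))  # red target LED lit 0.75-1.25 s
        set_led(target, False)

if __name__ == "__main__":
    run_block(n_trials=5, seed=0)
```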

FIG. 1. Experimental setup. A: subject was placed in a supine position on the positron emission tomography (PET) scanner wearing the head-mounted display (HMD) over the fixation helmet (not shown). A charge-coupled device (CCD) camera was fixed on the helmet. The subject was instructed to point at a lit target light-emitting diode (LED) on the target board in front of him with or without being able to monitor the hand movement. B: subject gazed at the HMD in front of his face and could not see the target board directly. The CCD camera was tilted downward to capture the target board outside the PET scanner, so there was a difference in orientation (approximately 60°) between the target board and the array of targets in the HMD.
Each subject performed the following three tasks during PET measurement: a control task, a reaching with visual feedback (RwithF) task, and a reaching without visual feedback (RwithoutF) task. In the control task, subjects were asked to hold their right hand on their chest in a pointing shape and to simply look at each LED as it was illuminated. In the RwithF task, the subjects were instructed to point to the lit target LED with the right index finger with a natural pointing hand shape and to return the hand to the resting position on the chest after each trial. In this task, the view of the LEDs and of the subject's moving hand near the target was presented to the subject through the CCD camera on the HMD. The difference in orientation between the targets in the HMD and the real target board was 60°, because the subjects gazed at the HMD in front of their eyes with their head inside the PET scanner while the target board had to be placed outside the scanner (Fig. 1, A and B). In addition, because the same view taken with a single CCD camera was presented on the HMD to each of the subject's eyes, binocular disparity was not available (see DISCUSSION). These conditions introduced discrepancies with respect to the subjects' original visuomotor coordination of the targets. To allow the subjects to become accustomed to making reaching movements under HMD monitoring, this task was started 3 min before the bolus injection of radioisotope (see below), enough time for accurate pointing movements to become possible (Clower et al. 1996; Welch 1986). In the RwithoutF task, the subjects were instructed as for the RwithF task, but they were not required to point to the target accurately because they could not monitor their hands and no information about error was provided. In this task, the sequence of LED lighting previously recorded on a videotape was presented on the HMD. Because discrepancies between the angles to the visual targets and to the reaching targets also existed in this task, it too had to be performed after a learning process. Thus the RwithoutF task always followed the RwithF task. Because the subjects could not monitor their hand movements visually, they had to make reaching movements based on learned visuomotor coordination. However, this did not make the task more difficult, because they were allowed to make errors in pointing to the target. To balance task order, the control task was performed either before or after these reaching tasks, with random assignment for each subject.
Eye movements were not monitored because the HMD caused artifacts in the electrooculogram. We therefore confirmed by interview after each scan that the subjects had gazed at the targets in all task conditions. The performance of the reaching tasks was recorded with a videorecorder (30 frames/s), and the number of movements, reaction time (duration from the target LED lighting to the beginning of movement), and movement time (duration from the beginning of movement to pointing at the target) were determined by frame-by-frame analysis. Pointing errors were also analyzed for each subject from the video recordings as follows. Errors along the vertical and horizontal axes were referred to as vertical and horizontal errors, respectively. Vertical and horizontal errors were positive when the final fingertip position was upward or to the right of the target and negative when downward or to the left. To measure pointing accuracy, we calculated constant errors and variable errors (Bock and Eckmiller 1986; Jeannerod 1988; Prablanc et al. 1979, 1986) as follows. The constant error was calculated as the length of the vector, in the two-dimensional space of the target board, between the target and the mean endpoint location. The variable error was calculated as the product π × SDv × SDh, where SDv and SDh are the standard deviations of the vertical and horizontal errors, respectively (Georgopoulos et al. 1981; Smetanin and Popov 1997).
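For concreteness, the following sketch computes these two error measures from a set of endpoint coordinates for one target. The constant error follows the definition above (distance between the target and the mean endpoint); for the variable error we assume the reconstructed product π × SDv × SDh (the area of the dispersion ellipse), and the array layout and function name are our own.

```python
import numpy as np

def pointing_errors(endpoints, target):
    """Constant and variable pointing errors for one target.

    endpoints : (n, 2) array of (horizontal, vertical) fingertip positions, cm
    target    : (2,) array, target LED position on the board, cm
    """
    endpoints = np.asarray(endpoints, dtype=float)
    target = np.asarray(target, dtype=float)

    # Constant error: distance between the target and the mean endpoint.
    constant_error = np.linalg.norm(endpoints.mean(axis=0) - target)

    # Variable error: pi * SD_vertical * SD_horizontal (cm^2),
    # i.e., the area of the dispersion ellipse (our reading of the formula).
    sd_h, sd_v = endpoints.std(axis=0, ddof=1)
    variable_error = np.pi * sd_v * sd_h

    return constant_error, variable_error

# Example with made-up endpoints around a target at (10, 0) cm:
ce, ve = pointing_errors([[10.4, 0.3], [9.8, -0.5], [10.1, 0.6]], [10.0, 0.0])
print(f"constant error = {ce:.2f} cm, variable error = {ve:.2f} cm^2")
```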
PET measurements
Each subject was placed comfortably in a supine position on a PET scanner (Shimadzu SET2400W, FWHM 4.0 mm) (Iida et al. 1996). Transmission data of subjects in the same position were used to correct for attenuation with use of a 68Ge source. The rCBF was measured after administration of a bolus injection of 30 mCi (1,110 MBq) of H2 15O.
The control task and the RwithoutF task were started 30 s, and the RwithF task 3 min, before the bolus injection. Each PET measurement commenced as soon as radioactive counts were detected on the PET camera and continued for a period of 60 s. PET scans were converted to relative rCBF images using a modified autoradiographic method (Herscovitch et al. 1983; Raichle et al. 1983). All PET images were smoothed with a three-dimensional Gaussian filter 10 mm in width and then normalized to a global cerebral blood flow of 50 ml/100 g/min.
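A minimal sketch of the smoothing and global-normalization steps is given below. It assumes the 10-mm kernel width refers to the full width at half maximum of an isotropic Gaussian and uses an illustrative voxel size and simple proportional scaling; none of these implementation details are taken from the original analysis software.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_normalize(rcbf, voxel_mm=2.0, fwhm_mm=10.0, global_target=50.0):
    """Smooth an rCBF volume with a 3-D Gaussian and scale it to a fixed global mean.

    rcbf          : 3-D array of rCBF values (ml/100 g/min)
    voxel_mm      : assumed isotropic voxel size (illustrative value)
    fwhm_mm       : kernel width, treated here as full width at half maximum
    global_target : global cerebral blood flow the image is scaled to
    """
    # Convert the FWHM in mm to a Gaussian sigma in voxels.
    sigma_vox = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = gaussian_filter(rcbf, sigma=sigma_vox)

    # Proportional scaling so the mean over in-brain voxels equals global_target.
    brain = smoothed > 0
    return smoothed * (global_target / smoothed[brain].mean())
```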
In the present study, standard anatomic structures of a human brain atlas (HBA) system (Roland et al. 1994) were fitted interactively to each subject's MRI using both linear and nonlinear parameters. These parameters were subsequently applied to transform the normalized rCBF PET images into the standard brain anatomy. After anatomic standardization of the rCBF images, we made voxel-by-voxel subtraction pictures of the RwithoutF from the RwithF images, as well as of the control from each reaching task image, for each subject. Then, mean and variance pictures and descriptive Student's t-pictures for each subtraction were calculated. In the present study, voxels with t-values >3.355 (P < 0.005, without correction for multiple comparisons) were considered to represent regions of significantly changed rCBF in each reaching task minus control image. The subtraction RwithF minus RwithoutF was made to pick up voxels showing statistically significant increases in rCBF in relation to the visual feedback. To exclude apparent increases of rCBF arising from subtracting decreases in rCBF, we identified as specific activation fields related to the visual feedback only those voxels that also showed statistically significant increases in rCBF in the RwithF minus control image. Likewise, the voxels activated in the RwithoutF minus RwithF image as well as in the RwithoutF minus control image were identified as specific activation fields related to the lack of visual feedback. The voxels showing statistically significant increases in rCBF both in the RwithF minus control image and in the RwithoutF minus control image were identified as common activation fields related to the reaching movement. Because overlap between two activations requires that the first activation be present at P < 0.005, the probability that an activation in the second image will overlap it by chance is <0.005.
Finally, each activation was superimposed onto the average reformatted MRI of the same nine subjects involved in this study. Anatomic localization of the areas of activation in each subtraction was made in relation to this mean reformatted MRI.
RESULTS
Behavior
All subjects reported that they gazed at the target LEDs in all task conditions and did not feel any particular difficulty in performing the tasks. The mean ± SD numbers of reaching movements during PET measurement in the RwithF task and the RwithoutF task were 19.6 ± 0.7 and 19.3 ± 0.3, respectively, with no statistically significant difference between the two (P > 0.1, paired t-test). The mean ± SD reaction times for reaching movements during PET measurement in the RwithF and RwithoutF tasks were 0.31 ± 0.05 s and 0.31 ± 0.05 s, respectively. The mean ± SD movement times in the RwithF and RwithoutF tasks were 0.49 ± 0.16 s and 0.49 ± 0.13 s, respectively.
Figure 2 shows the distributions of pointing positions of all subjects in the RwithF task (Fig. 2A) and the RwithoutF task (Fig. 2B). The endpoints of the reaching movements were concentrated around each reaching target in the RwithF task, whereas they were scattered over a wide region in the RwithoutF task. The constant errors of the RwithF and RwithoutF tasks were 0.5 ± 0.2 cm and 3.3 ± 0.9 cm, respectively, and the difference was statistically significant (paired t-test, P < 0.001; Fig. 3A). The variable errors of the two tasks were 5.5 ± 1.8 cm2 and 47.3 ± 18.1 cm2, respectively (paired t-test, P < 0.005; Fig. 3B).
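For reference, a paired t test of this kind on per-subject error values can be run as in the short sketch below; the numbers shown are placeholders, not the data of the study.

```python
from scipy.stats import ttest_rel

# Hypothetical per-subject constant errors (cm); not the actual study data.
rwithf_err = [0.4, 0.6, 0.5, 0.3, 0.7, 0.5, 0.6, 0.4, 0.5]
rwithoutf_err = [2.1, 3.8, 3.0, 4.2, 2.9, 3.5, 3.1, 4.0, 3.3]

t, p = ttest_rel(rwithoutf_err, rwithf_err)
print(f"paired t = {t:.2f}, P = {p:.4f}")
```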

FIG. 2. Positions of the endpoints of pointing movements of all subjects in the reaching with visual feedback (RwithF) task (A) and in the reaching without visual feedback (RwithoutF) task (B). Crosses indicate the central LED and the six target LEDs, 10 cm apart from each other. Black dots indicate the pointing positions on the target board.

FIG. 3. Constant and variable errors in the RwithF and RwithoutF tasks. Error bars on each data bar indicate standard deviations of the means.
Cerebral activation pattern
Tables 1-4 summarize the data for anatomic structures, Talairach coordinates (Talairach and Tournoux 1988), and t-values of peak activations in the RwithF minus control, the RwithoutF minus control, the RwithF minus RwithoutF, and the RwithoutF minus RwithF images, respectively.
SPECIFIC ACTIVATIONS IN RELATION TO THE RWITHF TASK.
Six fields were identified as specific activation fields in relation to visual feedback of the moving hand, because they showed significant activation in the RwithF minus RwithoutF image as well as in the RwithF minus control image (Fig. 4). These regions are marked with asterisks in Tables 1 and 3. In the parietal cortex, there was one field in the supramarginal gyrus, located in the anterior lip of the superior ramus of the lateral sulcus of the left hemisphere. In the frontal cortex, a field was found in the dorsal part of the PMA of the left hemisphere. The orbitofrontal cortex of the right hemisphere was also activated. Another field was located in the ventral lip of the posterior part of the cingulate sulcus of the left hemisphere. No significantly activated fields were detected in the temporal or occipital cortices of either hemisphere. The other three fields were located in the caudate head, in the lateral part of the thalamus of the right hemisphere, and in the vermis. Although not statistically significant, a relatively large increase of rCBF was also found in the lateral lip of the middle portion of the intraparietal sulcus (IPS) of the left hemisphere in the RwithF minus RwithoutF image (t = 2.9).

FIG. 4. Statistically significant (P < 0.005) fields of activation not only in the RwithF minus control image but also in the RwithF minus RwithoutF image are shown in white, superimposed on the normalized mean magnetic resonance imaging (MRI) image. The 4 axial sections (top row and bottom left) are 32, 1, 22, and 51 mm above the anterior-posterior commissural line. The anatomic right is on the left side of the figure. The sagittal section (bottom right) is 6 mm to the left of the midline.
SPECIFIC ACTIVATIONS IN RELATION TO THE RWITHOUTF TASK.
Three fields were identified as specific activation fields in relation to the lack of visual feedback of the moving hand, because they showed significant activation in the RwithoutF minus RwithF image as well as in the RwithoutF minus control image (Fig. 5). These regions are marked with symbols in Tables 2 and 4. These fields were located in the middle occipital gyrus and the cuneus of the left hemisphere and in the prefrontal area (PFA) of the right hemisphere.

FIG. 5. Statistically significant (P < 0.005) fields of activation not only in the RwithoutF minus control image but also in the RwithoutF minus RwithF image are shown in white, superimposed on the normalized mean MRI image. The 2 axial sections are 12 and 51 mm above the anterior-posterior commissural line. The anatomic right is on the left side of the figure.
COMMON ACTIVATION FIELDS WITH BOTH REACHING TASKS.
Eight fields showed significant activation not only in the RwithF minus control image but also in the RwithoutF minus control image (Fig. 6). These fields, marked with asterisks in Tables 1 and 2, were commonly activated in the two reaching tasks and were considered to be related to the reaching movement per se. In the frontal cortex, there was one field in the anterior lip of the central sulcus of the left hemisphere, located in the primary motor hand and arm area. Another field, in the ventral lip of the posterior part of the cingulate sulcus of the left hemisphere, was located anterior and medial to the RwithF task-specific activated field. In the right hemisphere, a field was found in the orbitofrontal cortex. In the temporal cortex, a field in the inferior lip of the lateral sulcus of the right hemisphere was located in the posterior part of the superior temporal gyrus. In the cerebellum, two fields in the right side of the vermis and two fields in the anterior and posterior lobes of the right hemisphere were also distinguished. There were a few activations in the right parietal cortex in each reaching task minus control image. However, there were no significant activations common to both conditions, because the locations and sizes of the fields differed and no overlap was detected.

FIG. 6. Fields statistically significantly (P < 0.005) activated in common in the RwithF minus control image and the RwithoutF minus control image are shown in white, superimposed on the normalized mean MRI image. The 4 axial sections (top row and bottom left) are 43, 19, 8, and 55 mm above the anterior-posterior commissural line, and the sagittal section (bottom right) is 10 mm to the left of the midline.
DISCUSSION
In this study, we demonstrated fields of activation related to reaching with and without visual feedback of hand movements. The principal findings were fields of activation specific to the RwithF task in the supramarginal gyrus, the posterior part of the cingulate cortex, and the dorsal PMA of the left hemisphere and in the cerebellum. The results indicate that these areas may play important roles in integrating visual feedback of hand movements with the execution of right-hand pointing.
Task design
The CCD camera and the HMD made it possible to hide the subjects' moving hands from their own view while presenting the targets in the same spatial location. With such presentation of visual stimuli through a single CCD camera on the HMD, some points should be noted. First, the same view was presented to both eyes on the HMD, resulting in a binocular view without binocular disparity. Therefore no real depth, as in usual binocular viewing, was perceived by the subjects (Koenderink et al. 1994). This might have caused differences between visually perceived distances and the distances proprioceptively recognized during reaching to the targets. In interviews of all subjects after all PET scan sessions, many, but not all, told us that the target seen in the HMD seemed to be more distant than the target they touched, although none of them reported difficulty in making reaching movements after the learning trials. Second, subjects could view their hands only in the later stages of reaching movements in the RwithF task because of the restricted field of view of the CCD camera and the HMD, and no view of the hand was presented in the RwithoutF and control tasks. The lack of visual information regarding the hand's starting position might have caused less accurate pointing movements even in the RwithF task (Prablanc et al. 1979). Finally, there were considerable differences in orientation between the array of targets shown in the HMD and the real target board, which was tilted in front of the subject so as to lie in the center of the field of view of the CCD camera (Fig. 1B). Because some kind of visuomotor distortion, as in prism adaptation tasks (Clower et al. 1996; Welch 1986), must have taken place, learning trials were needed before accurate reaching with visual feedback of pointing became possible.
To exclude possible influences from visuomotor learning or adaptation to the differences in angle (Clower et al. 1996) and distance to the targets, scan sessions for the RwithF task were conducted after 3 min of learning trials and were followed by the RwithoutF task.
Activation patterns
PARIETAL CORTEX.
In this study, the left supramarginal gyrus was significantly activated specifically in the RwithF task and was thus hypothesized to be one area concerned with information processing for visual feedback. In previous PET studies, supramarginal activity was also found during grasping movements (Matsumura et al. 1996), performance of somatosensory discrimination tasks (O'Sullivan et al. 1994), and visually instructed motor preparation (Deiber et al. 1996).
In human lesion studies, Perenin and Vighetto (1988) showed that patients with damage to the left posterior parietal area exhibit misreaching in both visual fields with the hand contralateral to the lesion, although they concluded that the inferior parietal lobule is less important than the IPS area and the superior parietal lobule. It has been suggested that the left hemisphere may play a more specific role in the control of hand movement than the right hemisphere (Kimura and Archibald 1974; Perenin and Vighetto 1988). According to human cytoarchitectonic studies (Eidelberg and Galaburda 1984; Von Economo 1925), most of the cortex of the supramarginal gyrus is occupied by area PF, corresponding to area 40 of Brodmann. In nonhuman primates, neurons in area PF (Von Bonin and Bailey 1947) or area 7b (Vogt and Vogt 1919) and in area VIP (Colby et al. 1993) are densely interconnected with the inferior PMA (Cavada and Goldman-Rakic 1989b; Matelli et al. 1986; Petrides and Pandya 1984) and respond to visual and somatosensory stimuli (Colby et al. 1993; Graziano and Gross 1995; Hyvarinen 1981; Hyvarinen and Poranen 1974; Leinonen et al. 1979). Neuronal activities related to voluntary hand movements have also been recorded in area PF or 7b (Hyvarinen 1981; Leinonen et al. 1979; Rizzolatti et al. 1988), and these areas have been considered sensory-motor interfaces helping to encode the location of sensory stimuli and to generate the necessary motor responses (Graziano and Gross 1995). The results of the present study are in line with this view and suggest that the human supramarginal cortex may function in visuosomatosensory integration for motor commands.
The superior parietal area and/or the IPS area have been shown to be involved in visually guided hand movements by recent PET studies (Grafton et al. 1992, 1996; Kawashima et al. 1995; Matsumura et al. 1996; Winstein et al. 1997), human lesion studies (Perenin and Vighetto 1988), and many studies of nonhuman primates (for a recent review, see Sakata and Taira 1994). In the present case, a few activations were observed in the parietal cortex of both hemispheres in the RwithF minus control and the RwithoutF minus control images, but no field was found to be activated in common in both reaching tasks. In addition, the lateral lip of the left IPS exhibited a relatively large rCBF increase specific to the RwithF task, although it did not reach statistical significance in the RwithF minus RwithoutF image.
FRONTAL CORTEX.
We delimited the frontal motor areas on the basis of a previous PET study (Kawashima et al. 1994) and a recent anatomic report (Zilles et al. 1995), because there is no gross anatomic way to delimit the PMA in the human brain from the primary motor area or the PFA. The hand and arm motor area of the left hemisphere was activated during both reaching tasks, in line with previous PET studies of reaching (Grafton et al. 1996; Kawashima et al. 1994), pursuit rotor tasks (Grafton et al. 1992), and pointing and grasping (Grafton et al. 1996; Matsumura et al. 1996) involving right arm and/or hand movements, as in the present case.
The field in the dorsal part of the PMA significantly activated specifically in the RwithF task was also identified in recent PET studies of visually guided pointing and grasping (Grafton et al. 1996; Matsumura et al. 1996), preparation for reaching (Kawashima et al. 1994), sensory-guided reaching (Kawashima et al. 1994), and visuomotor learning (Kawashima et al. 1995). The results are thus consistent with the idea that the PMA is involved in visually guided movements and in some aspects of motor preparation (Wise 1985). It is known that in the monkey the PMA is related to hand movement (Kurata and Tanji 1986; Rizzolatti et al. 1988) and shows visual and somatosensory responsiveness (Gentilucci et al. 1983; Graziano and Gross 1995). In addition, a recent neurophysiological study in the monkey showed that neurons of the PMA medial to the arcuate sulcus discharge in response to detection and/or correction of visuomotor errors, suggesting a role in controlling ongoing movement (Flament et al. 1993). The present results therefore indicate a contribution of the human PMA to the control of ongoing movements with visual feedback of hand movements.
CINGULATE CORTEX.
The present study revealed a field in the ventral lip of the posterior part of the cingulate sulcus that was activated during both reaching tasks. Another, juxtaposed field in the posteromedial part showed a change specific to the RwithF task. Activation of the posterior part of the cingulate sulcus in relation to pointing and grasping has been shown in a previous neuroimaging study (Grafton et al. 1996). The ventral lip of the posterior cingulate sulcus of nonhuman primates largely corresponds to cytoarchitectonic area 23c (Vogt et al. 1987), which projects to the spinal cord (Hutchins et al. 1988) and motor cortex (Muakkasa and Strick 1979; Shima et al. 1991) and has dense interconnections with the posterior parietal cortex (Cavada and Goldman-Rakic 1989a). This area shows hand movement-related neuronal activity that is less associated with self-paced movements than is that of the anterior cingulate area (Shima et al. 1991). On the other hand, elicitation of limb movements by microstimulation of the cingulate cortex has been shown only for the anterior cingulate area (Luppino et al. 1991). Neuronal activity related to angle of gaze and to the size and direction of saccadic eye movements has also been reported (Olson et al. 1996). This suggests that the field activated during both reaching tasks is presumably related to monitoring hand movements and/or spatial orientation without visual information about the hand. The activation specific to the RwithF task, on the other hand, is more likely to be involved in the process of visuomotor transformation, or in monitoring spatial movements, with visual feedback of the hand, although a link to simple visual stimulation per se cannot be excluded.
CEREBELLUM.
Hand movement-related activation of the ipsilateral cerebellar hemisphere and vermis has been reported in many studies (e.g., Decety et al. 1994; Grafton et al. 1992, 1996; Kawashima et al. 1995; Matsumura et al. 1996; O'Sullivan et al. 1994). The two fields commonly activated in both of the present reaching tasks, located in the anterior and posterior lobes of the vermis and extending to the medial part of the right cerebellar hemisphere, may correspond to the classical somatotopic organization of the cerebellum, with one whole-body representation in the anterior lobe and two in the posterior lobe bilaterally (Snider 1952). Among these areas, the vermis and the intermediate cerebellum have been considered to be involved in ongoing motor execution (for a review, see Brooks and Thach 1981). In contrast, the lateral part of the hemisphere receives afferents from widely distributed areas of the cerebral cortex, including the motor, premotor, and posterior parietal cortices, and sends projections to the PMA in nonhuman primates; it is thought to be involved in motor initiation and/or planning (for a review, see Stein and Glickstein 1992). We thus consider that the activated fields in the vermis and the most medial part of the right hemisphere are related to the executive control needed for reaching movements, whereas the two fields observed in the lateral part of the right hemisphere may be involved in motor initiation and/or planning of the reaching movements. The field specific to the RwithF task, located between the two commonly activated fields in the anterior and posterior lobes of the vermis extending to the medial part of the right hemisphere, is in line with a recent functional MRI study demonstrating cerebellar activation during visuomotor dissociation tasks (Flament et al. 1996). The present findings support the hypothesis that the cerebellum functions in the detection and correction of visuomotor errors.
Conclusions
Pointing accuracy improves when visual information about the hand is available, this visual input being used for ongoing motor control integrated with nonvisual information about the position of the hand (Jeannerod 1988; Prablanc et al. 1979, 1986). We have used PET to identify the areas involved in accurate pointing with a view of the hand near the target. The results indicate that a network comprising the inferior parietal, premotor, and posterior cingulate cortices, as well as the cerebellum, serves to monitor one's own spatial movements and to integrate visual and proprioceptive information with ongoing motor control to achieve accurate pointing.