INSERM CJF 9706. Laboratoire de Neurophysiologie et Neuropsychologie, Marseille, France
Introduction
One of several temporal cues used in the perception and discrimination of stop consonants is voice onset time (VOT). Lisker and Abramson (Lisker and Abramson, 1964) have shown that the voicing and aspiration differences among stop consonants in a wide variety of languages can be characterized by changes in VOT, which, in turn, reflect differences in the timing of glottal activity relative to supralaryngeal events. It has been assumed that the perception of VOT is under the control of the left hemisphere (Liberman et al., 1952; Lane, 1965; Fujisaki and Kawashima, 1971; Studdert-Kennedy, 1976; Ades, 1977; Miller, 1977; Pastore et al., 1977; Stevens, 1981; Kuhl and Padden, 1983; Macmillan, 1987). Several studies, however, have reported slight deficits in VOT discrimination in patients with damage to the left hemisphere (Oscar-Berman et al., 1975; Basso et al., 1977; Blumstein et al., 1977; Miceli et al., 1978; Itoh et al., 1986). Electrophysiological data suggest that the perception of VOT is controlled by several cortical processes, some of which are restricted to the right hemisphere and others of which are common to both hemispheres (Molfese, 1978). A question we address in this study concerns the nature of the VOT cue itself: is VOT processed by specialized speech mechanisms or by more basic, acoustically tuned cortical mechanisms (Pisoni, 1977)?
Recording cortical auditory evoked potentials (AEPs) to speech sounds offers a physiological approach, in humans, to studying the neural activity that underlies the discrimination of speech sounds. AEP studies have demonstrated that the auditory cortex of mammals can precisely encode acoustic information that changes over time (McGee et al., 1996) and have shown that primary auditory cortex evoked responses reflect the encoding of VOT (Steinschneider et al., 1982, 1994, 1995). In prior investigations, we recorded evoked potentials intracerebrally in auditory cortex during the presurgical exploration of patients who were candidates for cortectomy for the relief of intractable epilepsy. Those recordings showed that the auditory areas reported in several anatomical studies (Brodmann, 1909; Braak, 1978; Galaburda and Sanides, 1980) to be morphologically distinct are also functionally distinct. Studies of the intracortical localization of AEP generators have shown an anatomical segregation of components according to their latencies along Heschl's gyrus (HG): earlier components (from 13 to 50 ms) originate from the dorso-postero-medial part of HG (primary cortex) and later ones (from 60 ms) from the lateral part of HG and the planum temporale (PT, secondary cortex) (Liégeois-Chauvel et al., 1991, 1994).
In this study, we examine the neural responses to speech and non-speech sounds within the cortical auditory areas: HG, PT and the region anterior to HG corresponding to area 22 (Brodmann, 1909). We focus on stop consonants to investigate the nature of the temporal mechanisms involved in consonant perception. In particular, we seek to assess the degree to which the acoustic versus phonetic nature of the VOT is coded by the different auditory areas, leading to cortical lateralization in the perception of stop consonants.
Materials and Methods
Seventeen epileptic patients (eight male, nine female, aged 17-38 years) participated in this study. These patients were selected a posteriori such that none had an epileptogenic zone that included the auditory areas, and none showed atypical language representation during specific examinations. Recordings of brainstem evoked potentials carried out before the stereoelectroencephalography (SEEG) confirmed that all patients had normal afferent pathway conduction.
All patients were informed about the research protocol during the SEEG and gave consent (Ethical Committee, Rennes, November 15, 1993).
Presurgical Exploration
The presurgical SEEG (Bancaud et al., 1965, 1992; Chauvel et al., 1996) of patients with temporal, temporo-parietal or temporo-frontal epilepsy requires implantation of multiple depth electrodes in the temporal lobe and neighbouring areas of adjacent lobes; its purpose is to determine the anatomical structures involved in the initiation and propagation of seizures and the accurate limits of future cortical excision (Talairach et al., 1974). Beyond the cases in which the posterior areas of the superior temporal gyrus (STG) are suspected of constituting the whole or part of the epileptogenic zone, this region is frequently explored both because of its strategic position as a pathway for ictal discharge propagation towards inferior parietal and opercular regions (Bancaud and Chauvel, 1987) and because functional mapping is necessary before surgery.
Anatomical Definition of Depth Electrode Position
The multilead electrodes (0.8 mm diameter, 5-15 contacts of 2 mm length, 1.5 mm apart) are introduced orthogonally through the usual double-grid system fastened to the Talairach stereotaxic frame. Anatomical localization of each lead of the electrode is based on a stereotaxic method described elsewhere (Szikla et al., 1977; Talairach and Tournoux, 1988; Liégeois-Chauvel et al., 1991). Owing to the oblique orientation of HG, a single electrode can explore different auditory areas (medial and/or lateral part of HG, and/or PT, and/or area 22; see Table 1).
The auditory stimuli used were speech sounds and speech analogue sounds.
Speech Sounds
We used stop consonant syllables (/ba/, /da/, /ga/, /pa/, /ta/, /ka/) pronounced and recorded by a native French speaker in a sound-attenuated room. Voiced stop consonants can be described as a sequence of acoustic segments. These segments appear on the spectrogram as a series of relatively homogeneous stretches of the signal demarcated by rather abrupt changes in spectral form, marking the boundaries between segments (Fant, 1973). In French, the VOT (i.e. the temporal relationship between the release burst of a stop consonant and the onset of glottal pulsing) of the voiced consonants (/b/, /d/, /g/) is negative, voicing preceding the release burst by ~110 ms, whereas for the voiceless consonants (/p/, /t/, /k/) the VOT is positive, voicing following the release burst by ~20 ms (left side of Fig. 3). Note that there are timing differences in the production and perception of VOT across languages; for English voiced stops, voicing precedes or is nearly coincident with the release (Lisker and Abramson, 1964).
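The sign convention above reduces to a one-line computation. The sketch below (function name and example values are ours, purely illustrative) encodes VOT as voicing onset minus burst time, so the French voiced and voiceless stops described in the text come out negative and positive respectively.

```python
def voice_onset_time_ms(burst_ms, voicing_ms):
    """VOT: onset of glottal pulsing measured relative to the release burst."""
    return voicing_ms - burst_ms

# French voiced stop (e.g. /b/): voicing leads the burst by ~110 ms -> negative VOT
vot_voiced = voice_onset_time_ms(burst_ms=110.0, voicing_ms=0.0)    # -110.0
# French voiceless stop (e.g. /p/): voicing lags the burst by ~20 ms -> positive VOT
vot_voiceless = voice_onset_time_ms(burst_ms=0.0, voicing_ms=20.0)  # 20.0
```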
Acoustic Speech Analogues
The four acoustic speech analogues used are displayed on the right side of Figure 3. VOTa, the sound mimicking the voicing, lasts 110 ms and contains three low frequencies (200, 400, 600 Hz). VLa-short, the complex-tone analogue of the vowel, reflects the vowel's spectral content (200, 400, 600, 800, 1500, 2700, 4100 Hz) and duration (223 ms). Va mimics the syllable's temporal course: it has 110 ms of near silence followed by the seven strongest frequencies present in the two previous complex sounds, for a total duration of 320 ms. VLa-long has the same spectral components as Va but without the near silence (pre-voicing) at the beginning; its total duration is 400 ms.
All patients heard the VOTa and VLa-short sounds and either the Va or the VLa-long sound, delivered in pseudo-random order within the same series.
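Because the four analogues are fully specified by their component frequencies and durations, they can be resynthesized along the following lines. This is a minimal NumPy sketch; the sampling rate, the equal component amplitudes and the use of exact silence for the "near silence" portion are our assumptions, not details given in the text.

```python
import numpy as np

FS = 44_100  # assumed sampling rate; not specified in the text

def complex_tone(freqs_hz, dur_s, fs=FS):
    """Sum of equal-amplitude sinusoids, normalized to unit peak."""
    t = np.arange(int(round(dur_s * fs))) / fs
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs_hz)
    return x / np.max(np.abs(x))

VOWEL_FREQS = [200, 400, 600, 800, 1500, 2700, 4100]

# VOTa: voicing analogue, 110 ms of three low frequencies
vota = complex_tone([200, 400, 600], 0.110)
# VLa-short: vowel analogue, 223 ms of the vowel's seven strongest frequencies
vla_short = complex_tone(VOWEL_FREQS, 0.223)
# Va: 110 ms of (near) silence then the seven-component complex, 320 ms total
va = np.concatenate([np.zeros(int(round(0.110 * FS))),
                     complex_tone(VOWEL_FREQS, 0.210)])
# VLa-long: same spectrum, no leading silence, 400 ms total
vla_long = complex_tone(VOWEL_FREQS, 0.400)
```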
Recordings
We recorded from 65 sites in the auditory areas: 14 and 23 leads in the right and left HG respectively, 4 and 7 leads in the right and left PT, and 9 and 8 leads in the right and left area 22, located anterior to HG.
The recordings of intracerebral AEPs were monopolar, with each lead of each depth electrode referenced to an extradural lead. All signals were amplified and filtered simultaneously (bandpass 1 Hz to 1.5 kHz). Data acquisition started 82 ms before the presentation of the sound and lasted for 819 ms. Average waveforms (100 trials) were digitally low-pass filtered using a Butterworth simulation (cutoff frequency 175 Hz, slope 48 dB/octave).
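The averaging-plus-filtering step can be sketched as follows, assuming the stated 48 dB/octave slope corresponds to an 8th-order Butterworth (a Butterworth rolls off ~6 dB/octave per order) and picking an arbitrary digitization rate, which the text does not specify. SciPy's `filtfilt` is used to obtain a zero-phase result; the simulated epochs are our own illustration, not the paper's data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000          # assumed digitization rate (only the analog bandpass is given)
ORDER = 8          # 48 dB/octave at ~6 dB/octave per order -> 8th order (assumption)
CUTOFF_HZ = 175.0  # cutoff frequency from the text

def average_and_lowpass(trials, fs=FS):
    """Average single-trial epochs, then zero-phase low-pass the mean waveform."""
    avg = np.mean(trials, axis=0)
    b, a = butter(ORDER, CUTOFF_HZ / (fs / 2), btype="low")
    return filtfilt(b, a, avg)

# 100 simulated 0.9 s epochs: a 40 Hz evoked wave buried in heavy noise
rng = np.random.default_rng(0)
t = np.arange(int(0.9 * FS)) / FS
trials = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 5, size=(100, t.size))
clean = average_and_lowpass(trials)
```

Averaging 100 trials reduces the noise amplitude tenfold before the low-pass removes what remains above 175 Hz.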
During a recording session, each patient lay comfortably in a chair in a sound-attenuated room and listened passively to the sounds.
Results
No difference was found in the AEPs among the three voiced syllables (VS) or among the three voiceless syllables (VLS). We therefore present grand averages of the AEPs in response to voiced versus voiceless stimuli at the different recording sites.
Recordings in the Left HG

Figure 4A-H illustrates the pattern of AEPs recorded from eight selected leads at various sites in the left HG in response to the VS (solid curves) and the VLS (dashed curves). The basic sequence of cortical activation in response to both sets of stimuli was similar up to a mean latency of 170 ms. However, two main features of the intracortical responses were modified by VOT. First, the response seemed to be more complex for VS than for VLS. Apart from the earliest components, recorded exclusively in the primary cortex, six main successive components (48, 93, 168, 207, 262 and 353 ms mean latency) were recorded in response to VS and only five to VLS (45, 88, 163, 216, 292 ms).
Occasionally, the offset of the sound evoked a response that peaked at 277 ms for VLS and at 420 ms for VS.
Second, and interestingly, multiple low-voltage components superimposed on the main components were often seen for VS and occasionally for VLS. These oscillatory components appeared during the formant transitions after the burst and lasted almost the duration of the vowel (Fig. 4A,F). The power spectra of these responses peak at ~80 Hz.
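Locating such a spectral peak in an averaged waveform is straightforward; the sketch below is illustrative only (the sampling rate, the 30-150 Hz search band and the synthetic two-component waveform are our assumptions, not the paper's analysis pipeline).

```python
import numpy as np

FS = 2000  # assumed sampling rate for this illustration

def band_peak_frequency(x, fs, band=(30.0, 150.0)):
    """Frequency (Hz) of the largest power-spectrum peak inside `band`."""
    power = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(power[mask])]

# Synthetic stand-in for an averaged response: a slow evoked wave with a
# low-voltage 80 Hz ripple superimposed during the 'vowel' portion
t = np.arange(int(0.25 * FS)) / FS
component = 5.0 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)
peak_hz = band_peak_frequency(component, FS)  # -> 80.0
```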
Recordings in the Right HG
In the right HG (Fig. 5), VS and VLS elicited response patterns that were similar in the order and duration of their components. Components present for both sets of stimuli peaked at 46, 84, 173 and 283 ms (mean latency). The shape of the responses depended on neither the duration nor the nature of the stimulus. Thus, the sequential processing of the different parts of the VS observed in the left HG was not found in the right HG. Moreover, there were no oscillatory responses.
We analysed the evoked responses to VS and VLS in the left and right PT and in area 22 to determine whether this asymmetry of processing is preserved in those regions.
In the left PT (Fig. 6A,B), the evoked responses to VS (solid curves) were similar to those recorded in HG, and the same succession of components was observed (70, 96, 161, 221, 252, 337 ms mean latency), although the first component peaked later than its counterpart in HG. The temporal processing of the VOT and the vowel thus seemed to be preserved. However, the difference in morphology between the evoked responses to VS and VLS was less clear. In some cases (Fig. 6A), a large slow wave culminating between 500 and 600 ms after stimulus onset could be recorded.
Recordings in Area 22
The results of the recordings in left and right area 22 are displayed in Figure 7A-F. For both VS and VLS, the responses had very similar waveforms: a triphasic potential in which the latencies of the different components varied as a function of the precise position of the electrode. No sequential processing emerged from these data, as similar AEPs were recorded for both types of syllable.
The present results demonstrate that the temporal coding of speech sounds is performed in the left HG and PT and not in the homologous cortical structures on the right. This striking finding also highlights the functional difference between area 22, on the one hand, and HG and PT, on the other, since this temporal coding is not observed even in left area 22.
Evoked Responses to Acoustic Speech Analogues
To investigate whether this temporal processing is related to the acoustic or to the linguistic features of the syllable, we analysed evoked responses to acoustic speech analogues of different durations and spectral composition (see Materials and Methods).
Figure 8 displays the evoked responses to the complex tones within the different auditory cortical areas. The first notable phenomenon is the similarity of the waveforms recorded on the right side, whatever the duration and spectral complexity of the sound. In contrast, the evoked responses from the left side were modulated by stimulus duration. A striking example is seen in Figure 8A,E: an off-response occurred in all three recordings in the left HG (Fig. 8A) and in none of the recordings in the right HG (Fig. 8E). The off-response was correlated with the duration of the sound, beginning at 400, 270 and 180 ms for VLa-long (solid curve), VLa-short (dashed curve) and VOTa (dotted curve) respectively.
Sequential processing of the complex sound was predominantly observed in the left HG, to a lesser degree in the left PT, and not in area 22.
Discussion
A differential coding of voiced and voiceless syllables is thus preserved in the left PT. In contrast, no such specificity was found for area 22, located anterior to HG, which shows no asymmetry between the two hemispheres in the processing of syllables. These results raise the question of the functional role of the different cortical auditory areas.
Time-locked responses are evoked by the consonant onset as well as by the release burst of the French VS in the left HG. Our data support a temporally based physiological mechanism for the differential perception of VS and VLS and agree with suggestions that this distinction is partially based upon detection of the non-simultaneous onsets of voicing and the release burst (Stevens, 1980). Furthermore, Dwyer et al. (Dwyer et al., 1982) have shown that the presence of an abrupt onset in stop consonants crucially determines lateralized processing. This mechanism is specifically engaged in the left auditory cortex. Similar time-locked responses to speech sounds are also seen in scalp recordings (Kaukoranta et al., 1987), but those AEPs were bilaterally distributed. This absence of asymmetry could be explained by the fact that HG is buried deep inside the sylvian fissure, so that the specific activity of the primary area, localized in the dorso-postero-medial part of HG (Liégeois-Chauvel et al., 1991), is difficult to record with surface techniques (EEG as well as MEG). At the scalp surface, most of the recordable evoked responses presumably originate in cortical regions near the recording electrodes. However, relatively large signals originating from PT and area 22 might often make a significant contribution to the activity observed at a given scalp recording site.
This processing seems to be related to the acoustic rather than to the phonetic composition of the sound, since identical time-locked responses, with the same time course, were recorded to speech and non-speech sounds. Time-locked responses similar to those for speech are seen for the speech-analogue sounds, implying that these responses do not reflect detectors specific to acoustic speech components; this had already been observed in Kaukoranta's study (Kaukoranta et al., 1987). A right-ear advantage (REA) has been found for auditory sequences, suggesting left-hemisphere specialization for processing stimuli characterized by temporal acoustic features (Divenyi and Efron, 1979; Leek and Brandt, 1983). Schwartz and Tallal (Schwartz and Tallal, 1980) showed a significantly greater REA for discriminating stop consonants that contain transient formant information than for those in which the duration of the formant transition was extended, or for steady-state vowels.
The basic functional organization of the auditory cortex in man (Celesia, 1976; Seldon, 1981; Liégeois-Chauvel et al., 1991, 1994) is close to that of non-human primates. Indeed, studies in monkeys have demonstrated the presence of neurons in the primary auditory cortex that are capable of representing the timing of phonetically important speech components. These cells show spike discharges phase-locked to stimulus periodicities, and the best neurons have a temporal resolution close to the millisecond (Brugge and Reale, 1985; Steinschneider et al., 1992). In particular, they show responses time-locked to the onset and to the release burst of syllables. However, contrary to the results of the present study, these neurons expressing the temporal content of a sound are equally present in both hemispheres of the monkey (Steinschneider et al., 1980, 1982, 1994, 1995). Nevertheless, Glass and Wollberg (Glass and Wollberg, 1983) found a significantly higher percentage of units responding to natural vocalizations in the primary than in the secondary auditory cortex of the left hemisphere of squirrel monkeys, a difference that was not significant for the right hemisphere. Several behavioural studies with Japanese monkeys have reported a significant REA for the discrimination of their own calls (Petersen et al., 1978, 1984; May et al., 1989). It is possible that the asymmetry found in monkey cortex is already indicative of some differentiation of the left auditory cortex associated with the analysis of the meaning of the auditory input. However, similar results have been demonstrated in rats, which showed significantly better discrimination of tone sequences with the right ear than with the left (Fitch et al., 1993). These data suggest that auditory temporal processing need not be tied to linguistic processing.
A second interesting result of the present study is the existence of high-frequency, low-amplitude responses superimposed on the AEPs recorded in the left HG and PT in response to verbal sounds. These oscillations, absent during the voicing, appear at the burst and last almost the duration of the vowel; their power spectrum peaks around 80 Hz. Several studies have probed the auditory cortex with periodic amplitude modulations, and there is widespread agreement that cortical neurons can fire phase-locked to periodicities in the signal as long as these remain below 100 Hz (Creutzfeldt et al., 1980; Müller-Preuss, 1986; Schreiner and Urbas, 1988; Eggermont, 1991). The frequency of these responses might be linearly related to the fundamental frequency (F0) of the speaker's voice (near 180 Hz for our female speaker). However, although neural encoding of F0 is apparent at peripheral levels of the auditory pathway, including the VIIIth nerve and brainstem (Delgutte and Kiang, 1984; Blackburn and Sachs, 1990), it is not strongly represented at the level of the auditory cortex (Steinschneider et al., 1982; Phillips, 1993a,b). It therefore seems more likely that our high-frequency superimposed responses are a manifestation of population coding by neurons whose activities are temporally correlated with each other.
The fact that we observed these oscillations for the speech sounds and not for the speech-analogue sounds can be interpreted in two ways. First, the formant transitions within the speech stimuli, which were not incorporated into the non-speech analogues, might have driven the lateralized oscillatory responses. Eggermont (Eggermont, 1994) showed that the neural synchronization coefficient increased during stimulus presentation relative to spontaneous activity, especially when the stimulus was an amplitude- and/or frequency-modulated burst. Second, since the high-frequency superimposed responses are observed only in response to speech sounds and only in the left auditory cortex, this oscillatory activity could subserve binding among the different acoustic features, each detected by different individual neurons, into a unified meaningful percept. Similar oscillations have been extensively studied since the work of Galambos et al. (Galambos et al., 1981) as gamma-band activity supporting general perceptual mechanisms in the central auditory system (Mäkelä and Hari, 1987; Pantev, 1995). However, DeCharms and Merzenich (DeCharms and Merzenich, 1996) showed that neurons in the primary auditory cortex can coordinate the relative timing of their action potentials throughout the duration of continuous stimuli, demonstrating that the relative timing of cortical action potentials can signal stimulus acoustic features themselves, a function even more basic than perceptual feature grouping.
The present work has yielded direct evidence that high-frequency responses are generated in the human auditory cortex, confirming our previous data (Liégeois-Chauvel et al., 1994). Future studies using speech analogues with other acoustic features may improve our understanding of the mechanisms underlying these high-frequency responses.
Relevance to Human Anatomy
This functional asymmetry between the left and right HG may be related to anatomical differences. It is well known that the morphological asymmetry observed in the posterior temporo-sylvian region in man, especially in associative auditory areas such as the PT (Braak, 1978; Galaburda and Sanides, 1980; Steinmetz et al., 1989; Rademacher et al., 1993; Kulynych et al., 1994), is related to the left-hemisphere specialization for speech. Seldon (Seldon, 1981) found that, although the total area of primary cortex is the same on both sides, neurons in the left HG generally have a larger tangential dendritic arborization within the columnar organization than those in the right HG. More recently, morphometric imaging studies showed an anatomical asymmetry arising from a difference in the volume of white matter, which is larger in the left than in the right HG and could be due to thicker or more numerous myelinated fibres (Penhune et al., 1996). This larger volume of cortical white matter in the left HG, implying a greater number of connecting fibres and/or more myelination, could underlie the enhanced temporal processing capability observed in the left HG.
Impairment in Temporal Processing Observed in Language Disorders
Left-hemisphere or bilateral lesions of the auditory cortices lead to word deafness. Not all aspects of speech discrimination are disrupted to the same extent in these pathological situations: there is general agreement that the discrimination of steady-state vowels is preserved, whereas the discrimination of consonants, especially stops, is not (Saffran et al., 1976; Auerbach et al., 1982; Mendez and Geehan, 1988; Yaqub et al., 1988). More recently, evidence has been provided suggesting that pure word deafness may be a manifestation of a basic temporal processing disorder. Two studies have examined the ability of word-deaf patients to discriminate VOT. Auerbach et al. (Auerbach et al., 1982) reported a patient with bilateral lesions who showed a normal labelling function but impaired discrimination of consonant-vowel syllables when the VOTs differed by only 20 ms, even when the syllables had VOT values close to those defining the phonetic boundary between the relevant voiced and voiceless stop consonants. This suggests an underlying deficit in the temporal processing of successive acoustic events. Miceli (Miceli, 1982) presented comparable data for a patient with a more generalized cortical auditory disorder whose deficit in the discrimination of verbal and non-verbal sounds could be at least partly explained by confusions in the temporal processing of successive acoustic events [for review, see Phillips and Farmer (Phillips and Farmer, 1990)]. These clinical data agree with our results in demonstrating that non-verbal sounds are also temporally coded in the left auditory cortex.
As already mentioned, because speech signals have a complex time course, strong auditory temporal processing is necessary to process speech information. The level of temporal representation used by the nervous system in the processing of speech signals remains an open question. The discrimination of steady-state vowels is maintained after bilateral lesions of the primary auditory cortex (Auerbach et al., 1982; Von Stockert, 1982; Mendez and Geehan, 1988). The waveforms of these phonemes are quite different from those of stop consonants: vowels have no time structure above the level of the periodicities in their waveforms, and the fact that they are still perceived after bilateral lesions presumably reflects the activation of other cortical auditory fields unaffected by the pathology. In contrast, stop consonants contain many transient acoustic events that are closely spaced in time (on the order of tens of milliseconds). Perhaps the most plausible current theory of speech recognition is the acoustic one: each phoneme has a unique acoustic signature (short-term spectrum and temporal variations) (Blumstein and Stevens, 1979, 1980), and the perceptual mechanisms use a sliding temporal window to sample the stream of sound, the resulting samples then being matched against internal templates for phonetic tagging (Moore et al., 1988; Viemeister and Wakefield, 1991; Phillips, 1993b).
In conclusion, this enhanced sensitivity of the left human auditory cortex to the temporal acoustic characteristics of sounds probably reflects information processing necessary for subsequent speech processing. The task of the left primary and secondary auditory cortex would be to provide the cerebral language processor with a representation of the acoustic signal of sufficient spectral and temporal precision to permit phonetic tagging. This could explain why the perceptual elaboration of sounds on the relevant time scale is so dependent on the auditory cortex, as language disorders show.
A remaining question is why auditory temporal processing in man should be more developed in the left auditory cortex, whereas this mechanism is bilateral in the monkey. Given that the left-hemisphere processing of syllables is already present in 3-month-old infants (Dehaene-Lambertz and Dehaene, 1994), further research is needed to reveal to what extent the innate brain mechanisms of speech perception can be modulated by learning. This will provide important knowledge about cortical plasticity.
Notes
Address correspondence to C. Liégeois-Chauvel, Laboratoire de Neurophysiologie et Neuropsychologie, INSERM CJF 9706, Université de la Méditerranée, 27 Bd Jean Moulin, 13385 Marseille Cedex 5, France. Email: liegeois@arles.timone.univ-mrs.fr.
References
Auerbach SH, Allard T, Naeser M, Alexander MP, Albert ML (1982) Pure word deafness. Analysis of a case with bilateral lesions and a defect at the prephonemic level. Brain 105:271300.[Abstract]
Bancaud J, Talairach J, Bonis A, Schaub C, Szikla G, More P, Bordas-Ferrer M (1965) La stéréoélectroencephalographie dans l'epilepsie: informations neurophysiopathologiques apportées par l'investigation fonctionnelle stéréotaxique. Paris: Masson.
Bancaud J, Chauvel P (1987) Surgical treament of the epilepsies (Engel J Jr, ed.), pp. 289296. New York: Raven Press.
Bancaud J, Talairach J (1992) Clinical semiology of frontal lobe seizures. In: Advances in neurology, Vol. 57: Frontal lobe seizures and epilepsies (Chauvel P et al., eds), pp. 359. New York: Raven Press.
Basso A, Casati G, Vignolo LA (1977) Phonemic identification defect in aphasia. Cortex 13:8595.[ISI][Medline]
Blackburn CC, Sachs MB (1990) The representation of the steady-state vowel sound /e/ in the discharge patterns of cat anteroventral cochlear nucleus neurons. J. Neurophysiol 63:11911212.
Blumstein SE, Baker E, Goodglass H (1977) Phonological factors in auditory comprehension in aphasia. Neuropsychologia 15:1930.[ISI][Medline]
Blumstein SE, Stevens KN (1979) Acoustic invariance in speech production: evidence from measurements of the spectral characteristics of stop consonants. J Acoust Soc Am 66:10011017.[ISI][Medline]
Blumstein SE, Stevens KN (1980) Perceptual invariance and onset spectra for stops consonants in different vowels environments. J Acoust Soc Am 67:648662.[ISI][Medline]
Braak H (1978) Architectonics of the human telencephalic cortex. Berlin: Springer Verlag.
Brugge JF, Reale R.A (1985) Auditory cortex. In: Cerebral cortex, Vol. 3: Association and auditory cortices (Peters A, Jones EG, eds), pp. 229271. New York: Plenum Press.
Celesia GG (1976) Organization of auditory cortical areas in man. Brain 99:403414.[ISI][Medline]
Chauvel P, Vignal JP, Biraben A, Badier JM, Scarabin JM (1996) Stereoelectro-encephalography. In: Multimethodological assessment of the localization-related epilepsy (Pawlik G, Stefan H, eds), pp. 135163.
Creutzfeldt O, Hellweg FC, Schreiner C (1980) Thalamocortical transformation of responses to complex auditory stimuli. Exp Brain Res 39:87104.[ISI][Medline]
Dehaene-Lambertz G and Dehaene S (1994) Speed and cerebral correlates of syllable discrimination in infants. Nature 370:292295.[ISI][Medline]
Delgutte B and Kiang NYS (1984) Speech coding in the auditory nerve. I. Vowel like sounds. J Acoust Soc Am 75:866878.[ISI][Medline]
DeCharms RC, Merznich MM (1996) Primary cortical representation of sound by the coordination of action potential timing. Nature 381:610613.[ISI][Medline]
Divenyi PL, Efron R (1979) Spectral vs temporal features in dichotic listening. Brain Lang 7:375386.[ISI][Medline]
Dwyer J, Blumstein S, Ryalls J (1982) The role of duration and rapid temporal processing on the lateral perception of consonants and vowels. Brain Lang 17:272286.[ISI][Medline]
Efron R (1963) Temporal perception, aphasia and déjà vu. Brain 86:403424.[ISI]
Eggermont JJ (1991) Rate and synchronization measures of periodicity coding in cat primary auditory cortex. Hearing Res 56:153167.[ISI][Medline]
Eggermong JJ (1994) Neural interaction in cat primary auditory cortex. II. Effects of sound stimulation. J Neurophysiol 71:246270.
Fant G (1973). Speech sounds and features. Cambridge, MA: MIT Press.
Fitch RH, Brown CP, O'Connor K, Tallal P (1993) Functional lateraliztion for auditory temporal processing in male and female rats. Behav Neurosci 107:844850.[ISI][Medline]
Fukisaki H, Kawashima T (1971) A model for the mechanisms of speech perception: quantitative analysis of categorical effects in discrimination. Annu Rep Engng Res Inst 30:5968.
Galaburda AM, Sanides F (1980) Cytoarchitectonic organization of the human auditory cortex. J Comp Neurol 190:597610.[ISI][Medline]
Galambos R, Makeig S, Talmachoff PJ (1981) A 40-Hz auditory potential recorded from the human scalp. Proc Natl Acad Sci USA 4:26432647.
Glass I, Wollberg Z (1983) Responses of cells in the auditory cortex of awake squirrel monkeys to normal and reversed species-specific vocalizations. Hear Res 9:2733.[ISI][Medline]
Itoh M, Tatsumi IF, Sasanuma S, Fukusako Y (1986) Voice onset time perception in Japanese aphasic patients. Brain Lang 28:7185.[ISI][Medline]
Kaukoranta E, Hari R, Lounasmaa OV (1987) Responses of the human auditory cortex to vowel onset after fricative consonants. Exp Brain Res 69:1923.[ISI][Medline]
Kuhl PK, Padden DM (1983) Enhanced discriminability at phonetic boundaries for the place feature in macaques. J Acoust Soc Am 73:10031010.[ISI][Medline]
Kulynych JJ, Vladar K, Jones DW, Weinberger DR (1994) Gender differences in the normal lateralization of the supratemporal cortex: MRI surface-rendering morphometry of Heschl's gyrus and the planum temporale. Cereb Cortex 4:107118.[Abstract]
Lane H (1965) Motor theory of speech perception: a critical review. Psychol Rev 72:275–309.
Leek MR, Brandt JF (1983) Lateralization of rapid auditory sequences. Neuropsychologia 21:67–77.
Liberman AM, Delattre PC, Cooper FS (1952) The role of selected stimulus variables in the perception of the unvoiced stop consonants. Am J Psychol 65:497–516.
Liberman AM (1996) Speech: a special code. Cambridge, MA: MIT Press.
Liegeois-Chauvel C, Musolino A, Chauvel P (1991) Localization of the primary auditory area in man. Brain 114:139–151.
Liegeois-Chauvel C, Musolino A, Badier JM, Marquis P, Chauvel P (1994) Auditory evoked potentials recorded from cortical areas in man: evaluation and topography of middle and long latency components. Electroenceph Clin Neurophysiol 92:204–214.
Lisker L, Abramson AS (1964) A cross-language study of voicing in initial stops: acoustical measurements. Word 20:384–422.
McGee T, Kraus N, King C, Nicol T, Carrell TD (1996) Acoustic elements of speechlike stimuli are reflected in surface recorded responses over the guinea pig temporal lobe. J Acoust Soc Am 99:3606–3614.
Macmillan NA (1987) A psychophysical approach to processing modes. In: Categorical perception (Harnad S, ed.), pp. 53–85. Cambridge: Cambridge University Press.
Makela JP, Hari R (1987) Evidence for cortical origin of the 40 Hz auditory evoked response in man. Electroenceph Clin Neurophysiol 66:539–546.
May B, Moody DB, Stebbins WC (1989) Categorical perception of conspecific communication sounds by Japanese macaques. J Acoust Soc Am 85:837–847.
Mendez MF, Geehan GR (1988) Cortical auditory disorders: clinical and psychoacoustic features. J Neurol Neurosurg Psychiat 51:1–9.
Merzenich MM, Jenkins WM, Johnston P, Schreiner C, Miller ST, Tallal P (1996) Temporal processing deficits of language-learning impaired children ameliorated by training. Science 271:77–81.
Miceli G, Caltagirone C, Gainotti G, Payer-Rigo P (1978) Discrimination of voice versus place contrasts in aphasia. Brain Lang 6:47–51.
Miceli G (1982) The processing of speech sounds in a patient with cortical auditory disorder. Neuropsychologia 20:5–20.
Miller JL (1977) Nonindependence of feature processing in initial consonants. J Speech Hear Res 20:519–528.
Molfese DL (1978) Left and right hemisphere involvement in speech perception: electrophysiological correlates. Percept Psychophys 23:237–243.
Moore BCJ, Glasberg BR, Plack CJ, Biswas AK (1988) The shape of the ear's temporal window. J Acoust Soc Am 83:1102–1116.
Muller-Preuss P (1986) On the mechanisms of call coding through auditory neurons in the squirrel monkey. Eur Arch Psychiat Neurol Sci 236:50–55.
Oscar-Berman M, Zurif EB, Blumstein S (1975) Effects of unilateral brain damage on the processing of speech sounds. Brain Lang 2:345–355.
Pantev C (1995) Evoked and induced gamma-band activity of the human cortex. Brain Topogr 7:321–330.
Pastore RE, Ahroon WA, Baffuto KJ, Friedman C, Puleo JS, Fink EA (1977) Common-factor model of categorical perception. J Exp Psychol Hum Percept Perform 3:686–696.
Penhune VB, Zatorre RJ, MacDonald JD, Evans AC (1996) Interhemispheric anatomical differences in human primary auditory cortex: probabilistic mapping and volume measurement from magnetic resonance scans. Cereb Cortex 6:661–672.
Petersen M, Beecher M, Zoloth S, Moody DB, Stebbins WC (1978) Neural lateralization of species-specific vocalizations by Japanese macaques. Science 202:324–327.
Petersen MR, Beecher MD, Zoloth SR, Green S, Marler PR, Moody DB, Stebbins WC (1984) Neural lateralization of vocalizations by Japanese monkeys: communicative significance is more important than acoustic structure. Behav Neurosci 98:779–790.
Phillips DP, Farmer ME (1990) Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex. Behav Brain Res 40:85–94.
Phillips DP (1993a) Neural representation of stimulus times in the primary auditory cortex. Ann NY Acad Sci 682:104–118.
Phillips DP (1993b) Representation of acoustic events in the primary auditory cortex. J Exp Psychol Hum Percept Perform 19:203–216.
Pisoni DB, Tash J (1974) Reaction times to comparisons within and across phonetic categories. Percept Psychophys 15:285–290.
Rademacher J, Caviness VS, Steinmetz H, Galaburda AM (1993) Topographical variation of the human primary cortices: implications for neuroimaging, brain mapping and neurobiology. Cereb Cortex 3:313–329.
Robin DA, Tranel D, Damasio H (1990) Auditory perception of temporal and spectral events in patients with focal left and right cerebral lesions. Brain Lang 39:539–555.
Saffran EM, Marin OSM, Yeni-Komshian GH (1976) An analysis of speech perception in word deafness. Brain Lang 3:209–228.
Schreiner CE, Urbas JV (1988) Representation of amplitude modulation in the auditory cortex of the cat. II. Comparison between cortical fields. Hear Res 32:49–64.
Schwartz J, Tallal P (1980) Rate of acoustic change may underlie hemispheric specialization for speech perception. Science 207:1380–1381.
Schwartz D (1998) Localization of the intracerebral generators of MEG and EEG activity: evaluation of spatial and temporal accuracy [in French]. Ph.D. thesis, University of Rennes 1.
Seldon HL (1981) Structure of human auditory cortex. I. Cytoarchitectonics and dendritic distribution. Brain Res 229:277–294.
Steinmetz H, Rademacher J, Huang Y, Zilles K, Thron A, Freund HJ (1989) Cerebral asymmetry: MR planimetry of the human planum temporale. J Comput Assist Tomogr 13:996–1005.
Steinschneider M, Arezzo JC, Vaughan HG (1980) Phase-locked cortical responses to a human speech sound and low-frequency tones in the monkey. Brain Res 198:75–84.
Steinschneider M, Arezzo JC, Vaughan HG (1982) Speech evoked activity in the auditory radiations and cortex of the awake monkey. Brain Res 252:353–365.
Steinschneider M, Tenke CE, Schroeder CE, Javitt DC, Simpson GV, Arezzo JC, Vaughan HG (1992) Cellular generators of the cortical auditory evoked potential initial component. Electroenceph Clin Neurophysiol 84:196–200.
Steinschneider M, Schroeder CE, Arezzo JC, Vaughan HG (1994) Speech-evoked activity in primary auditory cortex: effects of voice onset time. Electroenceph Clin Neurophysiol 92:30–43.
Steinschneider M, Schroeder CE, Arezzo JC, Vaughan HG (1995) Physiologic correlates of the voice onset time boundary in primary auditory cortex (A1) of the awake monkey: temporal response patterns. Brain Lang 48:326–340.
Stevens KN (1980) Acoustic correlates of some phonetic categories. J Acoust Soc Am 68:836–842.
Stevens KN (1981) Constraints imposed by the auditory system on the properties used to classify speech sounds: data from phonology, acoustics and psycho-acoustics. In: The cognitive representation of speech (Myers TF et al., eds). Amsterdam: North Holland.
Studdert-Kennedy M (1976) Speech perception. In: Contemporary issues in experimental phonetics (Lass NJ, ed.), pp. 243–293. New York: Academic Press.
Szikla G, Bouvier G, Hori T, Petrov V (1977) Atlas of vascular patterns and stereotactic cortical localization. Berlin: Springer-Verlag.
Talairach J, Tournoux P (1988) Co-planar stereotaxic atlas of the human brain. 3-Dimensional proportional system: an approach to cerebral imaging. Stuttgart: Georg Thieme Verlag.
Tallal P, Piercy M (1973) Defects of non-verbal auditory perception in children with developmental aphasia. Nature 241:468–469.
Tallal P, Newcombe F (1978) Impairment of auditory perception and language comprehension in dysphasia. Brain Lang 5:13–34.
Tallal P, Miller ST, Bedi G, Byma G, Wang X, Nagarajan SS, Schreiner C, Jenkins WM, Merzenich MM (1996) Language comprehension in language-learning impaired children improved with acoustically modified speech. Science 271:81–84.
Viemeister NF, Wakefield GH (1991) Temporal integration and multiple looks. J Acoust Soc Am 90:858–865.
Von Stockert TR (1982) On the structure of word deafness and mechanisms underlying the fluctuation of disturbances of higher cortical functions. Brain Lang 16:133–146.
Yakub BA, Gascon GG, Al-Nosha M, Whitaker H (1988) Pure word deafness (acquired verbal auditory agnosia) in an Arabic-speaking patient. Brain 111:457–466.