Blog Archives

Functional ear (a)symmetry in brainstem neural activity relevant to encoding of voice pitch: A precursor for hemispheric specialization?

Pitch processing is lateralized to the right hemisphere; linguistic pitch is further mediated by left cortical areas. This experiment investigates whether ear asymmetries vary in brainstem representation of pitch depending on linguistic status. Brainstem frequency-following responses (FFRs) were elicited by monaural stimulation of the left and right ear of 15 native speakers of Mandarin Chinese using two synthetic speech stimuli that differ in linguistic status of tone. One represented a native lexical tone (Tone 2: T2); the other, T2′, a nonnative variant in which the pitch contour was a mirror image of T2 with the same starting and ending frequencies. Two 40-ms portions of the f0 contours were selected in order to compare two regions (R1, early; R2, late) differing in pitch acceleration rate and perceptual saliency. In R2, linguistic status effects revealed that T2 exhibited a larger degree of FFR rightward ear asymmetry, as reflected in f0 amplitude, relative to T2′. Relative to midline (ear asymmetry = 0), the only ear asymmetry reaching significance was that favoring left-ear stimulation elicited by T2′. Considering left- and right-ear stimulation separately, FFRs elicited by T2 were larger than those elicited by T2′ in the right ear only. Within T2′, FFRs elicited by the earlier region were larger than those elicited by the later region in both ears. Within T2, no significant differences in FFRs were observed between regions in either ear. Collectively, these findings support the idea that the origins of cortical processing preferences for perceptually salient portions of pitch are rooted in early, preattentive stages of processing in the brainstem.

from Brain and Language
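
The f0 amplitude measure mentioned in this abstract is essentially the spectral magnitude of the FFR at the fundamental within a short analysis window. Below is a minimal sketch of such a measure; the 40-ms window length comes from the abstract, while the sampling rate, the Hann taper, and the test signal are illustrative assumptions rather than the authors' actual analysis parameters.

```python
import numpy as np

def f0_amplitude(ffr_window, fs, f0):
    """FFT magnitude of a short FFR segment at the bin nearest the expected f0.
    A generic sketch of an f0-amplitude measure, not the authors' exact analysis."""
    x = (ffr_window - np.mean(ffr_window)) * np.hanning(len(ffr_window))
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(spec[np.argmin(np.abs(freqs - f0))])

if __name__ == "__main__":
    fs = 10000.0                                   # assumed sampling rate
    t = np.arange(0, 0.04, 1.0 / fs)               # one 40-ms region (R1 or R2)
    ffr = np.sin(2 * np.pi * 110.0 * t) + 0.5 * np.random.randn(t.size)
    print(f0_amplitude(ffr, fs, f0=110.0))
```

Comparing this kind of value across ears and stimuli for each region is the sort of computation an ear-asymmetry index in f0 amplitude would be built from.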

Pitch Characteristics of Homosexual Males

Results do not confirm the stereotype that gay male speech mirrors the patterns of women’s speech with respect to pitch characteristics. It would seem that the pitch patterns of gay male speakers constitute an example of sociophonetic variation.

from the Journal of Voice

The Effects of Humming and Pitch on Craniofacial and Craniocervical Morphology Measured Using MRI

Traditional voice research occurs within a phonetic context. Accordingly, pitch-related contributions are inseparable from those due to articulator input. In humming, articulator input is negligible. Using magnetic resonance imaging, we test the hypothesis that voice production is accompanied by pitch-related adjustments unrelated to articulatory or postural input.

from the Journal of Voice

Auditory cortex tracks the temporal regularity of sustained noisy sounds

Neuroimaging studies have revealed dramatic asymmetries between the responses to temporally regular and irregular sounds in the antero-lateral part of Heschl’s gyrus. For example, the magnetoencephalography (MEG) study of Krumbholz et al. [Cerebr. Cortex 13, 765-772 (2003)] showed that the transition from a noise to a similar noise with sufficient temporal regularity to provoke a pitch evoked a pronounced temporal-regularity onset response (TRon response), whereas a comparable transition in the reverse direction revealed essentially no temporal-regularity offset response (TRoff response). The current paper presents a follow-up study in which the asymmetry is examined with much greater power, and the results suggest an intriguing reinterpretation of the onset/offset asymmetry. The TR-related activity in auditory cortex appears to be composed of a transient (TRon) and a TR-related sustained response (TRsus), with a highly variable TRon/TRsus amplitude ratio. The TRoff response is generally dominated by the break-down of the TRsus activity, which occurs so rapidly as to preclude the involvement of higher-level cortical processing. The time-course of the TR-related activity suggests that TR processing might be involved in monitoring the environment and alerting the brain to the onset and offset of behaviourally relevant, animate sources.

from Hearing Research

Pitch Comparisons between Electrical Stimulation of a Cochlear Implant and Acoustic Stimuli Presented to a Normal-hearing Contralateral Ear

Four cochlear implant users, having normal hearing in the unimplanted ear, compared the pitches of electrical and acoustic stimuli presented to the two ears. Comparisons were between 1,031-pps pulse trains and pure tones, or between 12- and 25-pps electric pulse trains and bandpass-filtered acoustic pulse trains of the same rate. Three methods (pitch adjustment, constant stimuli, and interleaved adaptive procedures) were used. For all methods, we showed that the results can be strongly influenced by non-sensory biases arising from the range of acoustic stimuli presented, and proposed a series of checks that should be made to alert the experimenter to those biases. We then showed that the results of comparisons that survived these checks do not deviate consistently from the predictions of a widely used cochlear frequency-to-place formula or of a computational cochlear model. We also demonstrate that substantial range effects occur with other widely used experimental methods, even for normal-hearing listeners.

from JARO — Journal of the Association for Research in Otolaryngology
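
The abstract does not name the frequency-to-place formula; a commonly used candidate is Greenwood's (1990) function, sketched below under the assumption that something of this form is meant. The constants are the standard human values; the formula the authors actually used may differ.

```python
import numpy as np

# Greenwood (1990) frequency-to-place function for the human cochlea:
# F = A * (10**(a * x) - k), with x the fractional distance from the apex (0 to 1).
A, a, k = 165.4, 2.1, 0.88

def place_to_frequency(x):
    """Characteristic frequency (Hz) at fractional distance x from the apex."""
    return A * (10.0 ** (a * x) - k)

def frequency_to_place(f):
    """Fractional distance from the apex whose characteristic frequency is f (Hz)."""
    return np.log10(f / A + k) / a

if __name__ == "__main__":
    for f in (250.0, 1000.0, 4000.0):
        x = frequency_to_place(f)
        print(f"{f:6.0f} Hz -> x = {x:.3f} (round trip: {place_to_frequency(x):.1f} Hz)")
```

A prediction of the kind tested in the paper compares the cochlear place of the stimulated electrode, converted to frequency with such a map, against the acoustic frequency judged equal in pitch.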

Language-dependent pitch encoding advantage in the brainstem is not limited to acceleration rates that occur in natural speech

Experience-dependent enhancement of neural encoding of pitch in the auditory brainstem has been observed for only specific portions of native pitch contours exhibiting high rates of pitch acceleration, irrespective of speech or nonspeech contexts. This experiment allows us to determine whether this language-dependent advantage transfers to acceleration rates that extend beyond the pitch range of natural speech. Brainstem frequency-following responses (FFRs) were recorded from Chinese and English participants in response to four 250-ms dynamic click-train stimuli with different rates of pitch acceleration. The maximum pitch acceleration rates in a given stimulus ranged from low (0.3 Hz/ms; Mandarin Tone 2) to high (2.7 Hz/ms; 2 octaves). Pitch strength measurements were computed from the FFRs using autocorrelation algorithms with an analysis window centered at the point of maximum pitch acceleration in each stimulus. Between-group comparisons of pitch strength revealed that the Chinese group exhibited more robust pitch representation than the English group across all four acceleration rates. Regardless of language group, pitch strength was greater in response to acceleration rates within or proximal to natural speech relative to those beyond its range. Though both groups showed decreasing pitch strength with increasing acceleration rates, pitch representations of the Chinese group were more resistant to degradation. FFR spectral data were complementary across acceleration rates. These findings demonstrate that perceptually salient pitch cues associated with lexical tone influence brainstem pitch extraction not only in the speech domain, but also in auditory signals that clearly fall outside the range of dynamic pitch that a native listener is exposed to.

from Brain and Language
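
The pitch-strength measure described here is based on autocorrelation of the FFR in a window centered on the point of maximum pitch acceleration. A minimal sketch of one common autocorrelation-based pitch-strength metric follows, taking the height of the normalized autocorrelation peak within a plausible f0 range; the window length, f0 limits, sampling rate, and test signals are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def pitch_strength(ffr_segment, fs, f0_min=80.0, f0_max=400.0):
    """Peak of the normalized autocorrelation within the lag range implied by
    [f0_min, f0_max].  Higher values indicate stronger periodicity and hence a
    more salient pitch.  A generic sketch, not the authors' exact algorithm."""
    x = ffr_segment - np.mean(ffr_segment)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags only
    acf = acf / acf[0]                                   # normalize by lag-0 energy
    lo = int(fs / f0_max)                                # shortest candidate period
    hi = int(fs / f0_min)                                # longest candidate period
    return float(np.max(acf[lo:hi + 1]))

if __name__ == "__main__":
    fs = 20000.0                                         # assumed sampling rate
    t = np.arange(0, 0.04, 1.0 / fs)                     # a 40-ms analysis window
    clean = np.sin(2 * np.pi * 120.0 * t)                # strongly periodic "response"
    noisy = clean + 3.0 * np.random.randn(t.size)        # weakly periodic "response"
    print(pitch_strength(clean, fs), pitch_strength(noisy, fs))
```

Group differences in a measure of this kind, computed at the point of maximum acceleration for each stimulus, are what the between-group comparisons above refer to.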

Cortical encoding of pitch: recent results and open questions

It is widely appreciated that the key predictor of the pitch of a sound is its periodicity. Neural structures that support pitch perception must therefore be able to reflect the repetition rate of a sound, but this alone is not sufficient. Since pitch is a psychoacoustic property, a putative cortical code for pitch must also be able to account for the relationship between the degree to which a sound is periodic (i.e., its temporal regularity) and the perceived pitch salience, as well as for the limits in our ability to detect pitch changes or to discriminate rising from falling pitch. Pitch codes must also be robust in the presence of changes in nuisance variables such as loudness or timbre. Here, we review a large body of work on the cortical basis of pitch perception, which illustrates that the distribution of cortical processes that give rise to pitch perception is likely to depend on both the acoustical features and the functional relevance of a sound. While previous studies have greatly advanced our understanding, we highlight several open questions regarding the neural basis of pitch perception. These questions can begin to be addressed through cooperative investigative efforts across species and experimental techniques and, critically, by examining the responses of single neurons in behaving animals.

from Hearing Research
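
The opening claim, that periodicity rather than spectral content is the key predictor of pitch, can be illustrated with a "missing fundamental" stimulus: a complex built only from harmonics 3 to 5 of 200 Hz still repeats every 5 ms and is heard with a 200-Hz pitch. The short sketch below verifies the periodicity; all parameter choices are illustrative.

```python
import numpy as np

fs = 16000.0
t = np.arange(0, 0.1, 1.0 / fs)

# "Missing fundamental" complex: harmonics 3, 4 and 5 of 200 Hz, no energy at 200 Hz.
x = sum(np.sin(2 * np.pi * 200.0 * h * t) for h in (3, 4, 5))

# The autocorrelation still peaks at a lag of 5 ms (1 / 200 Hz), the repetition
# period of the waveform and the usual predictor of its pitch.
xc = x - x.mean()
acf = np.correlate(xc, xc, mode="full")[x.size - 1:]
best_lag = 40 + int(np.argmax(acf[40:400]))       # skip the trivial peak at lag 0
print(f"dominant period = {1000.0 * best_lag / fs:.2f} ms")   # ~5.00 ms
```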

Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome

Persons with Autism Spectrum Disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone et al. (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson et al. (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and the enhanced perceptual functioning model of ASD (Mottron et al., 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli.

from Neuropsychologia

Active stream segregation specifically involves the left human auditory cortex

An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one “auditory stream” while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitched stream.

Compared to the control condition, in which only one non-separable stream was presented, the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl’s gyrus, were involved only in stream segregation based on pitch.

from Hearing Research

Early integration of vowel and pitch processing: A mismatch negativity study

The underadditivity of the MMN responses suggests that vowel and pitch differences are processed by interacting neural networks.

from Clinical Neurophysiology

Auditory Attentional Control and Selection during Cocktail Party Listening

In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker’s pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments.

from Cerebral Cortex

The relationship between tinnitus pitch and the edge frequency of the audiogram in individuals with hearing impairment and tonal tinnitus

Some theories of mechanisms of tinnitus generation lead to the prediction that the pitch associated with tonal tinnitus should be related to the “edge frequency” of the audiogram, fe, the frequency at which hearing loss worsens relatively abruptly. However, previous studies testing this prediction have provided little or no support for it. Here, we reexamined the relationship between tinnitus pitch and fe, using 11 subjects selected to have mild-to-moderate hearing loss and tonal tinnitus. Subjects were asked to compare the pitch of their tinnitus to that of a sinusoidal tone whose frequency and level were adjusted by the experimenter. Prior to testing in the main experiment, subjects were given specific training to help them to avoid octave errors in their pitch matches. Pitch matches made after this training were generally lower in frequency than matches made before such training, often by one or two octaves. The matches following training were highly reproducible. A clear relationship was found between the values of fe and the mean pitch matches following training; the correlation was 0.94. Generally, the pitch matches were close in value to the values of fe.

from Hearing Research
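
The edge frequency fe is defined above only operationally, as the frequency at which hearing loss worsens relatively abruptly; the abstract does not say how it was located. Purely as an illustration of how one might pick such an edge from an audiogram, here is a hypothetical heuristic based on an arbitrary slope criterion; it is not the authors' procedure.

```python
import numpy as np

def edge_frequency(freqs_hz, thresholds_db, slope_criterion=20.0):
    """Return the first audiometric frequency after which thresholds worsen by
    more than slope_criterion dB per octave.  A hypothetical heuristic for
    illustration only; the study's actual method for determining fe is not
    described in the abstract."""
    freqs = np.asarray(freqs_hz, dtype=float)
    thr = np.asarray(thresholds_db, dtype=float)
    octaves = np.diff(np.log2(freqs))
    slopes = np.diff(thr) / octaves                  # dB of worsening per octave
    steep = np.nonzero(slopes > slope_criterion)[0]
    return freqs[steep[0]] if steep.size else None

if __name__ == "__main__":
    audiogram_freqs = [250, 500, 1000, 2000, 3000, 4000, 6000, 8000]
    audiogram_thr = [10, 10, 15, 20, 45, 60, 70, 75]   # dB HL, made-up audiogram
    print(edge_frequency(audiogram_freqs, audiogram_thr))  # -> 2000.0
```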

Brainstem pitch representation in native speakers of Mandarin is less susceptible to degradation of stimulus temporal regularity

It has been demonstrated that neural encoding of pitch in the auditory brainstem is shaped by long-term experience with language. To date, however, all stimuli have exhibited a high degree of pitch saliency. The experimental design herein permits us to determine whether experience-dependent pitch representation in the brainstem is less susceptible to progressive degradation of the temporal regularity of iterated rippled noise (IRN). Brainstem responses were recorded from Chinese and English participants in response to IRN homologues of Mandarin Tone 2 (T2IRN). Six different iteration steps were used to systematically vary the degree of temporal regularity in the fine structure of the IRN stimuli in order to produce a pitch salience continuum ranging from low to high. Pitch-tracking accuracy and pitch strength were computed from the brainstem responses using autocorrelation algorithms. Analysis of variance of brainstem responses to T2IRN revealed that pitch-tracking accuracy is higher in the native tone language group (Chinese) relative to the non-tone language group (English) except for the three lowest steps along the continuum, and moreover, that pitch strength is greater in the Chinese group even in severely degraded stimuli for two of the six 40-ms sections of T2IRN that exhibit rapid changes in pitch. For these same two sections, exponential time constants for the stimulus continuum revealed that pitch strength emerges 2-3 times faster in the tone language group than in the non-tone language group as a function of increasing pitch salience. Altogether, these findings suggest that experience-dependent brainstem mechanisms for pitch are especially sensitive to those dimensions of tonal contours that provide cues of high perceptual saliency in degraded as well as normal listening conditions.

from Brain Research
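
Iterated rippled noise of the kind described here is generated with a delay-and-add network, and the number of iterations controls the degree of temporal regularity and hence pitch salience. The sketch below implements a fixed-delay "add-same" version only; the study's stimuli used a delay that varies over time to follow the Mandarin Tone 2 f0 contour, and the gain, delay, and sampling rate here are illustrative assumptions.

```python
import numpy as np

def iterated_rippled_noise(n_samples, fs, delay_s, n_iter, gain=1.0, seed=0):
    """Fixed-delay 'add-same' IRN: repeatedly delay the running signal and add it
    back to itself.  More iterations -> more temporal regularity -> stronger pitch
    near 1/delay_s.  (A time-varying delay, as in the study, would make the pitch
    follow a contour; that is omitted here for brevity.)"""
    rng = np.random.default_rng(seed)
    d = int(round(delay_s * fs))
    y = rng.standard_normal(n_samples)
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), y[:-d]])
        y = y + gain * delayed
    return y / np.max(np.abs(y))                       # normalize to avoid clipping

if __name__ == "__main__":
    fs = 16000.0
    for n in (0, 2, 32):                               # from no pitch to a clear pitch
        x = iterated_rippled_noise(int(0.25 * fs), fs, delay_s=0.008, n_iter=n)
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        print(f"{n:2d} iterations: normalized ACF at the 8-ms delay = {acf[128] / acf[0]:.2f}")
```

The normalized autocorrelation value at the delay is one simple proxy for the pitch salience that the iteration-step continuum manipulates.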

Pitch, Harmonicity and Concurrent Sound Segregation: Psychoacoustical and Neurophysiological Findings

Harmonic complex tones are a particularly important class of sounds found in both speech and music. Although these sounds contain multiple frequency components, they are usually perceived as a coherent whole, with a pitch corresponding to the fundamental frequency (F0). However, when two or more harmonic sounds occur concurrently, e.g., at a cocktail party or in a symphony, the auditory system must separate harmonics and assign them to their respective F0s so that a coherent and veridical representation of the different sound sources is formed. Here we review both psychophysical and neurophysiological (single-unit and evoked-potential) findings, which provide some insight into how, and how well, the auditory system accomplishes this task. A survey of computational models designed to estimate multiple F0s and segregate concurrent sources is followed by a review of the empirical literature on the perception and neural coding of concurrent harmonic sounds, including vowels, as well as findings obtained using single complex tones with “mistuned” harmonics.

from Hearing Research
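
One family of the computational multiple-F0 models surveyed here works by estimate-and-cancel: find the dominant period, remove that periodicity with a cancellation (comb) filter, and estimate the next F0 from the residual. The sketch below is a bare-bones illustration of that idea, not any specific published model; the F0 search range and the test mixture are assumptions.

```python
import numpy as np

def estimate_period(x, fs, f0_min=80.0, f0_max=500.0):
    """Dominant period (in samples) taken from the autocorrelation peak."""
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    lo, hi = int(fs / f0_max), int(fs / f0_min)
    return lo + int(np.argmax(acf[lo:hi + 1]))

def estimate_two_f0s(x, fs):
    """Estimate-and-cancel sketch: estimate one F0, comb-filter it out, then
    estimate a second F0 from the residual."""
    p1 = estimate_period(x, fs)
    residual = x[p1:] - x[:-p1]          # cancellation filter: y(t) = x(t) - x(t - p1)
    p2 = estimate_period(residual, fs)
    return fs / p1, fs / p2

if __name__ == "__main__":
    fs = 16000.0
    t = np.arange(0, 0.2, 1.0 / fs)
    # Two concurrent harmonic complexes ("double vowel") with F0s of 125 and 210 Hz.
    mix = sum(np.sin(2 * np.pi * 125.0 * h * t) for h in range(1, 6)) \
        + sum(0.8 * np.sin(2 * np.pi * 210.0 * h * t) for h in range(1, 6))
    print(estimate_two_f0s(mix, fs))     # approximately (125.0, 210.5)
```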

The relationship between tinnitus pitch and the audiogram

We studied the relationship between tinnitus pitch and the audiogram in 195 patients. Patients with tone-like tinnitus reported a higher pitch (mean = 5385 Hz) compared to those with a noise-like quality (mean = 3266 Hz). Those with a flat audiogram were more likely to report a noise-like tinnitus, a unilateral tinnitus, and a pitch < 2000 Hz. The average duration of bilateral tinnitus (12 years) was longer than that of unilateral tinnitus (5 years). Older subjects reported less severe tinnitus handicap questionnaire scores. Patients with a notched audiogram often reported a pitch ≤8000 Hz. Subjects with normal hearing up to 8000 Hz tended to have a pitch ≥8000 Hz. We failed to find a relationship between the pitch and the edge of a high-frequency hearing loss. Some individuals did exhibit a pitch at the low-frequency edge of a hearing loss, but we could find no similar characteristics among these subjects. It is possible that a relationship between pitch and audiogram is present only in certain subgroups.

from the International Journal of Audiology