Blog Archives

Auditory cortical N100 in pre- and post-synaptic auditory neuropathy to frequency or intensity changes of continuous tones

Abnormalities of the auditory cortical N100 in auditory neuropathy (AN) reflect disorders of both temporal processing (low-frequency changes) and neural adaptation (high-frequency changes). N100 latency to the low-frequency change provides an objective measure of the degree of impaired speech perception in AN.

from Clinical Neurophysiology

Perception of temporally modified speech in auditory neuropathy

Conclusions: A rehabilitation program for AN should consider temporal modification of speech, training for auditory temporal processing and the use of devices with innovative signal processing schemes. Verbal modifications as well as visual imaging appear to be promising compensatory strategies for remediating the affected phonological processing skills.

from the International Journal of Audiology

Temporal processing ability is related to ear-asymmetry for detecting time cues in sound: A mismatch negativity (MMN) study

Temporal and spectral sound information is processed asymmetrically in the brain, with the left hemisphere showing an advantage for processing the former and the right hemisphere for the latter. Using monaural sound presentation, we demonstrate a context- and ability-dependent ear asymmetry in brain measures of temporal change detection. Our measure of temporal processing ability was a gap-detection task quantifying the smallest silent gap in a sound that participants could reliably detect. Our brain measure was the size of the mismatch negativity (MMN) auditory event-related potential elicited by infrequently presented gap sounds. The MMN indexes discrimination ability and is automatically generated when the brain detects a change in a repeating pattern of sound. MMN was elicited in unattended sequences of infrequent gap sounds presented among regular no-gap sounds. In Study 1, participants with low gap-detection thresholds (good ability) produced a significantly larger MMN to gap sounds when sequences were presented monaurally to the right ear than to the left ear. In Study 2, we replicated the right-ear advantage for MMN in silence in good temporal processors, but also showed that this reversed to a significant left-ear advantage for MMN when the same sounds were presented against a background of constant low-level noise. In both studies, poor discriminators showed no ear advantage and, in Study 2, exhibited no differential sensitivity of the ears to noise. We conclude that these data reveal a context- and ability-dependent asymmetry in the processing of temporal information in non-speech sounds.
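Gap-detection thresholds like the one above are typically estimated with an adaptive staircase. Here is a minimal sketch assuming a 2-down/1-up rule (the study's exact procedure isn't given here); `detects` stands in for a single listening trial, and the simulated listener at the bottom is purely hypothetical:

```python
def staircase_threshold(detects, start_ms=20.0, step_ms=2.0, n_reversals=8):
    """2-down / 1-up adaptive staircase; converges near the
    70.7%-correct point of the psychometric function."""
    gap, direction, streak, reversals = start_ms, -1, 0, []
    while len(reversals) < n_reversals:
        if detects(gap):                  # one trial at the current gap
            streak += 1
            if streak == 2:               # two in a row -> make it harder
                streak = 0
                if direction == +1:       # turning point going up -> reversal
                    reversals.append(gap)
                direction = -1
                gap = max(step_ms, gap - step_ms)
        else:                             # any miss -> make it easier
            streak = 0
            if direction == -1:           # turning point going down -> reversal
                reversals.append(gap)
            direction = +1
            gap += step_ms
    return sum(reversals[-6:]) / 6        # mean of the last six reversals

# Hypothetical, deterministic listener with a true threshold of 6 ms
listener = lambda gap_ms: gap_ms >= 6.0
```

With a deterministic listener the track simply oscillates around the true threshold; real procedures average reversals precisely because responses near threshold are probabilistic.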

from Neuropsychologia

Effects of Age on the Temporal Organization of Working Memory in Deaf Signers

Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known of how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.

from Aging, Neuropsychology, and Cognition

Auditory Perception in Individuals with Friedreich’s Ataxia

Introduction: Friedreich’s ataxia (FRDA) is an inherited ataxia with a range of progressive features including axonal degeneration of sensory nerves. The aim of this study was to investigate auditory perception in affected individuals. Methods: Fourteen subjects with genetically defined FRDA participated. Two control groups, one consisting of healthy, normally hearing individuals and another comprised of subjects with sensorineural hearing loss, were also assessed. Auditory processing was evaluated using structured tasks designed to reveal the listeners’ ability to perceive temporal and spectral cues. Findings were then correlated with open-set speech understanding. Results: Nine of 14 individuals with FRDA showed evidence of auditory processing disorder. Gap and amplitude modulation detection levels in these subjects were significantly elevated, indicating impaired encoding of rapid signal changes. Electrophysiologic findings (auditory brainstem response, ABR) also reflected disrupted neural activity. Speech understanding was significantly affected in these listeners and the degree of disruption was related to temporal processing ability. Speech analyses indicated that timing cues (notably consonant voice onset time and vowel duration) were most affected. Conclusion: The results suggest that auditory pathway abnormality is a relatively common consequence of FRDA. Regular auditory evaluation should therefore be part of the management regime for all affected individuals. This assessment should include both ABR testing, which can provide insights into the degree to which auditory neural activity is disrupted, and some functional measure of hearing capacity such as speech perception assessment, which can quantify the disorder and provide a basis for intervention.

from Audiology & Neuro-Otology

Measures of Hearing Threshold and Temporal Processing across the Adult Lifespan

Psychophysical data on hearing sensitivity and various measures of supra-threshold auditory temporal processing are presented for large groups of young (18-35 y), middle-aged (40-55 y) and older (60-89 y) adults. Hearing thresholds were measured at 500, 1414 and 4000 Hz. Measures of temporal processing included gap-detection thresholds for bands of noise centered at 1000 and 3500 Hz, stimulus onset asynchronies for monaural and dichotic temporal-order identification for brief vowels, and stimulus onset/offset asynchronies for the monaural temporal masking of vowel identification. For all temporal-processing measures, the impact of high-frequency hearing loss in older adults was minimized by a combination of low-pass filtering the stimuli and use of high presentation levels. The performance of the older adults was worse than that of the young adults on all measures except gap-detection threshold at 1000 Hz. Middle-aged adults performed significantly worse than the young adults on measures of threshold sensitivity and three of the four measures of temporal-order identification, but not for any of the measures of temporal masking. Individual differences are also examined among a group of 124 older adults. Cognition and age were found to be significant predictors, although only 10-27% of the variance could be accounted for by these predictors.

from Hearing Research

Spectral vs. temporal auditory processing in specific language impairment: A developmental ERP study

Pre-linguistic sensory deficits, especially in “temporal” processing, have been implicated in developmental language impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral processing, and their interaction, in typical children and those with LI (7–17 years; n = 25 per group). The stimuli were three CV syllables and three consonant-to-vowel transitions (spectral sweeps) isolated from the syllables. Each of these six stimuli appeared in three durations (transitions: 20, 50, and 80 ms; syllables: 120, 150, and 180 ms). Behaviorally, the group with LIs showed inferior syllable discrimination both with long and short stimuli. In ERPs, trends were observed in the group with LI for diminished long-latency negativities (the N2–N4 peaks) and a developmentally transient enhancement of the P2 peak. Some, but not all, ERP indices of spectral processing also showed trends to be diminished in the group with LI specifically in responses to syllables. Importantly, measures of the transition N2–N4 peaks correlated with expressive language abilities in the LI children. None of the group differences depended on stimulus duration. Therefore, sound brevity did not account for the diminished spectral resolution in these LI children. Rather, the results suggest a deficit in acoustic feature integration at higher levels of auditory sensory processing. The observed maturational trajectory suggests a non-linear developmental deviance rather than simple delay.

from Brain and Language

The Effects of Stimulus Modality and Frequency of Stimulus Presentation on Cross-modal Distraction

Selective attention produces enhanced activity (attention-related modulations [ARMs]) in cortical regions corresponding to the attended modality and suppressed activity in cortical regions corresponding to the ignored modality. However, effects of behavioral context (e.g., temporal vs. spatial tasks) and basic stimulus properties (i.e., stimulus frequency) on ARMs are not fully understood. The current study used functional magnetic resonance imaging to investigate selectively attending and responding to either a visual or auditory metronome in the presence of asynchronous cross-modal distractors of 3 different frequencies (0.5, 1, and 2 Hz). Attending to auditory information while ignoring visual distractors was generally more efficient (i.e., required coordination of a smaller network) and less effortful (i.e., decreased interference and presence of ARMs) than attending to visual information while ignoring auditory distractors. However, these effects were modulated by stimulus frequency, as attempting to ignore auditory information resulted in the obligatory recruitment of auditory cortical areas during infrequent (0.5 Hz) stimulation. Robust ARMs were observed in both visual and auditory cortical areas at higher frequencies (2 Hz), indicating that participants effectively allocated attention to more rapidly presented targets. In summary, results provide neuroanatomical correlates for the dominance of the auditory modality in behavioral contexts that are highly dependent on temporal processing.

from Cerebral Cortex

The Role of Temporal Fine Structure Processing in Pitch Perception, Masking, and Speech Perception for Normal-Hearing and Hearing-Impaired People

Complex broadband sounds are decomposed by the auditory filters into a series of relatively narrowband signals, each of which can be considered as a slowly varying envelope (E) superimposed on a more rapid temporal fine structure (TFS). Both E and TFS information are represented in the timing of neural discharges, although TFS information as defined here depends on phase locking to individual cycles of the stimulus waveform. This paper reviews the role played by TFS in masking, pitch perception, and speech perception and concludes that cues derived from TFS play an important role for all three. TFS may be especially important for the ability to “listen in the dips” of fluctuating background sounds when detecting nonspeech and speech signals. Evidence is reviewed suggesting that cochlear hearing loss reduces the ability to use TFS cues. The perceptual consequences of this, and reasons why it may happen, are discussed.

from JARO — Journal of the Association for Research in Otolaryngology
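The E/TFS split described in this abstract is commonly computed from the analytic signal. A minimal numpy sketch (not the authors' implementation), using an FFT-based Hilbert transform on a synthetic amplitude-modulated tone:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero negative frequencies,
    double positive ones (assumes even-length input)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0   # DC and Nyquist bins unchanged
    h[1:n // 2] = 2.0        # double positive frequencies
    return np.fft.ifft(spec * h)

fs = 16000
n = 1600                          # 0.1 s of signal, even length
t = np.arange(n) / fs
# 1 kHz carrier (fine structure) shaped by a slow 10 Hz envelope
env = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)
x = env * np.sin(2 * np.pi * 1000 * t)

z = analytic_signal(x)
envelope = np.abs(z)              # slowly varying E
tfs = np.cos(np.angle(z))         # unit-amplitude temporal fine structure
```

Multiplying `envelope` by `tfs` reconstructs the waveform, which is one way to sanity-check the split; vocoder-style experiments on E vs. TFS cues work by swapping or degrading one of the two components.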
