Vowel Identification by Listeners With Hearing Impairment in Response to Variation in Formant Frequencies
Conclusions: Both increased presentation level for NH listeners and the presence of hearing loss produced a significant change in vowel identification for this stimulus set. Major differences were observed between NH listeners and HI listeners in vowel category overlap and in the sharpness of boundaries between vowel tokens. It is likely that these findings reflect imprecise internal spectral representations due to reduced frequency selectivity.
Hearing impairment in MWS is variable and resembles previously described intra-cochlear conductive hearing impairment. This could be helpful in elucidating the pathogenesis of hearing impairment in Muckle-Wells syndrome. The other associated symptoms of MWS were mild and nonspecific in the present family. Therefore, even without any obvious syndromic features, MWS can be the cause of sensorineural hearing impairment, especially when combined with (mild) skin rash and musculoskeletal symptoms. An early diagnosis of Muckle-Wells syndrome is essential to prevent irreversible damage from amyloidosis. The effect of IL-1β inhibitors on hearing impairment is more controversial, but an early start of treatment seems to be essential. Therefore, our results are of importance in patient care and counselling.
from Hearing Research
The relationship between binaural benefit and difference in unilateral speech recognition performance for bilateral cochlear implant users
Conclusions: The results indicate that subjects with symmetrical speech recognition performance between implanted ears generally show a large binaural benefit.
from the International Journal of Audiology
This study examined fMRI activation when perceivers either passively observed or observed and imitated matched or mismatched audiovisual (“McGurk”) speech stimuli. Greater activation was observed in the inferior frontal gyrus (IFG) overall for imitation than for perception of audiovisual speech and for imitation of the McGurk-type mismatched stimuli than matched audiovisual stimuli. This unique activation in the IFG during imitation of incongruent audiovisual speech may reflect activation associated with direct matching of incongruent auditory and visual stimuli or conflict between category responses. This study provides novel data about the underlying neurobiology of imitation and integration of AV speech.
from the Journal of Neurolinguistics
The etiology of developmental dyslexia remains widely debated. An appealing theory postulates that the reading and spelling problems in individuals with dyslexia originate from reduced sensitivity to slow-rate dynamic auditory cues. This low-level auditory deficit is thought to provoke a cascade of effects, including inaccurate speech perception and eventually underspecified phoneme representations. The present study investigated sensitivity to frequency modulation and amplitude rise time, speech-in-noise perception, and phonological awareness in 11-year-old children with dyslexia and matched normal-reading control children. Group comparisons demonstrated that children with dyslexia performed more poorly than normal-reading children on slow-rate dynamic auditory processing, speech-in-noise perception, phonological awareness, and literacy measures. Correlations were found between slow-rate dynamic auditory processing and phonological awareness, and between speech-in-noise perception and reading. Yet no significant correlation between slow-rate dynamic auditory processing and speech-in-noise perception was obtained. Together, these results indicate that children with dyslexia have difficulties with slow-rate dynamic auditory processing and speech-in-noise perception, and that these problems persist until sixth grade.
Speech perception in noise: Exploring the effect of linguistic context in children with and without auditory processing disorder
Conclusion: Further study using a larger sample is warranted to deepen our understanding of the nature of APD and to identify characteristic profiles that would enable better tailoring of therapeutic programs.
from the International Journal of Audiology
The effect of cognitive load (CL) on speech recognition has received little attention despite the prevalence of CL in everyday life, e.g., dual-tasking. To assess the effect of CL on the interaction between lexically-mediated and acoustically-mediated processes, we measured the magnitude of the “Ganong effect” (i.e., lexical bias on phoneme identification) under CL and no CL. CL consisted of a concurrent visual search task. Experiment 1 showed an increased Ganong effect under CL. A time-course analysis of this pattern (Experiments 2 and 3) revealed that the Ganong effect decreased over time under optimal conditions, but it did not under CL. Thus, CL appears to be delaying (and perhaps preventing) listeners’ ability to rely on fine phonetic detail to perform the sub-lexical task. This finding, along with an absence of measurable effects at the post-lexical level (Experiment 4) or at the lexical level (Experiment 5) and a clear negative effect of CL on perceptual discrimination (Experiment 6), suggests that the increased reliance on lexically-mediated processes under CL is the cascaded effect of impoverished encoding of the sensory input. Ways of implementing a link between CL and sensory analysis into existing models of speech recognition are proposed.
from the Journal of Memory and Language
Auditory vocal hallucinations are sometimes observed in temporal-lobe epilepsy but are more commonly a sign of psychosis, so in rare cases epileptic hallucinations may be mistaken for psychotic ones. Here we report two patients who suffered from auditory vocal hallucinations, described as unintelligible human voices perceived at their left side, during epileptic seizures. MEG revealed interictal epileptic discharges within the anterior portion of the right superior temporal gyrus; the signal-to-noise ratio of these discharges was generally poor in EEG. The findings suggest that auditory vocal hallucinations without verbal content can arise in the right hemisphere and are probably independent of language lateralization. This is in accordance with evidence from functional imaging, whereas most previous reports of seizures with auditory vocal hallucinations were confined to the left hemisphere.
The behavioral and electrophysiological measures used in the present study clearly showed evidence of reduced binaural processing in ∼10 of the subjects who had symmetrical pure-tone sensitivity. These results underscore the importance of understanding binaural auditory processing and how these measures may or may not identify functional auditory problems.
Transitioning Hearing Aid Users with Severe and Profound Loss to a New Gain/Frequency Response: Benefit, Perception, and Acceptance
Based on the findings of this study, we suggest that undertaking a gradual change to a new gain/frequency response with severely and profoundly hearing-impaired adults is a feasible procedure. However, we recommend that clinicians select transition candidates carefully and initiate the procedure only if there is a clinical reason for doing so. A validated prescriptive formula should be used as a transition target, and speech discrimination performance should be monitored throughout the transition.
“What you encode is not necessarily what you store”: Evidence for sparse feature representations from mismatch negativity
The present study examines whether vowels embedded in complex stimuli may possess underspecified representations in the mental lexicon. A second goal was to assess the possible interference of the lexical status of the stimuli under study. Minimal pairs of German nouns differing only in the stressed vowels [e], [ø], [o], along with derived pseudowords, were used to measure the Mismatch Negativity (MMN) in a passive oddball paradigm. The differing vowels were chosen such that the place-of-articulation information was conflicting vs. non-conflicting within the framework of models assuming underspecified representations in the mental lexicon (i.e., minimizing featural information by omitting redundant information in order to ensure efficient speech processing), whereas models assuming fully specified phonological representations would predict equal levels of conflict in all possible contrasts. The observed pattern of MMN amplitude differences was in accordance with the predictions of models assuming underspecified phonological representations. As possible interference from other levels of linguistic processing was demonstrated, it seems preferable to use pseudowords when investigating phonological effects by means of the MMN.
from Brain Research
During the past ten years, research using near-infrared spectroscopy (NIRS) to study the developing brain has provided groundbreaking evidence of brain functions in infants. This paper presents a theoretically oriented review of this wealth of evidence, summarizing recent NIRS data on language processing without neglecting other neuroimaging or behavioral studies in infancy and adulthood. We review three competing classes of hypotheses (i.e., signal-driven, domain-driven, and learning-biases hypotheses) regarding the causes of hemispheric specialization for speech processing. We assess the fit between each of these hypotheses and the neuroimaging evidence on speech perception and show that none of the three hypotheses can account for the entire set of observations on its own. However, we argue that they provide a good fit when combined within a developmental perspective. According to our proposed scenario, lateralization for language emerges out of the interaction between pre-existing left-right biases in generic auditory processing (signal-driven hypothesis) and a left-hemisphere predominance of particular learning mechanisms (learning-biases hypothesis). As a result of this completed developmental process, the native language is represented predominantly in the left hemisphere. The integrated scenario makes it possible to link infant and adult data, and points to many empirical avenues that need to be explored more systematically.
Sentence recognition thresholds in normal-hearing individuals in the presence of noise incident from different angles
CONCLUSION: The following sentence recognition thresholds in noise, in sound field, were obtained for the speech-noise incidence conditions: 0°/0° = −7.56 dB; 0°/90° = −11.11 dB; 0°/180° = −9.75 dB; 0°/270° = −10.43 dB. The best thresholds were obtained with the 0°/90° and 0°/270° incidence angles, followed by the 0°/180° condition and, finally, by the 0°/0° condition. The most unfavorable listening condition was that in which the noise arrived from the same incidence angle as the speech, in front of the evaluated subject.
Evaluation of Different Signal Processing Options in Unilateral and Bilateral Cochlear Freedom Implant Recipients Using R-Space™ Background Noise
The results of this study suggest that processing options that incorporate noise reduction, such as ASC and BEAM, improve a CI recipient's ability to understand speech in noise in listening situations similar to those experienced in the real world. The choice of the best processing option depends on the noise level: for unilateral CI recipients, BEAM performed best at moderate noise levels and ASC best at loud noise levels. Therefore, multiple noise programs or a combination of processing options may be necessary to provide CI users with the best performance across a variety of listening situations.
The results of the present study indicated that performance on Chinese word recognition was influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of the NAM to the Chinese language. The development of the Standard-Chinese version of the LNT and the establishment of a database for children aged 4–6 years provide a reliable means of spoken-word recognition testing in children with hearing impairment.