Assessing Multimodal Spoken Word-in-Sentence Recognition in Children With Normal Hearing and Children With Cochlear Implants
Conclusions: The results suggest that children’s audiovisual word-in-sentence recognition can be assessed using the materials developed for this investigation. With further development, the materials hold promise for becoming a test of multimodal sentence recognition for children with hearing loss.
Modified Spectral Tilt Affects Older, but Not Younger, Infants’ Native-Language Fricative Discrimination
Conclusions: The findings suggest that the perceptual reorganization that emerges for consonants at the end of the first year affects 9-month-olds’ discrimination of native speech sounds. Perceptual reorganization is usually indexed by a decline in the ability to discriminate nonnative speech sounds. In this study, 6-month-olds demonstrated an acoustic-based sensitivity to both modified and unmodified native speech sounds, but 9-month-olds were most sensitive to the unmodified speech sounds that adhered to the native spectral profile.
The present study clearly shows that children benefit from the fine structure speech coding strategy in combination with an extended frequency spectrum in the low frequencies, as is offered by the Opus speech processors. This should be taken into consideration when fitting pre- and perilingually deaf children implanted almost a decade previously.
These results suggest that innovative speech processing strategies which enhance temporal cues may benefit individuals with auditory dys-synchrony.
from the Journal of Laryngology and Otology
This article provides a brief overview of the advantages of two-ear hearing in children and discusses the limitations, from a psychophysical and a technical perspective, which may constrain the ability of cochlear implant users to gain these benefits. The latest outcomes for children using bilateral cochlear implants are discussed, which suggest that results are more favorable for children who receive both devices before the age of 3.5 to 4 years. The available studies that have investigated electrophysiological responses for children receiving bilateral implants are discussed. These also support the notion that optimum development of binaural auditory skills may be more difficult after the age of 3.5 to 4 years. Studies that investigate the alternative for some children of using a hearing aid on the opposite ear to the cochlear implant are briefly discussed. These indicate that advantages for speech perception in noise and localization can be obtained consistently for children with significant residual hearing in the nonimplanted ear. The article concludes with an attempt to bring the available scientific evidence into the practical clinical context with suggestions that may assist clinicians in making recommendations for families considering bilateral cochlear implantation. Although the evidence remains limited at this time, it is reasonable to suggest that bilateral cochlear implantation can provide improved auditory skills over a single implant for children with severe and profound bilateral hearing loss. The available data suggest that the benefit may be maximized by introducing both implants as early as possible, at least before 3.5 to 4 years of age.
from Thieme eJournals
A comparison of the speech recognition and pitch ranking abilities of children using a unilateral cochlear implant, bimodal stimulation or bilateral hearing aids
Contrary to findings in postlingually deafened adults, we found no significant bimodal advantage for pitch perception in prelingually deafened children. However, the performance of children using electrical stimulation was significantly poorer than that of children using only acoustic stimulation. Further research is required to investigate the contribution of the non-implanted ear of bimodal stimulation (BMS) users to pitch perception, and the effect of hearing loss on the development of pitch perception in children.
Auditory cortical N100 in pre- and post-synaptic auditory neuropathy to frequency or intensity changes of continuous tones
Abnormalities of auditory cortical N100 in AN reflect disorders of both temporal processing (low frequency) and neural adaptation (high frequency). Auditory N100 latency to the low-frequency change provides an objective measure of the degree of impaired speech perception in AN.
Optimizing the perception of soft speech and speech in noise with the Advanced Bionics cochlear implant system
Objective: This study aimed to provide guidelines to optimize perception of soft speech and speech in noise for Advanced Bionics cochlear implant (CI) users. Design: Three programs differing in T-levels were created for ten subjects. Using the T-level setting that provided the lowest FM-tone, sound-field threshold levels for each subject, three additional programs were created with input dynamic range (IDR) settings of 50, 65 and 80 dB. Study sample: Subjects were postlinguistically deaf adults implanted with either the Clarion CII or 90K CI devices. Results: Sound-field threshold levels were lowest with T-levels set higher than 10% of M-levels and with the two widest IDRs. Group data revealed significantly higher scores for CNC words presented at a soft level with an IDR of 80 dB and 65 dB compared to 50 dB. Although no significant group differences were seen between the three IDRs for sentences in noise, significant individual differences were present. Conclusions: Setting Ts higher than the manufacturer’s recommendation of 10% of M-levels and providing IDR options can improve overall speech perception; however, for some users, higher Ts and wider IDRs may not be appropriate. Based on the results of the study, clinical programming recommendations are provided.
from the International Journal of Audiology
Impairments in speech and nonspeech sound categorization in children with dyslexia are driven by temporal processing difficulties
Auditory processing problems in persons with dyslexia are still subject to debate, and one central issue concerns the specific nature of the deficit. In particular, it is questioned whether the deficit is specific to speech and/or specific to temporal processing. To resolve this issue, a categorical perception identification task was administered to thirteen 11-year-old dyslexic readers and 25 matched normal readers using 4 sound continua: (1) a speech contrast exploiting temporal cues (/bA/-/dA/), (2) a speech contrast defined by nontemporal spectral cues (/u/-/y/), (3) a nonspeech temporal contrast (spectrally rotated /bA/-/dA/), and (4) a nonspeech nontemporal contrast (spectrally rotated /u/-/y/). Results indicate that children with dyslexia are less consistent in classifying speech and nonspeech sounds on the basis of rapidly changing (i.e., temporal) information, whereas they are unimpaired with steady-state speech and nonspeech sounds. The deficit is thus restricted to categorizing sounds on the basis of temporal cues and is independent of the speech status of the stimuli. The finding of a temporal-specific but not speech-specific deficit in children with dyslexia is in line with findings obtained in adults using the same paradigm (Vandermosten et al., 2010, Proceedings of the National Academy of Sciences of the United States of America, 107: 10389–10394). Comparison of the child and adult data indicates that the consistency of categorization improves considerably between late childhood and adulthood, particularly for the continua with temporal cues. Dyslexic and normal readers show similar developmental progress, with the dyslexic readers lagging behind both in late childhood and in adulthood.
Conclusion: Based on the gathered data, this evoked potential can serve as a new tool for understanding the encoding of sound at the brainstem level.
Children with phonological disorder can be self-aware of their speech impairment; gender and age are not important factors in the development of this ability.
Sensitivity to Structure in the Speech Signal by Children with Speech Sound Disorder and Reading Disability
Purpose: Children with speech sound disorder (SSD) and reading disability (RD) have poor phonological awareness, a problem believed to arise largely from deficits in processing the sensory information in speech, specifically individual acoustic cues. However, such cues are details of acoustic structure. Recent theories suggest that listeners also need to be able to integrate those details to perceive linguistically relevant form. This study examined abilities of children with SSD, RD, and SSD+RD not only to process acoustic cues but also to recover linguistically relevant form from the speech signal.
Method: Ten- to 11-year-olds with SSD (n = 17), RD (n = 16), SSD+RD (n = 17), and Controls (n = 16) were tested to examine their sensitivity to (1) voice onset times (VOT); (2) spectral structure in fricative-vowel syllables; and (3) vocoded sentences.
Results: Children in all groups performed similarly with VOT stimuli, but children with disorders showed delays on other tasks, although the specifics of their performance varied.
from the Journal of Communication Disorders
Left anterior temporal cortex actively engages in speech perception: a direct cortical stimulation study
Recent neuroimaging studies have proposed the importance of the anterior auditory pathway for speech comprehension. Its clinical significance is implicated by semantic dementia and pure word deafness. The neurodegenerative or cerebrovascular nature of these conditions, however, has precluded precise localization of the cortex responsible for speech perception. Electrical cortical stimulation can delineate such localization by producing transient, functional impairment. We investigated engagement of the left anterior temporal cortex in speech perception by means of direct electrical cortical stimulation. Subjects were two partial epilepsy patients who underwent direct cortical stimulation as part of invasive presurgical evaluations. Stimulus sites were coregistered to presurgical 3D-MRI, and then to MNI standard space for anatomical localization. Separate from the posterior temporal language area, electrical cortical stimulation revealed a well-restricted language area in the anterior part of the superior temporal sulcus and gyrus (aSTS/STG) in both patients. Auditory sentence comprehension was impaired upon electrical stimulation of aSTS/STG. In one patient, additional investigation revealed that the functional impairment was restricted to auditory sentence comprehension, with preserved visual sentence comprehension and preserved perception of music and environmental sounds. Both patients reported that they could hear the voice but not understand the sentence well (e.g., it was heard as a series of meaningless utterances). The standard coordinates of this restricted area at left aSTS/STG corresponded well with the coordinates of speech perception reported in neuroimaging activation studies of healthy subjects. The present combined anatomo-functional case study demonstrated, for the first time, that aSTS/STG in the language-dominant hemisphere actively engages in speech perception.
Preschool impairments in auditory processing and speech perception uniquely predict future reading problems
Developmental dyslexia is characterized by severe reading and spelling difficulties that are persistent and resistant to the usual didactic measures and remedial efforts. It is well established that a major cause of these problems lies in poorly specified phonological representations. Many individuals with dyslexia also present impairments in auditory temporal processing and speech perception, but it remains debated whether these more basic perceptual impairments play a role in causing the reading problem. Longitudinal studies may help clarify this issue by assessing preschool children before they receive reading instruction and by following them up through literacy development. The current longitudinal study shows impairments in auditory frequency modulation (FM) detection, speech perception, and phonological awareness in kindergarten and in grade 1 in children who receive a dyslexia diagnosis in grade 3. FM sensitivity and speech-in-noise perception in kindergarten uniquely contribute to growth in reading ability, even after controlling for letter knowledge and phonological awareness. These findings indicate that impairments in auditory processing and speech perception are not merely an epiphenomenon of reading failure. Although no specific directional relations were observed between auditory processing, speech perception, and phonological awareness, the highly significant concurrent and predictive correlations between all these variables suggest a reciprocal association and corroborate the evidence for the auditory deficit theory of dyslexia.