Monthly Archives: June 2010

New cochlear implant could improve outcomes for patients

More electrodes and a thinner, more flexible wire inserted farther into the inner ear could improve conventional cochlear implants, says a team of Medical College of Georgia and Georgia Institute of Technology researchers.


New drug could restore hearing

Study uncovers potential drug treatment for noise-induced hearing loss


Leeds researchers develop a novel technique to deliver cancer drugs

A new way to deliver cancer drugs using gas bubbles and sound waves is to be developed at the University of Leeds. The project will enable highly toxic drugs to be delivered in small doses directly to tumours, where their toxicity can safely be put to good use. If successful, the technique could easily be adapted for other diseases.


Earplug Lets the Message Through

An earplug with a built-in computer that allows speech to pass but shuts out unwanted and hazardous noise will make life easier in noisy environments.


Key Mechanism in Brain’s Computation of Sound Location Identified

New York University researchers have identified a mechanism the brain uses to help process sound localization. Their findings, which appear in the latest edition of the journal PLoS Biology, focus on how the brain computes the different arrival times of sound into each ear to estimate the location of its source.


Speech Delay in Kids Linked to Later Emotional Problems

Study Shows Language Delays at Age 5 May Lead to Mental Health Issues in Adulthood

Feeding Skills in the Preterm Infant

Suck is a relatively mature ororhythmic motor behavior in a full-term infant and is integral to competent oral feeds. However, preterm infants often demonstrate oromotor discoordination and are unable to suck and feed orally (Comrie & Helm, 1997; Lau, 2006; Barlow, 2009a). This inability represents a serious challenge both to neonatal intensive care unit (NICU) “graduates” and to the physician-provider-parent teams that care for them.

from the ASHA Leader

Application of the Kurtosis Statistic to the Evaluation of the Risk of Hearing Loss in Workers Exposed to High-Level Complex Noise

Conclusions: For the same exposure level, the prevalence of NIHL is greater in workers exposed to non-Gaussian (non-G) noise environments than in workers exposed to Gaussian (G) noise. The kurtosis metric may be a reasonable candidate for use in modifying exposure level calculations that are used to estimate the risk of NIHL from any type of noise exposure environment. However, studies involving a large number of workers with well-documented exposures are needed before a relation between a metric such as the kurtosis and the risk of hearing loss can be refined.

from Ear and Hearing

Children With Cochlear Implants Recognize Their Mother’s Voice

Conclusions: We attribute child CI users’ success on talker differentiation, even on same-gender differentiation, to their use of two types of temporal cues: variations in consonant and vowel articulation and variations in speaking rate. Moreover, we contend that child CI users’ differentiation of speakers was facilitated by long-term familiarity with their mother’s voice.

from Ear and Hearing

Cochlear Implant-Mediated Perception of Nonlinguistic Sounds

Conclusions: The results suggest that nonlinguistic sounds are difficult for CI users to perceive. The categorization and identification scores suggest that sounds with harmonic structure or sounds with repetitive temporal structure are easier for CI users to perceive. A further developed clinical version of the NLST may be a useful clinical test to measure CI performance and progress, and perception of nonlinguistic sounds should receive greater attention during postimplant auditory rehabilitation.

from Ear and Hearing

Effects of Stimulation Level and Electrode Pairing on the Binaural Interaction Component of the Electrically Evoked Auditory Brain Stem Response

Conclusions: This study demonstrates that stimulation level affects amplitudes of the BIC response. It is possible to record the BIC of the EABR in bilateral CI users even from interaural electrode pairs that have large interaural offsets. This finding suggests that when high-level stimuli are used, there is a broad pattern of current spread within the two cochleae. At lower stimulation levels, the spread of excitation within the cochlea is reduced making the effect of electrode pairing on the amplitude of the BIC more pronounced.

from Ear and Hearing

Effects of Various Articulatory Features of Speech on Cortical Event-Related Potentials and Behavioral Measures of Speech-Sound Processing

Conclusions: The larger response amplitudes and earlier latencies for the cortical ERPs to the vowel versus consonant stimuli are likely related, in part, to the large spectral differences present in these speech contrasts. The measurements of response strength (amplitudes and d-prime scores) and response timing (ERP and RT latencies) for the various cortical ERPs suggest that the brain may have an easier task processing the steady state information present in the vowel stimuli in comparison with the rapidly changing formant transitions in the consonant stimuli.

from Ear and Hearing

Influence of Calibration Method on Distortion-Product Otoacoustic Emission Measurements: II. Threshold Prediction

Objectives: Distortion-product otoacoustic emission (DPOAE) stimulus calibrations are typically performed in sound pressure level (SPL) before DPOAE measurements. These calibrations may yield unpredictable DPOAE response levels, presumably because of the presence of standing waves in the ear canal. Forward pressure level (FPL) has been proposed as an alternative method for stimulus calibration because it avoids complications due to standing waves. DPOAE thresholds after four FPL calibrations and one SPL calibration were compared with behavioral thresholds to determine which calibration results in data that yield the highest correlations between the two threshold estimates.

from Ear and Hearing

Lateralization of Interimplant Timing and Level Differences in Children Who Use Bilateral Cochlear Implants

Conclusions: The results of this study illustrate that children who use bilateral CIs can lateralize stimuli on the basis of level cues, but have difficulty interpreting interimplant timing differences. Perceived lateralization of bilaterally presented stimuli to the second implanted side in many of the stimulus conditions may relate to the use of different device generations between sides. Further differences from normal lateralization responses could be due to abnormal binaural processing, possibly resulting from a period of unilateral hearing before the provision of a second implant or due to insufficiently matched interimplant stimuli. It may be possible to use objective measures such as electrically evoked auditory brain stem responses wave eV amplitudes to provide balanced levels of bilateral stimulation in children who have had no binaural hearing experience.

from Ear and Hearing

Neuroanatomical Characteristics and Speech Perception in Noise in Older Adults

Discussion: These findings suggest that, in addition to peripheral structures, the central nervous system also contributes to the ability to perceive speech in noise. In older adults, a decline in the relative volume and cortical thickness of the prefrontal cortex (PFC) during aging can therefore be a factor in a declining ability to perceive speech in a naturalistic environment. These findings are consistent with the decline-compensation hypothesis, which states that a decline in sensory processing caused by cognitive aging can be accompanied by an increase in the recruitment of more general cognitive areas as a means of compensation. We found that a larger PFC volume may compensate for declining peripheral hearing. Clinically, recognizing the contribution of the cerebral cortex expands treatment possibilities for hearing loss in older adults beyond peripheral hearing aids to include strategies for improving cognitive function. We conclude by considering several mechanisms by which the PFC may facilitate speech perception in noise, including inhibitory control, attention, cross-modal compensation, word prediction, and phonological working memory, although no definitive conclusion can be drawn.

from Ear and Hearing