Genetic predisposition and sensory experience in language development: Evidence from cochlear-implanted children
Recent neurobiological studies have advanced the hypothesis that language development is not continuously plastic but is governed by biological constraints that can be modified by experience only within a particular time window. This hypothesis is tested using spontaneous speech data from deaf cochlear-implanted (CI) children who gained access to linguistic stimuli at different developmental times. Language samples from nine children who received a CI between 5 and 19 months of age are analysed for linguistic measures representing different stages of language development, including canonical babbling ratios, vocabulary diversity, and functional elements such as determiners. The results show that language development is closely related to the age at which children first gain access to linguistic input, with later access associated with a slower-than-normal language-learning rate. As such, the positive effect of early experience on the functional organisation of the brain for language processing is confirmed by behavioural performance.
In the study of voice quality in children with profound hearing loss, it is essential to have information both about the degree of hearing loss and the kind of prosthesis used. Implant users show more altered voice quality than users of digital hearing aids; however, the hearing loss that implants compensate for is much greater than that compensated by hearing aids. We therefore consider that both types of prosthesis help children with hearing loss achieve a more normalised voice quality than the scientific literature has traditionally stated.
Finally, we question the validity of using some acoustic parameters as indicators of voice quality in deaf children who have no laryngeal problems.
Associations and Dissociations between Psychoacoustic Abilities and Speech Perception in Adolescents with Severe-to-Profound Hearing Loss
Purpose: To clarify the relationship between psychoacoustic capabilities and speech perception in adolescents with severe-to-profound hearing loss (SPHL).
Method: Twenty-four adolescents with SPHL and young adults with normal hearing were assessed with psychoacoustic and speech-perception tests. The psychoacoustic tests included gap detection (GD), difference limen for frequency, and psychoacoustic tuning curves. To assess the perception of words that differ in spectral and temporal cues, the speech tests included the Hebrew Early Speech Perception test and the Hebrew Speech Pattern Contrast test (Kishon-Rabin et al., 2002). All tests were administered to the listeners with normal hearing at low and high presentation levels and to the participants with SPHL at 20 dB SL.
Results: Only GD thresholds were comparable across the two groups at similar presentation levels. Psychoacoustic performance was poorer in the group with SPHL, but only selected tests were correlated with speech perception. Poorer GD thresholds were associated with poorer performance on the pattern perception, one-syllable word identification, and final-voicing subtests.
Conclusions: Speech perception performance in adolescents with SPHL could not be predicted solely from the spectral and temporal capabilities of the auditory system. However, when the GD threshold was greater than 40 ms, speech perception skills could be predicted from psychoacoustic abilities.
The latest research, conducted by Dr Jörg T. Albert, a Deafness Research UK research fellow at the UCL Ear Institute, together with scientists at the University of Cologne, shows that fruit flies have ears which mechanically amplify sound signals in a remarkably similar way to the sensory cells found in the inner ear of vertebrates including humans. The finding means that the wealth of genetic techniques already available to study the fruit fly can now be used to target how the ear works.
Engagement during reading instruction for students who are deaf or hard of hearing in public schools
from the American Annals of the Deaf
An observational study of reading instruction was conducted in general education, resource, and self-contained classrooms, grades 1-4, in public schools. Participants included students who were deaf or hard of hearing and their reading teachers. Results indicated that time engaged in reading and/or academically responding varied significantly by grade level enrolled, reading curriculum grade level, and instructional setting, but not level of hearing loss or presence or absence of concomitant conditions. Students working with reading curriculum one grade level below spent significantly less time in reading instruction and reading than students working on grade level or two levels below. Students in general education settings spent significantly more time in reading instruction and reading silently than students in self-contained settings. The probability that students would engage in reading was significantly increased by several teacher and ecological conditions more likely to be observed in general education settings.
from the American Annals of the Deaf
For more than 20 years, two courses, History, Education, and Guidance of the Deaf/Hard of Hearing and Introduction to Instructional Methods for the Deaf/Hard of Hearing, have been taught at Bloomsburg University of Pennsylvania using a traditional lecture format. A state grant provided funding to explore the use of technology to teach online courses to college-age learners who are deaf, hard of hearing, or hearing. Saba Centra software was used as the online tool for the synchronous presentation of course content, which included PowerPoint lecture material, text chat opportunities, sign language-interpreted video, and other forms of class participation (e.g., signaling for questions raised, responding in a “yes/no” format). The present article covers recent successes and challenges in offering online courses in a “virtual classroom” format to deaf and hard of hearing learners, as well as hearing learners, from a qualitative research perspective.
The A§E® is a set of suprathreshold tests for the auditory evaluation of the hearing impaired. A population of particular interest is the hearing-impaired preverbal child. This paper reports normative data for the A§E® discrimination test in children aged 10 months and for the A§E® identification tests in children aged 2 to 4 years. Normally hearing children of these ages were tested, and pass criteria were defined such that 95% of the hearing infants would pass the tests. With these criteria, the A§E® discrimination test is feasible at 10 months of age and the A§E® identification test from 30 months of age.
According to an Italian research team publishing their findings in the current issue of Cell Transplantation (17:6), hearing loss due to cochlear damage may be repairable by transplantation of human umbilical cord hematopoietic stem cells (HSCs): the team showed that a small number of transplanted cells migrated to the damaged cochlea and repaired sensory hair cells and neurons.
Classification and Cue Weighting of Multidimensional Stimuli with Speech-like Cues for Young Normal Hearing and Elderly Hearing-impaired Listeners
from Ear and Hearing
Objective: The purpose of this study was to investigate how young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners make use of redundant speech-like cues when classifying nonspeech sounds having multiple stimulus dimensions.
Design: A total of four experiments were conducted, with 10 to 12 listeners per group in each experiment. There were 27 stimuli, comprising all possible combinations of three stimulus values along each of three cue dimensions. Stimuli consisted of two brief sequential noise bursts separated by a temporal gap. The stimulus dimensions were: (1) the center frequency of the noise bursts; (2) the duration of the temporal gap separating the noise bursts; and (3) the direction of a frequency transition in the second noise burst.
Results: Experiment 1 verified that the selected stimulus values made adjacent steps along each stimulus dimension easily discriminable [P(c) >= 90%]. In experiment 2, similarity judgments were obtained for all possible pairs of the 27 stimuli. Multidimensional scaling confirmed that the three acoustic dimensions existed as separate dimensions perceptually. In experiment 3, listeners were trained to classify three exemplar stimuli. After training, they were required to classify all 27 stimuli, and these results were used to derive attentional weights for each stimulus dimension. Both groups focused their attention on the frequency-transition dimension during the classification task. Finally, experiment 4 demonstrated that the attentional weights derived in experiment 3 were reliable and that both EHI and YNH participants could be trained to shift their attention to a cue dimension (the temporal gap) not preferred in experiment 3, although older adults required much more training to achieve this shift in attention.
Conclusion: For the speech-like, multidimensional acoustic stimuli used here, YNH and EHI listeners attended to the same dimensions of the stimuli when classifying them. In general, the EHI listeners required more time to acquire the ability to categorize the stimuli, and to change their focus to alternate stimulus dimensions.
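For illustration, the full factorial design described above (three values on each of three cue dimensions, yielding 27 stimuli) can be sketched as follows. The dimension names and values here are placeholders chosen for the sketch, not the study's actual stimulus parameters.

```python
from itertools import product

# Three cue dimensions, three values each (placeholder values, not the
# study's actual parameters): noise-burst centre frequency, temporal-gap
# duration, and direction of the frequency transition.
dimensions = {
    "center_freq_hz": [1000, 2000, 4000],
    "gap_ms": [20, 40, 80],
    "transition": ["down", "flat", "up"],
}

# The full factorial combination of values yields the 27-stimulus set.
stimuli = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]
print(len(stimuli))  # 27 = 3 x 3 x 3
```

Attentional weights of the kind derived in experiment 3 would then describe how strongly each of these three dimensions influences a listener's classification responses.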
from Ear and Hearing
Objectives: Loudness-balance measurements with monaurally impaired subjects have shown that the shape of the loudness versus sound-pressure curve varies significantly among hearing-impaired persons. But the effectiveness of adjusting the compression characteristics of wide-dynamic-range compression hearing aids (the compression ratios, the variation of compression ratio with level, and the threshold of compression) to restore normal loudness growth for the individual patient has never been properly tested; individual loudness measurements have been too uncertain to permit meaningful individual adjustments. Recent investigators have reported standard deviations for such measurements in normal-hearing subjects of 6.4 dB and 7.8 dB. This investigation describes a method of measuring the loudness function with a standard deviation in normal-hearing subjects on the order of 1 dB, significantly lower than that of previous methods and sufficiently accurate for individual-subject adjustments.
Design: Each of nine normal-hearing subjects (seven of them inexperienced, one a 9-year-old) was asked to make three successive loudness trisections within an amplitude range of 40 to 80 dB SPL, providing six points from which to plot a loudness-function curve between these limits. The individual and average curves were validated as accurate loudness functions by comparing them with the curve defined by the loudness-versus-amplitude equation in current standards. In a second validation experiment, the loudness functions of masked ears measured by trisection were compared with the loudness functions of those ears measured by loudness balance between masked and unmasked ears.
Results: The difference between a loudness function based on the average of subject trisections and the loudness function defined by the ANSI Standard loudness equation was -1.92 dB at the lowest trisection level and +0.05 dB at the highest level. The standard deviations of subject responses were 1.63 dB for the lowest trisection level and 0.68 dB for the highest level, with an average of 1.1 dB. The across-subject standard deviation of the test-retest differences for three subjects was less than 1.7 dB for the first three lower level responses and less than 0.8 dB for the remaining three responses.
Conclusions: A trisection procedure for measuring loudness function showed validity and significantly less variation than previous loudness-measurement procedures. Such a procedure, once it has been validated for hearing-impaired subjects, makes it possible to test hearing aid design and fitting strategies that are based on individual-patient loudness functions.
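As background for the loudness-function comparisons above, the classic textbook relation for a 1 kHz tone (1 sone at 40 dB SPL, loudness doubling with each 10 dB increase) can be sketched over the study's 40 to 80 dB SPL trisection range. This is Stevens' power-law approximation, not the specific ANSI loudness equation used in the study.

```python
def loudness_sones(level_db_spl: float) -> float:
    """Stevens' power-law approximation for a 1 kHz tone:
    1 sone at 40 dB SPL, doubling for every 10 dB increase.
    (A textbook approximation, not the ANSI equation cited in the study.)"""
    return 2.0 ** ((level_db_spl - 40.0) / 10.0)

# Across the 40-80 dB SPL range used for the trisections:
for level in (40, 50, 60, 70, 80):
    print(level, "dB SPL ->", loudness_sones(level), "sones")
```

Under this approximation, the 40 dB span of the trisection range corresponds to a sixteen-fold change in loudness, which is the kind of curve the six trisection points are meant to trace.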
from Age and Ageing
SIR—The Folstein Mini-Mental State Examination (MMSE), developed in 1975 as a bedside test of cognitive function, has been extensively used in clinical practice and research and is widely accepted as a clinical tool for diagnosing and monitoring dementia. Despite its low sensitivity and specificity (0.56 and 0.73, respectively, in one recent study), comparable tools, including the Modified MMSE of Teng and Chui, have not received such widespread acceptance.
It contains 11 questions that test orientation, registration, attention, calculation, recall, language and visuospatial functioning, with a maximum score of 30. It takes minutes to administer and is practical for routine clinical use. Most questions are administered verbally. Hearing loss reduces performance on the verbal parts of the examination even in cognitively intact patients, with potential for diagnostic error and alteration of management. This is of concern, as hearing impairment affects over one-fourth of people over 65 years of age, and half of those over 75, in most industrialised nations.
Uhlmann and colleagues tested 71 Alzheimer's disease subjects with varying levels of hearing, using both written and standard versions of the MMSE. Paradoxically, they found that hearing-impaired subjects scored higher on the standard version than on the written version, while subjects with normal hearing performed better on the written version, although these findings were not statistically significant.
We (M.M.) developed a written version of the MMSE, found it clinically useful, and report here an evaluation of its performance in a hospital-based population of older people.
A group at the University of Washington has developed software that for the first time enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones working together this spring, and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle.
Modulation detection interference (MDI) for asynchronous presentation of masker and target in listeners with normal and impaired hearing.
Purpose: Sensitivity to sinusoidal amplitude modulation (SAM) is reduced when other modulated maskers are presented simultaneously at a remote carrier frequency, an effect referred to as modulation detection interference (MDI). This paper describes the effect of onset differences between masker and target on MDI.
Method: Carrier frequencies were 1 kHz (target; 625 ms, 8 Hz SAM) and 2 kHz (masker; 625 ms, 8 Hz SAM; m = 1), presented at 25 dB SL for listeners with impaired hearing (n = 8) and at 25 dB and 50 dB SL for listeners with normal hearing (n = 6). The masker was delayed by 0, 125, 250, 500, 625, or 750 ms relative to the target.
Results: Simultaneous presentation of a modulated masker reduced sensitivity to SAM in both groups. Reducing the temporal overlap, i.e., increasing the onset delay between masker and target, increased sensitivity to SAM in the presence of modulated maskers.
Conclusion: The gradual reduction in MDI with increasing asynchrony between masker and target suggests that MDI is not solely related to perceptual grouping. Reduced sensitivity to SAM due to prior stimulation with SAM stimuli (forward masking), and deficits in across-channel integration, are other factors that may play a role.
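A sinusoidally amplitude-modulated tone of the kind used as target and masker in this study is defined by s(t) = (1 + m·sin(2π·f_m·t))·sin(2π·f_c·t). The sketch below generates such a stimulus with the study's target parameters; the sample rate is an arbitrary choice for the sketch, not a value from the study.

```python
import math

def sam_tone(fc_hz, fm_hz, m, dur_ms, fs_hz=16000):
    """Sinusoidally amplitude-modulated (SAM) tone:
    s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t).
    The sample rate fs_hz is an arbitrary choice, not taken from the study."""
    n = int(fs_hz * dur_ms / 1000)
    return [
        (1.0 + m * math.sin(2 * math.pi * fm_hz * t / fs_hz))
        * math.sin(2 * math.pi * fc_hz * t / fs_hz)
        for t in range(n)
    ]

# Target as described in the Method: 1 kHz carrier, 8 Hz SAM, 625 ms, m = 1.
target = sam_tone(1000, 8, 1.0, 625)
print(len(target))  # 10000 samples at 16 kHz
```

Delaying the masker relative to the target, as in the experiment, would then amount to zero-padding the start of the masker waveform by the chosen onset asynchrony before mixing.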
from The Hearing Review
When working with callers who may be frustrated with hearing loss, exceptional telephone skills become even more critical. The first of a three-part series about how telephone skills can make or break hearing care businesses.