Monthly Archives: July 2009
Clinical Characteristics Associated with Language Regression for Children with Autism Spectrum Disorders
Abstract We investigated correlates of language regression for children diagnosed with autism spectrum disorders (ASD). Using archival data, children diagnosed with ASD (N = 114, M age = 41.4 months) were divided into four groups based on language development (i.e., regression, plateau, general delay, no delay) and compared on developmental, adaptive behavior, symptom severity, and behavioral adjustment variables. Few overall differences emerged between groups, including similar non-language developmental history, equal risk for seizure disorder, and comparable behavioral adjustment. Groups did not differ with respect to autism symptomatology as measured by the Autism Diagnostic Observation Schedule and Autism Diagnostic Interview-Revised. Language plateau was associated with better adaptive social skills as measured by the Vineland Adaptive Behavior Scales. Implications and study limitations are discussed.
CONCLUSION: the maturational process of the central hearing system occurs gradually, with the greatest changes observed when comparing children and adults.
CONCLUSIONS: although most of the elderly women presented some degree of dysphonia, the vocal disorders did not influence their quality of life. However, the physical and total V-RQOL scores were correlated with dysphonia severity, indicating that the more severe the dysphonia, the lower the voice-related quality of life.
CONCLUSION: the findings of this study indicate that the MLR and P300 were the potentials that best characterized the two groups, and that all three AEPs reflected neural plasticity following speech-language treatment.
Mismatch Response to Polysyllabic Nonwords: A Neurophysiological Signature of Language Learning Capacity
Our data thus confirm that people who are poorer at nonword repetition are less efficient in early processing of polysyllabic speech materials, but this impairment is not attributable to deficits in low-level auditory discrimination. We conclude by discussing the significance of the observed relationship between LDN amplitude and nonword repetition ability and describe how this relatively little-understood ERP component provides a biological window onto processes required for successful language learning.
from PLoS ONE
Social and Emotional Values of Sounds Influence Human (Homo sapiens) and Non-Human Primate (Cercopithecus campbelli) Auditory Laterality
The last decades have provided evidence of auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social value (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, for 8–9-year-old schoolgirls and for adult female Campbell’s monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls; humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We found a crossed effect of social and emotional values in both species, since only “negative” voices from same class/group members elicited significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species: a left and a right hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes moved only their eyes. This study supports theories defending differential roles played by the two hemispheres in primates’ auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should receive careful attention.
from PLoS ONE
Conclusions & Implications: Aphasia appeared to influence text writing on different linguistic levels. The impact on overall structure and coherence was in line with earlier findings from the analysis of spoken and written discourse, implying that the written modality should also be included in language rehabilitation.
Conclusions & Implications: The results were interpreted to suggest an uneven profile of memory functioning in specific language impairment. On measures of declarative memory, specific language impairment appears to be associated with difficulties learning verbal information. At the same time, procedural memory also appears to be impaired. Collectively, this study indicates multiple memory impairments in specific language impairment.
CONCLUSIONS: For participants with mild to moderate gradually sloping losses and for those with steeply sloping losses, the UCL – 5 dB and the 2 kHz SL methods resulted in the highest scores without exceeding listeners’ UCLs. For participants with moderately severe/severe losses, the UCL – 5 dB method resulted in the highest phoneme recognition scores.
CONCLUSIONS: It is essential that the CI audiologist not only be aware of the disorder but also be well versed in the resulting implications for the cochlear implant process. A more thorough case history, an expanded candidacy test battery, and knowledge of the typical presentation of SSCN are critical. The diagnosis of SSCN will impact expectations for success with the cochlear implant, and counseling should be adjusted accordingly.
CONCLUSIONS: The results of this study confirmed that the psychometric properties of the IOI-HA questionnaire are strong and are essentially the same for the veteran sample and the original private-pay sample. The veteran norms, however, produced higher outcomes than those established originally, possibly because of differences in the population samples and/or hearing aid technology. Clinical and research applications of the current findings are presented. Based on the results from the current study, the norms established here should replace the original norms for use in veterans with current hearing aid technology.
Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing).
CONCLUSIONS: These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients, with measurable unaided hearing thresholds, be fit with a hearing aid.
BACKGROUND: Hearing threshold data are not particularly predictive of self-perceived hearing handicap or readiness to pursue amplification. Poor correlations between these measures have been reported repeatedly. When a patient is evaluated for hearing loss, it is common to collect both threshold data and the individual’s self-perception of hearing ability. This is done to help the patient make an appropriate choice related to the pursuit of amplification or other communication strategies. It would be valuable, though, for the audiologist to be able to predict which patients are ready for amplification, which patients require more extensive counseling before pursuing amplification, and which patients simply are not ready for amplification regardless of the audiometric data. PURPOSE: The purpose of this study was to evaluate the following question for its potential usefulness as a determinant of patient readiness for amplification: “On a scale from 1 to 10, 1 being the worst and 10 being the best, how would you rate your overall hearing ability?” RESEARCH DESIGN: The test-retest reliability and the predictive value of the question, based on final hearing aid purchase, were evaluated in a private practice setting. STUDY SAMPLE: Eight hundred forty hearing-impaired adults in the age range from 18 to 95 years. COLLECTION AND ANALYSIS: Data were collected retrospectively from patient files. RESULTS AND CONCLUSION: Results were repeatable and supported the use of this question in similar clinical settings.
Several studies have demonstrated a link between subjects’ hearing loss and tinnitus. However, there has been no systematic evaluation of the link between perceived tinnitus distress and an underlying hearing loss. The purpose of the current study is to explore this association and ascertain whether a subject’s hearing loss contributes to the handicap caused by tinnitus. A group of 96 adults were evaluated with pure-tone audiometry and a questionnaire that included the Tinnitus Handicap Inventory (THI). In 58% of the subjects, the side of the unilateral or worse tinnitus corresponded with the ear with poorer hearing thresholds. A subset of the THI, the Two Question Mean (TQM), which was related to questions regarding communication, correlated significantly with the hearing thresholds in the better hearing ear (P < 0.01). There was also a significant correlation between the THI and TQM scores (P < 0.01). These results suggest that in tinnitus subjects with impaired hearing, the underlying hearing loss may be a significant factor in the perceived distress.
from Noise & Health
This study examined the output levels produced by new-generation personal music systems (PMS) at the level of the eardrum by placing a probe microphone in the ear canal. Further, the effect of these PMS on hearing was evaluated by comparing the distortion product otoacoustic emissions (DPOAEs) and high-frequency pure-tone thresholds (from 3 kHz to 12 kHz) of individuals who use PMS with those of age-matched controls who did not. The relationship between output sound pressure levels and hearing measures was also evaluated. In Phase I, output SPLs produced by the PMS were measured in three different conditions: (a) at the volume control setting preferred by the subjects in quiet, (b) at the volume control setting preferred by the subjects in the presence of 65 dB SPL bus noise, and (c) at the maximum volume control setting of the instrument. In Phase II, pure-tone hearing thresholds and DPOAEs were measured. About 30% of individuals in a group of 70 young adults listened to music above the safety limit (80 dBA for 8 hours) prescribed by the Ministry of Environment and Forests, India. The addition of bus noise did not significantly increase the subjects’ preferred volume control settings. There were no significant differences between the experimental and control groups in mean pure-tone thresholds or mean DPOAE amplitudes. However, a positive correlation between hearing thresholds and music levels, and a negative correlation between DPOAE measures and music levels, were found.
from Noise & Health