Monthly Archives: December 2009

Tips to prevent noise-induced hearing loss in children

Parents and children giving or receiving an electronic device with music this holiday season should give their ears a gift as well by pre-setting the maximum decibel level to somewhere between one-half and two-thirds maximum volume.

from News-Medical.net

Arogyasri to cover cochlear implants

There are glad tidings for 356 children and their families from various parts of the State who have been waiting for the government’s green signal to undergo cochlear implant surgery.

from Topix.net

Master Gene Math1 Controls Framework for Perceiving External and Internal Body Parts

Waking and walking to the bathroom in the pitch black of night requires brain activity that is both conscious and unconscious, and it depends on a single master gene known as Math1 or Atoh1, said Baylor College of Medicine researchers in a report that appears online in the Proceedings of the National Academy of Sciences.

from ScienceDaily.com

Doctors Urge Parents To Preset Volume On Holiday Electronics

Parents and children giving or receiving an electronic device with music this holiday season should give their ears a gift as well by pre-setting the maximum decibel level to somewhere between one-half and two-thirds maximum volume.

from Medical News Today.com

How do we understand written language?

How do we know that certain combinations of letters have certain meanings? Reading and spelling are complex processes involving several different areas of the brain, but researchers from Johns Hopkins University in the USA have now identified a specific part of the brain, the left fusiform gyrus, that is necessary both for normal, rapid understanding of the meaning of written text and for correct word spelling. Their findings are published in the February 2010 issue of Cortex (http://www.elsevier.com/locate/cortex), published by Elsevier.

from EurekAlert.org

Speech Pathology Can Improve Articulation Impairment

Children affected by articulation impairments often have a great deal of difficulty during their social and educational development. Individuals who have trouble pronouncing words, speak with a lisp, or have difficulty communicating are suffering from an articulation impairment. These impairments can have a variety of causes and are noticeable in the form of lisps, substituted letters, or even stuttering. Affected children may have learned something at an early age that was never corrected, and it simply became the norm for them. Kids have a way of being brutally honest with each other regardless of how much damage it can do to one’s psyche. Even when it is not done in a vicious manner, pointing out another child’s obvious challenges can cause social anxieties for that child at a very young age.

from Presence TeleCare

Intervention: Reading Recovery

No studies of Reading Recovery® that fall within the scope of the English Language Learners (ELL) review protocol meet What Works Clearinghouse (WWC) evidence standards. The lack of studies meeting WWC evidence standards means that, at this time, the WWC is unable to draw any conclusions based on research about the effectiveness or ineffectiveness of Reading Recovery® for English language learners.

from the U.S. Department of Education, Institute of Education Sciences

Fears About Ears

Q. A family member has been unable to leave the house because of severe vertigo caused by something that sounds like cochlea hydrox. What is it and what might help?

from The New York Times

Early Treatment of Hearing, Vision Helps in Schizophrenia

Identifying sight and hearing problems in teens who are in the early stages of schizophrenia may help doctors fully restore those senses and lessen the impact of the devastating thought disorder, U.S. researchers say.

from Yahoo! Health

Linear and nonlinear temporal interaction components of mid-latency auditory evoked potentials obtained with maximum length sequence stimulation

A maximum length sequence (MLS) is a quasi-random sequence of clicks and silences that enables simultaneous recording of linear components and nonlinear temporal interaction components (NLTICs). NLTICs are produced when the stimulation rate is fast enough that several stimuli occur within the memory length of the system. The present study was designed to characterise the NLTICs of auditory mid-latency responses (MLR). Forty normally hearing subjects (aged 19–45 years) were tested at MLS rates between 20 and 120 clicks/s. Linear components could be identified at all rates. The NLTICs of the MLS-MLR were identified in only a few subjects. This suggests two possibilities: (1) there may not be strong nonlinear temporal interactions within the MLR generators; or (2) the memory length of the MLR is much shorter than expected from the linear component rates, in which case NLTICs should be obtained at higher rates of stimulation.

from Experimental Brain Research
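
For readers unfamiliar with the technique, the sketch below shows one common way to generate a binary maximum length sequence with a linear feedback shift register; the register order, feedback taps, and reading of bits as clicks versus silences are illustrative assumptions, not parameters taken from the study.

```python
# A minimal sketch, not from the study: generate one period of a binary
# maximum length sequence (m-sequence) with a Fibonacci linear feedback
# shift register. Taps (5, 3) implement the primitive polynomial
# x^5 + x^3 + 1, giving a period of 2^5 - 1 = 31 bits.

def mls(order=5, taps=(5, 3)):
    state = [1] * order                  # any nonzero start state works
    seq = []
    for _ in range(2 ** order - 1):      # one full period of the m-sequence
        seq.append(state[-1])            # output bit: 1 = click, 0 = silence
        feedback = 0
        for t in taps:                   # XOR the tapped register stages
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]  # shift the register one step
    return seq

sequence = mls()
print(sequence)
print("ones:", sum(sequence), "zeros:", len(sequence) - sum(sequence))  # 16 vs 15
# The near-equal balance of clicks and silences gives the sequence an
# impulse-like autocorrelation, which is what allows linear and nonlinear
# response components to be separated by cross-correlating the recorded
# response with the stimulus sequence.
```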

Effects of lowpass and highpass filtering on the intelligibility of speech based on temporal fine-structure or envelope cues

This study aimed to assess whether or not temporal envelope (E) and temporal fine structure (TFS) cues in speech convey distinct phonetic information. Syllables uttered by a male and a female speaker were (i) processed to retain either E or TFS within 16 frequency bands, (ii) lowpass or highpass filtered at different cutoff frequencies, and (iii) presented for identification to seven listeners. Psychometric functions were fitted using a sigmoid function and used to determine crossover frequencies (the cutoff frequencies at which lowpass and highpass filtering yielded equivalent performance) and gradients at each point of the psychometric functions (the change in performance with respect to cutoff frequency). Crossover frequencies and gradients were not significantly different across speakers. Crossover frequencies were not significantly different between E and TFS speech (about 1.5 kHz in both cases). Gradients were significantly different between E and TFS speech in various filtering conditions. When stimuli were highpass filtered above 2.5 kHz, performance was significantly above chance level and gradients were significantly different from 0 for E speech only. These findings suggest that E and TFS convey important but distinct phonetic cues between 1 and 2 kHz. Unlike TFS, E conveys information up to 6 kHz, consistent with the characteristics of neural phase locking to E and TFS.

from Hearing Research
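
As a rough illustration of the analysis described above, the sketch below fits sigmoid psychometric functions to identification scores as a function of filter cutoff and locates the crossover frequency where lowpass and highpass performance meet. All scores, cutoffs, and fitting details are invented placeholders, not values or procedures from the study.

```python
# A minimal sketch, assuming hypothetical data: fit a four-parameter logistic
# psychometric function to identification scores versus filter cutoff, then
# find the crossover frequency where the lowpass and highpass fits intersect.
import numpy as np
from scipy.optimize import curve_fit, brentq

def sigmoid(f, midpoint, slope, floor, ceiling):
    """Logistic psychometric function over log2 cutoff frequency f (kHz)."""
    return floor + (ceiling - floor) / (1.0 + np.exp(-slope * (np.log2(f) - np.log2(midpoint))))

cutoffs = np.array([0.5, 1.0, 1.5, 2.5, 4.0, 6.0])          # kHz (hypothetical)
lp_scores = np.array([0.15, 0.40, 0.60, 0.80, 0.92, 0.97])  # lowpass: rises with cutoff
hp_scores = np.array([0.95, 0.85, 0.60, 0.35, 0.20, 0.10])  # highpass: falls with cutoff

lp_params, _ = curve_fit(sigmoid, cutoffs, lp_scores, p0=[1.5, 2.0, 0.1, 1.0])
hp_params, _ = curve_fit(sigmoid, cutoffs, hp_scores, p0=[1.5, -2.0, 0.1, 1.0])

# Crossover frequency: cutoff at which the two fitted curves give equal scores.
crossover = brentq(lambda f: sigmoid(f, *lp_params) - sigmoid(f, *hp_params), 0.5, 6.0)
print(f"crossover frequency ~ {crossover:.2f} kHz")

# Gradient: change in fitted performance per octave, by central finite difference.
def gradient(f, params, df=1e-3):
    return (sigmoid(f * 2**df, *params) - sigmoid(f / 2**df, *params)) / (2 * df)

print(f"lowpass gradient at crossover ~ {gradient(crossover, lp_params):.2f} per octave")
```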

Age- and sex-related differences in brainstem auditory evoked potentials in secondary school students living in Northern European Russia

Brainstem auditory evoked potentials (BAEPs) were studied in 46 first- to eleventh-year students (22 boys and 24 girls) of a rural secondary school in Arkhangel’sk oblast. The objective of this work was to study age- and sex-related differences in BAEP characteristics in children and adolescents living in the North, and to assess those characteristics against reference values. In all three age groups of students, interpeak intervals I–III, III–V, and I–V, which characterize the peripheral and central conduction times, were shorter in girls than in boys. Interpeak interval III–V tended to increase with age only in boys (at puberty), with a significant increase in the latencies of waves I, III, and V. The BAEP characteristics in the subjects examined included a shorter peak latency and a greater amplitude of wave I (except in senior students), a relatively prolonged interpeak interval I–III, and more pronounced sex-related differences in BAEPs, especially at puberty. These findings show that it is necessary to revise regional reference values for BAEPs, differentiated by sex and age, including at puberty.

from
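
The interpeak intervals discussed above are simple differences between wave peak latencies; a minimal sketch, with invented latency values:

```python
# A minimal sketch with invented latency values, not data from the study:
# interpeak intervals are differences between BAEP wave peak latencies.
latencies_ms = {"I": 1.7, "III": 3.8, "V": 5.7}  # hypothetical peak latencies

ipl_1_3 = latencies_ms["III"] - latencies_ms["I"]   # peripheral conduction time
ipl_3_5 = latencies_ms["V"] - latencies_ms["III"]   # central conduction time
ipl_1_5 = latencies_ms["V"] - latencies_ms["I"]     # total brainstem transmission

print(f"I-III: {ipl_1_3:.1f} ms, III-V: {ipl_3_5:.1f} ms, I-V: {ipl_1_5:.1f} ms")
```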

Why Speech-Language Pathology?

Did you know that there are approximately as many speech-language pathologists in the United States as there are dentists? This is surprising given the lack of public attention the speech pathology profession receives. The following quote from speech-language pathologist Megan Hodge describes why she believes the field of speech therapy is special: “A career in speech-language pathology challenges you to use your intellect (the talents of your mind) in combination with your humanity (the gifts in your heart) to do meaningful work that feeds your soul… I am proud to be a member of what I believe to be the best profession on earth.” Megan’s strong belief in the nature of the work speech pathologists perform is commonly shared amongst members of the profession.

The rewards of being a speech pathologist are great. The work can be frustrating at times, but every day you work to help patients make progress. When a patient is able to regain basic communication and eating skills, their life becomes much more normal and enjoyable. It’s also a real privilege to help families make tough decisions about care for loved ones who are no longer able to eat normally. For more information on the benefits of becoming a speech pathologist, check out this blog post on grad2b.com.

from Presence TeleCare

Bilateral Electric Acoustic Stimulation: A Comparison of Partial and Deep Cochlear Electrode Insertion

Background/Aims: A patient with bilateral severe, sloping, high-frequency hearing loss was treated with sequential bilateral electric acoustic stimulation (EAS) using the MED-EL Duet EAS cochlear implant. On one side, a partial 18-mm insertion of the electrode array (M-type) in the cochlea was performed. The contralateral side was implanted 39 months later with a deep 30-mm insertion of the electrode array (FLEXsoft type). The aims were to assess whether low-frequency hearing could be preserved after deep electrode insertion, as well as to assess the benefit of bilateral EAS surgery compared to monaural EAS. Methods: Hearing thresholds and speech recognition outcomes were measured preoperatively and up to 48 months postoperatively. Outcomes from the partial and deep insertion sides are compared. The benefit of EAS in daily life was assessed with the Abbreviated Profile of Hearing Aid Benefit questionnaire. Benefits of bilateral EAS were calculated from speech reception thresholds measured using the LINT speech-in-noise number test. Speech was always presented from the front. Noise was presented either from the front, from the left side, or from the right side. Each condition was measured for unilateral and bilateral EAS use. Results: Both partial and deep insertion of the electrode array resulted in hearing preservation and significant speech recognition in this particular case. Both EAS devices provided more than 80% speech recognition in noise at a 10-dB signal-to-noise ratio. Bilateral EAS was beneficial for speech reception in noise compared to monaural EAS. A head shadow effect of 3.4 dB, a binaural squelch effect of 1.2 dB, and a binaural summation effect of 0.5 dB were measured. Conclusion: Hearing preservation is also possible after cochlear implantation using a FLEXsoft electrode array with a near-full insertion (30 mm) into the cochlea. Bilateral EAS was successfully implemented in this patient, providing better speech recognition compared to monaural EAS.

from Cochlear Implants International
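
The three binaural benefits reported above are derived from differences between speech reception thresholds (SRTs). The sketch below follows one common convention for computing them; the SRT values are hypothetical, chosen only so that the arithmetic reproduces the reported effect sizes, and the exact conditions compared in the study may differ.

```python
# A minimal sketch, assuming hypothetical SRTs (dB SNR; lower is better) and
# one common set of definitions, which may differ from the study's exact
# computation. Speech is always from the front (S0); noise comes from the
# front (N0), the left (NL), or the right (NR); the device under test is
# on the left in the unilateral conditions.
srt = {
    ("left only", "N0"): -2.0,   # speech and noise co-located in front
    ("left only", "NL"): 0.5,    # noise on the same side as the device
    ("left only", "NR"): -2.9,   # noise opposite the device (head shadow helps)
    ("bilateral", "N0"): -2.5,
    ("bilateral", "NR"): -4.1,
}

# Head shadow: benefit of moving the noise from the device side to the far side.
head_shadow = srt[("left only", "NL")] - srt[("left only", "NR")]

# Binaural squelch: benefit of adding the device on the noise side.
squelch = srt[("left only", "NR")] - srt[("bilateral", "NR")]

# Binaural summation: two devices versus one with speech and noise co-located.
summation = srt[("left only", "N0")] - srt[("bilateral", "N0")]

print(f"head shadow: {head_shadow:.1f} dB, squelch: {squelch:.1f} dB, "
      f"summation: {summation:.1f} dB")
# With these placeholder SRTs the arithmetic reproduces the effect sizes
# reported above: 3.4 dB, 1.2 dB, and 0.5 dB.
```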