Blog Archives

An Exploration of Listener Variability in Intelligibility Judgments

Conclusions: These findings suggest that scores on seemingly objective intelligibility tests are influenced by a number of factors.

from the American Journal of Speech-Language Pathology

Augmentative and Alternative Communication in Daily Clinical Practice: Strategies and Tools for Management of Severe Communication Disorders

Research indicates that augmentative and alternative communication (AAC) approaches can be used effectively by patients and their caregivers to improve communication skills. This article highlights strategies and tools for re-establishing communication competence, taking into account the complexity and diversity of communication interactions, with the aim of maximizing natural speech and language skills through a range of technologies implemented across the continuum of care rather than as a last resort.

from Topics in Stroke Rehabilitation

Reducing the Effects of Background Noise During Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons Between Two Image Acquisition Schemes and Noise Cancellation

Conclusions: For the scanning protocols evaluated here, the sparse temporal scheme was preferable for detecting sound-evoked activity. In addition, ANC ensures that listening difficulty is determined more by the chosen stimulus parameters and less by the adverse testing environment.

from the Journal of Speech, Language, and Hearing Research

Effects of Noise and Speech Intelligibility on Listener Comprehension and Processing Time of Korean-Accented English

Conclusion: The findings suggest that decreased speech intelligibility and adverse listening conditions can be major challenges for effective communication between foreign-accented speakers and native listeners. The results also indicate that foreign-accented speech requires more effortful processing relative to native speech under certain conditions, affecting both comprehension and processing efficiency.

from the Journal of Speech, Language, and Hearing Research

Perception of speech disorders: Difference between the degree of intelligibility and the degree of severity

Conclusion: There is an argument for measuring intelligibility at the surface code level with a word recognition test or ordinal scales and for allowing the use of interval scales for severity judgment.

from Audiological Medicine

Intelligibility of hearing impaired children as judged by their parents: A comparison between children using cochlear implants and children using hearing aids

Conclusion
The intelligibility of prelingually deaf children with cochlear implants (CI) is very close to that of normal-hearing (NH) children, whereas children using hearing aids (HA) still show decreased mean intelligibility.

from the International Journal of Pediatric Otorhinolaryngology

Synthesized Speech Output and Children: A Scoping Review

Conclusions: Overall, there is a paucity of research investigating synthesized speech for use with children. Available evidence suggests that children show the same trends as adults but lower overall levels of intelligibility performance. Further applied research is required to adequately define this relationship and the variables that may contribute to improving the intelligibility and comprehension of synthesized speech for children.

from the American Journal of Speech-Language Pathology

Effects of lowpass and highpass filtering on the intelligibility of speech based on temporal fine-structure or envelope cues

This study aimed to assess whether temporal envelope (E) and fine structure (TFS) cues in speech convey distinct phonetic information. Syllables uttered by a male and a female speaker were (i) processed to retain either E or TFS within 16 frequency bands, (ii) lowpass or highpass filtered at different cutoff frequencies, and (iii) presented for identification to seven listeners. Psychometric functions were fitted using a sigmoid function and used to determine crossover frequencies (cutoff frequencies at which lowpass and highpass filtering yielded equivalent performance) and gradients at each point of the psychometric functions (change in performance with respect to cutoff frequency). Crossover frequencies and gradients were not significantly different across speakers. Crossover frequencies were not significantly different between E and TFS speech (1.5 kHz). Gradients were significantly different between E and TFS speech in various filtering conditions. When stimuli were highpass filtered above 2.5 kHz, performance was significantly above chance level and gradients were significantly different from 0 for E speech only. These findings suggest that E and TFS convey important but distinct phonetic cues between 1 and 2 kHz. Unlike TFS, E conveys information up to 6 kHz, consistent with the characteristics of neural phase locking to E and TFS.
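To make the crossover-frequency analysis concrete, here is a minimal Python sketch (not the authors' analysis code; the data points, starting values, and function names are illustrative assumptions) that fits a logistic psychometric function to hypothetical proportion-correct scores for lowpass- and highpass-filtered stimuli and solves for the cutoff frequency at which the two fitted curves give equivalent performance:

```python
# A minimal sketch, not the authors' analysis code: fit sigmoid psychometric
# functions to hypothetical identification scores and estimate the crossover
# frequency. All data values below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit, brentq

def sigmoid(f, f50, slope, floor=0.0, ceil=1.0):
    """Logistic psychometric function over cutoff frequency f (kHz)."""
    return floor + (ceil - floor) / (1.0 + np.exp(-slope * (f - f50)))

# Hypothetical proportion-correct scores at several cutoff frequencies (kHz).
cutoffs = np.array([0.5, 1.0, 1.5, 2.5, 4.0, 6.0])
lowpass_pc = np.array([0.15, 0.40, 0.65, 0.85, 0.95, 0.98])   # rises with cutoff
highpass_pc = np.array([0.97, 0.90, 0.60, 0.30, 0.12, 0.05])  # falls with cutoff

# Fit each filtering condition separately (note the opposite slope signs).
lp_params, _ = curve_fit(sigmoid, cutoffs, lowpass_pc, p0=[1.5, 2.0])
hp_params, _ = curve_fit(sigmoid, cutoffs, highpass_pc, p0=[1.5, -2.0])

# Crossover frequency: cutoff at which lowpass and highpass performance match.
crossover = brentq(lambda f: sigmoid(f, *lp_params) - sigmoid(f, *hp_params),
                   0.5, 6.0)

# Gradient (change in performance per kHz) of the lowpass fit at the crossover,
# estimated numerically.
eps = 1e-3
gradient = (sigmoid(crossover + eps, *lp_params)
            - sigmoid(crossover - eps, *lp_params)) / (2 * eps)

print(f"Crossover frequency: {crossover:.2f} kHz; "
      f"gradient at crossover: {gradient:.2f} per kHz")
```

In this framing, the gradients reported in the abstract correspond to the slope of the fitted function evaluated at each cutoff frequency.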

from Hearing Research

Language planning disturbances in children who clutter or have learning disabilities

The primary objective of this paper is to determine to what extent disturbances in the fluency of language production of children who clutter might be related to, or differ from, difficulties in the same underlying processes of language formulation seen in children with learning disabilities. It is hypothesized that an increase in normal dysfluencies and sentence revisions in children who clutter reflects neurolinguistic processes different from those of children with learning disabilities. To test this idea, 150 Dutch-speaking children, aged 10;6 to 12;11 years, were divided into three groups (cluttering, learning difficulties and controls), and a range of speech and language variables were analysed. Results indicate differences in the underlying processes of language disturbance between children with cluttered speech and those with learning disabilities. Specifically, language production of children with learning disabilities was disturbed by problems at the conceptualizer and formulator stages of Levelt’s language processing model, whilst language planning disturbances in children who clutter were considered to arise from insufficient time to complete the editing phase of sentence structuring. These findings indicate that children who clutter can be differentiated from children with learning disabilities both by the number of main and secondary story plot elements and by the percentage of correct sentence structures.

from the International Journal of Speech-Language Pathology

The use of electropalatography (EPG) in the assessment and treatment of motor speech disorders in children with Down’s syndrome: Evidence from two case studies

Conclusions: Findings from these two case studies demonstrate the potential utility of EPG in both the assessment and treatment of speech motor disorders in DS.

from Developmental Neurorehabilitation

The Effect of Rate Control on Speech Rate and Intelligibility of Dysarthric Speech

Purpose: This study investigated the effect of rate control methods (RCMs) on speaking rate (SR), articulation rate (AR), and intelligibility in dysarthric speakers. Method: Nineteen dysarthric patients (7 with unilateral upper motor neuron dysarthria, 6 hypokinetic, 3 flaccid, 3 ataxic) participated. SR, AR and intelligibility ratings were determined on the basis of 1-min recorded reading passages. Seven RCMs were applied: voluntary rate control, hand tapping, alphabet board, pacing board, and delayed auditory feedback with delays of 50, 100 and 150 ms. Results: Almost all methods resulted in lower mean SRs and ARs (p < 0.05). Rate control did not improve the overall intelligibility of the dysarthric group; however, a meaningful increase in intelligibility was found in 5 participants. This study indicates that the effect of rate control on intelligibility is independent of habitual speech rate and type of dysarthria; degree of intelligibility may be an influencing factor. The most effective methods were voluntary rate control, alphabet board, hand tapping and pacing board. Conclusion: RCMs do result in lower speech rates. Some dysarthric individuals benefit from one or more RCMs, but rate control may also have an inverse effect on intelligibility.
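As a rough illustration of the two rate measures, the following sketch (not the study's measurement procedure; the syllable counts and pause durations are invented) shows the conventional distinction between speaking rate, whose time base includes pauses, and articulation rate, whose time base excludes them:

```python
# A minimal sketch, not from the study: the usual distinction between
# speaking rate (SR, pauses included) and articulation rate (AR, pauses
# excluded). All values below are hypothetical.
def speaking_rate(syllables: int, total_duration_s: float) -> float:
    """Syllables per second over the whole sample, pauses included."""
    return syllables / total_duration_s

def articulation_rate(syllables: int, total_duration_s: float,
                      pause_duration_s: float) -> float:
    """Syllables per second over speech time only, pauses excluded."""
    return syllables / (total_duration_s - pause_duration_s)

# Hypothetical 1-min reading passage: 180 syllables, 15 s of silent pauses.
print(f"SR = {speaking_rate(180, 60.0):.2f} syll/s")            # 3.00 syll/s
print(f"AR = {articulation_rate(180, 60.0, 15.0):.2f} syll/s")  # 4.00 syll/s
```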

from Folia Phoniatrica et Logopaedica

Speech assessment in cleft palate patients: A descriptive study

Conclusion
Phonetic and phonological development in children with cleft palate depends not only on the surgical strategy and the surgeon's experience; it is also influenced by the willingness of the patient, and especially of the parents, to collaborate, by the timeliness of the logopaedic (speech-language) intervention, and by the child's inborn capability to control the emission of air from the nasal and oral cavities.

from the International Journal of Pediatric Otorhinolaryngology

The Cortical Dynamics of Intelligible Speech

An important and unresolved question is how the human brain processes speech for meaning after initial analyses in early auditory cortical regions. A variety of left-hemispheric areas have been identified that clearly support semantic processing, although a systematic analysis of directed interactions among these areas is lacking. We applied dynamic causal modeling of functional magnetic resonance imaging responses and Bayesian model selection to investigate, for the first time, experimentally induced changes in coupling among three key multimodal regions that were activated by intelligible speech: the posterior and anterior superior temporal sulcus (pSTS and aSTS, respectively) and pars orbitalis (POrb) of the inferior frontal gyrus. We tested 216 different dynamic causal models and found that the best model was a “forward” system that was driven by auditory inputs into the pSTS, with forward connections from the pSTS to both the aSTS and the POrb that increased considerably in strength (by 76 and 150%, respectively) when subjects listened to intelligible speech. Task-related, directional effects can now be incorporated into models of speech comprehension.

from the Journal of Neuroscience

Speech synthesis in background noise: Effects of message formulation and visual information on the intelligibility of American English DECTalk™

The purpose of the current research was to investigate the intelligibility of synthesized speech in noise, when listeners are able to watch an individual using augmentative and alternative communication (AAC) formulate messages on-line and when they are listening to a speaker without any visual information. A total of 80 participants were randomly assigned to four groups, with 20 participants in each group. Each group listened to sentences delivered using a different message formulation strategy: prestored; audibly formulated (messages are formulated on-line and the listener is able to hear the formulation as the message is being encoded); audibly formulated with no repeat (the full sentence is not repeated at the end); and quietly formulated (the message is formulated on-line, but the listener is not able to hear the system feedback throughout the formulation). The speaker for this study was a 35-year-old woman with cerebral palsy who used a voice output communication aid (VOCA) with DECTalk™ (Beautiful Betty, American English) to communicate. Half of the sentences were presented in an auditory-only condition and half were presented in an auditory-visual condition. The dependent variable was intelligibility, as measured by the percentage of words correctly transcribed by each listener. The overall intelligibility of the sentences in the Audibly Formulated with No Repeat group was statistically significantly lower than in each of the other message formulation type groups. Visual information did not have an effect on intelligibility for this speaker. Clinical implications, limitations, and directions for future research and development are discussed.

from AAC: Augmentative and Alternative Communication

Speech production in deaf implanted children with additional disabilities and comparison with age-equivalent implanted children without such disorders

Conclusion
The majority of deaf children with additional disorders develop connected, intelligible speech 5 years following implantation; however, a significant proportion do not develop any speech at all. Thus, a third of this group did not realise one of the most important objectives that parents have for implantation. Benefit from implantation should not be restricted to speech production alone in this specific population.

from the International Journal of Pediatric Otorhinolaryngology