Conclusions: These findings suggest that even seemingly objective intelligibility tests are influenced by a number of extraneous factors that affect scores.
Reducing the Effects of Background Noise During Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons Between Two Image Acquisition Schemes and Noise Cancellation
Conclusions: For the scanning protocols evaluated here, the sparse temporal scheme was preferable for detecting sound-evoked activity. In addition, active noise cancellation (ANC) ensures that listening difficulty is determined more by the chosen stimulus parameters and less by the adverse testing environment.
Effects of Noise and Speech Intelligibility on Listener Comprehension and Processing Time of Korean-Accented English
Conclusion: The findings suggest that decreased speech intelligibility and adverse listening conditions can be major challenges for effective communication between foreign-accented speakers and native listeners. The results also indicate that foreign-accented speech requires more effortful processing relative to native speech under certain conditions, affecting both comprehension and processing efficiency.
Perception of speech disorders: Difference between the degree of intelligibility and the degree of severity
Conclusion: There is an argument for measuring intelligibility at the surface code level with a word recognition test or ordinal scales and for allowing the use of interval scales for severity judgment.
Intelligibility of hearing impaired children as judged by their parents: A comparison between children using cochlear implants and children using hearing aids
The intelligibility of prelingually deaf children with cochlear implants (CI) is very close to that of normal-hearing (NH) children, while children with hearing aids (HA) still show decreased mean intelligibility.
Conclusions: Overall, there is a paucity of research investigating synthesized speech for use with children. Available evidence suggests that children show trends similar to those of adults but lower levels of intelligibility performance. Further applied research is required to adequately define this relationship and the variables that may contribute to improving the intelligibility and comprehension of synthesized speech for children.
Effects of lowpass and highpass filtering on the intelligibility of speech based on temporal fine-structure or envelope cues
This study aimed to assess whether or not temporal envelope (E) and fine structure (TFS) cues in speech convey distinct phonetic information. Syllables uttered by a male and female speaker were (i) processed to retain either E or TFS within 16 frequency bands, (ii) lowpass or highpass filtered at different cutoff frequencies, and (iii) presented for identification to seven listeners. Psychometric functions were fitted using a sigmoid function, and used to determine crossover frequencies (cutoff frequencies at which lowpass and highpass filtering yielded equivalent performance), and gradients at each point of the psychometric functions (change in performance with respect to cutoff frequency). Crossover frequencies and gradients were not significantly different across speakers. Crossover frequencies were not significantly different between E and TFS speech (1.5 kHz). Gradients were significantly different between E and TFS speech in various filtering conditions. When stimuli were highpass filtered above 2.5 kHz, performance was significantly above chance level and gradients were significantly different from 0 for E speech only. These findings suggest that E and TFS convey important but distinct phonetic cues between 1-2 kHz. Unlike TFS, E conveys information up to 6 kHz, consistent with the characteristics of neural phase locking to E and TFS.
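The crossover analysis described above (fitting sigmoid psychometric functions to identification scores and locating the cutoff frequency where lowpass and highpass performance is equal) can be sketched as follows. The identification scores, grid ranges, and logistic parameterization below are illustrative assumptions, not data or code from the study:

```python
import numpy as np

def sigmoid(f, f0, k):
    """Logistic psychometric function: proportion correct vs. cutoff frequency f (kHz)."""
    return 1.0 / (1.0 + np.exp(-k * (f - f0)))

# Hypothetical identification scores (proportion correct) at several cutoff
# frequencies, mimicking a lowpass (rising) and a highpass (falling) condition.
cutoffs  = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # kHz
lowpass  = np.array([0.10, 0.30, 0.55, 0.75, 0.90, 0.95, 0.98])
highpass = np.array([0.95, 0.80, 0.50, 0.30, 0.15, 0.08, 0.05])

def fit(scores):
    # Coarse least-squares grid search for the midpoint f0 and slope k.
    best, best_err = (None, None), np.inf
    for f0 in np.linspace(0.5, 6.0, 221):
        for k in np.linspace(-5.0, 5.0, 201):
            err = np.sum((sigmoid(cutoffs, f0, k) - scores) ** 2)
            if err < best_err:
                best, best_err = (f0, k), err
    return best

lp_f0, lp_k = fit(lowpass)
hp_f0, hp_k = fit(highpass)

# Crossover frequency: cutoff at which the two fitted curves predict
# equivalent performance.
grid = np.linspace(0.5, 6.0, 5501)
crossover = grid[np.argmin(np.abs(sigmoid(grid, lp_f0, lp_k)
                                  - sigmoid(grid, hp_f0, hp_k)))]

# Gradient of a fitted psychometric function (change in proportion correct
# per kHz of cutoff frequency), evaluated analytically for the logistic.
def gradient(f, f0, k):
    s = sigmoid(f, f0, k)
    return k * s * (1.0 - s)

print(f"crossover ~ {crossover:.2f} kHz")
print(f"lowpass gradient at crossover ~ {gradient(crossover, lp_f0, lp_k):.2f} per kHz")
```

With the synthetic scores above, the fitted curves intersect near the 1.5 kHz region reported in the abstract; in practice the study would fit each listener's data per condition and test the fitted crossovers and gradients statistically.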
from Hearing Research
The primary objective of this paper is to determine to what extent disturbances in the fluency of language production of children who clutter might be related to, or differ from, difficulties in the same underlying processes of language formulation seen in children with learning disabilities. It is hypothesized that an increase in normal dysfluencies and sentence revisions in children who clutter reflects a different neurolinguistic process from that of children with learning disabilities. To test this idea, 150 Dutch-speaking children, aged 10;6 to 12;11 years, were divided into three groups (cluttering, learning difficulties and controls), and a range of speech and language variables were analysed. Results indicate differences in the underlying processes of language disturbance between children with cluttered speech and those with learning disabilities. Specifically, language production of children with learning disabilities was disturbed by problems at the conceptualizer and formulator stages of Levelt’s language processing model, whilst language planning disturbances in children who clutter were considered to arise from insufficient time to complete the editing phase of sentence structuring. These findings indicate that children who clutter can be differentiated from children with learning disabilities both by the number of main and secondary story plot elements and by the percentage of correct sentence structures.
The use of electropalatography (EPG) in the assessment and treatment of motor speech disorders in children with Down’s syndrome: Evidence from two case studies
Conclusions: Findings from these two case studies demonstrate the potential utility of EPG in both the assessment and treatment of speech motor disorders in DS.
Purpose: This study investigated the effect of rate control methods (RCMs) on speaking rate (SR), articulation rate (AR), and intelligibility in dysarthric speakers. Method: Nineteen dysarthric patients (7 unilateral upper motor neuron dysarthria, 6 hypokinetic, 3 flaccid, 3 ataxic) participated. SR, AR and intelligibility ratings were determined on the basis of 1-min recorded reading passages. Seven RCMs were applied: voluntary rate control, hand tapping, alphabet board, pacing board, and delayed auditory feedback with delays of 50, 100 and 150 ms. Results: Almost all methods resulted in lower mean SRs and ARs (p < 0.05). Rate control did not improve overall intelligibility of the dysarthric population; however, a meaningful increase in intelligibility was found in 5 participants. This study indicates that the effect of rate control on intelligibility is independent of habitual speech rate and type of dysarthria, although degree of intelligibility may be an influencing factor. The most effective methods were voluntary rate control, alphabet board, hand tapping and pacing board. Conclusion: RCMs do result in lower speech rates. Some dysarthric individuals benefit from one or more RCMs, but rate control may also have an inverse effect on intelligibility.
Phonetic and phonological development in children with cleft palate is determined not only by the surgical strategy and the surgeon's experience, but also by the willingness of the patient, and especially of the parents, to collaborate; the timeliness of the logopaedic (speech-language) intervention; and the child's inborn ability to control the emission of air from the nasal and oral cavities.
An important and unresolved question is how the human brain processes speech for meaning after initial analyses in early auditory cortical regions. A variety of left-hemispheric areas have been identified that clearly support semantic processing, although a systematic analysis of directed interactions among these areas is lacking. We applied dynamic causal modeling of functional magnetic resonance imaging responses and Bayesian model selection to investigate, for the first time, experimentally induced changes in coupling among three key multimodal regions that were activated by intelligible speech: the posterior and anterior superior temporal sulcus (pSTS and aSTS, respectively) and pars orbitalis (POrb) of the inferior frontal gyrus. We tested 216 different dynamic causal models and found that the best model was a “forward” system that was driven by auditory inputs into the pSTS, with forward connections from the pSTS to both the aSTS and the POrb that increased considerably in strength (by 76 and 150%, respectively) when subjects listened to intelligible speech. Task-related, directional effects can now be incorporated into models of speech comprehension.
from the Journal of Neuroscience
Speech production in deaf implanted children with additional disabilities and comparison with age-equivalent implanted children without such disorders
Keywords: cochlear implant, deaf, speech, intelligibility, children, multiple, additional, disorder, handicap
Conclusion: The majority of deaf children with additional disorders develop connected, intelligible speech 5 years after implantation; however, a significant proportion do not develop any speech at all. Thus, a third of this group did not achieve what parents consider one of the most important objectives of implantation. In this specific population, the benefit of implantation should not be judged on speech production alone.