Conclusion: The use of different tinnitus sound therapy signals can lead to significantly different effects on the intelligibility of speech. The use of natural sound recordings or combinations of tones may provide the patient with more flexibility to change the stimulation level during treatment.
from the Journal of Laryngology and Otology
Speech-associated labiomandibular movement in Mandarin-speaking children with quadriplegic cerebral palsy: A kinematic study
The purpose of this study was to investigate speech-associated labiomandibular movement during articulation in Mandarin-speaking children with spastic quadriplegic (SQ) cerebral palsy (CP). Twelve children with SQ CP (aged 7–11 years) and 12 age-matched healthy children as controls were enrolled in the study. All children underwent analysis of percentage of consonants correct (PCC) and kinematic analysis of speech tasks using the Vicon Motion 370 system. Kinematic parameters included utterance duration, displacement and velocity of the lip and jaw, coefficient of variation (CV) of lip utterance duration, and spatial and temporal coupling of labiomandibular movement of speech produced in mono-syllable (MS) and poly-syllable (PS) tasks. Children with CP showed lower temporal coupling (MS, p = 0.015; PS, p = 0.007), but not spatial coupling, of labiomandibular movement than healthy children. Children with CP had greater CVs (MS, p = 0.003; PS, p = 0.010), greater peak opening displacement and velocity of the lower lip and jaw (p < 0.05), and lower PCC (p < 0.001) than healthy children. Children with SQ CP displayed impairment of labiomandibular coupling movement, particularly temporal coupling. These children also had high temporal oromotor variability and needed to make more effort to coordinate labiomandibular movement for speech production.
Development of a “Virtual Cocktail Party” for the Measurement of Speech Intelligibility in a Sound Field
This technical report describes an approach to the measurement of speech intelligibility for sentences presented in a sound field in the presence of 16-talker babble. More specifically, we detail our (1) selection and preparation of target speech materials, (2) selection and preparation of experimental babble, (3) analog instrumentation, (4) software routines for attenuator control, (5) calibration, (6) experimental subjects, and (7) experimental protocol. In the final section of this report we present speech-intelligibility data from 16 young adults (21–30 years of age) with normal hearing sensitivity for pure-tone signals.
The Influence of Auditory Acuity on Acoustic Variability and the Use of Motor Equivalence During Adaptation to a Perturbation
Conclusion: These results provide support for the mutual interdependence of speech perception and production.
Sentence recognition thresholds in normal-hearing individuals in the presence of noise incident from different angles
CONCLUSION: The following sentence recognition thresholds in noise, in the sound field, were obtained as signal-to-noise ratios for each speech–noise incidence-angle condition: 0°–0° = −7.56 dB; 0°–90° = −11.11 dB; 0°–180° = −9.75 dB; 0°–270° = −10.43 dB. The best thresholds were obtained with incidence angles of 0°–90° and 0°–270°, followed by the 0°–180° condition and, finally, the 0°–0° condition. The most unfavorable listening condition was that in which the noise arrived from the same incidence angle as the speech, in front of the evaluated subject.
Relationship Between Age of Hearing-Loss Onset, Hearing-Loss Duration, and Speech Recognition in Individuals with Severe-to-Profound High-Frequency Hearing Loss
The factors responsible for interindividual differences in speech-understanding ability among hearing-impaired listeners are not well understood. Although audibility has been found to account for some of this variability, other factors may play a role. This study sought to examine whether part of the large interindividual variability of speech-recognition performance in individuals with severe-to-profound high-frequency hearing loss could be accounted for by differences in hearing-loss onset type (early, progressive, or sudden), age at hearing-loss onset, or hearing-loss duration. Other potential factors including age, hearing thresholds, speech-presentation levels, and speech audibility were controlled. Percent-correct (PC) scores for syllables in dissyllabic words, which were either unprocessed or lowpass filtered at cutoff frequencies ranging from 250 to 2,000 Hz, were measured in 20 subjects (40 ears) with severe-to-profound hearing losses above 1 kHz. For comparison purposes, 20 normal-hearing subjects (20 ears) were also tested using the same filtering conditions and a range of speech levels (10–80 dB SPL). Significantly higher asymptotic PCs were observed in the early (<=4 years) hearing-loss onset group than in both the progressive- and sudden-onset groups, even though the three groups did not differ significantly with respect to age, hearing thresholds, or speech audibility. In addition, significant negative correlations between PC and hearing-loss onset age, and positive correlations between PC and hearing-loss duration were observed. These variables accounted for a greater proportion of the variance in speech-intelligibility scores than, and were not significantly correlated with, speech audibility, as quantified using a variant of the articulation index. 
Although the lack of statistical independence between hearing-loss onset type, hearing-loss onset age, hearing-loss duration, and age complicates and limits the interpretation of the results, these findings indicate that variables other than audibility can influence speech intelligibility in listeners with severe-to-profound high-frequency hearing loss.
Assessing Speech Intelligibility in Children With Hearing Loss: Toward Revitalizing a Valuable Clinical Tool
Purpose: The main purposes of this tutorial are to present a rationale for assessing children’s connected speech intelligibility, review important uses for intelligibility scores, and describe time-efficient ways to estimate how well children’s connected speech can be understood. This information is offered to encourage routine assessment of connected speech intelligibility in preschool and school-age children with hearing loss.
The reliability of algorithms for room acoustic simulation has often been confirmed by verifying predicted room acoustical parameters. This paper presents a complementary perceptual validation procedure consisting of two experiments, dealing respectively with speech intelligibility and with front–back localisation of sound sources.
The evaluated simulation algorithm, implemented in software ODEON®, is a hybrid method that is based on an image source algorithm for the prediction of early sound reflection and on ray-tracing for the later part, using a stochastic scattering process with secondary sources. The binaural room impulse response (BRIR) is calculated from a simulated room impulse response where information about the arriving time, intensity and spatial direction of each sound reflection is collected and convolved with a measured Head Related Transfer Function (HRTF). The listening stimuli for the speech intelligibility and localisation tests are auralised convolutions of anechoic sound samples with measured and simulated BRIRs.
Perception tests were performed with human subjects in two acoustical environments, an anechoic and a reverberant room, by presenting the stimuli to subjects in a natural way and via headphones using two non-individualized HRTFs (an artificial head, and hearing aids placed on the ears of the artificial head) of both a simulated and a real room.
Very good correspondence is found between the results obtained with simulated and measured BRIRs, both for speech intelligibility in the presence of noise and for sound source localisation tests. In the anechoic room an increase in speech intelligibility is observed when noise and signal are presented from sources located at different angles. This improvement is not so evident in the reverberant room, with the sound sources at 1-m distance from the listener. Interestingly, the performance of people for front–back localisation is better in the reverberant room than in the anechoic room.
The correlation between listeners’ ability to localise sound sources and their ability to recognise binaurally received speech in reverberation is found to be weak.
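The auralisation step described in this abstract, convolving anechoic sound samples with a binaural room impulse response (BRIR), can be illustrated with a minimal sketch. The function name and the two-tap toy "room" below are illustrative assumptions, not part of the ODEON® implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def auralise(anechoic, brir_left, brir_right):
    """Convolve a dry (anechoic) mono signal with the left and right
    channels of a BRIR to produce a two-channel listening stimulus."""
    left = fftconvolve(anechoic, brir_left)
    right = fftconvolve(anechoic, brir_right)
    out = np.stack([left, right], axis=1)
    # Normalise to the peak to avoid clipping when written to a sound file
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Toy example: a unit click through a two-tap "room"
speech = np.array([1.0, 0.0, 0.0])
brir_l = np.array([1.0, 0.5])   # direct sound plus one reflection
brir_r = np.array([0.8, 0.4])   # slightly shadowed at the other ear
stimulus = auralise(speech, brir_l, brir_r)
```

In a real test the BRIR would be either measured with an artificial head or simulated as in the paper, and the output written to headphones; the principle is the same convolution.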
from Applied Acoustics
Lavandier and Culling [Lavandier, M. and Culling, J. F. 2010. Prediction of binaural speech intelligibility against noise in rooms. J. Acoust. Soc. Am. 127, 387-399] demonstrated a method of predicting human speech reception thresholds for speech in combined noise and reverberation. An updated version of the model is presented, which is substantially more computationally efficient. The updated model makes similar predictions for the SRT data considered by Lavandier and Culling, which tested the model’s ability to predict effects of binaural unmasking and room colouration. In addition, we show here that the model accurately predicts the effects of headshadow and reproduces a range of data sets from the literature, including situations with multiple interfering sounds in anechoic conditions.
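The head-shadow (better-ear) effect this abstract refers to can be sketched in a few lines: each ear receives a different speech-to-noise ratio, and the listener benefits from the more favourable ear. This broadband simplification and the function name are my own illustration; the actual model works per frequency band and adds a binaural-unmasking term:

```python
import numpy as np

def better_ear_snr_db(speech_lr, noise_lr):
    """Broadband better-ear SNR in dB: compute the speech-to-noise
    power ratio at each ear and keep the more favourable one."""
    snrs = []
    for ear in (0, 1):
        s_pow = np.mean(np.square(speech_lr[ear]))
        n_pow = np.mean(np.square(noise_lr[ear]))
        snrs.append(10.0 * np.log10(s_pow / n_pow))
    return max(snrs)

# Toy example: the noise is attenuated at the right ear (head shadow),
# so the right ear offers roughly a 6 dB better SNR than the left
rng = np.random.default_rng(0)
speech = [rng.standard_normal(1000), rng.standard_normal(1000)]
noise = [rng.standard_normal(1000), 0.5 * rng.standard_normal(1000)]
snr = better_ear_snr_db(speech, noise)
```

The computational saving reported in the updated model comes from how such band-by-band quantities are evaluated, not from changing this basic better-ear logic.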
from Hearing Research
CONCLUSION: The scale items were validated and demonstrated efficacy in assessing the speech intelligibility of the studied cases.
CONCLUSION: Most subjects presented balanced resonance or acceptable hypernasality and absence of compensatory articulation, regardless of the type of cleft, surgical technique, and age range, although no significant differences were found. Among the clinical measures adopted after the first evaluation following primary palatoplasty, speech therapy was the most frequent.
Reliability and Validity of a Computer Mediated Single-Word Intelligibility Test: Preliminary Findings for Children with Repaired Cleft Lip and Palate
Conclusions: A computerized, single-word intelligibility test was described which appears to be a reliable and valid measure of global speech deficits in children with CLP. Additional development of the test may further facilitate standardized assessment of children with CLP.
from the Cleft Palate-Craniofacial Journal
Effect on Speech Intelligibility of Changes in Speech Production Influenced by Instructions and Communication Environments
Tips for talking to a person who is hard of hearing often suggest how a talker should modify their speech production (e.g., by slowing speech rate). Some interventions attempt to train the person who is hard of hearing to instruct significant others to modify their speech production, while other interventions attempt to train significant others to alter their own speaking behaviors. This review examines the two main experimental research areas that address how variations in a talker’s speech may affect variations in a listener’s understanding. One area focuses on clear speech or how talkers modify their speech production in an attempt to increase intelligibility by speaking clearly. The other area concerns the Lombard effect or how talkers modify their speech production in response to environmental noise. Findings from both areas of research demonstrate how the intelligibility of speech can be enhanced when talkers modify their speech by decreasing rate, increasing intensity, increasing pitch, and/or increasing high-frequency spectral content. However, more consistent alterations in speech are observed when there is an implicit response to environmental noise as opposed to a response to explicit instructions to speak clearly. Implications for practice and directions for further research are suggested.
from Seminars in Hearing
This study was designed to estimate the test-retest reliability of orthographic speech intelligibility testing in speakers with aphasia and apraxia of speech (AOS) and to examine its relationship to the consistency of speaker and listener responses. Monosyllabic single-word speech samples were recorded from 13 speakers with coexisting aphasia and AOS. These words were transcribed phonetically by two trained listeners and also presented to non-brain-damaged listeners for identification in a computerized speech intelligibility test. Overall intelligibility scores were computed for each speaker, and word-by-word responses for individual words were examined for both speaker and listener consistency. The clinical feasibility of the approach was supported by a strong correlation between scores from the phonetic transcription and speech intelligibility tests and by strong test-retest reliability for all speakers. Detailed analyses of individual responses indicated that the stability of the intelligibility test was not due to consistency either in the kind of errors speakers made or in the responses listeners gave when they heard a word different from the target.
from the Journal of Communication Disorders
Both transcription analysis and stimulus type influenced the intelligibility scores of the studied population, especially when non-words were used as speech material. Careful handling of these variables can help to improve intelligibility tests.