Monthly Archives: July 2008
‘Non-vocalization’: A phonological error process in the speech of severely and profoundly hearing impaired adults, from the point of view of the theory of phonology as human behaviour
‘Non-vocalization’ (N-V) is a newly described phonological error process in hearing impaired speakers. In N-V, the hearing impaired person actually articulates the phoneme but without voicing it. The result is an error process that looks as if it is produced but sounds as if it is omitted. N-V was discovered by video-recording the speech of two groups, profoundly and severely hearing impaired adults, in four elicitation tasks of varying difficulty, and analysing 2065 phonological error processes (substitutions, omissions, and N-V) according to 24 criteria, resulting in 49,560 data points. Results, which are discussed in view of the theory ‘Phonology as Human Behaviour’ (PHB), indicate that: (a) the more communicative the error process was, the more effort was made for its production and the more frequent its distribution; (b) the easier the elicitation task was, the more frequent the use of communicative error processes; (c) the more difficult the elicitation task was, the more frequent the use of the relatively less communicative and easier-to-produce error processes; and (d) the process of N-V functioned like a communicative error process for the group of profoundly hearing impaired adults.
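The reported data-point count follows directly from the design arithmetic (2,065 coded error processes, each analysed on 24 criteria); a trivial check:

```python
# Data-point count implied by the study design:
# 2,065 coded error processes, each analysed according to 24 criteria.
error_processes = 2065
criteria = 24
data_points = error_processes * criteria
print(data_points)  # 49560, matching the 49,560 data points reported
```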
The present paper explores whether the metrical foot is necessary for the description of prosodic systems. To this end, we present empirical findings on the perception of German word stress using event-related brain potentials as the dependent measure. A manipulation of the main stress position within three-syllable words revealed differential brain responses, which (a) correlated with the reorganisation of syllables into feet in stress violations, and (b) differed in strength depending on syllable weight. The experiments therefore provide evidence that the processing of word stress not only involves lexical information about stress positions, but also (quantity-sensitive) information about metrical structures, in particular feet and syllables.
Two factors have been proposed as the main determinants of phonological typology: channel bias, phonetically systematic errors in transmission, and analytic bias, cognitive predispositions making learners more receptive to some patterns than others. Much of typology can be explained equally well by either factor, making them hard to distinguish empirically. This study presents evidence that analytic bias is strong enough to create typological asymmetries in a case where channel bias is controlled. I show that (i) phonological dependencies between the height of two vowels are typologically more common than dependencies between vowel height and consonant voicing, (ii) the phonetic precursors of the height-height and height-voice patterns are equally robust and (iii) in two experiments, English speakers learned a height-height pattern and a voice-voice pattern better than a height-voice pattern. I conclude that both factors contribute to typology, and discuss hypotheses about their interaction.
This paper offers a careful reading of an article published by Rulon Wells in Language in 1949 on the subject of automatic alternations in phonology. Read with a modern eye, it reveals that phonologists were exploring the value and use of phonological derivations, including both abstract representations and intermediate representations, in the late 1940s. Contrary to what has been suggested in the literature, Bloomfield’s explorations in rule ordering published in 1939 were not isolated and without influence. Our conclusion is the null hypothesis: that there is an intellectual continuity from the work of Sapir and Bloomfield, through that of Wells and Harris, to that of Chomsky & Halle. We conclude by offering some suggestions as to why this is not widely recognised in the field.
Japanese shows an asymmetry in the treatment of word-final [n] in loanwords from English and French: while it is adapted as a moraic nasal consonant in loanwords from English, it is adapted with a following epenthetic vowel in loanwords from French. We provide experimental evidence that this asymmetry is due to phonetic differences in the realisation of word-final [n] in English and French, and, consequently, to the way in which English and French word-final [n] are perceived by native speakers of Japanese. Specifically, French but not English word-final [n] has a strong vocalic release that Japanese listeners perceive as their native vowel. We propose a psycholinguistic model in which most loanword adaptations originate in perceptual assimilation, a process which takes place during perception and which maps non-native sounds and sound structures onto the phonetically closest native ones. We compare our model to alternatives couched within phonological theory.
Many students have difficulty achieving reading fluency, and nearly half of fourth graders are not fluent readers in grade-level texts. Intensive and focused reading practice is recommended to help close the gap between students with poor fluency and their average reading peers. In this study, the Quick Reads fluency program was used as a supplemental fluency intervention for fourth and fifth graders with below-grade-level reading skills. Quick Reads prescribes a repeated reading procedure with short nonfiction texts written on grade-appropriate science and social science topics. Text characteristics are designed to promote word recognition skills. Students were randomly assigned either to Quick Reads instruction, implemented by trained paraeducator tutors with pairs of students for 30 minutes per day, 4 days per week, for 18 weeks, or to classroom control instruction. At posttest, Quick Reads students significantly outperformed classroom controls in vocabulary, word comprehension, and passage comprehension. Fluency rates for both treatment and control groups remained below grade level at posttest.
Purpose: Northern Digital Instruments (NDI; Waterloo, Ontario, Canada) manufactures a commercially available magnetometer device called Aurora that features real-time display of sensor position tracked in 3 dimensions. To test its potential for speech production research, data were collected to assess the measurement accuracy and reliability of the system.
Method: First, sensors affixed at a known distance on a rigid ruler were moved systematically through the measurement space. Second, sensors attached to the speech articulators of a human participant were tracked during various speech tasks.
Results: In the ruler task, results showed mean distance errors of less than 1 mm, with some sensitivity to location within the measurement field. In the speech tasks, Euclidean distance between jaw-mounted sensors showed comparable accuracy; however, a high incidence of missing samples was observed, positively correlated with sensor velocity.
Conclusions: The real-time positional feedback provided by the system makes it potentially useful in speech therapy applications. The overall missing data rate observed during speech tasks makes use of the system in its current form problematic for the quantitative measurement of speech articulator movements; however, NDI is actively working to improve the Aurora system for use in this context.
Purpose: This study investigated an account of limited short-term memory capacity for children’s speech perception in noise using a dual-task paradigm.
Method: Sixty-four normal-hearing children (7–14 years of age) participated in this study. Dual tasks were repeating monosyllabic words presented in noise at 8 dB signal-to-noise ratio and rehearsing sets of 3 or 5 digits for subsequent serial recall. Half of the children were told to allocate their primary attention to word repetition and the other half to remembering digits. Dual-task performance was compared to single-task performance. The short-term memory demands of the primary task were indexed by dual-task decrements in the nonprimary task.
Results: Results revealed that (a) regardless of task priority, no dual-task decrements were found for word recognition, but significant dual-task decrements were found for digit recall; (b) most children did not show the ability to allocate attention preferentially to primary tasks; and (c) younger children (7- to 10-year-olds) demonstrated improved word recognition in the dual-task conditions relative to their single-task performance.
Conclusions: Seven- to 8-year-old children showed the greatest improvement in word recognition at the expense of the greatest decrement in digit recall during dual tasks. Several possibilities for improved word recognition in the dual-task conditions are discussed.
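Dual-task decrements of the kind measured above are commonly quantified as the proportional drop from single-task baseline; a minimal sketch with made-up scores (not data from the study):

```python
def dual_task_decrement(single_score, dual_score):
    """Proportional performance drop under dual-task conditions,
    relative to the single-task baseline."""
    return (single_score - dual_score) / single_score

# Hypothetical percent-correct scores, for illustration only:
digit_recall_single = 90.0
digit_recall_dual = 72.0
print(dual_task_decrement(digit_recall_single, digit_recall_dual))  # 0.2
```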
Purpose: This study explored vowel production and adaptation to articulatory constraints in adults with acquired apraxia of speech (AOS) plus aphasia.
Method: Five adults with acquired AOS plus aphasia and 5 healthy control participants produced the vowels [I], , and [æ] in four word-length conditions in unconstrained and bite block conditions. In addition to acoustic and perceptual measures of vowel productions, individually determined idealized vowels based on each participant’s best performance were used to assess vowel accuracy and distinctiveness.
Results: Findings showed (a) clear separation of vowel formants in speakers with AOS; (b) impaired vowel production in speakers with AOS, shown by perceptual measures of vowel quality and acoustic measures of vowel accuracy and contrastivity; and (c) incomplete compensation to the bite block both for individuals with AOS and for healthy controls.
Conclusions: Although adults with AOS were less accurate overall in vowel production than unimpaired speakers, introduction of a bite block resulted in similar patterns of decreased vowel accuracy for the two groups. Findings suggest that feedback control for vowel production is relatively intact in these individuals with AOS and aphasia. Predominant use of feedback control mechanisms is hypothesized to account for characteristic vowel deficits of the disorder.
Purpose: The present study examines the brain basis of listening to spoken words in noise, which is a ubiquitous characteristic of communication, with the focus on the dorsal auditory pathway.
Method: English-speaking young adults identified single words in 3 listening conditions while their hemodynamic response was measured using fMRI: speech in quiet, speech in moderately loud noise (signal-to-noise ratio [SNR] 20 dB), and speech in loud noise (SNR –5 dB).
Results: Behaviorally, participants’ performance (both accuracy and reaction time) did not differ between the quiet and SNR 20 dB conditions, whereas participants were less accurate and responded more slowly in the SNR –5 dB condition compared with the other 2 conditions. In the superior temporal gyrus (STG), both left and right auditory cortex showed increased activation in the noise conditions relative to quiet, including the middle portion of STG (mSTG). Although the right posterior STG (pSTG) showed similar activation for the 2 noise conditions, the left pSTG showed increased activation in the SNR –5 dB condition relative to the SNR 20 dB condition.
Conclusion: We found cortical task-independent and noise-dependent effects concerning speech perception in noise involving bilateral mSTG and left pSTG. These results likely reflect demands in acoustic analysis, auditory–motor integration, and phonological memory, as well as auditory attention.
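For readers unfamiliar with the SNR levels in the Method above, SNR in decibels is ten times the log of the signal-to-noise power ratio; a short sketch with illustrative values (not stimulus levels from the study):

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, from a power ratio."""
    return 10 * math.log10(signal_power / noise_power)

# SNR 20 dB: signal power is 100x the noise power.
# SNR -5 dB: noise power exceeds the signal power.
print(snr_db(100.0, 1.0))                # 20.0
print(round(snr_db(1.0, 10 ** 0.5), 1))  # -5.0
```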
Identification of Children’s Stuttered and Nonstuttered Speech by Highly Experienced Judges: Binary Judgments and Comparisons With Disfluency-Types Definitions
Purpose: The purposes of this study were (a) to determine whether highly experienced clinicians and researchers agreed with each other in judging the presence or absence of stuttering in the speech of children who stutter and (b) to determine how those binary stuttered/nonstuttered judgments related to categorizations of the same speech based on disfluency-types descriptions of stuttering.
Method: Eleven highly experienced judges made binary judgments of the presence or absence of stuttering for 600 audiovisually recorded 5-s speech samples from twenty 2- to 8-year-old children who stuttered. These judgments were compared with each other and with disfluency-types judgments in multiple interval-by-interval assessments and by using multiple definitions of agreement.
Results: Interjudge agreement for the highly experienced judges in the binary stuttered/nonstuttered task varied from 39.0% to 89.1%, depending on methods and definitions used. Congruence between binary judgments and categorizations based on disfluency types also varied depending on methods and definitions, from 21.6% to 100%.
Conclusions: Agreement among highly experienced judges, and congruence between their binary judgments of stuttering and categorizations based on disfluency types, were relatively high using some definitions and very low using others. These results suggest the use of measurement methods other than those based on disfluency types for quantifying or describing children’s stuttering. They also suggest both the need for, and potential methods for, training to increase judges’ accuracy and agreement in identifying children’s stuttering.
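Interval-by-interval agreement of the kind reported above is, under one common definition, simply the percentage of intervals on which two judges' binary judgments match; a minimal sketch with invented judgments (the study compared several agreement definitions, and this is only one):

```python
def interval_agreement(judge_a, judge_b):
    """Percent interval-by-interval agreement between two judges'
    binary stuttered (1) / nonstuttered (0) judgments."""
    matches = sum(a == b for a, b in zip(judge_a, judge_b))
    return 100 * matches / len(judge_a)

# Hypothetical judgments over ten 5-s intervals:
a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
b = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
print(interval_agreement(a, b))  # 80.0
```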
Experimental Evaluation of a Preschool Language Curriculum: Influence on Children’s Expressive Language Skills
Purpose: The primary purpose of this study was to investigate child impacts following implementation of a comprehensive language curriculum, the Language-Focused Curriculum (LFC; Bunce, 1995), in preschool classrooms. As part of this larger purpose, this study identified child-level predictors of expressive language outcomes for children attending at-risk preschool programs as well as main effects for children’s exposure to the language curriculum and its active ingredients—namely, teacher use of language stimulation techniques (LSTs; e.g., open questions, recasts, models).
Method: Fourteen preschool teachers were randomly assigned to 2 conditions. Treatment teachers implemented the experimental curriculum for an academic year; a total of 100 children were enrolled in their classrooms. Comparison teachers maintained their prevailing curriculum; a total of 96 children were enrolled in these classrooms. Teachers’ fidelity of implementation was monitored using structured observations conducted 3 times during the academic year. Children’s growth in expressive language was assessed using measures derived from language samples in the fall and spring, specifically percent complex utterances, rate of noun use, number of different words, and upper bound index.
Results: Children’s language skill in the fall, socioeconomic status (household income), and daily attendance served as significant, positive predictors of their language skill in the spring. The impact of the language curriculum and LST exposure was moderated by children’s classroom attendance, in that the language curriculum accelerated language growth for children who attended preschool regularly; a similar effect was seen for LST exposure.
Conclusions: Adoption of a comprehensive language curriculum may provide a value-added benefit only under highly specific circumstances. Findings suggest that at-risk children who receive relatively large doses of a curriculum (as measured in days of attendance during the academic year) that emphasizes quality language instruction may experience accelerated expressive language growth during pre-kindergarten.
Language-Specific Effects of Task Demands on the Manifestation of Specific Language Impairment: A Comparison of English and Icelandic
Purpose: Previous research has indicated that the manifestation of specific language impairment (SLI) varies according to factors such as language, age, and task. This study examined the effect of task demands on language production in children with SLI cross-linguistically.
Method: Icelandic- and English-speaking school-age children with SLI and normal language (NL) peers (n = 42) were administered measures of verbal working memory. Spontaneous language samples were collected in contexts that varied in task demands: conversation, narration, and expository discourse. The effect of the context-related task demands on the accuracy of grammatical inflections was examined.
Results: Children with SLI in both language groups scored significantly lower than their NL peers in verbal working memory. Nonword repetition scores correlated with morphological accuracy. In both languages, mean length of utterance (MLU) varied systematically across sampling contexts. Context exerted a significant effect on the accuracy of grammatical inflection in English only. Error rates were higher overall in English than in Icelandic, but whether the difference was significant depended on the sampling context. Errors in Icelandic involved verb and noun phrase inflection to a similar extent.
Conclusions: The production of grammatical morphology appears to be more taxing for children with SLI who speak English than for those who speak Icelandic. Thus, whereas children with SLI in both language groups evidence deficits in language processing, cross-linguistic differences are seen in which linguistic structures are vulnerable when processing load is increased. Future research should carefully consider the effect of context on children’s language performance.
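Mean length of utterance (MLU), used above as an index of the sampling contexts, is conventionally computed as total morphemes divided by number of utterances; a sketch assuming utterances are already segmented into morphemes (the sample data are invented):

```python
def mean_length_of_utterance(utterances):
    """MLU in morphemes: total morphemes / number of utterances.
    Each utterance here is a list of pre-segmented morphemes."""
    return sum(len(u) for u in utterances) / len(utterances)

# Hypothetical, pre-segmented language sample:
sample = [
    ["the", "dog", "run", "-s"],   # 4 morphemes
    ["he", "eat", "-ed"],          # 3 morphemes
    ["ball"],                      # 1 morpheme
]
print(round(mean_length_of_utterance(sample), 2))  # 2.67
```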
Purpose: To examine development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication.
Method: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6–10 years. Auditory processes included detection of pitch from temporal cues using iterated rippled noise and frequency modulation detection at 2 Hz, 40 Hz, and 240 Hz. Visual processes were coherent form and coherent motion detection. Test–retest data were gathered on 21 children.
Results: Performance on perceptual tasks improved with age, except for fine temporal processing (iterated rippled noise) and coherent form perception, both of which were relatively stable over the age range. Within-subject variability (as assessed by track width) did not account for age-related change. There was no evidence for a common temporal processing factor, and there were no significant associations between perceptual task performance and communication level (Children’s Communication Checklist, 2nd ed.; D. V. M. Bishop, 2003) or speech-based auditory processing (SCAN-C; R. W. Keith, 2000).
Conclusions: The auditory tasks had different developmental trajectories despite a common procedure, indicating that age-related change was not solely due to responsiveness to task demands. The 2-Hz frequency modulation detection task, previously used in dyslexia research, and the visual tasks had low reliability compared to other measures.
Purpose: The purpose of this study was to examine the influence of phonotactic probability, the frequency of different sound segments and segment sequences, on the overall fluency with which words are produced by preschool children who stutter (CWS), and to determine whether it has an effect on the type of stuttered disfluency produced.
Method: A 500+ word language sample was obtained from 19 CWS. Each stuttered word was randomly paired with a fluently produced word that closely matched it in grammatical class, word length, familiarity, word and neighborhood frequency, and neighborhood density. Phonotactic probability values were obtained for the stuttered and fluent words from an online database.
Results: Phonotactic probability did not have a significant influence on the overall susceptibility of words to stuttering, but it did impact the type of stuttered disfluency produced. Specifically, single-syllable word repetitions were significantly lower in phonotactic probability than fluently produced words, part-word repetitions, and sound prolongations.
Conclusions: In general, the differential impact of phonotactic probability on the type of stuttering-like disfluency produced by young CWS provides some support for the notion that different disfluency types may originate in the disruption of different levels of processing.
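Positional-segment phonotactic probability of the kind retrieved from such online databases is typically a sum of position-specific segment frequencies (in the style of Vitevitch and Luce's calculator); the sketch below uses an invented frequency table, not values from the database used in the study:

```python
# Invented position-specific segment frequencies; real values would
# come from a corpus-based database like the one used in the study.
POSITIONAL_FREQ = {
    (0, "k"): 0.040, (1, "ae"): 0.030, (2, "t"): 0.035,
    (0, "d"): 0.025, (1, "ao"): 0.010, (2, "g"): 0.015,
}

def phonotactic_probability(segments):
    """Sum of position-specific segment frequencies for a word."""
    return sum(POSITIONAL_FREQ.get((i, s), 0.0)
               for i, s in enumerate(segments))

print(round(phonotactic_probability(["k", "ae", "t"]), 3))  # 0.105
print(round(phonotactic_probability(["d", "ao", "g"]), 3))  # 0.05
```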