We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.
The data from the development of the English and Dutch speech sound systems show many similar tendencies. Vowels are mastered by the age of three, most consonants by the age of four, and most consonant clusters between the ages of 5 and 6–8. There may thus be a universal trend in speech sound development, as there is in language development.
CONCLUSION: the articulation accuracy of the children in the control group was, overall, higher than that of the group with phonological disorder, even when only the /l/-containing words that the latter group produced correctly were considered. The analysis of other acoustic parameters, as well as their application to other sounds of the Portuguese language, can help clinicians make a more precise evaluation and, consequently, improve their therapeutic work.
Many of the substitutions presented in the speech of children undergoing typical and deviant acquisition are in fact covert contrasts. Moreover, the acoustic analyses allowed the detection of differences in the fine phonetic detail of the children's speech production.
CONCLUSION: prior basic knowledge of English did not enhance general learning (improvement in pronunciation) of the second language; however, it did improve temporal processing ability in the test used.
Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to nonspeech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. Hum Brain Mapp, 2010. © 2010 Wiley-Liss, Inc. From Human Brain Mapping: http://onlinelibrary.wiley.com/doi/10.1002/hbm.21139/abstract
In general, there was no difference for /s/ between syllable onset and coda. On the other hand, regarding the , there was a difference between the two positions within the syllable, and the children in both groups were more accurate in syllable onset.
The aim of the present study was to analyze the role of linguistic variables in the occurrence of substitution processes in the speech of subjects with verbal dyspraxia (VD). To this end, a phonological analysis was carried out on the speech of seven subjects aged 2:6 (years:months) to 4:2 with a diagnostic hypothesis of VD. The occurrences of usual and idiosyncratic substitution processes, assimilations, and articulatory variability were statistically analyzed using the computational package VARBRUL. The variable word length was statistically significant for the occurrence of assimilations and unusual substitutions, with trisyllabic and polysyllabic words favoring these processes. Stress was statistically significant for the occurrence of articulatory variability and usual substitutions, which were more likely in tonic and post-tonic syllables (syllables within the metrical foot of the accent), respectively. Class of sounds was significant for the use of usual substitutions, which occurred when the target segments were liquids and fricatives. Finally, syllable structure was statistically significant for idiosyncratic substitutions, with final coda and simple medial onset being the positions most susceptible to a substitution process. Overall, the data of this study suggest that substitutions tend to occur in words of more than two syllables, in liquid and fricative targets, within the metrical foot of the accent (in post-tonic and tonic syllables), and in simple medial onset and final coda positions.
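A VARBRUL-style variable-rule analysis is, at its core, a logistic regression of a binary outcome (process applied or not) on categorical linguistic factors. The sketch below uses entirely hypothetical data and effect sizes to illustrate the idea: two binary predictors standing in for word length (polysyllabic or not) and stress (tonic or not), fitted by plain gradient descent rather than VARBRUL's own estimation routine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tokens: columns are [is_polysyllabic, is_tonic] (0/1 each).
n = 400
X = rng.integers(0, 2, size=(n, 2)).astype(float)

# Assumed "true" log-odds effects used only to generate synthetic outcomes:
# polysyllabic words and tonic syllables both favor the process.
logits_true = -1.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits_true))).astype(float)

# Logistic regression by gradient descent (a variable rule is a logistic model).
Xb = np.hstack([np.ones((n, 1)), X])     # prepend an intercept column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))    # predicted application probability
    w -= 0.1 * Xb.T @ (p - y) / n        # gradient of the log-likelihood

# w[1] and w[2] estimate the log-odds boost from word length and stress.
```

With data generated this way, both fitted coefficients come out positive, mirroring how VARBRUL factor weights above 0.5 mark a favoring environment.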
Problems in speech sound production in young children. An inventory study of the opinions of speech therapists
The speech therapists taking part in this study have a good view of the speech sound development of young children. However, due to their concern about communication, social–emotional development, and reading and writing abilities later on, they prefer to identify and treat articulation problems at an early age. More detailed research into the variations in speech sound development, in relation to language development, is needed in order to arrive at effective normative data.
CONCLUSIONS: The results showed the need to review both the language assessment data established by previous studies and the evaluations currently applied.
Laughing is examined auditorily and acoustically, on the basis of exemplary speech data from spontaneous German dialogues, as pulmonic airstream modulation for communicative functions, paying attention to fine phonetic detail in interactional context. These phonetic case descriptions of laughing phenomena in speaker interaction in a small corpus aim to create an awareness of the phonetic and functional parameters that need to be considered in the future acquisition, acoustic analysis, and statistical evaluation of large spontaneous databases.
Children who can read and have good phonetic skills – the ability to recognize the individual sounds within words – may still be poor spellers. In a paper published in the May 2008 issue of Cortex, Elizabeth Eglinton and Marian Annett, at the School of Psychology, University of Leicester, UK, show that this subgroup of poor spellers is more likely to be right-handed than other poor spellers.
Background: The syndrome of deep dysphasia is characterised by an inability to repeat pseudowords and the production of semantic errors in word repetition. Several single case studies revealed that phonological decoding may be markedly impaired. Recovery of deep dysphasia has only been illustrated in detail for patient NC (Martin & Saffran, 1992). Dell, Schwartz, Martin, Saffran, and Gagnon (1997) tried to simulate NC's repetition performance in their connectionist lexical activation model, but the model did not fit his error pattern, as it assumes perfect recognition of the auditory input.
Aims: In this new single case study on recovery of deep dysphasia, we intended to collect further evidence for the assumption that impaired input processing is the crucial cause of the impairment. Moreover, we aimed to explain impairment and psycholinguistic parameter effects in the connectionist semantic-phonological model (Foygel & Dell, 2000) by adding a phonetic input level.
Methods & Procedures: JR’s performance was repeatedly assessed in the course of recovery. Errors in naming and repetition were classified according to the taxonomy of Dell et al. (1997). JR’s error patterns were simulated in the semantic-phonological model to determine the naming disorder and to predict word repetition. In addition, we established an error modality analysis to disentangle input and output impairments in repetition. Thus, the source of each error could be subclassified as belonging to either expressive or receptive components of repetition.
Outcomes & Results: Initially there was a sharp contrast between severely impaired word and pseudoword repetition and almost unimpaired reading aloud. During recovery, performance in naming and word repetition improved considerably, while repetition of pseudowords remained impossible. The evolution of real-word repetition was characterised by psycholinguistic parameter effects emerging at different points in time: concreteness before length, and length before frequency. As for NC, the connectionist model over-predicted correct responses in word repetition. There were only a few expressive repetition errors; among receptive errors, nonwords and null responses decreased significantly, while formal errors became the dominant error type in the course of recovery.
Conclusions: The development of psycholinguistic parameter effects, dissociations in performance, the computer simulations, and results from the error modality analysis, as well as changes in the error pattern, are ample evidence for a primary decoding disorder in JR. We argue that deep dysphasia can be explained by an impairment of phonetic-phonological connections in an extended version of the connectionist one-route model of repetition, with four rather than three levels of auditory word processing. The improved real-word repetition despite persisting failure on pseudowords is accounted for by an increase in both phonetic-phonological and lexical-phonological connection weights.
A reassigned, or time-corrected instantaneous frequency (TCIF), spectrogram has been developed in the work of a number of practitioners. Here we present a general description of this imaging technique and explore its manifold applications to acoustic phonetics. The TCIF spectrogram shows the locations of signal components with unrivalled precision, eliminating the blurring and smearing of components that hamper the readability of the conventional spectrogram. Formants of vowels and other resonants are shown with great accuracy, and glottal pulsations can be observed at very short time scales with a wideband analysis. A further post-processing technique is also described, by which signal components such as formants, as well as impulsive events, can be effectively isolated to the exclusion of other signal information. When the phonation process is examined this closely, a variety of evidence surfaces that supports recent developments in the theory and computational simulation of aeroacoustic phenomena in speech. Narrowband analysis is also demonstrated to permit pitch tracking with relative ease.
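The frequency-correction step at the core of reassignment can be sketched in a few lines. The sketch below is a minimal illustration, assuming a periodic Hann analysis window and the standard Auger and Flandrin correction (bin frequency minus the imaginary part of the ratio between the derivative-window STFT and the plain STFT); the function name and frame choices are illustrative, not from the paper.

```python
import numpy as np

def reassigned_frequencies(x, fs, n_fft=512):
    """Per-bin reassigned (instantaneous) frequencies for one analysis frame.

    Returns (freqs_hz, magnitudes) for the first n_fft samples of x, using
    two STFTs: one with a Hann window h and one with its derivative dh/dm.
    """
    m = np.arange(n_fft)
    h = 0.5 - 0.5 * np.cos(2 * np.pi * m / n_fft)          # periodic Hann window
    dh = (np.pi / n_fft) * np.sin(2 * np.pi * m / n_fft)   # analytic derivative dh/dm

    frame = x[:n_fft]
    X_h = np.fft.rfft(frame * h)
    X_dh = np.fft.rfft(frame * dh)

    k = np.arange(len(X_h))
    omega = 2 * np.pi * k / n_fft                          # bin centre, rad/sample
    # Frequency reassignment: omega_hat = omega - Im(X_dh / X_h)
    omega_hat = omega - np.imag(X_dh / (X_h + 1e-12))
    return omega_hat * fs / (2 * np.pi), np.abs(X_h)

fs = 8000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 440.0 * t)                          # pure 440 Hz tone
freqs, mags = reassigned_frequencies(x, fs)
peak = int(np.argmax(mags))
```

For this tone, the reassigned frequency at the magnitude peak lands much closer to 440 Hz than the roughly 15.6 Hz bin spacing of the plain 512-point FFT, which is the sharpening effect the abstract describes.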