Blog Archives

Eliciting Dyslexic Symptoms in Proficient Readers by Simulating Deficits in Grapheme-to-Phoneme Conversion and Visuo-Magnocellular Processing

Among the cognitive causes of dyslexia, phonological and magnocellular deficits have attracted a substantial amount of research. Their role and their exact impact on reading ability are still a matter of debate, partly also because large samples of dyslexics are hard to recruit. Here, we report a new technique to simulate dyslexic symptoms in normal readers in two ways. Whereas difficulties in grapheme-to-phoneme conversion were elicited by manipulating the identifiability of written letters, visuo-magnocellular processing deficits were generated by presenting letters moving dynamically on the screen. Both factors were embedded into a lexical word–pseudoword decision task with proficient German readers. Although both experimental variations systematically increased lexical decision times, they did not interact. Subjects successfully performed word–pseudoword distinctions at all levels of simulation, with consistently longer reaction times for pseudowords than for words. Interestingly, detecting a pseudoword was more difficult in the grapheme-to-phoneme conversion simulation, as indicated by a significant interaction of word type and letter shape. These behavioural effects are consistent with those observed in ‘real’ dyslexics in the literature. The paradigm is thus a potential means of generating novel hypotheses about dyslexia, which can easily be tested with normal readers before screening and recruiting real dyslexics. Copyright © 2011 John Wiley & Sons, Ltd.

from Dyslexia

Auditory representations and phonological illusions: A linguist’s perspective on the neuropsychological bases of speech perception

This paper argues that speech perception includes grammatical—in particular phonological—computations implemented by an analysis-by-synthesis component (Halle & Stevens, 1962) which analyzes linguistic material by synthesizing it anew. Analysis-by-synthesis, however, is not always required in perception but only when the listener wants to be certain that the words or morphemes identified in the input signal correspond to those intended by the speaker who produced the signal (= parity requirements, see [Liberman, 1996] and [Liberman and Whalen, 2000]). As we will see, in some situations analysis-by-synthesis may generate ‘phonological’ illusions. A central assumption is that the representations of words or morphemes in perception involve distinctive features and are formally structured into syllables. Two perceptual modes are needed: phonetic and phonemic perception. In phonemic perception only contrastive aspects of sounds, i.e., the aspects of sounds associated with meaning differences, are searched for. In phonetic perception both contrastive and noncontrastive aspects of sounds are identified. The phenomenon of phonological ‘deafening’ will be shown to follow from phonemic perception.

from the Journal of Neurolinguistics

Evidence for right hemisphere phonology in a backward masking task

The extent to which orthographic and phonological processes are available during the initial moments of word recognition within each hemisphere is underspecified, particularly for the right hemisphere. Few studies have investigated whether each hemisphere uses orthography and phonology under constraints that restrict the viewing time of words and reduce overt phonological demands. The current study used backward masking in the divided visual field paradigm to explore hemispheric differences in the availability of orthographic and phonological word recognition processes. SOAs of 20 ms and 60 ms were used to track the time course of how these processes develop during pre-lexical moments of word recognition. Nonword masks varied in similarity to the target words such that there were four types: orthographically and phonologically similar, orthographically but not phonologically similar, phonologically but not orthographically similar, and unrelated. The results showed that the left hemisphere has access to both orthography and phonology early in the word recognition process. With more time to process the stimulus, the left hemisphere is able to use phonology, which benefits word recognition to a larger extent than orthography. The right hemisphere also demonstrates access to both orthography and phonology in the initial moments of word recognition; however, orthographic similarity improves word recognition to a greater extent than phonological similarity.

from Brain and Language

Functional MRI evidence for modulation of cerebral activity by grapheme-to-phoneme conversion in French, and by the variable of gender

This fMRI study aims to assess the effect of two variables on the cerebral substrate of phonological processing during visual phoneme detection: (a) the difficulty level (type) of grapheme-to-phoneme conversion (GPC, letter–sound mapping), with two modalities, simple (S) and complex (C); and (b) the gender of participants, females (F) vs. males (M). The behavioral results showed that simple items were processed more accurately than complex ones. At the cerebral level, phoneme detection activated the left-hemisphere phonological network, and several regions of this network were modulated by the GPC type. Specifically, the activity of the superior posterior temporal gyrus was significantly higher for simple grapheme detection, suggesting automatic activation of phonological representations; the activity of the inferior temporal gyrus was significantly higher for complex grapheme detection, suggesting greater demands on the integrative processes for resolving competitive and inhibitory processes induced by the visual and phonological properties of the stimuli. With respect to the gender variable, we obtained a significant interaction between GPC type and gender: accuracy for simple graphemes was higher in females, suggesting that female participants were more proficient than males at detecting simple items. This effect suggests easier and more rapid activation of phonological codes, probably based on a specific visual strategy different from that of males. This is supported by the additional activation of the lingual gyrus in females for processing simple graphemes, although the exact explanation of this effect is not yet clear and requires supplementary experimentation and evidence. Overall, our results indicate that the cognitive mechanisms and cerebral correlates of phonological processing may depend on intrinsic and extrinsic variables, such as GPC and gender.

from the Journal of Neurolinguistics

Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology

The accurate perception of metrical structure may be critical for phonological development and consequently for the development of literacy. Difficulties in metrical processing are associated with basic auditory rise time processing difficulties, suggesting a primary sensory impairment in developmental dyslexia in tracking the lower-frequency modulations in the speech envelope.

from Cortex

Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology

Rhythm organises musical events into patterns and forms, and rhythm perception in music is usually studied by using metrical tasks. Metrical structure also plays an organisational function in the phonology of language, via speech prosody, and there is evidence for rhythmic perceptual difficulties in developmental dyslexia. Here we investigate the hypothesis that the accurate perception of musical metrical structure is related to basic auditory perception of rise time, and also to phonological and literacy development in children.

from Cortex

Acoustic Evidence for Positional and Complexity Effects on Children’s Production of Plural –s

Conclusions: These findings extend positional effects on morpheme production to plural –s. An effect of coda complexity was not observed for plural but was observed for 3rd person singular, which raises the possibility that the morphological representation proper influences the degree to which phonological factors affect morpheme production.

from the Journal of Speech, Language, and Hearing Research

Evidence-Based Practice for Children With Speech Sound Disorders: Part 1: Narrative Review

Conclusion: Collaborative research reflecting higher levels of evidence using rigorous experimental designs is needed to compare the relative benefits of different intervention approaches.

from Language, Speech and Hearing Services in Schools

Evidence-Based Practice for Children With Speech Sound Disorders: Part 2: Application to Clinical Practice

Conclusion: SLPs need to use their clinical expertise to integrate research findings with the constraints and complexities of everyday clinical practice and client factors, values, and preferences in their management of SSDs in children.

from Language, Speech and Hearing Services in Schools

Neural correlates of rhyming vs. lexical and semantic fluency

Rhyming words, as in songs or poems, is a universal feature of human language across all ages. In the present fMRI study, a novel overt rhyming task was applied to determine the neural correlates of rhyme production.

Fifteen right-handed healthy male volunteers participated in this verbal fluency study. Participants were instructed to overtly articulate as many words as possible either for a given initial letter (lexical verbal fluency, LVF) or for a semantic category (semantic verbal fluency, SVF). During the rhyming verbal fluency task (RVF), participants had to generate words that rhymed with pseudoword stimuli. Online overt verbal responses were audiotaped in order to correct the imaging results for the number of generated words.

Fewer words were generated in the rhyming condition than in either the lexical or the semantic condition. On a neural level, all language tasks activated a language network encompassing the left inferior frontal gyrus, the middle and superior temporal gyri, as well as the contralateral right cerebellum. Rhyming verbal fluency, compared to both lexical and semantic verbal fluency, demonstrated significantly stronger activation of the left inferior parietal region.

Generating novel rhyme words seems to be mainly mediated by the left inferior parietal lobe, a region previously found to be associated with meta-phonological as well as sub-lexical linguistic processes.

from Brain Research

The privileged status of locality in consonant harmony

While the vast majority of linguistic processes apply locally, consonant harmony appears to be an exception. In this phonological process, consonants share the same value of a phonological feature, such as secondary place of articulation. In sibilant harmony, [s] and [ʃ] (‘sh’) alternate such that if a word contains the sound [ʃ], all [s] sounds become [ʃ]. This can apply locally as a first-order pattern or non-locally as a second-order pattern. In the first-order case, no consonants intervene between the two sibilants (e.g., [pisasu], [piʃaʃu]). In the second-order case, a consonant may intervene (e.g., [sipasu], [ʃipaʃu]). The fact that there are languages that allow second-order non-local agreement of consonant features has led some to question whether locality constraints apply to consonant harmony. This paper presents the results from two artificial grammar learning experiments that demonstrate the privileged role of locality constraints, even in patterns that allow second-order non-local interactions. In Experiment 1, we show that learners do not extend first-order non-local relationships in consonant harmony to second-order non-local relationships. In Experiment 2, we show that learners will extend a consonant harmony pattern with second-order long-distance relationships to a consonant harmony pattern with first-order long-distance relationships. Because second-order non-local application implies first-order non-local application, but first-order non-local application does not imply second-order non-local application, we establish that locality constraints are privileged even in consonant harmony.

from the Journal of Memory and Language
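The first-order/second-order contrast described in the abstract above can be sketched as a toy rule (this is an illustration, not the paper's method; the function name, the use of "S" for [ʃ] and "s" for [s], and the simple five-vowel inventory are all assumptions made here for clarity):

```python
VOWELS = set("aeiou")

def harmonize(word, second_order=True):
    """Toy sibilant harmony: if the word contains 'S' ([sh]),
    each 's' assimilates to 'S' when the locality condition holds.
    First-order (second_order=False): only vowels may intervene
    between the two sibilants. Second-order (second_order=True):
    one consonant may also intervene."""
    chars = list(word)
    sh_positions = [i for i, c in enumerate(chars) if c == "S"]
    limit = 1 if second_order else 0  # max intervening consonants
    for i, c in enumerate(chars):
        if c != "s":
            continue
        for j in sh_positions:
            between = chars[min(i, j) + 1 : max(i, j)]
            consonants = sum(1 for x in between if x not in VOWELS)
            if consonants <= limit:
                chars[i] = "S"  # assimilate [s] -> [sh]
                break
    return "".join(chars)

# First-order grammar: harmony crosses a vowel but not a consonant.
print(harmonize("pisaSu", second_order=False))  # → piSaSu
print(harmonize("sipaSu", second_order=False))  # → sipaSu (blocked by 'p')
# Second-order grammar: one intervening consonant is tolerated.
print(harmonize("sipaSu", second_order=True))   # → SipaSu
```

The asymmetry the paper exploits falls out of this sketch: a grammar with `limit = 1` also covers every `limit = 0` case, but not vice versa, which is why extending second-order patterns to first-order cases is the "safe" generalization for learners.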

Phonetic and phonemic acquisition: Normative data in English and Dutch speech sound development

The data from the development of the English and Dutch speech sound systems show many similar tendencies. Vowels are mastered by the age of three, most consonants by the age of four, and most consonant clusters between 5 and 6–8 years of age. Perhaps there is a universal trend in speech sound development, as there is in language development.

from the International Journal of Pediatric Otorhinolaryngology

Phonology and vocal behavior in toddlers with autism spectrum disorders

The purpose of this study is to examine the phonological and other vocal productions of children, 18–36 months, with autism spectrum disorder (ASD) and to compare these productions to those of age-matched and language-matched controls. Speech samples were obtained from 30 toddlers with ASD, 11 age-matched toddlers and 23 language-matched toddlers during either parent–child or clinician–child play sessions. Samples were coded for a variety of speech-like and nonspeech vocalization productions. Toddlers with ASD produced speech-like vocalizations similar to those of language-matched peers, but produced significantly more atypical nonspeech vocalizations when compared to both control groups. Toddlers with ASD show speech-like sound production that is linked to their language level, in a manner similar to that seen in typical development. The main area of difference in vocal development in this population is in the production of atypical vocalizations. Findings suggest that toddlers with ASDs do not tune into the language model of their environment. Failure to attend to the ambient language environment negatively impacts the ability to acquire spoken language.

from Autism Research

Phonological learning in semantic dementia

Patients with semantic dementia (SD) have anterior temporal lobe (ATL) atrophy that gives rise to a highly selective deterioration of semantic knowledge. Despite pronounced anomia and poor comprehension of words and pictures, SD patients have well-formed, fluent speech and normal digit span. Given the intimate connection between phonological STM and word learning revealed by both neuropsychological and developmental studies, SD patients might be expected to show good acquisition of new phonological forms, even though their ability to map these onto meanings is impaired. Contrary to these predictions, a limited amount of previous research has found poor learning of new phonological forms in SD. In a series of experiments, we examined whether the SD patient GE could learn novel phonological sequences and, if so, under which circumstances. GE showed normal benefits of phonological knowledge in STM (i.e., normal phonotactic frequency and phonological similarity effects) but reduced support from semantic memory (i.e., poor immediate serial recall for semantically degraded words, characterised by frequent item errors). Next, we demonstrated normal learning of serial order information for repeated lists of single-digit number words using the Hebb paradigm: these items were well understood, allowing them to be repeated without frequent item errors. In contrast, patient GE showed little learning of nonsense syllable sequences using the same Hebb paradigm. Detailed analysis revealed that both GE and the controls showed a tendency to learn their own errors as opposed to the target items. Finally, we showed normal learning of phonological sequences for GE when he was prevented from repeating his errors. These findings confirm that the ATL atrophy in SD disrupts phonological processing for semantically degraded words but leaves the phonological architecture intact. Consequently, when item errors are minimised, phonological STM can support the acquisition of new phoneme sequences in patients with SD.

from Neuropsychologia

The potential contribution of communication breakdown and repair in phonological intervention

This paper explores the potential contribution of communication breakdown and repair sequences in phonological intervention. The paper is divided into two parts. In part one, we examine the inclusion of communication breakdown and repair sequences across three current approaches to phonological intervention. The review of this literature highlights a need for researchers to better document the teaching dialogue used in therapy. In part two of this paper, we consider how a unique type of clarification request containing an incorrect production could be applied in an intervention context. Reasons why such a unique counterintuitive clarification request might help children’s speech are considered. The need to better understand the effect of different types of clarification requests on children’s speech production skills during phonological intervention is discussed.

from the Canadian Journal of Speech-Language Pathology and Audiology