Monthly Archives: May 2011

Brain activation for language dual-tasking: Listening to two people speak at the same time and a change in network timing

This fMRI study investigated brain activation in participants who were able to listen to and successfully comprehend two people speaking at the same time (dual-tasking), identifying the brain mechanisms associated with high-level concurrent dual-tasking as compared with comprehending a single message. Results showed an increase in functional connectivity among areas of the language network in the dual task. This increased synchronization of brain activation was brought about primarily by a change in the timing of left inferior frontal gyrus (LIFG) activation relative to posterior temporal activation, bringing LIFG activation into closer correspondence with temporal activation. The change in LIFG timing was greater in participants with lower working memory capacity, and recruitment of additional activation in the dual task occurred only in areas adjacent to the language network activated in the single task. The shift in LIFG timing may be a brain marker of how the brain adapts to high-level dual-tasking.

from Human Brain Mapping


The vestibular evoked-potential profile of Ménière’s disease

Predominance of abnormalities in ocular (oVEMP) and cervical (cVEMP) vestibular evoked myogenic potential responses to air-conducted (AC) sound is characteristic of Ménière’s disease (MD) and indicative of saccular involvement.

This pattern of VEMP abnormalities may enable separation of Ménière’s disease from other peripheral vestibulopathies.

from Clinical Neurophysiology

Evidence of deficient central speech processing in children with specific language impairment: The T-complex

These results suggest that poor auditory processing, as measured by the T-complex, is a marker for language impairment (LI) and that multiple deficits serve to mark LI.

The T-complex measures, indexing secondary auditory cortex, reflect an important aspect of processing in speech and language development.

from Clinical Neurophysiology

Resting-state EEG in schizophrenia: Auditory verbal hallucinations are related to shortening of specific microstates

Class D microstates resemble topographies associated with error monitoring. Their premature termination may facilitate the misattribution of self-generated inner speech to external sources during hallucinations.

These results suggest that microstate D represents a biological state marker for hallucinatory experiences.

from Clinical Neurophysiology

Dysfunction of bulbar central pattern generator in ALS patients with dysphagia during sequential deglutition

Corticobulbar control of swallowing is insufficient in ALS, and the swallowing CPG cannot function well enough to produce segmental muscle activation and sequential swallowing. CPG dysfunction can result in irregular and arrhythmic sequential swallowing in ALS patients with combined bulbar and pseudobulbar involvement.

The arrhythmic SWS pattern can be considered a form of CPG dysfunction in ALS patients with dysphagia.

from Clinical Neurophysiology

The crucial role of thiamine in the development of syntax and lexical retrieval: a study of infantile thiamine deficiency

This study explored the effect of thiamine deficiency during early infancy on the development of syntax and lexical retrieval. We tested the syntactic comprehension and production, lexical retrieval abilities, and conceptual abilities of 59 children aged 5–7 years who had been fed during their first year of life with a thiamine-deficient milk substitute, and compared them with 35 age-matched control children who were fed other milk sources. Experiment 1 tested the comprehension of relative clauses using a sentence–picture-matching task. Experiment 2 tested the production of relative clauses using a preference elicitation task. Experiment 3 tested the repetition of syntactic structures with various types of syntactic movement and embedding. Experiment 4 tested picture naming, and Experiment 5 tested lexical substitutions in a sentence repetition task. Experiments 6 and 7 tested the children’s conceptual abilities using a picture association task and a picture absurdity description task. The results indicated a very high rate of syntactic and lexical retrieval deficits in the children exposed to thiamine deficiency in early infancy: 57 of the 59 thiamine-deficient children examined (97%) had language impairment, compared with three of the 35 controls (9%). Importantly, in contrast to their impaired language abilities, the conceptual abilities of most of the children were intact (only six children, 10%, were conceptually impaired). These findings indicate that thiamine deficiency in infancy causes severe and long-lasting language disorders and that nutrition may be one cause of language impairment.
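As a quick sanity check, the impairment rates quoted above can be reproduced in a few lines of Python. The counts are those reported in the abstract; the script is purely illustrative and is not the authors’ own analysis:

```python
# Group counts as quoted in the abstract
deficient_impaired, deficient_total = 57, 59
control_impaired, control_total = 3, 35

# Proportion of language-impaired children in each group
rate_deficient = deficient_impaired / deficient_total  # ~0.97
rate_control = control_impaired / control_total        # ~0.09

print(f"thiamine-deficient group: {rate_deficient:.0%} language-impaired")
print(f"control group: {rate_control:.0%} language-impaired")
# prints "thiamine-deficient group: 97% language-impaired"
# and    "control group: 9% language-impaired"
```

The rounded figures (97% versus 9%) match the abstract’s reported rates.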

from Brain

Early Intervention Key To Improving Literacy Skills For Deaf Children

“One more story” is a common refrain in families with young children who love to read. But children who are deaf or are hard-of-hearing often miss out on this activity because their parents may not know how to use American Sign Language (ASL) when they read to them. Early findings from a Ryerson study show deaf and hard-of-hearing children may benefit greatly when parents read to them using ASL.

from Medical News

Speech Buddies: Innovative Technology for Articulation

Speech Buddies by Articulate Technologies are a recent innovation born of SLP Gordy Rogers’ observation that while other disciplines were using technologies designed to best serve their patients, SLPs continued to scramble to gather old-skool materials like tongue depressors, peanut butter and straws to facilitate correct articulation (I hear that!). In partnership with a friend, an engineer specializing in medical technologies, Rogers developed Speech Buddies, a set of handheld devices that provide targets inside the mouth to facilitate production of the most commonly misarticulated sounds during actual speech: /s/, /r/, /sh/, /ch/ and /l/.

from Speech

Gene expression analysis in lymphoblasts derived from patients with autism spectrum disorder

Our results provide evidence that the NLGN3 and SHANK3 genes may be differentially expressed in lymphoblastoid cell lines from individuals with ASD compared with those from controls. These findings suggest that decreased mRNA expression of these genes might be involved in the pathophysiology of ASD in a substantial proportion of ASD patients.

from Molecular Autism

Hearing function and thresholds: a genome-wide association study in European isolated populations identifies new loci and pathways

These results provide new insights into the molecular basis of hearing function and may suggest new targets for the treatment and prevention of hearing impairment.

from the Journal of Medical Genetics

Physicians are not adherent to clinical practice guidelines for acute otitis media

Antibiotic prescribing for children with acute otitis media varied widely among ENT specialists, with an overall guideline adherence of 8.2%. Adherence was not correlated with patient age, gender, or unilateral versus bilateral infection, but was significantly inversely correlated with specialists’ years of experience and service volume.

from the International Journal of Pediatric Otorhinolaryngology

BOLD response to motion verbs in left posterior middle temporal gyrus during story comprehension

A primary focus within neuroimaging research on language comprehension is the distribution of semantic knowledge in the brain. Studies have shown that the left posterior middle temporal gyrus (LPMT), a region just anterior to area MT/V5, is important for the processing of complex action knowledge, and that motion verbs cause activation in LPMT. In this experiment we investigated whether this effect could be replicated in a setting resembling real-life language comprehension, i.e., passive listening to a story without any overt behavioral task. During fMRI, participants listened to a recording of the story “The Ugly Duckling”. We used a nuisance-elimination regression approach to factor out known nuisance variables: physiological noise, sound intensity, linguistic variables, and emotional content. Compared with the remaining text, clauses containing motion verbs were accompanied by robust activation of LPMT with no other significant effects, consistent with the hypothesis that this brain region is important for processing motion knowledge, even under naturalistic language comprehension conditions.

from Brain and Language

Quality trumps quantity at reducing memory errors: Implications for retrieval monitoring and mirror effects

Memories have qualitative properties (e.g., the different kinds of features or details that can be retrieved) and quantitative properties (e.g., the frequency and/or strength of retrieval). Here we investigated the relative contribution of these two properties to the retrieval monitoring process. Participants studied a list of words, and memory for these words was enhanced either by studying an associated picture or by word repetition. Subsequent memory tests required participants to selectively monitor retrieval for these different kinds of stimuli. Compared to words that were studied only once, test words associated with either pictures or repetitions were more likely to be correctly recognized, but critically, false recognition was reduced only when monitoring memory for picture recollections. Subjective judgments and speeded tests indicated that study repetition increased the number of test words that elicited recollection and familiarity (a quantitative difference), but studying pictures maximized the recollection of unique or distinctive details (a qualitative difference). These results indicate that memory quality is more critical than quantity for retrieval monitoring accuracy.

from the Journal of Memory and Language

Scalar reference, contrast and discourse: Separating effects of linguistic discourse from availability of the referent

Listeners expect that a definite noun phrase with a pre-nominal scalar adjective (e.g., the big …) will refer to an entity that is part of a set of objects contrasting on the scalar dimension, e.g., size (Sedivy, Tanenhaus, Chambers, & Carlson, 1999). Two visual world experiments demonstrate that uttering a referring expression with a scalar adjective makes all members of the relevant contrast set more salient in the discourse model, facilitating subsequent reference to other members of that contrast set. Moreover, this discourse effect is caused primarily by linguistic mention of a scalar adjective and not by the listener’s prior visual or perceptual experience. These experiments demonstrate that language processing is sensitive to which information was introduced by linguistic mention, and that the visual world paradigm can be used to tease apart the separate contributions of visual and linguistic information to reference resolution.

from the Journal of Memory and Language

Thinking-Aloud as Talking-in-Interaction: Reinterpreting How L2 Lexical Inferencing Gets Done

There is a general consensus among second-language (L2) researchers today that lexical inferencing (LIF) is among the most common techniques that L2 learners use to generate meaning for unknown words they encounter in context. Indeed, claims about the salience and pervasiveness of LIF for L2 learners rely heavily upon data obtained via concurrent think-aloud (TA) research methods. However, despite the consensus that L2 LIF involves a combination of cues, knowledge, and contextual awareness, a crucial aspect of that “context” — namely, the in situ context of TA data collection procedures themselves — is rarely, if ever, included in the analyses presented in L2 LIF research studies. I argue in this article that acknowledging this reality and incorporating aspects of this in situ context into analysis is both important and desirable, as it would contribute vital elements of research transparency and legitimacy, as well as a much-needed reflexivity about claims regarding L2 LIF that are made based on TA data.

from Language Learning