Blog Archives

Eliciting Dyslexic Symptoms in Proficient Readers by Simulating Deficits in Grapheme-to-Phoneme Conversion and Visuo-Magnocellular Processing

Among the cognitive causes of dyslexia, phonological and magnocellular deficits have attracted a substantial amount of research. Their role and their exact impact on reading ability are still a matter of debate, partly because large samples of dyslexics are hard to recruit. Here, we report a new technique to simulate dyslexic symptoms in normal readers in two ways. Whereas difficulties in grapheme-to-phoneme conversion were elicited by manipulating the identifiability of written letters, visual-magnocellular processing deficits were generated by presenting letters that moved dynamically on the screen. Both factors were embedded in a lexical word–pseudoword decision task with proficient German readers. Although both experimental variations systematically increased lexical decision times, they did not interact. Subjects successfully performed word–pseudoword distinctions at all levels of simulation, with consistently longer reaction times for pseudowords than for words. Interestingly, detecting a pseudoword was more difficult in the grapheme-to-phoneme conversion simulation, as indicated by a significant interaction of word type and letter shape. These behavioural effects are consistent with those observed in ‘real’ dyslexics in the literature. The paradigm is thus a potential means of generating novel hypotheses about dyslexia, which can easily be tested with normal readers before screening and recruiting real dyslexics. Copyright © 2011 John Wiley & Sons, Ltd.

from Dyslexia
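
The word type × letter shape interaction reported in this abstract is a standard two-factor repeated-measures effect on reaction times. As a purely illustrative sketch, and not the authors' analysis, the snippet below simulates invented reaction times for a 2 × 2 within-subject design and tests the main effects and the interaction with a repeated-measures ANOVA; the factor names, subject count and RT values are all hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical 2 x 2 within-subject design loosely modelled on the abstract above
# (word type x letter manipulation). All numbers below are invented for illustration.
rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(24), 4)                       # 24 subjects, 4 cells each
word_type = np.tile(["word", "pseudoword"], 48)
letter_shape = np.tile(np.repeat(["intact", "degraded"], 2), 24)

# Simulated mean reaction times (ms): pseudowords slower, degraded letters slower,
# plus an extra cost when both factors combine (an interaction).
rt = (600
      + 60 * (word_type == "pseudoword")
      + 80 * (letter_shape == "degraded")
      + 40 * ((word_type == "pseudoword") & (letter_shape == "degraded"))
      + rng.normal(0, 25, size=96))

df = pd.DataFrame({"subject": subjects, "rt": rt,
                   "word_type": word_type, "letter_shape": letter_shape})

# Repeated-measures ANOVA: main effects and the word_type x letter_shape interaction.
print(AnovaRM(df, depvar="rt", subject="subject",
              within=["word_type", "letter_shape"]).fit())
```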


Modulation of the motor system during visual and auditory language processing

Studies of embodied cognition have demonstrated the engagement of the motor system when people process action-related words and concepts. However, research using transcranial magnetic stimulation (TMS) to examine linguistic modulation in primary motor cortex has produced inconsistent results: some studies report that action words produce an increase in corticospinal excitability; others, a decrease. Given the differences in methodology and modality, we re-examined this issue, comparing conditions in which participants either read or listened to the same set of action words. In separate blocks of trials, participants were presented with lists of words in the visual and auditory modality, and a TMS pulse was applied over left motor cortex either 150 or 300 ms after word onset. The motor evoked potentials (MEPs) elicited were larger following the presentation of action words than control words. However, this effect was only observed when the words were presented visually; no changes in MEPs were found when the words were presented auditorily. A review of the TMS literature on action word processing reveals a similar modality effect on corticospinal excitability. We discuss different hypotheses that might account for this differential modulation of action semantics by vision and audition.

from Experimental Brain Research

Clinical neurophysiology of visual and auditory processing in dyslexia: A review

Neurophysiological studies on children and adults with dyslexia provide a deeper understanding of how visual and auditory processing in dyslexia might relate to reading deficits. The goal of this review is to provide an overview of research findings from the last two decades on motion-related and contrast-sensitivity visual evoked potentials and on auditory event-related potentials to basic tone and speech sound processing in dyslexia. These results are particularly relevant for three important theories about causality in dyslexia: the magnocellular deficit hypothesis, the temporal processing deficit hypothesis and the phonological deficit hypothesis. Support for magnocellular deficits in dyslexia comes primarily from evidence of altered visual evoked potentials to rapidly moving stimuli presented at low contrasts. ERP findings consistently reveal altered neurophysiological processing of speech stimuli in individuals with dyslexia, but evidence for deficits in processing certain general acoustic information relevant for speech perception, such as frequency changes and temporal patterns, is also apparent.

from Clinical Neurophysiology

Superior voice recognition in a patient with acquired prosopagnosia and object agnosia

Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is, however, perfectly able to recognize people’s voices, car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared with the controls, revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia, it is advantageous to develop a superior use of voices for person identity recognition in everyday life.

from Neuropsychologia

Cerebral Lateralization of Face-Selective and Body-Selective Visual Areas Depends on Handedness

The left-hemisphere dominance for language is a core example of the functional specialization of the cerebral hemispheres. The degree of left-hemisphere dominance for language depends on hand preference: Whereas the majority of right-handers show left-hemispheric language lateralization, this proportion is reduced in left-handers. Here, we assessed whether handedness analogously influences lateralization in the visual system. Using functional magnetic resonance imaging, we localized four extrastriate areas of varying functional specialization in left- and right-handers, namely the fusiform face area (FFA), extrastriate body area (EBA), fusiform body area (FBA), and human motion area (human middle temporal [hMT]). We found that lateralization of FFA and EBA depends on handedness: These areas were right lateralized in right-handers but not in left-handers. A similar tendency was observed in FBA but not in hMT. We conclude that the relationship between handedness and hemispheric lateralization extends to functionally lateralized parts of visual cortex, indicating a general coupling between cerebral lateralization and handedness. Our findings indicate that hemispheric specialization is not fixed but can vary considerably across individuals, even in areas engaged relatively early in the visual system.

from Cerebral Cortex

Dual sensory impairment (DSI) in traumatic brain injury (TBI) – An emerging interdisciplinary challenge

The present review characterizes dual sensory impairment (DSI) as co-existing auditory and visual deficits in TBI that can be peripherally or centrally based. Current research investigating DSI in the military population is considered, along with applicable research that focuses on unimodal deficits. Due to the heterogeneous nature of TBI lesions, an important challenge the clinician faces is ruling out the influence of multiple sensory deficits and/or cognitive processes on the diagnosis and rehabilitation of the patient. Treatment options for DSI involve remediation of the sensory deficits via existing sensory aids or training exercises.

from NeuroRehabilitation

Electrophysiological (EEG, sEEG, MEG) evidence for multiple audiovisual interactions in the human auditory cortex

In this review, we examine the contribution of human electrophysiological studies (EEG, sEEG and MEG) to the study of visual influence on processing in the auditory cortex. Focusing mainly on studies performed by our group, we critically review the evidence showing (1) that visual information can both activate and modulate the activity of the auditory cortex at relatively early stages (mainly at the processing stage of the auditory N1 wave) in response to both speech and non-speech sounds and (2) that visual information can be included in the representation of both speech and non-speech sounds in auditory sensory memory. We describe an important conceptual tool in the study of audiovisual interaction (the additive model) and show the importance of considering the spatial distribution of electrophysiological data when interpreting EEG results. Review of these studies points to the probable role of sensory, attentional and task-related factors in modulating audiovisual interactions in the auditory cortex.

from Language and Cognitive Processes
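
The additive model mentioned in this abstract is commonly formalized as a comparison between the response to the bimodal stimulus and the sum of the unimodal responses: any reliable deviation of AV - (A + V) from zero is taken as evidence of an audiovisual interaction at that electrode and latency. The sketch below illustrates that computation on invented ERP arrays; the array names, shapes and values are assumptions for demonstration, not data from the reviewed studies.

```python
import numpy as np

def additive_model_difference(erp_av, erp_a, erp_v):
    """Interaction term AV - (A + V) for the additive model.

    Each input is a trial-averaged ERP array of shape
    (n_channels, n_timepoints), e.g. amplitudes in microvolts.
    Under the null hypothesis of no audiovisual interaction,
    the result should not differ reliably from zero.
    """
    return erp_av - (erp_a + erp_v)

# Invented example data: 64 channels, 500 time points.
rng = np.random.default_rng(0)
erp_a = rng.normal(size=(64, 500))    # auditory-only ERP
erp_v = rng.normal(size=(64, 500))    # visual-only ERP
erp_av = erp_a + erp_v + 0.5 * rng.normal(size=(64, 500))  # bimodal ERP with a small deviation

interaction = additive_model_difference(erp_av, erp_a, erp_v)
print(interaction.shape)  # (64, 500): one difference value per channel and time point
```

As the abstract notes, interpreting a non-zero difference also requires considering the spatial distribution of the effect across electrodes, not just its amplitude at a single site.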

The Effects of Stimulus Modality and Frequency of Stimulus Presentation on Cross-modal Distraction

Selective attention produces enhanced activity (attention-related modulations [ARMs]) in cortical regions corresponding to the attended modality and suppressed activity in cortical regions corresponding to the ignored modality. However, effects of behavioral context (e.g., temporal vs. spatial tasks) and basic stimulus properties (i.e., stimulus frequency) on ARMs are not fully understood. The current study used functional magnetic resonance imaging to investigate selectively attending and responding to either a visual or auditory metronome in the presence of asynchronous cross-modal distractors of 3 different frequencies (0.5, 1, and 2 Hz). Attending to auditory information while ignoring visual distractors was generally more efficient (i.e., required coordination of a smaller network) and less effortful (i.e., decreased interference and presence of ARMs) than attending to visual information while ignoring auditory distractors. However, these effects were modulated by stimulus frequency, as attempting to ignore auditory information resulted in the obligatory recruitment of auditory cortical areas during infrequent (0.5 Hz) stimulation. Robust ARMs were observed in both visual and auditory cortical areas at higher frequencies (2 Hz), indicating that participants effectively allocated attention to more rapidly presented targets. In summary, results provide neuroanatomical correlates for the dominance of the auditory modality in behavioral contexts that are highly dependent on temporal processing.

from Cerebral Cortex
