Blog Archives

Multiple constraints on semantic integration in a hierarchical structure: ERP Evidence from German

A recent ERP study on Chinese demonstrated dissociable neural responses to semantic integration processes at different levels of syntactic hierarchy (Zhou et al., 2010). However, it is unclear whether such findings are restricted to a non-case-marked language that relies heavily on word order and semantic information for the construction of sentence representations. This study aimed to further investigate, in a case-marked language, how semantic processes in a hierarchical structure take place during sentence reading. We used German sentences with the structure “subject noun + verb + article/determiner + adjective + object noun + prepositional phrase”, in which the object noun was constrained either at the lower level by the adjective or at the higher level by the verb, and manipulated the semantic congruency between the adjective and the object noun and/or between the verb and the object noun. EEG was recorded while participants read the sentences and judged their semantic acceptability. Compared with correct sentences, a biphasic pattern of an N400 effect followed by a late positivity effect was observed on the object noun for sentences with either a lower- or a higher-level mismatch or with a double mismatch. Both the N400 effect and the late positivity (P600) effect were larger for the double mismatch condition than for either of the single mismatch conditions. These findings demonstrate cross-language mechanisms for processing multiple semantic constraints at different levels of syntactic hierarchy during sentence comprehension.

from Brain Research


Shadows of music–language interaction on low frequency brain oscillatory patterns

Electrophysiological studies investigating similarities between music and language perception have relied exclusively on the signal averaging technique, which does not adequately represent oscillatory aspects of electrical brain activity that are relevant for higher cognition. The current study investigated the patterns of brain oscillations during simultaneous processing of music and language using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular or irregular chord functions were presented in sync with syntactically or semantically correct or incorrect words. Irregular chord functions (presented simultaneously with a syntactically correct word) produced an early (150–250 ms) spectral power decrease over anterior frontal regions in the theta band (5–7 Hz) and a late (350–700 ms) power increase in both the delta and the theta band (2–7 Hz) over parietal regions. Syntactically incorrect words (presented simultaneously with a regular chord) elicited a similar late power increase in the delta–theta band over parietal sites, but no early effect. Interestingly, the late effect was significantly diminished when the language-syntactic and music-syntactic irregularities occurred at the same time. Further, a semantic violation occurring simultaneously with regular chords produced a significant increase in later delta–theta power at posterior regions; this effect was marginally decreased when the identical semantic violation occurred simultaneously with a music-syntactic violation. Altogether, these results show that low-frequency oscillatory networks are engaged during the syntactic processing of both music and language and, further, that these networks may be shared.
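For readers less familiar with the methodological point the abstract makes, the sketch below contrasts plain signal averaging (the classic ERP) with a band-limited power estimate of the kind used for the delta–theta (2–7 Hz) effects described above. It is a minimal illustration on toy data with an assumed sampling rate, not the authors' analysis pipeline.

```python
# Minimal sketch: ERP averaging vs. band-limited oscillatory power.
# Toy data and parameters (sampling rate, band edges) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                  # sampling rate in Hz (assumed)
epochs = np.random.randn(40, fs)          # 40 single-trial epochs of 1 s each

# 1) Signal averaging keeps only activity that is phase-locked across trials.
erp = epochs.mean(axis=0)

# 2) A time-frequency view: band-pass 2-7 Hz, take the Hilbert envelope per
#    trial, then average power; this also captures non-phase-locked activity.
b, a = butter(4, [2 / (fs / 2), 7 / (fs / 2)], btype="band")
band_limited = filtfilt(b, a, epochs, axis=1)
delta_theta_power = (np.abs(hilbert(band_limited, axis=1)) ** 2).mean(axis=0)
```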

from Brain and Language

The effects of subjectively significant stimuli on subsequent cognitive brain activity

Significance: The results indicate an effect of subjectively significant distracters on subsequent brain activity, with an interaction between cognitive and emotional processes.

from Physiology and Behavior

Number of sense effects of Chinese disyllabic compounds in the two hemispheres

The current study manipulated the visual field and the number of senses of the first character in Chinese disyllabic compounds to investigate how the related senses (polysemy) of the constituent character were represented and processed in the two hemispheres. The ERP results of Experiment 1 revealed crossover patterns in the left hemisphere (LH) and the right hemisphere (RH). The sense facilitation in the LH favored the assumption of a single-entry representation for senses. However, the patterns in the RH allowed two possible interpretations: (1) the nature of hemispheric processing in dealing with sublexical sense ambiguity; (2) semantic activation from a separate-entry representation for senses. To distinguish these possibilities, Experiment 2 pushed participants to a deeper level of lexical processing by requiring a word-class judgment. The results revealed a sense facilitation effect in the RH. In sum, the current study supported the single-entry account for related senses and demonstrated that the two hemispheres process sublexical sense ambiguity in a complementary way.

from Brain and Language

N1, P2 and T-complex of the auditory brain event-related potentials to tones with varying rise times in adults with and without dyslexia

Dyslexia is a learning difficulty affecting the acquisition of fluent reading and spelling skills due to poor phonological processing. Underlying deficits in processing sound rise time have also been found in children and adults with dyslexia. However, the neural basis for these deficits is unknown. In the present study, event-related potentials were used to index neural processing and examine the effect of rise time manipulation on the obligatory N1, T-complex and P2 responses in English-speaking adults with and without dyslexia. The Tb wave of the T-complex showed differences between groups, with Tb amplitudes becoming less negative with increasing rise time for the participants with dyslexia only. Frontocentral N1 and P2 did not show group effects. Enhanced Tb amplitude that is modulated by rise time could indicate altered neural networks at the lateral surface of the superior temporal gyrus in adults with dyslexia.

from the International Journal of Psychophysiology

Conflict and surrender during sentence processing: An ERP study of syntax-semantics interaction

Recent ERP studies report that implausible verb-argument combinations can elicit a centro-parietal P600 effect (e.g., “The hearty meal was devouring …”; Kim & Osterhout, 2005). Such eliciting conditions do not involve outright syntactic anomaly, deviating from previous reports of P600. Kim and Osterhout (2005) attributed such P600 effects to structural reprocessing that occurs when syntactic cues fail to support a semantically attractive interpretation (‘meal’ as the Agent of ‘devouring’) and the syntactic cues are overwhelmed; the sentence is therefore perceived as syntactically ill-formed. The current study replicated such findings and also found that altering the syntactic cues in such situations of syntax-semantics conflict (e.g., “The hearty meal would devour …”) affects the conflict’s outcome. P600s were eliminated when sentences contained syntactic cues that required multiple morphosyntactic steps to “repair”. These sentences elicited a broad, left-anterior negativity at 300–600 ms (LAN). We interpret the reduction in P600 amplitude in terms of “resistance” of syntactic cues to reprocessing. We speculate that the LAN may be generated by difficulty retrieving an analysis that satisfies both syntactic and semantic cues, which results when syntactic cues are strong enough to resist opposing semantic cues. This pattern of effects is consistent with partially independent but highly interactive syntactic and semantic processing streams, which often operate collaboratively but can compete for influence over interpretation.

from Brain and Language

Words and pictures: An electrophysiological investigation of domain specific processing in native Chinese and English speakers

Comparisons of word and picture processing using event-related potentials (ERPs) are contaminated by gross physical differences between the two types of stimuli. In the present study, we tackle this problem by comparing picture processing with word processing in an alphabetic and a logographic script, which are also characterized by gross physical differences. Native Mandarin Chinese speakers viewed pictures (line drawings) and Chinese characters (Experiment 1), native English speakers viewed pictures and English words (Experiment 2), and naïve Chinese readers (native English speakers) viewed pictures and Chinese characters (Experiment 3) in a semantic categorization task. The varying pattern of differences in the ERPs elicited by pictures and words across the three experiments provided evidence for (i) script-specific processing arising between 150 and 200 ms post-stimulus onset, (ii) domain-specific but script-independent processing arising between 200 and 300 ms post-stimulus onset, and (iii) processing that depended on stimulus meaningfulness in the N400 time window. The results are interpreted in terms of differences in the way visual features are mapped onto higher-level representations for pictures and words in alphabetic and logographic writing systems.

from Neuropsychologia

P300 as a measure of processing capacity in auditory and visual domains in Specific Language Impairment

This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairment (SLI). Children with SLI and age-matched controls (ages 11;9–14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands on auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI group than for the age-matched control (CA) group. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities.
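As an aside on the task design, here is a minimal sketch of the n-back logic behind the 1-back and 2-back load conditions mentioned above; the stimulus stream and the helper function are hypothetical illustrations, not the study's task code.

```python
# Hypothetical illustration of n-back target detection (not the study's code).
def nback_targets(stimuli, n):
    """Mark each item that matches the item presented n positions earlier."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

stream = list("ABAACCA")
print(nback_targets(stream, 1))  # 1-back: respond to immediate repetitions
print(nback_targets(stream, 2))  # 2-back: respond to matches two items back
```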

from Brain Research

Expectancy modulates a Late Positive ERP in an artificial grammar task

A wide range of studies have found late positive ERP components in response to anomalies during the processing of structured sequences. In language studies, this component is named the Syntactic Positive Shift (SPS) or P600. It is characterized by an increase in potential peaking around 600 ms after the appearance of the syntactic anomaly and has a centroparietal topography. Similar late positive components have been found more recently in non-linguistic contexts. These results have led to the hypothesis that these components index the detection of anomalies in rule-governed sequences, or access to abstract rule representations, regardless of the nature of the stimuli. Additionally, there is evidence that the SPS/P600 is sensitive to probability manipulations, which affect the subjects’ expectancy of the stimuli. Our aim in the present work was to test the hypothesis that the late positive component is modulated by the subjects’ expectancy of the stimuli. To do so, we employed an artificial grammar learning task and controlled the frequency of presentation of different kinds of sequences during training. Results showed that certain sequence types elicited a late positive component that was modulated by different factors in two distinct time windows. In an earlier window, the component was larger for sequences that had a low or zero probability of occurrence during training, while in a later window, the component was larger for incorrect than for correct sequences. Furthermore, this late-window effect was absent in those subjects whose performance was not significantly above chance. Two possible explanations for this effect are suggested.
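To make the expectancy manipulation more concrete, below is a hedged sketch of how training sequences can be generated from a small Reber-style finite-state grammar; the grammar, symbols, and sampling scheme are invented for illustration and are not taken from the study.

```python
# Illustrative finite-state grammar and sequence generator (invented example).
import random

# state -> list of (emitted symbol, next state); None marks the terminal state
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("T", 1), ("V", 3)],
    2: [("X", 2), ("R", 3)],
    3: [("S", None)],
}

def generate_sequence(max_len=10):
    """Random walk through the grammar; max_len is a safeguard against long loops."""
    state, symbols = 0, []
    while state is not None and len(symbols) < max_len:
        symbol, state = random.choice(GRAMMAR[state])
        symbols.append(symbol)
    return "".join(symbols)

# Biasing how often each sequence type appears during training (the expectancy
# manipulation) amounts to sampling some grammatical strings far more often
# than others before testing correct vs. incorrect sequences.
training_set = [generate_sequence() for _ in range(200)]
```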

from Brain Research

Effect of musical training on the development of speech and music perception abilities as indicated by event-related brain potentials

The present study examined the effect of musical training on three different types of perceptual abilities: phoneme perception, perception of word stress, and perception of musical sequences. Children underwent a year-long complex musical training program called ‘Visible sounds’ in their first year of elementary school. We expected that the intensive musical training would facilitate not only the processing of music but would also have a broader positive effect on linguistic processing.

from the International Journal of Developmental Neuroscience

Comparing Electrophysiological Correlates of Word Production in Immediate and Delayed Naming Through the Analysis of Word Age of Acquisition Effects

Most EEG studies analysing speech production with event-related brain potentials (ERPs) have adopted silent metalinguistic tasks or delayed or tacit picture naming in order to avoid possible artefacts during motor preparation. A central issue in the interpretation of these results is whether the processes involved in those tasks are comparable to those involved in overt speech production. In the present study we addressed a methodological issue about the integration of stimulus-aligned and response-aligned ERPs in immediate overt picture naming in comparison to delayed production, coupled with a theoretical question about the effect of word Age of Acquisition (AoA). High-density EEG recordings were used, and waveform analyses and spatio-temporal segmentation were combined on stimulus-aligned and response-aligned ERPs. The same sequence and duration of topographic maps appeared in immediate and delayed production until around 350 ms after picture onset, revealing similar encoding processes until the beginning of phonological encoding, but modulations linked to word AoA were observed only in immediate production. Considering stimulus-aligned and response-aligned ERPs together allowed us to identify a stable topography starting around 350 ms that lasts 30 ms longer for late-acquired than for early-acquired words. This difference falls within the time window of phonological encoding, and its modulation can be linked to the longer production latencies for late-acquired words.
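For readers unfamiliar with the distinction, the sketch below illustrates what stimulus-aligned versus response-aligned ERPs mean in practice: the same continuous recording is segmented either around picture onset or around the onset of articulation. The data layout, sampling rate, and window lengths are assumptions for illustration, not the authors' pipeline.

```python
# Toy illustration of stimulus-aligned vs. response-aligned epoching.
import numpy as np

fs = 500                                              # sampling rate in Hz (assumed)
eeg = np.random.randn(64, 120 * fs)                   # 64 channels, 2 min of toy data
stim_onsets = np.arange(2, 110, 3) * fs               # picture onsets in samples (toy)
latencies = np.random.randint(int(0.6 * fs), fs, stim_onsets.size)  # naming latencies
resp_onsets = stim_onsets + latencies                 # articulation onsets

def epoch(data, onsets, tmin, tmax, fs):
    """Cut fixed-length windows [tmin, tmax) seconds around each onset sample."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    return np.stack([data[:, s + i0:s + i1] for s in onsets])

stim_locked_erp = epoch(eeg, stim_onsets, 0.0, 0.7, fs).mean(axis=0)   # stimulus-aligned
resp_locked_erp = epoch(eeg, resp_onsets, -0.4, 0.1, fs).mean(axis=0)  # response-aligned
```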

from Brain Topography

Behavioural and ERP evidence for amodal sluggish attentional shifting in developmental dyslexia

The goal of this study was to examine the claim that amodal deficits in attentional shifting may be the source of reading acquisition disorders in phonological developmental dyslexia (sluggish attentional shifting, SAS, theory; Hari & Renvall, 2001). We investigated automatic attentional shifting in the auditory and visual modalities in 13 dyslexic young adults with a phonological awareness deficit and 13 control participants, matched for cognitive abilities, using both behavioral and ERP measures. We tested automatic attentional shifting using a stream segregation task (perception of a rapid succession of visual and auditory stimuli as one or two streams). Results of Experiment 1 (behavioral) suggested that, in order to process two successive stimuli separately, dyslexic participants required a significantly longer interstimulus interval than controls, regardless of sensory modality. In Experiment 2 (ERPs), the same participants were tested by means of auditory and visual oddball tasks involving variations in the tempo of the same alternating stimuli as in Experiment 1. P3b amplitudes elicited by deviant tempos were differentially modulated between groups, supporting predictions made on the basis of the observations in Experiment 1. Overall, these results support the hypothesis that SAS in dyslexic participants might be responsible for their atypical perception of rapid stimulus sequences in both the auditory and the visual modalities. Furthermore, these results bring new evidence supporting the link between amodal SAS and the phonological impairment in developmental dyslexia.

from Neuropsychologia

An investigation of prototypical and atypical within-category vowels and non-speech analogues on cortical auditory event-related potentials (AERPs) in 9-year-old children

The present study examined cortical auditory event-related potentials (AERPs) for the P1–N250 and MMN components in children 9 years of age. The first goal was to investigate whether AERPs respond differentially to vowels and complex tones, and the second goal was to explore how prototypical language formant structures might be reflected in these early auditory processing stages. Stimuli were two synthetic within-category vowels (/y/), one of which was preferred by adult German listeners (the “prototypical vowel”), and analogous complex tones. P1 strongly distinguished vowels from tones, revealing larger amplitudes for the more difficult-to-discriminate but phonetically richer vowel stimuli. Prototypical language phoneme status did not reliably affect AERPs; however, P1 amplitudes elicited by the prototypical vowel correlated robustly with the ability to correctly identify two prototypical vowels presented in succession as “same” (r = -.70) and with word reading fluency (r = -.63). These negative correlations suggest that smaller P1 amplitudes elicited by the prototypical vowel predict enhanced accuracy when judging prototypical-vowel “sameness” and faster word reading. N250 and MMN did not differentiate between vowels and tones and showed no correlations with behavioral measures.

from the International Journal of Psychophysiology

Congruency of auditory sounds and visual letters modulates mismatch negativity and P300 event-related potentials

A key determinant of skilled reading is the ability to integrate the orthographic and auditory forms of language. A number of prior studies have identified neural markers in adult readers corresponding to audio-visual integration of letters and their corresponding sounds. However, there remains some controversy as to the stage of processing at which this integration occurs. In the present study, we examined this issue using event-related potentials (ERPs), due to their sensitivity to the timing of perceptual and cognitive processes. Letter sounds were presented auditorily in an unattended mismatch negativity (MMN) paradigm, which is argued to index auditory sensory memory. Concurrently, participants performed a visual letter identification task. On critical trials, the auditory stimulus was played concurrently with the visual letters. We observed significant MMNs both when the visual letter was congruent with the auditory stimulus and when it was incongruent; however, the magnitude and scalp distribution of this effect were attenuated on incongruent trials. We also observed a later effect of congruency on the P300, marked by increased amplitudes and latencies for incongruent compared to congruent trials. The results suggest that audiovisual integration of letters and sounds can and does occur during relatively early, pre-attentive stages of sensory processing, and that these effects extend to later attentional phases of processing as well.
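As a brief note on the measure itself, the MMN reported here is typically quantified as a difference wave: the average response to deviants minus the average response to standards, computed separately for congruent and incongruent conditions. The toy sketch below illustrates that computation with placeholder data, not the study's analysis.

```python
# Toy MMN difference wave (placeholder data, not the study's analysis).
import numpy as np

fs = 500
standards = np.random.randn(300, fs // 2)   # 300 standard trials, 500 ms epochs
deviants = np.random.randn(60, fs // 2)     # 60 deviant trials, 500 ms epochs

mmn = deviants.mean(axis=0) - standards.mean(axis=0)
# A congruency effect would show up as a difference in this wave (amplitude or
# scalp distribution) between congruent and incongruent visual-letter conditions.
```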

from the International Journal of Psychophysiology

Clinical neurophysiology of visual and auditory processing in dyslexia: A review

Neurophysiological studies of children and adults with dyslexia provide a deeper understanding of how visual and auditory processing in dyslexia might relate to reading deficits. The goal of this review is to provide an overview of research findings from the last two decades on motion-related and contrast-sensitivity visual evoked potentials and on auditory event-related potentials to basic tone and speech sound processing in dyslexia. These results are particularly relevant for three important theories about causality in dyslexia: the magnocellular deficit hypothesis, the temporal processing deficit hypothesis, and the phonological deficit hypothesis. Support for magnocellular deficits in dyslexia comes primarily from evidence for altered visual evoked potentials to rapidly moving stimuli presented at low contrasts. ERP findings have consistently revealed altered neurophysiological processing of speech stimuli in individuals with dyslexia, but evidence for deficits in processing certain general acoustic information relevant for speech perception, such as frequency changes and temporal patterns, is also apparent.

from Clinical Neurophysiology