Blog Archives

Predictability and Novelty in Literal Language Comprehension: An ERP Study

Linguists have suggested that one mechanism for the creative extension of meaning in language is mapping, or the construction of correspondences between conceptual domains. For example, the sentence “The clever boys used a cardboard box as a boat” sets up a novel mapping between the concepts cardboard box and boat, while “His main method of transportation is a boat” relies on a more conventional mapping between method of transportation and boat. To examine the electrophysiological signature of this mapping process, the electroencephalogram (EEG) was recorded from the scalp as healthy adults read three types of sentences: low-cloze (unpredictable) conventional (“His main method of transportation is a boat”), low-cloze novel mapping (“The clever boys used a cardboard box as a boat”), and high-cloze (predictable) conventional (“The only way to get around Venice is to navigate the canals in a boat”). Event-related brain potentials (ERPs) were time-locked to sentence-final words. The novel and conventional conditions were matched for cloze probability (a measure of predictability based on the sentence context), for lexical association between the sentence frame and the final word (using latent semantic analysis), and for other factors known to influence ERPs to language stimuli. The high-cloze conventional control condition was included to compare the effects of mapping conventionality with those of predictability. The N400 component of the ERPs was affected by predictability but not by conventionality. By contrast, a late positivity was affected both by the predictability of sentence-final words, being larger for words in low-cloze contexts that made target words difficult to predict, and by novelty, as words in the novel condition elicited a larger positivity at 700–900 ms than the same words in the (cloze-matched) conventional condition.

from Brain Research
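Cloze probability, which the study above uses to match its novel and conventional conditions, is conventionally estimated from a norming task in which participants complete the sentence frame with the first word that comes to mind. A minimal sketch in Python (the function name and response data are illustrative, not taken from the study):

```python
from collections import Counter

def cloze_probability(responses, target):
    """Cloze probability: the proportion of norming participants
    who complete a sentence frame with the target word."""
    counts = Counter(r.strip().lower() for r in responses)
    return counts[target.lower()] / len(responses)

# Hypothetical completions for "His main method of transportation is a ___"
responses = ["car", "car", "bike", "boat", "car",
             "bus", "boat", "car", "train", "car"]
print(cloze_probability(responses, "boat"))  # → 0.2 (a low-cloze continuation)
```

A target completing a frame like “The only way to get around Venice is to navigate the canals in a ___” would score near 1.0, i.e., high cloze.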

Grammatical agreement processing in reading: ERP findings and future directions

In the domain of written sentence comprehension, the computation of agreement dependencies is generally considered a form-driven processing routine whose domain is syntactic in nature. In the present review we discuss the main findings emerging from the Event-Related Potential (ERP) literature on sentence comprehension, focusing on the different dimensions of agreement patterns (features, values, constituents involved, and language): agreement mismatches usually evoke a biphasic electrophysiological pattern (Left Anterior Negativity – LAN, 300–450 ms, and P600 after 500 ms). This ERP pattern is assumed to reflect rule-based computations sensitive to formal (inflectional) covariations of related words (trigger–target). Here we claim that agreement processing is sensitive to both the type of feature involved and the constituents that express the agreement dependency. More specifically, the LAN could reflect a violation of the expectancy (elicited by the trigger) for the target’s functional morphology; later, trigger and target are structurally integrated at the sentence level (early P600). However, morphosyntactic information could trigger the activation of higher-level representations that are not strictly syntactic in nature. The recruitment of this additional non-syntactic information (mirrored by N400-like effects) indicates that rule-based computations of agreement dependencies are not blind to non-syntactic information, which is often recruited to establish sentence-level relations.

from Cortex

Multiple constraints on semantic integration in a hierarchical structure: ERP Evidence from German

A recent ERP study on Chinese demonstrated dissociable neural responses to semantic integration processes at different levels of syntactic hierarchy (Zhou et al., 2010). However, it is unclear whether such findings are restricted to a non-case-marked language that relies heavily on word order and semantic information for the construction of sentence representation. This study aimed to investigate further, in a case-marked language, how semantic processes in a hierarchical structure take place during sentence reading. We used German sentences with the structure “subject noun + verb + article/determiner + adjective + object noun + prepositional phrase”, in which the object noun was constrained either at the lower level by the adjective or at the higher level by the verb, and manipulated the semantic congruency between the adjective and the object noun and/or between the verb and the object noun. EEGs were recorded while participants read sentences and judged their semantic acceptability. Compared with correct sentences, a biphasic pattern of an N400 effect followed by a late positivity effect was observed on the object noun for sentences with either lower- or higher-level mismatch or with double mismatches. Both the N400 effect and the late positivity (P600) effect were larger for the double mismatch condition than for either of the single mismatch conditions. These findings demonstrate cross-language mechanisms for processing multiple semantic constraints at different levels of syntactic hierarchy during sentence comprehension.

from Brain Research

A person is not a number: Discourse involvement in subject-verb agreement computation

Agreement is a central mechanism in language processing. Mainstream psycholinguistic research on subject-verb agreement processing has emphasized the purely formal and encapsulated nature of this phenomenon, positing equivalent access to person and number features. However, person and number are intrinsically different, because person conveys extra-syntactic information concerning the participants in the speech act. To test the person-number dissociation hypothesis, we investigated the neural correlates of subject-verb agreement in Spanish, using person and number violations. While number agreement violations produced a left-anterior negativity followed by a P600 with a posterior distribution, the negativity elicited by person anomalies had a centro-posterior maximum and was followed by a P600 effect that was frontally distributed in the early phase and posteriorly distributed in the late phase. These data reveal that the parser is differentially sensitive to the two features and that it deals with the two anomalies by adopting different strategies, due to the different levels of analysis affected by the person and number violations.

from Brain Research

Number of sense effects of Chinese disyllabic compounds in the two hemispheres

The current study manipulated the visual field and the number of senses of the first character in Chinese disyllabic compounds to investigate how the related senses (polysemy) of the constituent character in the compounds are represented and processed in the two hemispheres. The ERP results in Experiment 1 revealed crossover patterns in the left hemisphere (LH) and the right hemisphere (RH). The sense facilitation effect in the LH favored the assumption of a single-entry representation for senses. However, the patterns in the RH yielded two possible interpretations: (1) the nature of hemispheric processing in dealing with sublexical sense ambiguity; (2) semantic activation from a separate-entry representation for senses. To distinguish these possibilities, Experiment 2 was designed to push participants to a deeper level of lexical processing by means of a word-class judgment. The results revealed a sense facilitation effect in the RH. In sum, the current study supported the single-entry account for related senses and demonstrated that the two hemispheres process sublexical sense ambiguity in a complementary way.

from Brain and Language

The balance between memory and unification in semantics: A dynamic account of the N400

At least three cognitive brain components are necessary for us to produce and comprehend language: a Memory repository for the lexicon, a Unification buffer where lexical information is combined into novel structures, and a Control apparatus presiding over executive function in language. Here we describe the brain networks that support Memory and Unification in semantics. A dynamic account of their interactions is presented, in which a balance between the two components is sought at each word-processing step. We use the theory to provide an explanation of the N400 effect.

from Language and Cognitive Processes

Conflict and surrender during sentence processing: An ERP study of syntax-semantics interaction

Recent ERP studies report that implausible verb-argument combinations can elicit a centro-parietal P600 effect (e.g., “The hearty meal was devouring …”; Kim & Osterhout, 2005). Such eliciting conditions do not involve outright syntactic anomaly, deviating from previous reports of P600. Kim and Osterhout (2005) attributed such P600 effects to structural reprocessing that occurs when syntactic cues fail to support a semantically attractive interpretation (‘meal’ as the Agent of ‘devouring’) and the syntactic cues are overwhelmed; the sentence is therefore perceived as syntactically ill-formed. The current study replicated such findings and also found that altering the syntactic cues in such situations of syntax-semantics conflict (e.g., “The hearty meal would devour …”) affects the conflict’s outcome. P600s were eliminated when sentences contained syntactic cues that required multiple morphosyntactic steps to “repair”. These sentences elicited a broad, left-anterior negativity at 300–600 ms (LAN). We interpret the reduction in P600 amplitude in terms of “resistance” of syntactic cues to reprocessing. We speculate that the LAN may be generated by difficulty retrieving an analysis that satisfies both syntactic and semantic cues, which results when syntactic cues are strong enough to resist opposing semantic cues. This pattern of effects is consistent with partially independent but highly interactive syntactic and semantic processing streams, which often operate collaboratively but can compete for influence over interpretation.

from Brain and Language

Before the N400: Effects of lexical–semantic violations in visual cortex

An increasing body of research demonstrates that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical–semantic information might similarly generate early visual effects. In a picture–noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical–semantic predictions can affect early visual processing around 100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical–semantic violations only occurred when a single lexical item could be predicted. We argue that this may be because in natural language processing, there is typically no straightforward mapping between lexical–semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical–semantic effects.

from Brain and Language

Pitch Accents in Context: How Listeners Process Accentuation in Referential Communication

We investigated whether listeners are sensitive to (mis)matching accentuation patterns with respect to contrasts in the linguistic and visual context, using Event-Related Potentials. We presented participants with displays of two pictures followed by a spoken reference to one of these pictures (e.g., “the red ball”). The referent was contrastive with respect to the linguistic context (the utterance in the previous trial: e.g., “the blue ball”) or with respect to the visual context (the other picture in the display; e.g., a display with a red ball and a blue ball). The spoken reference carried a pitch accent on the noun (“the red BALL”) or on the adjective (“the RED ball”), or an intermediate (‘neutral’) accentuation. For the linguistic context, we found evidence for the Missing Accent Hypothesis: listeners showed processing difficulties, in the form of increased negativities in the ERPs, for missing accents, but not for superfluous accents. ‘Neutral’ or intermediate accents were interpreted as ‘missing’ accents when they occurred late in the referential utterance, but not when they occurred early. For the visual context, we found evidence for the Missing Accent Hypothesis for a missing accent on the adjective (an increase in negativity in the ERPs) and a superfluous accent on the noun (no effect). However, a redundant color adjective (e.g., in the case of a display with a red ball and a red hat) led to fewer processing problems when the adjective carried a pitch accent.

from Neuropsychologia

Words and pictures: An electrophysiological investigation of domain specific processing in native Chinese and English speakers

Comparisons of word and picture processing using Event-Related Potentials (ERPs) are contaminated by gross physical differences between the two types of stimuli. In the present study, we tackle this problem by comparing picture processing with word processing in an alphabetic and a logographic script, which are likewise characterized by gross physical differences from each other. Native Mandarin Chinese speakers viewed pictures (line drawings) and Chinese characters (Experiment 1), native English speakers viewed pictures and English words (Experiment 2), and naïve Chinese readers (native English speakers) viewed pictures and Chinese characters (Experiment 3) in a semantic categorization task. The varying pattern of differences in the ERPs elicited by pictures and words across the three experiments provided evidence for i) script-specific processing arising between 150–200 ms post-stimulus onset, ii) domain-specific but script-independent processing arising between 200–300 ms post-stimulus onset, and iii) processing that depended on stimulus meaningfulness in the N400 time window. The results are interpreted in terms of differences in the way visual features are mapped onto higher-level representations for pictures and words in alphabetic and logographic writing systems.

from Neuropsychologia

Sublexical ambiguity effect in reading Chinese disyllabic compounds

For Chinese compounds, neighbors can share either both orthographic forms and meanings, or orthographic forms only. In this study, central presentation and visual half-field (VF) presentation methods were used in conjunction with ERP measures to investigate how readers resolve the sublexical semantic ambiguity of the first constituent character in reading a disyllabic compound. The sublexical ambiguity of the first character was manipulated while the orthographic neighborhood sizes of the first and second characters (NS1, NS2) were controlled. Subjective ratings of the number of meanings corresponding to a character were used as an index of sublexical ambiguity. Results showed that low sublexical ambiguity words elicited a more negative N400 than high sublexical ambiguity words when words were centrally presented. Similar patterns were found when words were presented to the left VF. Interestingly, different patterns were observed for pseudowords. With left VF presentation, high sublexical ambiguity pseudowords showed a more negative N400 than low sublexical ambiguity pseudowords. In contrast, with right VF presentation, low sublexical ambiguity pseudowords showed a more negative N400 than high sublexical ambiguity pseudowords. These findings indicate that a level of morphological representation between form and meaning needs to be established and refined in Chinese. In addition, hemispheric asymmetries in the use of word information in ambiguity resolution should be taken into account, even at the sublexical level.

from Brain and Language

Age and amount of exposure to a foreign language during childhood: behavioral and ERP data on the semantic comprehension of spoken English by Japanese children

Children’s foreign-language (FL) learning is a matter of much social as well as scientific debate. Previous behavioral research indicates that starting language learning late in life can lead to problems in phonological processing. Inadequate phonological capacity may impede lexical learning and semantic processing (the phonological bottleneck hypothesis). Using both behavioral and neuroimaging data, here we examine the effects of age of first exposure (AOFE) and total hours of exposure (HOE) to English on 350 Japanese primary-school children’s semantic processing of spoken English. Children’s English proficiency scores and N400 event-related brain potentials (ERPs) were analyzed in multiple regression analyses. The results showed (1) that later, rather than earlier, AOFE led to higher English proficiency and larger N400 amplitudes, when HOE was controlled for; and (2) that longer HOE led to higher English proficiency and larger N400 amplitudes, whether AOFE was controlled for or not. These data highlight the important role of amount of exposure in FL learning, and cast doubt on the view that starting FL learning earlier always produces better results.

from Neuroscience Research
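The multiple-regression logic in the study above (estimating the effect of AOFE with HOE controlled for, and vice versa) can be sketched with ordinary least squares in Python. The data values below are entirely hypothetical placeholders, not the study’s data:

```python
import numpy as np

# Hypothetical predictors and outcome for a handful of children:
# age of first exposure (years), total hours of exposure, proficiency score.
aofe = np.array([4, 6, 8, 9, 10, 11], dtype=float)
hoe = np.array([900, 700, 500, 650, 400, 300], dtype=float)
score = np.array([62, 60, 58, 63, 52, 48], dtype=float)

# Design matrix with an intercept column. Fitting both predictors at once
# means each coefficient estimates one predictor's effect with the other
# held constant -- the "controlled for" comparison in the abstract.
X = np.column_stack([np.ones_like(aofe), aofe, hoe])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta)  # [intercept, AOFE coefficient, HOE coefficient]
```

The same design matrix would be fit separately for the proficiency scores and for the N400 amplitudes.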

The N400 effect in children: Relationships with comprehension, vocabulary and decoding

Using event-related potentials (ERPs), we investigated the N400 (an ERP component that occurs in response to meaningful stimuli) in children aged 8–10 years old and examined relationships between the N400 and individual differences in listening comprehension, word recognition and non-word decoding. Moreover, we tested the claim that the N400 effect provides a valuable indicator of behavioural vocabulary knowledge. Eighteen children were presented with picture-word pairs that were either ‘congruent’ (the picture depicted the spoken word) or ‘incongruent’ (the picture and word were unrelated). Three peaks were observed in the ERP waveform time-locked to the onset of the picture-word stimuli: an N100 in fronto-central channels, an N200 in central–parietal channels and an N400 in frontal, central and parietal channels. In contrast to the N100 peak, the N200 and N400 peaks were sensitive to semantic incongruency, with greater peak amplitudes for incongruent than congruent conditions. The incongruency effects for each peak correlated positively with listening comprehension, but when the peak amplitudes were averaged across congruent/incongruent conditions they correlated positively with non-word decoding. These findings provide neurophysiological support for the position that sensitivity to semantic context (reflected in the N400 effect) is crucial for comprehension, whereas phonological decoding skill relates to more general processing differences reflected in the ERP waveform. There were no correlations between ERP and behavioural measures of expressive or receptive vocabulary knowledge for the same items, suggesting that the N400 effect may not be a reliable estimate of vocabulary knowledge in children aged 8–10 years.

from Brain and Language

The acceleration of spoken-word processing in children’s native-language acquisition: An ERP cohort study

Healthy adults can identify spoken words at a remarkable speed, by incrementally analyzing word-onset information. It is currently unknown how this adult-level speed of spoken-word processing emerges during children’s native-language acquisition. In a picture–word mismatch paradigm, we manipulated the semantic congruency between picture contexts and spoken words, and recorded event-related potential (ERP) responses to the words. Previous similar studies focused on the N400 response, but we focused instead on the onsets of semantic congruency effects (N200 or Phonological Mismatch Negativity), which contain critical information for incremental spoken-word processing. We analyzed ERPs obtained longitudinally from two age cohorts of 40 primary-school children (total n = 80) in a 3-year period. Children first tested at 7 years of age showed earlier onsets of congruency effects (by approximately 70 ms) when tested 2 years later (i.e., at age 9). Children first tested at 9 years of age did not show such shortening of onset latencies 2 years later (i.e., at age 11). Overall, children’s onset latencies at age 9 appeared similar to those of adults. These data challenge the previous hypothesis that word processing is well established at age 7. Instead they support the view that the acceleration of spoken-word processing continues beyond age 7.

from Neuropsychologia