Linguists have suggested that one mechanism for the creative extension of meaning in language involves mapping, or constructing correspondences between conceptual domains. For example, the sentence “The clever boys used a cardboard box as a boat” sets up a novel mapping between the concepts cardboard box and boat, while “His main method of transportation is a boat” relies on a more conventional mapping between method of transportation and boat. To examine the electrophysiological signature of this mapping process, the electroencephalogram (EEG) was recorded from the scalp as healthy adults read three types of sentences: low-cloze (unpredictable) conventional (“His main method of transportation is a boat”), low-cloze novel mapping (“The clever boys used a cardboard box as a boat”), and high-cloze (predictable) conventional (“The only way to get around Venice is to navigate the canals in a boat”). Event-related brain potentials (ERPs) were time-locked to sentence-final words. The novel and conventional conditions were matched for cloze probability (a measure of predictability based on the sentence context), for lexical association between the sentence frame and the final word (using latent semantic analysis), and for other factors known to influence ERPs to language stimuli. The high-cloze conventional control condition was included to compare the effects of mapping conventionality with those of predictability. The N400 component of the ERPs was affected by predictability but not by conventionality. By contrast, a late positivity was affected both by the predictability of sentence-final words, being larger for words in low-cloze contexts that made target words difficult to predict, and by novelty: words in the novel condition elicited a larger positivity at 700–900 ms than the same words in the (cloze-matched) conventional condition.
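Cloze probability, the predictability measure used to match conditions above, is conventionally estimated from offline norming: respondents complete the sentence frame, and a word's cloze value is the proportion of respondents who produced it. A minimal sketch in Python (the norming responses below are invented for illustration, not taken from the study):

```python
from collections import Counter

def cloze_probability(completions, target):
    """Proportion of norming respondents who completed the
    sentence frame with `target` (case-insensitive)."""
    counts = Counter(w.lower() for w in completions)
    return counts[target.lower()] / len(completions)

# Hypothetical norming data for "His main method of transportation is a ___"
responses = ["car", "bike", "car", "boat", "bus", "car", "train", "boat"]
print(cloze_probability(responses, "boat"))  # 2 of 8 responses -> 0.25
```

A low-cloze target like "boat" here would be one produced by only a small fraction of respondents, while a high-cloze target dominates the completions.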
from Brain Research
Expressive vocabulary and phonological awareness: correlations in children with phonological disorders
CONCLUSION: A correlation was found between some phonological awareness abilities and the expressive vocabulary of the children with phonological disorder in this study, across different ages. Performance in both tasks improved with age.
Electrophysiological studies investigating similarities between music and language perception have relied exclusively on the signal averaging technique, which does not adequately represent oscillatory aspects of electrical brain activity that are relevant for higher cognition. The current study investigated the patterns of brain oscillations during simultaneous processing of music and language using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular or irregular chord functions were presented in sync with syntactically or semantically correct or incorrect words. Irregular chord functions (presented simultaneously with a syntactically correct word) produced an early (150–250 ms) spectral power decrease over anterior frontal regions in the theta band (5–7 Hz) and a late (350–700 ms) power increase in both the delta and the theta band (2–7 Hz) over parietal regions. Syntactically incorrect words (presented simultaneously with a regular chord) elicited a similar late power increase in the delta–theta band over parietal sites, but no early effect. Interestingly, the late effect was significantly diminished when the language-syntactic and music-syntactic irregularities occurred at the same time. Further, the presence of a semantic violation occurring simultaneously with regular chords produced a significant increase in later delta–theta power at posterior regions; this effect was marginally decreased when the identical semantic violation occurred simultaneously with a music-syntactic violation. Altogether, these results show that low-frequency oscillatory networks are activated during the syntactic processing of both music and language and, further, that these networks may be shared.
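The band-power measures reported here (e.g., theta, 5–7 Hz) come down to summing spectral power over the DFT bins that fall within a frequency band. A minimal single-channel sketch with a synthetic 6 Hz signal (real analyses use multi-channel EEG epochs and time–frequency decompositions; the sampling rate and signal below are invented for illustration):

```python
import cmath, math

def band_power(signal, fs, lo, hi):
    """Sum of DFT power over the frequency bins in [lo, hi] Hz."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            total += abs(coef) ** 2
    return total

fs = 250                                           # samples per second
t = [i / fs for i in range(fs)]                    # one second of data
sig = [math.sin(2 * math.pi * 6 * ti) for ti in t] # 6 Hz "theta" oscillation

theta = band_power(sig, fs, 5, 7)
delta = band_power(sig, fs, 2, 4)
print(theta > delta)  # power concentrates in the theta band
```

With a whole number of cycles per epoch, the oscillation's power lands cleanly in the 6 Hz bin, so the theta-band sum dwarfs the delta-band sum.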
from Brain and Language
In the last century, philosophers, mathematicians and linguists put much effort into building formal models to describe and explain the complexity of language and the meaning of words. Concepts such as truth value, compositionality, recursion, predication and logical entailment have become well known in the linguistic field of formal semantics. In the last decades, on the other hand, neuropsychologists, physicians and cognitive scientists started developing methodologies to investigate how different kinds of information are processed in real time by the brain. Electroencephalography (EEG), functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) allow us to inspect how and where many kinds of stimuli, including words and sentences, engage neuronal populations. In the current paper we review some recent experimental studies investigating the linguistic mechanisms postulated by formal theories of meaning in the brains of language speakers.
In particular, we will focus on the processing of Negative Polarity Items, terms such as ever or any that are subject to specific semantic demands in order to be correctly used – or understood – in a sentence. Then we will explore the compositional aspects of meaning as contrasted with world-knowledge-based ones, and we will compare some theories that challenge, or find evidence for, distinct neuronal substrates handling these mechanisms. Finally, we will briefly review some formal semantics construals that have been studied through neurolinguistic methods, such as modal subordination.
After this survey, we will conclude that there is neuroscientific evidence that the human brain implements semantic representations and operations with the following properties: they are abstract, symbolic and grammar-driven. The challenge for future research, then, will be to figure out how brain functions cooperate and interact with other cognitive systems in dealing with these kinds of information structures.
from the Journal of Neurolinguistics
At least three cognitive brain components are necessary in order for us to be able to produce and comprehend language: a Memory repository for the lexicon, a Unification buffer where lexical information is combined into novel structures, and a Control apparatus presiding over executive function in language. Here we describe the brain networks that support Memory and Unification in semantics. A dynamic account of their interactions is presented, in which a balance between the two components is sought at each word-processing step. We use the theory to provide an explanation of the N400 effect.
More evidence for a continuum between phonological and deep dyslexia: Novel data from three measures of direct orthography-to-phonology translation
Conclusions: Two factors are key to this dyslexia continuum: the severity of the phonological impairment and the degree of interaction between semantic and impaired phonological representations, indicating that semantic representations become more central to reading in the face of phonological impairment.
C-Speak Aphasia alternative communication program for people with severe aphasia: Importance of executive functioning and semantic knowledge
Learning how to use a computer-based communication system can be challenging for people with severe aphasia even if the system is not word-based. This study explored how cognitive and linguistic factors affected individual patients’ ability to communicate expressively using C-Speak Aphasia (CSA), an alternative communication computer program that is primarily picture-based. Ten individuals with severe non-fluent aphasia received at least six months of training with CSA. To assess carryover of training, untrained functional communication tasks (i.e., answering autobiographical questions, describing pictures, making telephone calls, describing a short video, and two writing tasks) were repeatedly probed in two conditions: (1) using CSA in addition to natural forms of communication, and (2) using only natural forms of communication, e.g., speaking, writing, gesturing, drawing. Four of the 10 participants communicated more information on selected probe tasks using CSA than they did without the computer. Response to treatment was also examined in relation to baseline measures of non-linguistic executive function skills, pictorial semantic abilities, and auditory comprehension. Only non-linguistic executive function skills were significantly correlated with treatment response.
The relatively intact ability of individuals with aphasia to assign prominence to information in narratives sheds light on the neurological underpinnings of modalising language, and suggests possible skills associated with the ability of aphasic persons to “communicate better than they talk” (Holland, 1977). The clinical potential for assessment and treatment that incorporates narrative evaluative devices and modalising language needs to be further explored.
Recent ERP findings challenge the widespread assumption that syntactic and semantic processes are tightly coupled. Syntactically well-formed sentences that are semantically anomalous due to thematic mismatches elicit a P600, the component standardly associated with syntactic anomaly. This ‘thematic P600’ effect has been attributed to detection of semantically plausible thematic relations that conflict with the surface syntactic structure of the sentence, implying a processing architecture with an independent semantic analyzer. A key finding is that the P600 is selectively sensitive to the presence of plausible verb-argument relations, and that otherwise an N400 is elicited (The hearty meal was devouring … vs. The dusty tabletop was devouring …: Kim & Osterhout, 2005). The current study investigates in Spanish whether the evidence for an independent semantic analyzer is better explained by a standard architecture that rapidly integrates multiple sources of lexical, syntactic, and semantic information. The study manipulated the presence of plausible thematic relations, and varied the choice of auxiliary between passive-biased fue and active-progressive biased estaba. Results show a late positivity that appeared as soon as comprehenders detected an improbable combination of subject animacy, auxiliary bias, or verb voice morphology. This effect appeared at the lexical verb in the fue conditions and at the auxiliary in the estaba conditions. The late positivity elicited by surface thematic anomalies was the same, regardless of the presence of a plausible non-surface interpretation, and no N400 effects were elicited. These findings do not implicate an independent semantic analyzer, and are compatible with standard language processing architectures.
from Brain and Language
This investigation moves beyond the traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO’s ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO’s performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, constraints that are exaggerated when output is derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.
from the Journal of Neurolinguistics
Generics are statements such as “tigers are striped” and “ducks lay eggs”. They express general, though not universal or exceptionless, claims about kinds (Carlson & Pelletier, 1995). For example, the generic “ducks lay eggs” seems true even though many ducks (e.g. the males) do not lay eggs. The universally quantified version of the statement should be rejected, however: it is incorrect to say “all ducks lay eggs”, since many ducks do not lay eggs. We found that adults nonetheless often judged such universal statements true, despite knowing that only one gender had the relevant property (Experiment 1). The effect was not due to participants interpreting the universals as quantifying over subkinds, or as applying to only a subset of the kind (e.g. only the females) (Experiment 2), and it persisted even when people judged that male ducks did not lay eggs only moments before (Experiment 3). It also persisted when people were presented with correct alternatives such as “some ducks do not lay eggs” (Experiment 4). Our findings reveal a robust generic overgeneralization effect, predicted by the hypothesis that generics express primitive, default generalizations.
from the Journal of Memory and Language
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon.
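The jump from above-chance single-trial accuracy to 98% concept-level accuracy rests on aggregating many noisy trial-level decisions. The toy nearest-centroid classifier and simulated two-dimensional "EEG features" below are illustrative stand-ins for the paper's data-mining pipeline, showing only the majority-vote aggregation idea:

```python
import random
random.seed(0)

def nearest_centroid(trial, centroids):
    """Predict the category whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(trial, centroids[c]))

def aggregate(trials, centroids):
    """Majority vote over single-trial predictions for one concept."""
    votes = [nearest_centroid(t, centroids) for t in trials]
    return max(set(votes), key=votes.count)

# Toy setup: noisy 2-D features scattered around category-specific means.
centroids = {"mammal": (1.0, 0.0), "tool": (0.0, 1.0)}

def simulate(category, n_trials, noise=1.0):
    cx, cy = centroids[category]
    return [(cx + random.gauss(0, noise), cy + random.gauss(0, noise))
            for _ in range(n_trials)]

trials = simulate("mammal", 50)
per_trial_acc = sum(nearest_centroid(t, centroids) == "mammal"
                    for t in trials) / len(trials)
print(per_trial_acc)                 # above chance, but well below 1.0
print(aggregate(trials, centroids))  # majority vote recovers the category
```

Because single-trial errors are roughly independent, pooling even modestly above-chance trial decisions drives the concept-level error rate down rapidly, which is the pattern the 98% aggregate figure reflects.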
from Brain and Language
Logic and conversation revisited: evidence for a division between semantic and pragmatic content in real-time language comprehension
The distinction between semantics (linguistically encoded meaning) and pragmatics (inferences about communicative intentions) can often be unclear and counterintuitive. For example, linguistic theories argue that the meaning of some encompasses the meaning of all while the intuition that some implies not all results from an inference. We explored how online interpretation of some evolves using an eye-tracking while listening paradigm. Early eye-movements indicated that while some was initially interpreted as compatible with all, participants began excluding referents compatible with all approximately 800 ms later. These results contrast with recent evidence of immediate inferencing and highlight the presence of bottom-up semantic-pragmatic interactions which necessarily rely on initial access to lexical meanings to trigger inferences.
This article extends recent findings that presenting semantically related vocabulary simultaneously inhibits learning. It does so by adding story contexts. Participants learned 32 new labels for known concepts from four different semantic categories in stories that were either semantically related (one category per story) or semantically unrelated (four categories per story). They then completed a semantic-categorization task, followed by a stimulus-match verification task in an eye-tracker. Results suggest that there may be a slight learning advantage in the semantically unrelated condition. However, our findings are better interpreted in terms of how learning occurred and how vocabulary was processed afterward. Additionally, our results suggest that contextual support from the stories may have surmounted much of the disadvantage attributed to semantic relatedness.
from Language Learning