Blog Archives

Age of acquisition effects on the functional organization of language in the adult brain

Using functional magnetic resonance imaging (fMRI), we neuroimaged deaf adults as they performed two linguistic tasks with sentences in American Sign Language: grammatical judgment and phonemic-hand judgment. Participants’ age of onset of sign language acquisition ranged from birth to 14 years; length of sign language experience was substantial and did not vary in relation to age of acquisition. For both tasks, a left lateralized pattern of activation was observed, with activity for grammatical judgment being more anterior than that observed for phonemic-hand judgment. Age of acquisition was linearly and negatively related to activation levels in anterior language regions and positively related to activation levels in posterior visual regions for both tasks.

from Brain and Language
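
The relation reported above is a simple linear one: activation in a region of interest decreases (anterior language areas) or increases (posterior visual areas) as age of acquisition rises. A minimal sketch of that kind of analysis, using NumPy and entirely made-up numbers in place of the study's participant data:

```python
import numpy as np

# Hypothetical data: age of sign language acquisition in years and
# per-participant activation estimates (betas) from an anterior language ROI.
age_of_acquisition = np.array([0, 0, 1, 3, 5, 7, 9, 11, 14], dtype=float)
roi_betas = np.array([1.9, 2.1, 1.8, 1.6, 1.4, 1.2, 1.1, 0.9, 0.7])

# Ordinary least-squares line: activation = slope * AoA + intercept.
slope, intercept = np.polyfit(age_of_acquisition, roi_betas, deg=1)

# Pearson correlation; a negative value corresponds to the negative linear
# relation the study reports for anterior language regions.
r = np.corrcoef(age_of_acquisition, roi_betas)[0, 1]
print(f"slope = {slope:.3f} per year, r = {r:.2f}")
```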


Before the N400: Effects of lexical–semantic violations in visual cortex

There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical–semantic information might similarly generate early visual effects. In a picture–noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical–semantic predictions can affect early visual processing around 100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical–semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical–semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical–semantic effects.

from Brain and Language

Verbal short-term memory reflects the organization of long-term memory: Further evidence from short-term memory for emotional words

Many studies suggest that long-term lexical–semantic knowledge is an important determinant of verbal short-term memory (STM) performance. This study explored the impact of emotional valence on word immediate serial recall as a further lexico-semantic long-term memory (LTM) effect on STM. This effect is particularly interesting for the study of STM–LTM interactions since emotional words not only activate specific lexico-semantic LTM features but also capture attentional resources, and hence allow for the study of both LTM and attentional factors on STM tasks. In Experiments 1 and 2, we observed a robust effect of emotional valence on pure list recall in both young and elderly adults, with higher recall performance for emotional lists as opposed to neutral lists, as predicted by increased LTM support for emotional words. In Experiments 3 and 4, however, which used mixed lists, lists containing a minority of emotional words led to higher recall performance than lists containing a majority of emotional words. This was predicted by a weak version of the attentional capture account. These data add new evidence to the theoretical position that LTM knowledge is a critical determinant of STM performance, with further, list-type-dependent intervention of attentional factors.

from the Journal of Memory and Language

Processing of Ironic Language in Children with High-Functioning Autism Spectrum Disorder

We examined processing of verbal irony in three groups of children: (1) 18 children with high-functioning Autism Spectrum Disorder (HFASD), (2) 18 typically-developing children, matched to the first group for verbal ability, and (3) 18 typically-developing children matched to the first group for chronological age. We utilized an irony comprehension task that minimized verbal and pragmatic demands for participants. Results showed that children with HFASD were as accurate as typically-developing children in judging speaker intent for ironic criticisms, but group differences in judgment latencies, eye gaze, and humor evaluations suggested that children with HFASD applied a different processing strategy for irony comprehension, one that resulted in less accurate appreciation of the social functions of irony.

from the Journal of Autism and Developmental Disorders

Real-time processing of gender-marked articles by native and non-native Spanish speakers

Three experiments using online-processing measures explored whether native and non-native Spanish-speaking adults use gender-marked articles to identify referents of target nouns more rapidly, as shown previously with 3-year-old children learning Spanish as L1 (Lew-Williams & Fernald, 2007). In Experiment 1, participants viewed familiar objects with names of either the same or different grammatical gender while listening to Spanish sentences referring to one object. L1 adults, like L1 children, oriented to the target more rapidly on different-gender trials, when the article was informative about noun identity; however, L2 adults did not. Experiments 2 and 3 controlled for frequency of exposure to article–noun pairs by using novel nouns. L2 adults could not exploit gender information when different article–noun pairs were used in teaching and testing. Experience-related factors may influence how L1 adults and children and L2 adults—who learned Spanish at different ages and in different settings—use grammatical gender in real-time processing.

from the Journal of Memory and Language

Early and late talkers: school-age language, literacy and neurolinguistic differences

Early language development sets the stage for a lifetime of competence in language and literacy. However, the neural mechanisms associated with the relative advantages of early communication success, or the disadvantages of having delayed language development, are not well explored. In this study, 174 elementary school-age children whose parents reported that they started forming sentences ‘early’, ‘on-time’ or ‘late’ were evaluated with standardized measures of language, reading and spelling. All oral and written language measures revealed consistent patterns for ‘early’ talkers to have the highest level of performance and ‘late’ talkers to have the lowest level of performance. We report functional magnetic resonance imaging data from a subset of early, on-time and late talkers matched for age, gender and performance intelligence quotient, allowing evaluation of neural activation patterns produced while listening to and reading real words and pronounceable non-words. Activation in bilateral thalamus and putamen, and left insula and superior temporal gyrus during these tasks was significantly lower in late talkers, demonstrating that residual effects of being a late talker are found not only in behavioural tests of oral and written language, but also in distributed cortical-subcortical neural circuits underlying speech and print processing. Moreover, these findings suggest that the age of functional language acquisition can have long-reaching effects on reading and language behaviour, and on the corresponding neurocircuitry that supports linguistic function into the school-age years.

from Brain

Abnormal N400 word repetition effects in fragile X-associated tremor/ataxia syndrome

Fragile X-associated tremor/ataxia syndrome, a neurodegenerative disorder associated with premutation alleles (55–200 CGG repeats) of the FMR1 gene, affects many carriers in late life. Patients with fragile X-associated tremor/ataxia syndrome typically have cerebellar ataxia, intranuclear inclusions in neurons and astrocytes, as well as cognitive impairment. Dementia can also be present, with cognitive deficits that are as severe as in Alzheimer’s disease; however, frontosubcortical-type impairment is more pronounced in fragile X-associated tremor/ataxia syndrome. We sought to characterize the P600 and N400 word repetition effects in patients with fragile X-associated tremor/ataxia syndrome, using an event-related potential word repetition paradigm with demonstrated sensitivity to very early Alzheimer’s disease. We hypothesized that the fragile X-associated tremor/ataxia syndrome-affected participants with poor declarative verbal memory would have pronounced abnormalities in the P600 repetition effect. In the event-related potential experiment, subjects performed a category decision task whilst an electroencephalogram was recorded. Auditory category statements were each followed by an associated visual target word (50% ‘congruous’ category exemplars, 50% ‘incongruous’ nouns). Two-thirds of the stimuli (category statement–target word pairs) were repeated, either at short lag (10–40 s) or long lag (100–140 s). The N400 and P600 amplitude data were submitted to split-plot analyses of variance. These analyses of variance showed a highly significant reduction of the N400 repetition effect (F = 22.5, P < 0.001), but not of the P600 repetition effect, in mild fragile X-associated tremor/ataxia syndrome (n = 32, mean age = 68.7, mean Mini-Mental State Examination score = 26.8). Patients with fragile X-associated tremor/ataxia syndrome had significantly smaller late positive amplitude (550–800 ms post-stimulus onset) to congruous words (P = 0.04 for group effect). Reduced P600 repetition effect amplitude was associated with poorer recall within fragile X-associated tremor/ataxia syndrome patients (r = 0.66) and across all subjects (r = 0.52). Larger P600 amplitude to new congruous words also correlated significantly with higher free recall scores (r = 0.37, P < 0.01) across all subjects. We found a correlation between the amplitude of late positivity and CGG repeat length in those with fragile X-associated tremor/ataxia syndrome (r = 0.47, P = 0.006). Higher levels of FMR1 mRNA were associated with smaller N400s to incongruous words and larger positive amplitudes (between 300 and 500 ms) to congruous words. In conclusion, event-related potential word repetition effects appear sensitive to the cognitive dysfunction present in patients with mild fragile X-associated tremor/ataxia syndrome. The more severe reduction in the N400 repetition effect than in the P600 effect contrasts with the reverse pattern reported in amnestic mild cognitive impairment and incipient Alzheimer’s disease (Olichney et al., 2008).

from Brain
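
For readers unfamiliar with the measure, an N400 repetition effect is essentially the difference in mean amplitude, in a window around 300–500 ms, between first presentations and repetitions of the same items; the group statistics above are split-plot ANOVAs over values of this kind. A rough sketch with simulated single-trial amplitudes standing in for real EEG epochs (the window boundaries and all numbers are assumptions, not the paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.9, 1 / sfreq)    # epoch from -100 to 900 ms

def mean_amplitude(epochs, tmin, tmax):
    """Mean amplitude (in µV) over trials within a latency window."""
    window = (times >= tmin) & (times <= tmax)
    return epochs[:, window].mean()

# Simulated single-trial data at one electrode: 40 "new" and 40 "repeated"
# congruous-word trials; a real analysis would use recorded EEG epochs.
new_trials = rng.normal(-4.0, 2.0, size=(40, times.size))       # larger N400
repeated_trials = rng.normal(-1.0, 2.0, size=(40, times.size))  # attenuated N400

# Repetition effect: new minus repeated mean amplitude in the N400 window.
effect = (mean_amplitude(new_trials, 0.3, 0.5)
          - mean_amplitude(repeated_trials, 0.3, 0.5))
print(f"N400 repetition effect = {effect:.2f} µV")
```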

Word Accents and Morphology—ERPs of Swedish Word Processing

Results are presented which indicate that high stem tones realizing word accents activate a certain class of suffixes in on-line processing of Central Swedish. This supports the view that high Swedish word accent tones are induced onto word stems by particular suffixes rather than being associated with words in the mental lexicon. Using Event-Related Potentials, effects of mismatch between word accents and inflectional suffixes were compared with mismatches between stem and suffix in terms of declension class. Declensionally incorrect suffixes yielded an increase in the N400, indicating problems in lexical retrieval, as well as a P600 effect, showing reanalysis. Both declensionally correct and incorrect high tone-inducing (Accent 2) suffixes combined with a mismatching low tone (Accent 1) on the stems produced P600 effects, but did not increase the N400. Suffixes usually co-occurring with Accent 1 did not yield any effects in words realized with the non-matching Accent 2, suggesting that Accent 1 is a default accent, lacking association with any particular suffix. High tones on Accent 2 words also produced an early anterior positivity, interpreted as a P200 effect reflecting pre-attentive processing of the tone.

from Brain Research

Superior temporal activation as a function of linguistic knowledge: Insights from deaf native signers who speechread

Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowledge irrespective of differences in language form. We used a novel cross-linguistic approach in two groups of volunteers who differed in their language experience. Using fMRI, we compared deaf native signers of British Sign Language (BSL), who were also proficient speechreaders of English (i.e., two languages), with hearing people who could speechread English but knew no BSL (i.e., one language). Both groups were presented with BSL signs and silently spoken English words, and were required to respond to a signed or spoken target. The interaction of group and condition revealed activation in the superior temporal cortex, bilaterally, focused in the posterior superior temporal gyri (pSTG, BA 42/22). In hearing people, these regions were activated more by speech than by sign, but in deaf respondents they showed similar levels of activation for both language forms, suggesting that posterior superior temporal regions are highly sensitive to language knowledge irrespective of the mode of delivery of the stimulus material.

from Brain and Language

Implicit statistical learning in language processing: Word predictability is the key

Fundamental learning abilities related to the implicit encoding of sequential structure have been postulated to underlie language acquisition and processing. However, there is very little direct evidence to date supporting such a link between implicit statistical learning and language. In three experiments using novel methods of assessing implicit learning and language abilities, we show that sensitivity to sequential structure – as measured by improvements to immediate memory span for structurally-consistent input sequences – is significantly correlated with the ability to use knowledge of word predictability to aid speech perception under degraded listening conditions. Importantly, the association remained even after controlling for participant performance on other cognitive tasks, including short-term and working memory, intelligence, attention and inhibition, and vocabulary knowledge. Thus, the evidence suggests that implicit learning abilities are essential for acquiring long-term knowledge of the sequential structure of language – i.e., knowledge of word predictability – and that individual differences in such abilities impact speech perception in everyday situations. These findings provide a new theoretical rationale linking basic learning phenomena to specific aspects of spoken language processing in adults, and may furthermore indicate new fruitful directions for investigating both typical and atypical language development.

from Cognition
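
The pivotal step in the argument above is that the learning–perception correlation survives after the control measures are partialled out. One common way to compute such a partial correlation is to regress both variables of interest on the controls and correlate the residuals; here is a minimal sketch with hypothetical data (none of these variable names or values come from the paper):

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after regressing out the control variables."""
    # Design matrix: intercept column plus the control measures.
    design = np.column_stack([np.ones(len(x)), controls])
    # Residualize x and y with ordinary least squares.
    resid_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    resid_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(resid_x, resid_y)[0, 1]

rng = np.random.default_rng(1)
n = 60
# Hypothetical scores: an implicit-learning index, speech-in-noise accuracy,
# and three control measures (e.g. working memory, vocabulary, attention).
learning = rng.normal(size=n)
speech = 0.5 * learning + rng.normal(scale=0.8, size=n)
controls = rng.normal(size=(n, 3))

print(f"partial r = {partial_corr(learning, speech, controls):.2f}")
```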

Neurology of anomia in the semantic variant of primary progressive aphasia

The semantic variant of primary progressive aphasia (PPA) is characterized by the combination of word comprehension deficits, fluent aphasia and a particularly severe anomia. In this study, two novel tasks were used to explore the factors contributing to the anomia. The single most common factor was a blurring of distinctions among members of a semantic category, leading to errors of overgeneralization in word–object matching tasks as well as in word definitions and object descriptions. This factor was more pronounced for natural kinds than artifacts. In patients with the more severe anomias, conceptual maps were more extensively disrupted so that inter-category distinctions were as impaired as intra-category distinctions. Many objects that could not be named aloud could be matched to the correct word in patients with mild but not severe anomia, reflecting a gradual intensification of the semantic factor as the naming disorder becomes more severe. Accurate object descriptions were more frequent than accurate word definitions and all patients experienced prominent word comprehension deficits that interfered with everyday activities but no consequential impairment of object usage or face recognition. Magnetic resonance imaging revealed three characteristics: greater atrophy of the left hemisphere; atrophy of anterior components of the perisylvian language network in the superior and middle temporal gyri; and atrophy of anterior components of the face and object recognition network in the inferior and medial temporal lobes.

from Brain

English L1 and L2 Speakers’ Knowledge of Lexical Bundles

The purpose of the present study is to contribute to the ongoing debate about the use of lexical bundles by first (L1) and second language (L2) speakers of English. The study consists of two experiments that examined whether L1 and L2 English speakers displayed any knowledge of lexical bundles as holistic units and whether their knowledge was affected by the discourse function of the lexical bundles (discourse-organizing or referential). The participants in Experiment 1 (N = 61) completed a gap-filling activity, whereas the participants in Experiment 2 (N = 61) carried out a dictation task. Results showed that the participants’ knowledge differed for specific lexical bundles and that, overall, they knew more discourse-organizing bundles than referential bundles. The implications of the study are discussed in terms of current research about the role of frequency-based language chunks in L1 and L2 speech processing in English.

from Language Learning

Situated sentence processing: The coordinated interplay account and a neurobehavioral model

Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene–sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519–543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA’s three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789–795). Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world.

from Brain and Language
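
To make the three processing stages above concrete, here is a deliberately toy word-by-word loop. It is not the Mayberry et al. architecture, just an illustration of the control flow the CIA describes: each incoming word updates an interpretation state, that state directs attention over scene-object vectors, and the attended scene information feeds back into the next interpretation step. All vectors, weights and object names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 16

# Toy scene: three objects, each a random feature vector standing in for
# whatever visual encoding a real model would use.
scene = {name: rng.normal(size=dim) for name in ["waiter", "detective", "food"]}
scene_names = list(scene)
scene_matrix = np.stack([scene[n] for n in scene_names])

# Toy word embeddings for an incrementally processed utterance.
sentence = ["the", "waiter", "serves", "the", "food"]
embeddings = {w: rng.normal(size=dim) for w in set(sentence)}

W_in = rng.normal(scale=0.1, size=(dim, dim))
W_scene = rng.normal(scale=0.1, size=(dim, dim))
hidden = np.zeros(dim)

for word in sentence:
    # (i) incremental interpretation: fold the next word into the state.
    hidden = np.tanh(W_in @ hidden + embeddings[word])
    # (ii) language-mediated visual attention: score each scene object.
    scores = scene_matrix @ hidden
    attention = np.exp(scores - scores.max())
    attention /= attention.sum()
    # (iii) attended scene information feeds back into interpretation.
    attended = attention @ scene_matrix
    hidden = np.tanh(hidden + W_scene @ attended)
    focus = scene_names[int(attention.argmax())]
    print(f"{word:>9}: most attended object = {focus}")
```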

Phonological typicality does not influence fixation durations in normal reading.

Using a word-by-word self-paced reading paradigm, T. A. Farmer, M. H. Christiansen, and P. Monaghan (2006) reported faster reading times for words that are phonologically typical for their syntactic category (i.e., noun or verb) than for words that are phonologically atypical. This result has been taken to suggest that language users are sensitive to subtle relationships between sound and syntactic function and that they make rapid use of this information in comprehension. The present article reports attempts to replicate this result using both eyetracking during normal reading (Experiment 1) and word-by-word self-paced reading (Experiment 2). No hint of a phonological typicality effect emerged on any reading-time measure in Experiment 1, nor did Experiment 2 replicate Farmer et al.’s finding from self-paced reading. Indeed, the differences between condition means were not consistently in the predicted direction, as phonologically atypical verbs were read more quickly than phonologically typical verbs on most measures. Implications for research on visual word recognition are discussed.

from Journal of Experimental Psychology: Learning, Memory, and Cognition

Opaque for the reader but transparent for the brain: Neural signatures of morphological complexity

Within linguistics, words with a complex internal structure are commonly assumed to be decomposed into their constituent morphemes (e.g. un-help-ful). Nevertheless, an ongoing debate concerns the brain structures that subserve this process. Using functional magnetic resonance imaging, the present study varied the internal complexity of derived words while keeping the external surface structure constant as well as controlling relevant parameters that could affect word recognition. This allowed us to tease apart brain activations specifically related to morphological processing from those related to possible confounds of perceptual cues like word length or affix type. Increased task-related activity in left inferior frontal, bilateral temporo-occipital and right parietal areas was specifically related to the processing of derivations with a highly complex internal structure relative to those with a less complex internal structure. Our results show that morphologically complex words are decomposed and that the brain processes the degree of internal complexity of word derivations.

from Neuropsychologia
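
As a very loose illustration of what "decomposed into constituent morphemes" means computationally, a naive affix-stripping routine can pull apart derived forms like un-help-ful. The affix lists below are a tiny hypothetical sample, and nothing about the routine is claimed to reflect the model tested in the study:

```python
# Toy affix stripping: greedily peel known prefixes and suffixes off a word.
# The affix inventory is a small illustrative sample, not an exhaustive list.
PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ful", "ness", "less", "able", "er"]

def decompose(word):
    """Return a list of morphemes by greedily stripping known affixes."""
    prefixes, suffixes = [], []
    changed = True
    while changed:
        changed = False
        for p in PREFIXES:
            if word.startswith(p) and len(word) > len(p) + 2:
                prefixes.append(p)
                word = word[len(p):]
                changed = True
        for s in SUFFIXES:
            if word.endswith(s) and len(word) > len(s) + 2:
                suffixes.insert(0, s)
                word = word[:-len(s)]
                changed = True
    return prefixes + [word] + suffixes

for w in ["unhelpful", "hopelessness", "disagreeable"]:
    print(w, "->", "-".join(decompose(w)))
```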