Using functional magnetic resonance imaging (fMRI), we scanned deaf adults as they performed two linguistic tasks with sentences in American Sign Language: grammatical judgment and phonemic-hand judgment. Participants’ age of onset of sign language acquisition ranged from birth to 14 years; length of sign language experience was substantial and did not vary in relation to age of acquisition. Both tasks showed a left-lateralized pattern of activation, with activity for grammatical judgment more anterior than that for phonemic-hand judgment. Age of acquisition was linearly and negatively related to activation levels in anterior language regions and positively related to activation levels in posterior visual regions for both tasks.
from Brain and Language
There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical–semantic information might similarly generate early visual effects. In a picture–noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical–semantic predictions can affect early visual processing around 100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical–semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical–semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical–semantic effects.
from Brain and Language
Verbal short-term memory reflects the organization of long-term memory: Further evidence from short-term memory for emotional words
Many studies suggest that long-term lexical–semantic knowledge is an important determinant of verbal short-term memory (STM) performance. This study explored the impact of emotional valence on immediate serial recall of words as a further lexico-semantic long-term memory (LTM) effect on STM. This effect is particularly interesting for the study of STM–LTM interactions since emotional words not only activate specific lexico-semantic LTM features but also capture attentional resources, and hence allow for the study of both LTM and attentional factors on STM tasks. In Experiments 1 and 2, we observed a robust effect of emotional valence on pure-list recall in both young and elderly adults, with higher recall performance for emotional lists than for neutral lists, as predicted by increased LTM support for emotional words. In Experiments 3 and 4, however, using mixed lists, it was the lists containing a minority of emotional words that led to higher recall performance, over lists containing a majority of emotional words. This was predicted by a weak version of the attentional capture account. These data add new evidence to the theoretical position that LTM knowledge is a critical determinant of STM performance, with further, list-type-dependent intervention of attentional factors.
from the Journal of Memory and Language
We examined processing of verbal irony in three groups of children: (1) 18 children with high-functioning Autism Spectrum Disorder (HFASD), (2) 18 typically developing children matched to the first group for verbal ability, and (3) 18 typically developing children matched to the first group for chronological age. We utilized an irony comprehension task that minimized verbal and pragmatic demands for participants. Results showed that children with HFASD were as accurate as typically developing children in judging speaker intent for ironic criticisms, but group differences in judgment latencies, eye gaze, and humor evaluations suggested that children with HFASD applied a different processing strategy for irony comprehension, one that resulted in less accurate appreciation of the social functions of irony.
Three experiments using online-processing measures explored whether native and non-native Spanish-speaking adults use gender-marked articles to identify referents of target nouns more rapidly, as shown previously with 3-year-old children learning Spanish as L1 (Lew-Williams & Fernald, 2007). In Experiment 1, participants viewed familiar objects with names of either the same or different grammatical gender while listening to Spanish sentences referring to one object. L1 adults, like L1 children, oriented to the target more rapidly on different-gender trials, when the article was informative about noun identity; however, L2 adults did not. Experiments 2 and 3 controlled for frequency of exposure to article–noun pairs by using novel nouns. L2 adults could not exploit gender information when different article–noun pairs were used in teaching and testing. Experience-related factors may influence how L1 adults and children and L2 adults—who learned Spanish at different ages and in different settings—use grammatical gender in real-time processing.
from the Journal of Memory and Language
Early language development sets the stage for a lifetime of competence in language and literacy. However, the neural mechanisms associated with the relative advantages of early communication success, or the disadvantages of delayed language development, are not well explored. In this study, 174 elementary school-age children whose parents reported that they started forming sentences ‘early’, ‘on-time’ or ‘late’ were evaluated with standardized measures of language, reading and spelling. All oral and written language measures revealed a consistent pattern: ‘early’ talkers had the highest level of performance and ‘late’ talkers the lowest. We report functional magnetic resonance imaging data from a subset of early, on-time and late talkers matched for age, gender and performance intelligence quotient, allowing evaluation of neural activation patterns produced while listening to and reading real words and pronounceable non-words. Activation in bilateral thalamus and putamen, and left insula and superior temporal gyrus during these tasks was significantly lower in late talkers, demonstrating that residual effects of being a late talker are found not only in behavioural tests of oral and written language, but also in distributed cortical-subcortical neural circuits underlying speech and print processing. Moreover, these findings suggest that the age of functional language acquisition can have far-reaching effects on reading and language behaviour, and on the corresponding neurocircuitry that supports linguistic function into the school-age years.
Results are presented indicating that high stem tones realizing word accents activate a certain class of suffixes in on-line processing of Central Swedish. This supports the view that high Swedish word accent tones are induced onto word stems by particular suffixes rather than being associated with words in the mental lexicon. Using event-related potentials, effects of mismatch between word accents and inflectional suffixes were compared with mismatches between stem and suffix in terms of declension class. Declensionally incorrect suffixes yielded an increase in the N400, indicating problems in lexical retrieval, as well as a P600 effect, showing reanalysis. Both declensionally correct and incorrect high tone-inducing (Accent 2) suffixes combined with a mismatching low tone (Accent 1) on the stems produced P600 effects, but did not increase the N400. Suffixes usually co-occurring with Accent 1 did not yield any effects in words realized with the non-matching Accent 2, suggesting that Accent 1 is a default accent, lacking association with any particular suffix. High tones on Accent 2 words also produced an early anterior positivity, interpreted as a P200 effect reflecting pre-attentive processing of the tone.
from Brain Research
Superior temporal activation as a function of linguistic knowledge: Insights from deaf native signers who speechread
Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowledge irrespective of differences in language form. We used a novel cross-linguistic approach in two groups of volunteers who differed in their language experience. Using fMRI, we compared deaf native signers of British Sign Language (BSL), who were also proficient speechreaders of English (i.e., two languages) with hearing people who could speechread English, but knew no BSL (i.e., one language). Both groups were presented with BSL signs and silently spoken English words, and were required to respond to a signed or spoken target. The interaction of group and condition revealed activation in the superior temporal cortex, bilaterally, focused in the posterior superior temporal gyri (pSTG, BA 42/22). In hearing people, these regions were activated more by speech than by sign, but in deaf respondents they showed similar levels of activation for both language forms – suggesting that posterior superior temporal regions are highly sensitive to language knowledge irrespective of the mode of delivery of the stimulus material.
from Brain and Language
Fundamental learning abilities related to the implicit encoding of sequential structure have been postulated to underlie language acquisition and processing. However, there is very little direct evidence to date supporting such a link between implicit statistical learning and language. In three experiments using novel methods of assessing implicit learning and language abilities, we show that sensitivity to sequential structure, as measured by improvements to immediate memory span for structurally consistent input sequences, is significantly correlated with the ability to use knowledge of word predictability to aid speech perception under degraded listening conditions. Importantly, the association remained even after controlling for participant performance on other cognitive tasks, including short-term and working memory, intelligence, attention and inhibition, and vocabulary knowledge. Thus, the evidence suggests that implicit learning abilities are essential for acquiring long-term knowledge of the sequential structure of language, i.e., knowledge of word predictability, and that individual differences in such abilities affect speech perception in everyday situations. These findings provide a new theoretical rationale linking basic learning phenomena to specific aspects of spoken language processing in adults, and may furthermore indicate new fruitful directions for investigating both typical and atypical language development.
The semantic variant of primary progressive aphasia (PPA) is characterized by the combination of word comprehension deficits, fluent aphasia and a particularly severe anomia. In this study, two novel tasks were used to explore the factors contributing to the anomia. The single most common factor was a blurring of distinctions among members of a semantic category, leading to errors of overgeneralization in word–object matching tasks as well as in word definitions and object descriptions. This factor was more pronounced for natural kinds than artifacts. In patients with the more severe anomias, conceptual maps were more extensively disrupted so that inter-category distinctions were as impaired as intra-category distinctions. Many objects that could not be named aloud could be matched to the correct word in patients with mild but not severe anomia, reflecting a gradual intensification of the semantic factor as the naming disorder becomes more severe. Accurate object descriptions were more frequent than accurate word definitions and all patients experienced prominent word comprehension deficits that interfered with everyday activities but no consequential impairment of object usage or face recognition. Magnetic resonance imaging revealed three characteristics: greater atrophy of the left hemisphere; atrophy of anterior components of the perisylvian language network in the superior and middle temporal gyri; and atrophy of anterior components of the face and object recognition network in the inferior and medial temporal lobes.
from the Journal of Neurology
The purpose of the present study is to contribute to the ongoing debate about the use of lexical bundles by first (L1) and second language (L2) speakers of English. The study consists of two experiments that examined whether L1 and L2 English speakers displayed any knowledge of lexical bundles as holistic units and whether their knowledge was affected by the discourse function of the lexical bundles (discourse-organizing or referential). The participants in Experiment 1 (N = 61) completed a gap-filling activity, whereas the participants in Experiment 2 (N = 61) carried out a dictation task. Results showed that the participants’ knowledge differed for specific lexical bundles and that, overall, they knew more discourse-organizing bundles than referential bundles. The implications of the study are discussed in terms of current research about the role of frequency-based language chunks in L1 and L2 speech processing in English.
from Language Learning
Using a word-by-word self-paced reading paradigm, T. A. Farmer, M. H. Christiansen, and P. Monaghan (2006) reported faster reading times for words that are phonologically typical for their syntactic category (i.e., noun or verb) than for words that are phonologically atypical. This result has been taken to suggest that language users are sensitive to subtle relationships between sound and syntactic function and that they make rapid use of this information in comprehension. The present article reports attempts to replicate this result using both eyetracking during normal reading (Experiment 1) and word-by-word self-paced reading (Experiment 2). No hint of a phonological typicality effect emerged on any reading-time measure in Experiment 1, nor did Experiment 2 replicate Farmer et al.’s finding from self-paced reading. Indeed, the differences between condition means were not consistently in the predicted direction, as phonologically atypical verbs were read more quickly than phonologically typical verbs on most measures. Implications for research on visual word recognition are discussed.
Within linguistics, words with a complex internal structure are commonly assumed to be decomposed into their constituent morphemes (e.g. un-help-ful). Nevertheless, an ongoing debate concerns the brain structures that subserve this process. Using functional magnetic resonance imaging, the present study varied the internal complexity of derived words while keeping the external surface structure constant and controlling relevant parameters that could affect word recognition. This allowed us to tease apart brain activations specifically related to morphological processing from those related to possible confounds of perceptual cues such as word length or affix type. Increased task-related activity in left inferior frontal, bilateral temporo-occipital and right parietal areas was specifically related to the processing of derivations with high internal complexity relative to those with low internal complexity. Our results show that morphologically complex words are decomposed and that the brain processes the degree of internal complexity of word derivations.