Blog Archives

Emotional self reference: Brain structures involved in the processing of words describing one’s own emotions

The present functional magnetic resonance imaging study investigated the role of emotion-related (e.g., amygdala) and self-related brain structures (the medial prefrontal cortex, MPFC, in particular) in the processing of emotional words varying in stimulus reference. Healthy subjects (N = 22) were presented with emotional (pleasant or unpleasant) or neutral words in three different conditions: 1) self (e.g., my fear), 2) other (e.g., his fear) and 3) no reference (e.g., the fear). Processing of unpleasant words was associated with increased amygdala and insula activation across all conditions. Pleasant stimuli were specifically associated with increased activation of amygdala and insula when related to the self (vs. other and no reference). Activity in the MPFC (the ventral MPFC in particular) and anterior cingulate cortex (ACC) was preferentially increased during processing of self-related emotional words (vs. other and no reference). These results demonstrate that amygdala activation in response to emotional stimuli is modulated by stimulus reference and that brain structures implicated in emotional and self-related processing might be important for the subjective experience of one’s own emotions.

from Neuropsychologia

Frontal lobe damage impairs process and content in semantic memory: Evidence from category-specific effects in progressive non-fluent aphasia

Portions of left inferior frontal cortex have been linked to semantic memory both in terms of the content of conceptual representation (e.g., motor aspects in an embodied semantics framework) and the cognitive processes used to access these representations (e.g., response selection). Progressive non-fluent aphasia (PNFA) is a neurodegenerative condition characterized by progressive atrophy of left inferior frontal cortex. PNFA can, therefore, provide a lesion model for examining the impact of frontal lobe damage on semantic processing and content. In the current study we examined picture naming in a cohort of PNFA patients across a variety of semantic categories. An embodied approach to semantic memory holds that sensorimotor features such as self-initiated action may assume differential importance for the representation of manufactured artifacts (e.g., naming hand tools). Embodiment theories might therefore predict that patients with frontal damage would be differentially impaired on manufactured artifacts relative to natural kinds, and this prediction was borne out. We also examined patterns of naming errors across a wide range of semantic categories and found that naming error distributions were heterogeneous. Although PNFA patients performed worse overall on naming manufactured artifacts, there was no reliable relationship between anomia and manipulability across semantic categories. These results add to a growing body of research arguing against a purely sensorimotor account of semantic memory, suggesting instead a more nuanced balance of process and content in how the brain represents conceptual knowledge.

from Cortex

How vision is shaped by language comprehension – top-down feedback based on low-spatial frequencies

Effects of language comprehension on visual processing have been extensively studied within the embodied-language framework. However, it is unknown whether these effects are caused by passive repetition suppression in visual processing areas, or depend on active feedback, based on partial input, from prefrontal regions. Based on a model of top-down feedback during visual recognition, we predicted diminished effects when low-spatial frequencies were removed from targets. We compared low-pass and high-pass filtered pictures in a sentence-picture-verification task. Target pictures matched or mismatched the implied shape of an object mentioned in a preceding sentence, or were unrelated to the sentences. As predicted, there was a large match advantage when the targets contained low-spatial frequencies, but no effect of linguistic context when these frequencies were filtered out. The proposed top-down feedback model is superior to repetition suppression in explaining the current results, as well as earlier findings about the lateralization of this effect and peculiar color match effects. We discuss these findings in the context of recent general proposals of prediction and top-down feedback.

from Brain Research
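
As a concrete illustration of the stimulus manipulation described in this abstract, the sketch below shows one common way to derive low-pass and high-pass versions of a picture with a Gaussian filter. This is a hypothetical example, not the authors’ procedure: the abstract does not say how the pictures were filtered, and the cutoff parameter (sigma) is an assumed value.

```python
# Minimal sketch: split a grayscale picture into low- and high-spatial-frequency
# versions via Gaussian blurring. The method and the sigma cutoff are illustrative
# assumptions; the study's actual filtering procedure is not given in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=8.0):
    """Return (low_pass, high_pass) versions of a 2-D grayscale image array."""
    img = np.asarray(image, dtype=float)
    low_pass = gaussian_filter(img, sigma=sigma)  # blurring keeps only coarse structure
    high_pass = img - low_pass + img.mean()       # residual fine detail, re-centered on mean luminance
    return low_pass, high_pass

# Example usage with a random array standing in for a picture stimulus.
demo = np.random.rand(256, 256)
lp, hp = split_spatial_frequencies(demo, sigma=8.0)
```

The logic mirrors the study’s prediction: if top-down feedback from linguistic context operates on coarse, low-spatial-frequency input, only the low-pass versions should show a match advantage.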

Testing the theory of embodied cognition with subliminal words

In the current study, we tested the embodied cognition theory (ECT). The ECT postulates mandatory sensorimotor processing of words when accessing their meaning. We tested this prediction by investigating whether invisible (i.e., subliminal) spatial words activate responses based on their long-term and short-term meaning. Masking of the words is used to prevent word visibility and intentional elaboration of the words’ semantic content. In this way, masking specifically isolates mandatory sensorimotor processing of words as predicted by the ECT. Do spatial subliminal words activate responses nonetheless? In Experiment 1, we demonstrate a spatial congruence effect of the invisible words if they precede visible target words. In Experiment 2, we show that masked words activate responses based on their long-term meaning. In Experiment 3, we demonstrate that masked words are also processed according to their short-term response meaning. We conclude that the ECT is supported by our findings and discuss implications of our results for embodied theories of semantic word processing and masked priming experiments.

from Cognition

Language, gesture, action! A test of the Gesture as Simulated Action framework

The Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, 2008) holds that representational gestures are produced when actions are simulated as part of thinking and speaking. Accordingly, speakers should gesture more when describing images with which they have specific physical experience than when describing images that are less closely tied to action. Experiment 1 supported this hypothesis by showing that speakers produced more representational gestures when describing patterns they had physically made than when describing patterns they had only viewed. Experiment 2 replicated this finding and ruled out the possibility that the effect is due to decreased opportunity for verbal rehearsal when speakers physically made the patterns. Experiment 3 ruled out the possibility that the effect in Experiments 1 and 2 was due to motor priming from making the patterns. Taken together, these experiments support the central claim of the GSA framework by suggesting that speakers gesture when they express thoughts that involve simulations of actions.

from the Journal of Memory and Language

Verb impairment in aphasia: A priming study of body-part overlap

A group of verb-impaired aphasic individuals was able to automatically (and rapidly) activate somatotopic features of verbs, showing little evidence of impaired lexical-semantic representations. Hence, verb processing and verb naming were found to dissociate. In addition, this study extends our understanding of language processing by showing that actions are simulated by the human brain, even when verbs are encountered as de-contextualised single words. Further, somatotopic information is necessary, but not sufficient, for action simulation.

from Aphasiology

Look but don’t touch: Tactile disadvantage in processing modality-specific words

Recent neuroimaging research has shown that perceptual and conceptual processing share a common, modality-specific neural substrate, while work on modality switching costs suggests that they share some of the same attentional mechanisms. In three experiments, we employed a modality detection task that displayed modality-specific object properties (e.g., unimodal shrill, warm, crimson, or bimodal jagged, fluffy) for extremely short display times and asked participants to judge whether each property corresponded to a particular target modality (e.g., auditory, gustatory, tactile, olfactory, visual). Results show that perceptual and conceptual processing share a tactile disadvantage: people are less accurate in detecting expected information regarding the sense of touch than any other modality. These findings support embodied assertions that the conceptual system uses the perceptual system for the purposes of representation. We suggest that the tactile disadvantage emerges for linguistic stimuli due to the evolutionary adaptation of endogenous attention to incoming sensory stimuli.

from Cognition

Are cortical motor maps based on body parts or coordinated actions? Implications for embodied semantics

The embodied cognition approach to the study of the mind proposes that higher order mental processes such as concept formation and language are essentially based on perceptual and motor processes. Contrary to the classical approach in cognitive science, in which concepts are viewed as amodal, arbitrary symbols, embodied semantics argues that concepts must be “grounded” in sensorimotor experiences in order to have meaning. In line with this view, neuroimaging studies have shown a roughly somatotopic pattern of activation along cortical motor areas (broadly construed) for the observation of actions involving different body parts, as well as for action-related language comprehension. These findings have been interpreted in terms of a mirror-neuron system, which automatically matches observed and executed actions. However, the somatotopic pattern of activation found in these studies is very coarse, with significant overlap between body parts, and sometimes with multiple representations for the same body part. Furthermore, the localization of the respective activations varies considerably across studies. Based on recent work on the motor cortex in monkeys, we suggest that these discrepancies result from the organization of the primate motor cortex (again, broadly construed), which probably includes maps of the coordinated actions making up the individual’s motor repertoire, rather than a single, continuous map of the body. We review neurophysiological and neuroimaging data supporting this hypothesis and discuss ways in which this framework can be used to further test the links between neural mirroring and linguistic processing.

from Brain and Language

Actions, Words, and Numbers: A Motor Contribution to Semantic Processing?

Recent findings in neuroscience challenge the view that the motor system is exclusively dedicated to the control of actions, and it has been suggested that it may contribute critically to conceptual processes such as those involved in language and number representation. The aim of this review is to address this issue by illustrating some interactions between the motor system and the processing of words and numbers. First, we detail functional brain imaging studies suggesting that motor circuits may be recruited to represent the meaning of action-related words. Second, we summarize a series of experiments demonstrating some interference between the size of grip used to grasp objects and the magnitude processing of words or numbers. Third, we report data suggestive of a common representation of numbers and finger movements in the adult brain, a possible trace of the finger-counting strategies used in childhood. Altogether, these studies indicate that the motor system interacts with several aspects of word and number representations. Future research should determine whether these findings reflect a causal role of the motor system in the organization of semantic knowledge.

from Current Directions in Psychological Science