Blog Archives

Conceptual metaphors in gesture

This study investigates metaphoric gestures in face-to-face conversation. It finds that gestures of this kind are performed mainly in the central gesture space, with noticeable and discernible configurations, providing visible evidence for cross-domain cognitive mappings and for the grounding of conceptual metaphors in people’s recurrent bodily experiences and in what people habitually do in social and cultural practices. Moreover, whether metaphorical thinking is conveyed by gesture alone or together with metaphoric speech, the manual enactment of even conventional metaphors lends dynamism to the communication of metaphor. Metaphoric gestures can provide salient, additional information about the aspect of the conceptualization that is the speaker’s focus of attention in real-time multimodal communication.

from Cognitive Linguistics

Modulation of the motor system during visual and auditory language processing

Studies of embodied cognition have demonstrated the engagement of the motor system when people process action-related words and concepts. However, research using transcranial magnetic stimulation (TMS) to examine linguistic modulation in primary motor cortex has produced inconsistent results. Some studies report that action words produce an increase in corticospinal excitability; others, a decrease. Given the differences in methodology and modality, we re-examined this issue, comparing conditions in which participants either read or listened to the same set of action words. In separate blocks of trials, participants were presented with lists of words in either the visual or the auditory modality, and a TMS pulse was applied over left motor cortex either 150 or 300 ms after word onset. The motor evoked potentials (MEPs) elicited were larger following the presentation of action words than of control words. However, this effect was observed only when the words were presented visually; no changes in MEPs were found when the words were presented auditorily. A review of the TMS literature on action word processing reveals a similar modality effect on corticospinal excitability. We discuss different hypotheses that might account for this differential modulation of action semantics by vision and audition.

from Experimental Brain Research
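As a concrete illustration of the contrast this study turns on, here is a minimal sketch of how the key comparison could be computed from trial-level data: mean MEP amplitude for action versus control words, split by presentation modality. This is not the authors' analysis code; the file name, column names, and condition labels are assumptions for illustration.

```python
# Minimal sketch of the action-vs-control MEP contrast, per modality.
# Assumes a tidy CSV with columns: subject, modality ("visual"/"auditory"),
# word_type ("action"/"control"), mep_uv (MEP amplitude in microvolts).
import pandas as pd

df = pd.read_csv("mep_data.csv")  # hypothetical file

# Average within subject first, then across subjects, for each condition cell.
cell_means = (
    df.groupby(["subject", "modality", "word_type"])["mep_uv"]
      .mean()
      .groupby(["modality", "word_type"])
      .mean()
      .unstack("word_type")
)

# The reported pattern: action > control for visually presented words only.
cell_means["action_minus_control"] = cell_means["action"] - cell_means["control"]
print(cell_means)
```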

From Language Comprehension to Action Understanding and Back Again

A controversial question in cognitive neuroscience is whether comprehension of words and sentences engages brain mechanisms specific for decoding linguistic meaning or whether language comprehension occurs through more domain-general sensorimotor processes. Accumulating behavioral and neuroimaging evidence suggests a role for cortical motor and premotor areas in passive action-related language tasks, regions that are known to be involved in action execution and observation. To examine the involvement of these brain regions in language and nonlanguage tasks, we used functional magnetic resonance imaging (fMRI) on a group of 21 healthy adults. During the fMRI session, all participants 1) watched short object-related action movies, 2) looked at pictures of man-made objects, and 3) listened to and produced short sentences describing object-related actions and man-made objects. Our results are among the first to reveal, in the human brain, a functional specialization within the ventral premotor cortex (PMv) for observing actions and for observing objects, and a different organization for processing sentences describing actions and objects. These findings argue against the strongest version of the simulation theory for the processing of action-related language.

from Cerebral Cortex

Viewpoint in speech-gesture integration: Linguistic structure, discourse structure, and event structure

We examine a corpus of narrative data to determine which types of events evoke character viewpoint gestures, and which evoke observer viewpoint gestures. We consider early claims made by McNeill (1992) that character viewpoint tends to occur with transitive utterances and utterances that are causally central to the narrative. We argue that the structure of the event itself must be taken into account: there are some events that cannot plausibly evoke both types of gesture. We show that linguistic structure (transitivity), event structure (visuo-spatial and motoric properties), and discourse structure all play a role. We apply these findings to a recent model of embodied language production, the Gestures as Simulated Action framework.

from Language and Cognitive Processes
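To make the multi-factor claim concrete, here is a deliberately crude sketch of how the three kinds of structure the authors identify might jointly bias a narrator toward character-viewpoint or observer-viewpoint gestures. The scoring rule and weights are my own illustration, not the paper's coding scheme or the Gestures as Simulated Action framework itself.

```python
# Toy illustration: transitivity (linguistic structure), motoric/visuo-spatial
# properties (event structure), and causal centrality (discourse structure)
# jointly bias the choice of gesture viewpoint. Weights are arbitrary.

def predict_viewpoint(transitive: bool, motoric: bool, causally_central: bool) -> str:
    """Return the gesture viewpoint favored by these event features."""
    score = 0
    score += 1 if transitive else -1          # linguistic structure
    score += 1 if motoric else -1             # event structure
    score += 1 if causally_central else 0     # discourse structure
    return "character viewpoint" if score > 0 else "observer viewpoint"

print(predict_viewpoint(transitive=True,  motoric=True,  causally_central=True))
print(predict_viewpoint(transitive=False, motoric=False, causally_central=False))
```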

“The drawer is still closed”: Simulating past and future actions when processing sentences that describe a state

In two experiments using the action-sentence compatibility paradigm, we investigated the simulation processes that readers undertake when processing state descriptions with adjectives (e.g., Die Schublade ist offen/zu. [The drawer is open/shut]) or adjectival passives (e.g., Die Schublade ist geöffnet/geschlossen. [The drawer is opened/closed]). In Experiment 1 we found no evidence for action simulation, not even in sentences with adjectival passives. The results were different in Experiment 2, where the temporal particle noch (still/yet) was inserted into the sentences (e.g., The drawer is still closed). Under these circumstances, readers mentally simulated the action that brought about the current state for sentences with adjectival passives, but the action that would change the current state for sentences with adjectives. Thus, comprehenders are in principle sensitive to the subtle differences between adjectives and adjectival passives, but highlighting the temporal dimension of the described states of affairs seems to be a necessary precondition for obtaining evidence of action simulation with sentences that describe a state. We discuss implications for future studies employing neuropsychological methods.

from Brain and Language
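For readers unfamiliar with the paradigm, the action-sentence compatibility effect reduces to a simple difference of mean reaction times between incompatible and compatible trials. A toy computation, with entirely made-up trial data:

```python
# Toy illustration of the action-sentence compatibility effect (ACE):
# responses are faster when the response direction matches the direction
# implied by the sentence. Positive ACE = incompatible slower than compatible.

trials = [
    # (implied_direction, response_direction, rt_ms) -- made-up values
    ("toward", "toward", 612), ("toward", "away",   655),
    ("away",   "away",   607), ("away",   "toward", 649),
    ("toward", "toward", 598), ("away",   "toward", 661),
]

compatible   = [rt for imp, resp, rt in trials if imp == resp]
incompatible = [rt for imp, resp, rt in trials if imp != resp]

ace = sum(incompatible) / len(incompatible) - sum(compatible) / len(compatible)
print(f"ACE = {ace:.1f} ms")  # > 0 indicates a compatibility advantage
```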

Body Schemantics: On the role of the body schema in embodied lexical-semantic representations

Words denoting manipulable objects activate sensorimotor brain areas, likely reflecting action experience with the denoted objects. In particular, these sensorimotor lexical representations have been found to reflect the way in which an object is used. In the current paper we present data from two experiments (one behavioral and one neuroimaging) in which we investigate whether body schema information, putatively necessary for interacting with functional objects, is also recruited during lexical processing. To this end, we presented participants with words denoting objects that are typically brought towards or away from the body (e.g., cup or key, respectively). We hypothesized that objects typically brought to a location on the body (e.g., cup) are relatively more reliant on body schema representations, since the final goal location of the cup (i.e., the mouth) is represented primarily through posture and body co-ordinates. In contrast, objects typically brought to a location away from the body (e.g., key) are relatively more dependent on visuo-spatial representations, since the final goal location of the key (i.e., a keyhole) is perceived visually. The behavioral study showed that prior planning of a movement along an axis towards or away from the body facilitates processing of words with a congruent action-semantic feature (e.g., preparing a movement towards the body facilitates processing of cup). In an fMRI study we showed that words denoting objects brought towards the body engage brain areas involved in processing information about human bodies (i.e., the extrastriate body area, middle occipital gyrus and inferior parietal lobe) relatively more than words denoting objects typically brought away from the body. The results provide converging evidence that the body schema is implicitly activated in the processing of lexical information.

from Neuropsychologia

Situated sentence processing: The coordinated interplay account and a neurobehavioral model

Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene–sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519–543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA’s three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789–795). Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world.

from Brain and Language
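The CIA's three stages lend themselves to a schematic sketch: a recurrent interpretation state updated word by word, attention weights over scene-object representations derived from that state, and attended scene information fed back into comprehension. The toy loop below is my own simplification under those assumptions, not the Mayberry et al. connectionist model itself; all representations and weights are random placeholders.

```python
# Schematic toy of the CIA's processing loop (not the published model).
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # toy representation size
W_in, W_rec, W_scene = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

scene = rng.normal(size=(3, d))           # three scene objects (hypothetical)
state = np.zeros(d)                       # incremental interpretation state

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for word_vec in rng.normal(size=(5, d)):  # five "words" of a toy utterance
    # (i) incremental interpretation of the next word
    state = np.tanh(W_in @ word_vec + W_rec @ state)
    # (ii) language-mediated visual attention over scene objects
    attn = softmax(scene @ state)
    # (iii) attended scene information feeds back into comprehension
    state = np.tanh(state + W_scene @ (attn @ scene))
    print("attention over scene objects:", np.round(attn, 2))
```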

Much mouth much tongue: Chinese metonymies and metaphors of verbal behaviour.

This paper explores metonymical and metaphorical expressions of verbal behaviour in Chinese. While metonymy features prominently in some of these expressions and metaphor in others, the entire dataset can best be viewed as spanning the metonymy-metaphor continuum. That is, we observe a gradation of conceptual distance between the source and target which corresponds to the gradation of figurativity. Specifically, roughly half of the expressions we encounter are based on the ORGAN OF SPEECH ARTICULATION FOR SPEECH metonymy and can be considered as clustering around the metonymic pole. The other half can be seen as tending towards the metaphoric pole, as they are largely motivated by conceptual metaphors: (a) VERBAL BEHAVIOUR IS PHYSICAL ACTION, (b) SPEECH IS CONTAINER, (c) ARGUMENT IS WAR (or WORDS ARE WEAPONS) and (d) WORDS ARE FOOD. The interaction between metonymy and metaphor is an important cognitive strategy in the conceptualisation of verbal behaviour. The findings (i) evidence the gradient predictability of idiom meanings based on semantic compositionality, (ii) confirm the hypothesis of a bodily and experiential basis of cognition, (iii) suggest the existence of culture-specific models in the utilization of basic experiences, and (iv) point to the role of emotion in the metaphorisation of verbal behaviour as a socio-emotional domain.

from Cognitive Linguistics

Brief training with co-speech gesture lends a hand to word learning in a foreign language

Recent research in psychology and neuroscience has demonstrated that co-speech gestures are semantically integrated with speech during language comprehension and development. The present study explored whether gestures also play a role in language learning in adults. In Experiment 1, we exposed adults to a brief training session presenting novel Japanese verbs with and without hand gestures. Three sets of memory tests (at five minutes, two days and one week) showed that the greatest word learning occurred when gestures conveyed imagistic information redundant with speech. Experiment 2 was a preliminary investigation into possible neural correlates of such learning. We exposed participants to similar training sessions over three days and then measured event-related potentials (ERPs) to words learned with and without co-speech gestures. The main finding was that words learned with gesture produced a larger Late Positive Complex (indexing recollection) at bilateral parietal sites than words learned without gesture. However, there was no significant difference between the two conditions for the N400 component (indexing familiarity). The results have implications for pedagogical practices in foreign language instruction and for theories of gesture-speech integration.

from Language and Cognitive Processes
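The ERP contrast reported here boils down to mean amplitudes in two time windows, compared across learning conditions. Below is a minimal sketch with assumed array shapes, window boundaries, and random placeholder data; it is not the authors' pipeline, only the shape of the comparison.

```python
# Minimal sketch of the N400 vs. LPC contrast for words learned
# with vs. without gesture, at parietal electrodes. All values assumed.
import numpy as np

sfreq = 250                               # sampling rate in Hz (assumed)
times = np.arange(-0.2, 1.0, 1 / sfreq)   # epoch from -200 to 1000 ms

# Hypothetical epochs: (n_trials, n_parietal_channels, n_times), microvolts
rng = np.random.default_rng(1)
gesture    = rng.standard_normal((40, 4, times.size))
no_gesture = rng.standard_normal((40, 4, times.size))

def mean_amplitude(epochs, t0, t1):
    """Average over trials, channels, and the t0-t1 window (seconds)."""
    window = (times >= t0) & (times < t1)
    return epochs[:, :, window].mean()

for name, (t0, t1) in {"N400": (0.3, 0.5), "LPC": (0.5, 0.9)}.items():
    diff = mean_amplitude(gesture, t0, t1) - mean_amplitude(no_gesture, t0, t1)
    print(f"{name} gesture-minus-no-gesture: {diff:.2f} µV")
```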
