Blog Archives

Social Communication in Young Children with Traumatic Brain Injury: Relations with Corpus Callosum Morphometry

The purpose of the present investigation was to characterize the relations of specific social communication behaviors, including joint attention, gestures, and verbalization, with surface area of midsagittal corpus callosum (CC) subregions in children who sustained traumatic brain injury (TBI) before 7 years of age. Participants sustained mild (n = 10) or moderate-severe (n = 26) noninflicted TBI. The mean age at injury was 33.6 months; mean age at MRI was 44.4 months. The CC was divided into seven subregions. Relative to young children with mild TBI, those with moderate-severe TBI had smaller surface area of the isthmus. A semi-structured sequence of social interactions between the child and an examiner was videotaped and coded for specific social initiation and response behaviors. Social responses were similar across severity groups. Even though the complexity of their language was similar, children with moderate-severe TBI used more gestures than those with mild TBI to initiate social overtures; this may indicate a developmental lag or deficit, as the use of gestural communication typically diminishes after age 2. After controlling for age at scan and for total brain volume, the correlation of social interaction response and initiation scores with the midsagittal surface area of the CC regions was examined. For the total group, responding to a social overture using joint attention was significantly and positively correlated with surface area of all regions except the rostrum. Initiating joint attention was specifically and negatively correlated with surface area of the anterior midbody. Use of gestures to initiate a social interaction correlated significantly and positively with surface area of the anterior and posterior midbody. Social response and initiation behaviors were selectively related to regional callosal surface areas in young children with TBI. Specific brain–behavior relations indicate early regional specialization of anterior and posterior CC for social communication.

from the International Journal of Developmental Neuroscience
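An editorial aside for readers unfamiliar with this style of analysis: the covariate-adjusted correlations described above are partial correlations, which can be obtained by regressing the covariates (here, age at scan and total brain volume) out of both variables and correlating the residuals. A minimal Python sketch of that procedure follows, using simulated stand-in data and hypothetical variable names rather than the study's actual measures:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after removing the linear
    effect of the covariates (OLS residuals) from each."""
    Z = np.column_stack([np.ones(len(x)), covariates])  # add intercept
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    y_res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(x_res, y_res)

# Simulated stand-ins (n = 36, matching the sample size above);
# the coefficients below are arbitrary, for illustration only.
rng = np.random.default_rng(0)
n = 36
age_at_scan = rng.uniform(24, 60, n)            # months
total_brain_volume = rng.normal(1100, 100, n)   # cm^3
isthmus_area = 0.05 * total_brain_volume + rng.normal(0, 5, n)
joint_attention_response = 0.1 * isthmus_area + rng.normal(0, 3, n)

r, p = partial_corr(joint_attention_response, isthmus_area,
                    np.column_stack([age_at_scan, total_brain_volume]))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```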

Conceptual metaphors in gesture

This study investigates metaphoric gestures in face-to-face conversation. Gestures of this kind are found to be performed mainly in the central gesture space with noticeable, discernible configurations, providing visible evidence for cross-domain cognitive mappings and for the grounding of conceptual metaphors in people’s recurrent bodily experiences and in what people habitually do in social and cultural practices. Moreover, whether metaphorical thinking is conveyed by gesture alone or along with metaphoric speech, the manual enactment of even conventional metaphors manifests dynamism in communicating metaphors. Metaphoric gestures can provide salient, additional information about the aspect of the conceptualization that is the speaker’s focus of attention in real-time multimodal communication.

from Cognitive Linguistics

Gestural behavior in group oral assessment: a case study of higher- and lower-scoring students

This paper reports on a microanalysis of gestural behavior in classroom assessment situations. Videotaped excerpts of secondary school ESL students engaged in a peer group oral assessment task were transcribed to represent the gestures that occurred during the interaction. Using conversation analysis as the central analytic tool, this study explored potential differences in gesture use between higher- and lower-scoring students in group oral language assessment situations. Results show that the gestures of the higher-scoring group appeared to be well synchronized with the flow of speech and turn-taking, as well as with other nonverbal behavior such as eye contact and facial expression, whereas the gestural behavior of the lower-scoring group appeared to be an outward sign of language difficulties, disfluency, tension, and lack of confidence, largely unconnected to the accompanying speech. In addition, in the higher-scoring group, gestures seemed to function at all three discourse levels (paranarrative, metanarrative, and narrative), while in the lower-scoring group, gestures seemed to be used predominantly at the paranarrative level and to be involved in self-organizational processes. Implications for current oral test criterion modification and for students’ test preparation are discussed.

from the International Journal of Applied Linguistics

Knowledge of Mathematical Equivalence in Children With Specific Language Impairment: Insights From Gesture and Speech

Conclusion: Children with SLI showed delays in their knowledge of mathematical equivalence. Children with expressive and receptive impairments (ER-SLI) displayed greater delays than children with expressive-only impairments (E-SLI). Children with E-SLI sometimes expressed more advanced knowledge in gestures, suggesting that their knowledge is represented in a nonverbal format.

from Language, Speech and Hearing Services in Schools

Viewpoint in speech-gesture integration: Linguistic structure, discourse structure, and event structure

We examine a corpus of narrative data to determine which types of events evoke character viewpoint gestures, and which evoke observer viewpoint gestures. We consider early claims made by McNeill (1992) that character viewpoint tends to occur with transitive utterances and utterances that are causally central to the narrative. We argue that the structure of the event itself must be taken into account: there are some events that cannot plausibly evoke both types of gesture. We show that linguistic structure (transitivity), event structure (visuo-spatial and motoric properties), and discourse structure all play a role. We apply these findings to a recent model of embodied language production, the Gesture as Simulated Action framework.

from Language and Cognitive Processes

Improving language without words: first evidence from aphasia

In support of a multimodal representation of action, these findings unequivocally demonstrate that gestures interact with the speech production system, inducing long-lasting modifications at the lexical level in patients with cerebral damage.

from Neuropsychologia

“Are You Looking at Me?” The Influence of Gaze on Frequent Conversation Partners’ Management of Interaction with Adults with Acquired Hearing Impairment

This article presents findings from a larger conversation analysis study of interactional management by adults with severe or profound acquired hearing impairment and their experienced communication partners. It addresses how some partners display a consistent orientation toward their hearing-impaired cointeractants’ need for visual speech information. These partners monitor their cointeractants’ gaze direction and hence their availability as recipients of their talk. They time their talk in such a way that important components of their talk coincide with the availability of their hearing-impaired cointeractants’ gaze. Where necessary, they secure their cointeractants’ gaze by using conversational gaze-soliciting strategies such as speech disfluencies and gestures. From the audio record alone, the self-repairs by partners that constitute or result from these strategies might easily be attributed to problems of production. However, detailed visual examination of the data reveals the function of these self-initiated self-repairs by partners and underlines the importance of visual analysis to a full understanding of the management of interaction.

from Seminars in Hearing

Gesturing makes memories that last

When people are asked to perform actions, they remember those actions better than if they are asked to talk about the same actions. But when people talk, they often gesture with their hands, thus adding an action component to talking. The question we asked in this study was whether producing gesture along with speech makes the information encoded in that speech more memorable than it would have been without gesture. We found that gesturing during encoding led to better recall, even when the amount of speech produced during encoding was controlled. Gesturing during encoding improved recall whether the speaker chose to gesture spontaneously or was instructed to gesture. Thus, gesturing during encoding seems to function like action in facilitating memory.

from the Journal of Memory and Language

Comprehension of the Communicative Intent Behind Pointing and Gazing Gestures by Young Children with Williams Syndrome or Down Syndrome

Conclusions: At the group level, preschoolers with WS or DS were able to comprehend the communicative intent expressed by pointing and gazing gestures in a tabletop task. Children with DS evidenced significantly stronger pragmatic skills than did children with WS, providing further evidence that children with WS have more difficulty with sociocommunication than expected for chronological age or cognitive/language ability.

from the Journal of Speech, Language, and Hearing Research

The Impact of Object and Gesture Imitation Training on Language Use in Children With Autism Spectrum Disorder

Conclusion: These findings suggest that adding gesture imitation training to object imitation training can lead to greater gains in rate of language use than object imitation alone. Implications for both language development and early intervention are discussed.

from the Journal of Speech, Language, and Hearing Research

The role of speech-gesture congruency and delay in remembering action events

When watching others describe events, does information from their speech and gestures affect our memory representations for the gist and surface form of the described events? Does our reliance on these memory representations change over time? Forty participants watched videos of stories narrated by an actor. Each story included three target events that differed in their speech-gesture congruency for particular actions (congruent speech/gesture, incongruent speech/gesture, or speech with no gesture). Participants had to reproduce target event sentences, prompted after delays of 2, 6, or 18 minutes. Seeing gestures, either congruent or incongruent, led to better gist recall (more mentions of the target action, more gestures for the target action, and more complete target events) compared to not seeing gestures. However, seeing incongruent gestures sometimes led to reproductions of the incongruent gestures, particularly after short delays, as well as inaccuracies in speech. Our results suggest that over time people increasingly rely on multimodal gist-based representations and rely less on representations that include surface and source information about speech and gesture.

from Language and Cognitive Processes

Language, gesture, action! A test of the Gesture as Simulated Action framework

The Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, 2008) holds that representational gestures are produced when actions are simulated as part of thinking and speaking. Accordingly, speakers should gesture more when describing images with which they have specific physical experience than when describing images that are less closely tied to action. Experiment 1 supported this hypothesis by showing that speakers produced more representational gestures when describing patterns they had physically made than when describing patterns they had only viewed. Experiment 2 replicated this finding and ruled out the possibility that the effect is due to decreased opportunity for verbal rehearsal when speakers physically made the patterns. Experiment 3 ruled out the possibility that the effect in Experiments 1 and 2 was due to motor priming from making the patterns. Taken together, these experiments support the central claim of the GSA framework by suggesting that speakers gesture when they express thoughts that involve simulations of actions.

from the Journal of Memory and Language

Speech-and-gesture integration in high functioning autism

This study examined iconic gesture comprehension in autism, with the goal of assessing whether cross-modal processing difficulties impede speech-and-gesture integration. Participants were 19 adolescents with high functioning autism (HFA) and 20 typical controls matched on age, gender, verbal IQ, and socio-economic status (SES). Gesture comprehension was assessed via quantitative analyses of visual fixations during a video-based task, using the visual world paradigm. Participants’ eye movements were recorded while they watched videos of a person describing one of four shapes shown on a computer screen, using speech-and-gesture or speech-only descriptions. Participants clicked on the shape that the speaker described. Since gesture naturally precedes speech, earlier visual fixations to the target shape during speech-and-gesture trials compared to speech-only trials would suggest immediate integration of auditory and visual information. Analyses of eye movements supported this pattern in control participants but not in individuals with autism: iconic gestures facilitated comprehension in typical individuals but hindered comprehension in those with autism. Cross-modal processing difficulties in autism were not accounted for by impaired unimodal speech or gesture processing. The results have important implications for the treatment of children and adults with this disorder.

from Cognition
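
As a purely illustrative aside (not the study’s data or analysis code), the fixation-latency contrast this abstract describes could be sketched in Python along the following lines, with all numbers simulated:

```python
import numpy as np
from scipy import stats

# Hypothetical first-fixation latencies to the target shape (ms),
# one value per trial. Earlier fixations in the speech-and-gesture
# condition would be consistent with immediate integration of the
# visual (gesture) and auditory (speech) channels.
rng = np.random.default_rng(1)
latency_speech_gesture = rng.normal(900, 150, 40)  # gesture precedes speech
latency_speech_only = rng.normal(1100, 150, 40)

t, p = stats.ttest_ind(latency_speech_gesture, latency_speech_only,
                       equal_var=False)  # Welch's t-test
print(f"speech+gesture: {latency_speech_gesture.mean():.0f} ms, "
      f"speech-only: {latency_speech_only.mean():.0f} ms "
      f"(t = {t:.2f}, p = {p:.2g})")
```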
