Blog Archives

Clinical Evaluation of the Mini-Mental State Exam with Culturally Deaf Senior Citizens

The Mini-Mental State Exam (MMSE) is commonly used to screen cognitive function in a clinical setting. The measure has been published in over 50 languages; however, the validity and reliability of the MMSE have yet to be assessed with the culturally Deaf elderly population. Participants consisted of 117 Deaf senior citizens, aged 55–89 (M = 69.44, SD = 8.55). Demographic information, including state of residence, age, and history of depression, head injury, and dementia diagnoses, was collected. A standard form of the MMSE was used with modification of test administration and stimuli, including translation of English test items into a sign-based form and alteration of two items to make them culturally and linguistically appropriate. Significant correlations were observed between overall test score and education level (r = .23, p = .01) as well as test score and age (r = –.33, p < .001). Patterns of responses were analyzed and revealed several items that were problematic and yielded fewer correct responses. These results indicate that clinicians need to be aware of cultural and linguistic factors associated with the deaf population that may impact test performance and clinical interpretation of test results. On the basis of these data, there is an increased risk of false positives when using this measure. Further research is needed to validate the use of this measure with the culturally Deaf population.

from Archives of Clinical Neuropsychology


The link between form and meaning in American Sign Language: Lexical processing effects

Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture–sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of American Sign Language (ASL). The results show that native ASL signers are faster to respond when a specific property iconically represented in a sign is made salient in the corresponding picture, thus providing evidence that a closer mapping between meaning and form can aid in lexical retrieval. While late 2nd-language learners appear to use iconicity as an aid to learning sign (R. Campbell, P. Martin, & T. White, 1992), they did not show the same facilitation effect as native ASL signers, suggesting that the task tapped into more automatic language processes. Overall, the findings suggest that completely arbitrary mappings between meaning and form may not be more advantageous in language and that, rather, arbitrariness may simply be an accident of modality.

from the Journal of Experimental Psychology: Learning, Memory, and Cognition

Eye Gaze During Comprehension of American Sign Language by Native and Beginning Signers

from the Journal of Deaf Studies and Deaf Education

An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer’s face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer’s mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer’s face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

Encoding, Rehearsal, and Recall in Signers and Speakers: Shared Network but Differential Engagement

from Cerebral Cortex

Short-term memory (STM), or the ability to hold verbal information in mind for a few seconds, is known to rely on the integrity of a frontoparietal network of areas. Here, we used functional magnetic resonance imaging to ask whether a similar network is engaged when verbal information is conveyed through a visuospatial language, American Sign Language, rather than speech. Deaf native signers and hearing native English speakers performed a verbal recall task, where they had to first encode a list of letters in memory, maintain it for a few seconds, and finally recall it in the order presented. The frontoparietal network described to mediate STM in speakers was also observed in signers, with its recruitment appearing independent of the modality of the language. This finding supports the view that signed and spoken STM rely on similar mechanisms. However, deaf signers and hearing speakers differentially engaged key structures of the frontoparietal network as the stages of STM unfold. In particular, deaf signers relied to a greater extent than hearing speakers on passive memory storage areas during encoding and maintenance, but on executive process areas during recall. This work opens new avenues for understanding similarities and differences in STM performance in signers and speakers.

American Sign Language syntactic and narrative comprehension in skilled and less skilled readers: Bilingual and bimodal evidence for the linguistic basis of reading

from Applied Psycholinguistics

We tested the hypothesis that syntactic and narrative comprehension of a natural sign language can serve as the linguistic basis for skilled reading. Thirty-one adults who were deaf from birth and used American Sign Language (ASL) were classified as skilled or less skilled readers using an eighth-grade criterion. Proficiency with ASL syntax and narrative comprehension of ASL and Manually Coded English (MCE) were measured in conjunction with variables including exposure to print, nonverbal IQ, and hearing and speech ability. Skilled readers showed high levels of ASL syntactic ability and narrative comprehension, whereas less skilled readers did not. Regression analyses showed ASL syntactic ability to contribute unique variance in English reading performance when the effects of nonverbal IQ, exposure to print, and MCE comprehension were controlled. A reciprocal relationship between print exposure and sign language proficiency was further found. The results indicate that the linguistic basis of reading, and the reciprocal relationship between print exposure and "through the air" language, can be bimodal, as in being a sign language or a spoken language, and bilingual, as in being ASL and English.