Monthly Archives: January 2009

Risk Factors for Hearing Loss in US Adults: Data From the National Health and Nutrition Examination Survey, 1999 to 2002

Objective: To evaluate and compare the effects of cardiovascular risk factors (hypertension, smoking, diabetes) and noise exposure (occupational, recreational, firearm) on frequency-specific audiometric thresholds among US adults, and to assess synergistic interactions between these exposures.

Design: National cross-sectional survey.

Setting/Participants: United States adults aged 20 to 69 years who participated in the 1999 to 2002 National Health and Nutrition Examination Survey (N = 3,527).

Main Outcome Measures: Air-conduction thresholds at 0.5 to 8 kHz (dB) in the poorer-hearing ear. Multivariate models adjusted for age, sex, race/ethnicity, and educational level.

Results: Exposure to firearm noise was significantly associated with high-frequency (4-8 kHz) hearing loss (HL), whereas smoking and diabetes were associated with significantly increased hearing thresholds across the frequency range (0.5-8 kHz). A significant interaction was observed between firearm noise exposure and heavy smoking: at 8 kHz, firearm noise was associated with a mean 8-dB hearing loss in heavy smokers compared with a mean 2-dB loss in nonsmokers. We also observed significant interactions between firearm noise exposure and diabetes.

Conclusion: Noise exposure was associated with high-frequency HL, whereas cardiovascular risk generated by smoking and diabetes was associated with both high- and low-frequency HL. The frequency-specific effects of these exposures may offer insight into mechanisms of cochlear damage. We demonstrated an interaction between cardiovascular risk and noise exposures, possibly as a result of cochlear vulnerability due to microvascular insufficiency. Such significant interactions provide proof of principle that certain preexisting medical conditions can potentiate the effect of noise exposure on hearing. Data-based stratification of risk should guide our counseling of patients regarding HL.
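
The abstract does not give the model specification, but the analysis it describes amounts to a covariate-adjusted regression of frequency-specific thresholds on the exposures, with an exposure-by-exposure interaction term. A minimal sketch of such a model, using hypothetical variable names rather than the authors' actual coding, is:

\text{Threshold}_f = \beta_0 + \beta_1\,\text{Firearm} + \beta_2\,\text{HeavySmoking} + \beta_3\,(\text{Firearm} \times \text{HeavySmoking}) + \boldsymbol{\gamma}^{\top}\mathbf{Z}_{\text{adj}} + \varepsilon

where \text{Threshold}_f is the air-conduction threshold (dB) at frequency f in the poorer-hearing ear, \mathbf{Z}_{\text{adj}} collects the adjustment covariates (age, sex, race/ethnicity, educational level), and a positive, significant \beta_3 corresponds to the reported pattern at 8 kHz (a mean 8-dB loss associated with firearm noise in heavy smokers versus 2 dB in nonsmokers).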

from Otology & Neurotology

Speech Recognition in Cochlear Implant Recipients: Comparison of Standard HiRes and HiRes 120 Sound Processing

Objective: HiRes (HR) 120 is a sound-processing strategy purported to increase the precision of frequency-to-place mapping through the use of current steering. This within-subject study was designed to compare speech recognition as well as music and sound quality ratings for HR and HR 120 processing.

Setting: Cochlear implant/tertiary referral center.

Subjects: Eight postlinguistically deafened adults implanted with an Advanced Bionics CII or HR 90K cochlear implant.

Study Design/Main Outcome Measures: Performance with HR and HR 120 was assessed during 4 test sessions with a battery of measures including monosyllabic words, sentences in quiet and in noise, and ratings of sound quality and musical passages.

Results: Compared with HR, speech recognition results in adult cochlear implant recipients revealed small but significant improvements with HR 120 for single-syllable words and for 2 of 3 sentence recognition measures in noise. Scores for both easy and more difficult sentence material presented in quiet did not differ significantly between strategies. Additionally, music quality ratings were significantly better for HR 120 than for HR, and 7 of 8 subjects preferred HR 120 over HR for listening in everyday life.

Conclusion: HR 120 may offer equivalent or improved benefit to patients compared with HR. Differences in performance on test measures between strategies are dependent on speech recognition materials and listening conditions.
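
For readers unfamiliar with current steering, the idea is that delivering simultaneous, weighted current to two adjacent electrodes shifts the effective site of stimulation to intermediate cochlear positions, so an array can represent more spectral bands than it has physical contacts. The sketch below illustrates that weighting scheme only; the electrode count, the number of steering steps, and the function name are assumptions for illustration, not Advanced Bionics' actual HiRes 120 implementation.

# Minimal illustration of the current-steering idea (not the actual HiRes 120 algorithm).
NUM_ELECTRODES = 16   # physical contacts assumed for a CII/HiRes 90K array
STEPS_PER_PAIR = 8    # assumed steering steps between adjacent contacts
# (16 - 1) electrode pairs x 8 steps = 120 virtual spectral bands

def steering_weights(virtual_channel: int):
    """Map a virtual channel index to an adjacent electrode pair and a current split.

    alpha is the fraction of current sent to the lower-numbered electrode of the
    pair; the remainder (1 - alpha) goes to its neighbour. Sliding alpha from 1
    toward 0 steers the locus of excitation between the two contacts.
    """
    pair_index, step = divmod(virtual_channel, STEPS_PER_PAIR)
    alpha = 1.0 - step / STEPS_PER_PAIR
    return (pair_index, pair_index + 1), (alpha, 1.0 - alpha)

for vc in (0, 4, 12):
    (e1, e2), (w1, w2) = steering_weights(vc)
    print(f"virtual channel {vc}: electrodes {e1}/{e2}, split {w1:.2f}/{w2:.2f}")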

from Otology & Neurotology

The Birmingham Pediatric Bone-Anchored Hearing Aid Program: A 15-Year Experience

Objective: To evaluate the complication rates and outcomes of children who were fitted with a bone-anchored hearing aid (BAHA) on the Birmingham BAHA program.

Study Design: Retrospective case analysis of clinical records of all children implanted at Birmingham Children’s Hospital since the beginning of the program in 1992 until February 2007.

Patients: A total of 182 children younger than 16 years old fitted with a BAHA. Of these children, 107 had a significant medical history.

Results: Surgery was performed as a 2-stage procedure in 174 children, with a healing time of 3 to 4 months in 112 (64%) cases; single-stage surgery was performed in 8 cases. Of 230 loaded fixtures, 32 (14%) failed; multiple-fixture failures (18 fixtures) occurred in 7 patients. Adverse skin reactions appeared in 34 (17%) patients during the 15-year follow-up period. Revision surgery was undertaken in 14 (8%) cases because of skin overgrowth around the abutment; 5 of these cases required multiple surgical skin reductions.

Conclusion: The Birmingham program includes a high proportion of syndromic patients with complex medical problems. The fixture failure rate was 14%, including the multiple-fixture failures in children younger than 3 years. There was 1 serious complication. The BAHA is a reliable and effective treatment for selected patients. In our program, 97% of children currently wear their BAHA daily with continuing audiologic benefit.

from Otology & Neurotology

Communicating common ground: How mutually shared knowledge influences speech and gesture in a narrative task

Much research has been carried out into the effects of mutually shared knowledge (or common ground) on verbal language use. The present study investigates how common ground affects human communication when language is taken to consist of both speech and gesture. A semantic feature approach was used to capture the range of information represented in speech and gesture. Overall, utterances were found to contain less semantic information when interlocutors had mutually shared knowledge, even when the information represented in both modalities, speech and gesture, was considered. However, when the gestures were considered on their own, they represented only marginally less information. The findings also show that speakers gesture at a higher rate when common ground exists. It therefore appears that gestures play an important communicative function, even when speakers convey information that is already known to their addressee.

from Language and Cognitive Processes

Co-speech gesture in bimodal bilinguals

The effects of knowledge of sign language on co-speech gesture were investigated by comparing the spontaneous gestures of bimodal bilinguals (native users of American Sign Language and English; n=13) and non-signing native English speakers (n=12). Each participant viewed and re-told the Canary Row cartoon to a non-signer whom they did not know. Nine of the thirteen bimodal bilinguals produced at least one ASL sign, which we hypothesise resulted from a failure to inhibit ASL. Compared with non-signers, bimodal bilinguals produced more iconic gestures, fewer beat gestures, and more gestures from a character viewpoint. The gestures of bimodal bilinguals also exhibited a greater variety of handshape types and more frequent use of unmarked handshapes. We hypothesise that these semantic and form differences arise from an interaction between the ASL language production system and the co-speech gesture system.

from Language and Cognitive Processes

Co-speech gestures in a naming task: Developmental data

Few studies have explored the development of the gesture-speech system after the two-word stage. The aim of the present study is to examine developmental changes in speech and gesture use in the context of a simple naming task. Fifty-one children (age range: 2;3-7;6) were divided into five age groups and asked to name pictures representing objects, actions, or characteristics. In the context of a naming task that requires only the production of a single word, children produced pointing and representational gestures together with spoken responses. Pointing was the most frequent gesture produced by all groups of children. Among representational gestures, action gestures were more frequent than size and shape gestures. In addition, gesture production declined as a function of increasing age and spoken lexical competence. Results are discussed in terms of the links between action, gesture, and language, and the ways in which these may change developmentally.

from Language and Cognitive Processes

Cross-cultural variation of speech-accompanying gesture: A review

This article reviews the literature on cross-cultural variation of gestures. Four factors governing the variation were identified. The first factor is the culture-specific convention for form-meaning associations. This factor is involved in well-known cross-cultural differences in emblem gestures (e.g., the OK-sign), as well as pointing gestures. The second factor is culture-specific spatial cognition. Representational gestures (i.e., iconic and deictic gestures) that express spatial contents or metaphorically express temporal concepts differ across cultures, reflecting the cognitive differences in how direction, relative location and different axes in space are conceptualised and processed. The third factor is linguistic differences. Languages have different lexical and syntactic resources to express spatial information. This linguistic difference is reflected in how gestures express spatial information. The fourth factor is culture-specific gestural pragmatics, namely the principles under which gesture is used in communication. The culture-specificity in politeness of gesture use, the role of nodding in conversation, and the use of gesture space are discussed.

from Language and Cognitive Processes

When gesture-speech combinations do and do not index linguistic change

At the one-word stage children use gesture to supplement their speech (‘eat’+point at cookie), and the onset of such supplementary gesture-speech combinations predicts the onset of two-word speech (‘eat cookie’). Gesture thus signals a child’s readiness to produce two-word constructions. The question we ask here is what happens when the child begins to flesh out these early skeletal two-word constructions with additional arguments. One possibility is that gesture continues to be a forerunner of linguistic change as children flesh out their skeletal constructions by adding arguments. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. Our analysis of 40 children – from 14 to 34 months – showed that children relied on gesture to produce the first instance of a variety of constructions. However, once each construction was established in their repertoire, the children did not use gesture to flesh out the construction. Gesture thus acts as a harbinger of linguistic steps only when those steps involve new constructions, not when the steps merely flesh out existing constructions.

from Language and Cognitive Processes

Clinicians must take into account the validity, reliability, sensitivity, and practical utility of aphasia screening assessment tools before using them on their patients

No abstract available.

from Evidence-Based Communication Assessment and Intervention

Semantic access dysphasia resulting from left temporal lobe tumours

Unlike those of semantic degradation disorders, the mechanisms and anatomical underpinnings of semantic access disorders are still unclear. We report the results of a case series study on the effects of temporal lobe gliomas on the semantic access abilities of a group of 20 patients. Patients were tested 1–2 days before and 4–6 days after the removal of the tumour. Their semantic access skills were assessed with two spoken word-to-picture matching tasks, which aimed to separately control for rate of presentation, consistency and serial position effects (Experiment 1) and for word frequency and semantic distance effects (Experiment 2). These variables have been held to be critical in distinguishing access from degraded-store semantic deficits, with access deficits characterized by inconsistency of response, better performance with slower presentation rates and with semantically distant stimuli, and an absence of frequency effects. Degradation deficits show the opposite pattern. Our results showed that low-grade, slowly growing tumours tend not to produce signs of access problems. However, high-grade tumours, especially within the left hemisphere, consistently produce strong semantic deficits of a clear access type: response inconsistency and strong semantic distance effects were detected in the absence of word frequency effects. Yet effects of presentation rate and serial position were very weak, suggesting non-refractory behaviour in the tumour patients tested. This evidence, together with the results of lesion overlap, suggests the presence of a type of non-refractory semantic access deficit. We suggest that this deficit could be caused by a disconnection of posterior temporal lexical input areas from the semantic system.

from Brain

The neural basis of surface dyslexia in semantic dementia

Semantic dementia (SD) is a neurodegenerative disease characterized by atrophy of anterior temporal regions and progressive loss of semantic memory. SD patients often present with surface dyslexia, a relatively selective impairment in reading low-frequency words with exceptional or atypical spelling-to-sound correspondences. Exception words are typically ‘over-regularized’ in SD and pronounced as they are spelled (e.g. ‘sew’ is pronounced as ‘sue’). This suggests that in the absence of sufficient item-specific knowledge, exception words are read by relying mainly on subword processes for regular mapping of orthography to phonology. In this study, we investigated the functional anatomy of surface dyslexia in SD using functional magnetic resonance imaging (fMRI) and studied its relationship to structural damage with voxel-based morphometry (VBM). Five SD patients and nine healthy age-matched controls were scanned while they read regular words, exception words and pseudowords in an event-related design. Vocal responses were recorded and revealed that all patients were impaired in reading low-frequency exception words, and made frequent over-regularization errors. Consistent with prior studies, fMRI data revealed that both groups activated a similar basic network of bilateral occipital, motor and premotor regions for reading single words. VBM showed that these regions were not significantly atrophied in SD. In control subjects, a region in the left intraparietal sulcus was activated for reading pseudowords and low-frequency regular words but not exception words, suggesting a role for this area in subword mapping from orthographic to phonological representations. In SD patients only, this inferior parietal region, which was not atrophied, was also activated by reading low-frequency exception words, especially on trials where over-regularization errors occurred. These results suggest that the left intraparietal sulcus is involved in subword reading processes that are differentially recruited in SD when word-specific information is lost. This loss is likely related to degeneration of the anterior temporal lobe, which was severely atrophied in SD. Consistent with this, left mid-fusiform and superior temporal regions that showed reading-related activations in controls were not activated in SD. Taken together, these results suggest that the left inferior parietal region subserves subword orthographic-to-phonological processes that are recruited for exception word reading when retrieval of exceptional, item-specific word forms is impaired by degeneration of the anterior temporal lobe.

from Brain

Expressive spoken language development in deaf children with cochlear implants who are beginning formal education

This paper assesses the expressive spoken grammar skills of young deaf children using cochlear implants who are beginning formal education, compares them with those achieved by normally hearing children, and considers possible implications for educational management. Spoken language grammar was assessed, three years after implantation, in 45 children with profound deafness who were implanted between ten and 36 months of age (mean age = 27 months), using the South Tyneside Assessment of Syntactic Structures (Armstrong and Ainley, 1983), which is based on the Language Assessment and Remediation Screening Procedure (Crystal et al., 1976). Of the children in this study aged between four and six years, 58 per cent (26) were at or above the expressive spoken language grammatical level of normally hearing three year olds after three years of consistent cochlear implant use; however, 42 per cent (19) had skills below this level. Aetiology of deafness, age at implantation, educational placement, mode of communication and presence of additional disorders did not have a statistically significant effect (accepted at p < 0.05) on the development of expressive spoken grammar skills. While just over half of the group had acquired spoken language grammar skills equivalent to or above those of a normally hearing three year old, there remains a sizeable group who, after three years of cochlear implant use, had not attained this level. Spoken language grammar therefore remains an area of delay for many of the children in this group. All the children were attending school with hearing children whose language skills are likely to be in the normal range for four to six year olds. We therefore need to ensure that the ongoing educational management of these deaf children with implants addresses their spoken grammar delay in order that they can benefit more fully from formal education.

from Deafness and Education International

‘Ch’us mon propre Bescherelle’: Challenges from the Hip-Hop nation to the Quebec nation

We examine the uses of and attitudes towards language of members of the Montreal Hip-Hop community in relation to Quebec language-in-education policies. These policies, implemented in the 1970s, have ensured that French has become the common public language of an ethnically diverse young adult population in Montreal. We argue, using Blommaert’s (2005) model of orders of indexicality, that the dominant language hierarchy orders established by government policy have been both flattened and reordered by members of the Montreal Hip-Hop community, whose multilingual lyrics insist: (1) that while French is the lingua franca, it is a much more inclusive category which includes ‘Bad French,’ regional and class dialects, and European French; and (2) that all languages spoken by community members are valuable as linguistic resources for creativity and communication with multiple audiences. We draw from a database which includes interviews with and lyrics from rappers of Haitian, Latin-American, African-American and Québécois origin.

from Journal of Sociolinguistics

Dimensions of style: Context, politics and motivation in gay Israeli speech

Sociolinguistic research has traditionally examined stylistic variation as a way of understanding how speakers may use language indexically. Quantitatively, research has sought to correlate observed patterns of variation across such external parameters as context or topic with the ways in which speakers linguistically orient themselves to their immediate surroundings or to some other socially-salient reference group. Recently, this approach has been criticized for being too mechanistic. In this paper, I present a new method for examining stylistic variation that addresses this critique, and demonstrate how an attention to speakers’ motivations and interactional goals can be reconciled with a quantitative analysis of variation. I illustrate the proposed method with a quantitative examination of systematic patterns of prosodic variation in the speech of a group of Israeli men who are all members of various lesbian and gay political-activist groups.

from Journal of Sociolinguistics

Feeling your words: Hearing with your face

The movement of facial skin and muscles around the mouth plays an important role not only in the way the sounds of speech are made, but also in the way they are heard, according to a study by scientists at Haskins Laboratories, a Yale-affiliated research laboratory.

from EurekAlert.org