Blog Archives

Do all ducks lay eggs? The generic overgeneralization effect

Generics are statements such as “tigers are striped” and “ducks lay eggs”. They express general, though not universal or exceptionless, claims about kinds (Carlson & Pelletier, 1995). For example, the generic “ducks lay eggs” seems true even though many ducks (e.g. the males) do not lay eggs. The universally quantified version of the statement should be rejected, however: it is incorrect to say “all ducks lay eggs”, since many ducks do not lay eggs. We found that adults nonetheless often judged such universal statements true, despite knowing that only one gender had the relevant property (Experiment 1). The effect was not due to participants interpreting the universals as quantifying over subkinds, or as applying to only a subset of the kind (e.g. only the females) (Experiment 2), and it persisted even when people judged that male ducks did not lay eggs only moments before (Experiment 3). It also persisted when people were presented with correct alternatives such as “some ducks do not lay eggs” (Experiment 4). Our findings reveal a robust generic overgeneralization effect, predicted by the hypothesis that generics express primitive, default generalizations.

from the Journal of Memory and Language

The cognitive and linguistic foundations of early reading development: A Norwegian latent variable longitudinal study.

The authors present the results of a 2-year longitudinal study of 228 Norwegian children beginning some 12 months before formal reading instruction began. The relationships between a range of cognitive and linguistic skills (letter knowledge, phoneme manipulation, visual–verbal paired-associate learning, rapid automatized naming (RAN), short-term memory, and verbal and nonverbal ability) were investigated and related to later measures of word recognition in reading. Letter knowledge, phoneme manipulation, and RAN were independent longitudinal predictors of early reading (word recognition) skills in the regular Norwegian orthography. Early reading skills initially appeared well described as a unitary construct that then showed rapid differentiation into correlated subskills (word decoding, orthographic choice, text reading, and nonword reading) that showed very high levels of longitudinal stability. The results are related to current ideas about the cognitive foundations of early reading skills. (PsycINFO Database Record (c) 2009 APA, all rights reserved)

from Developmental Psychology

Language experience and consonantal context effects on perceptual assimilation of French vowels by American-English learners of French

Recent research has called for an examination of perceptual assimilation patterns in second-language speech learning. This study examined the effects of language learning and consonantal context on perceptual assimilation of Parisian French (PF) front rounded vowels /y/ and /œ/ by American English (AE) learners of French. AE listeners differing in their French language experience (no experience, formal instruction, formal-plus-immersion experience) performed an assimilation task involving PF /y, œ, u, o, i, ɛ, a/ in bilabial /rabVp/ and alveolar /radVt/ contexts, presented in phrases. PF front rounded vowels were assimilated overwhelmingly to back AE vowels. For PF /œ/, assimilation patterns differed as a function of language experience and consonantal context. However, PF /y/ revealed no experience effect in alveolar context. In bilabial context, listeners with extensive experience assimilated PF /y/ to /ju/ less often than listeners with no or only formal experience, a pattern predicting the poorest /u-y/ discrimination for the most experienced group. An “internal consistency” analysis indicated that responses were most consistent with extensive language experience and in bilabial context. Acoustical analysis revealed that acoustical similarities among PF vowels alone cannot explain context-specific assimilation patterns. Instead it is suggested that native-language allophonic variation influences context-specific perceptual patterns in second-language learning.

©2009 Acoustical Society of America

from the Journal of the Acoustical Society of America

Effects of obstruent consonants on fundamental frequency at vowel onset in English

When a vowel follows an obstruent, the fundamental frequency (F0) in the first few tens of milliseconds of the vowel is known to be influenced by the voicing characteristics of the consonant. This influence was re-examined in the study reported here. Stops, fricatives, and the nasal /m/ were paired with the vowels /i,/ to form CVm syllables. Target syllables were embedded in carrier sentences, and intonation was varied to produce each syllable in either a high, low, or neutral pitch environment. In a high-pitch environment, F0 following voiceless obstruents is significantly increased relative to the baseline /m/, but following voiced obstruents it closely traces the baseline. In a low-pitch environment, F0 is very slightly increased following all obstruents, voiced and voiceless. It is suggested that for certain pitch environments a conflict can occur between gestures corresponding to the segmental feature [stiff vocal folds] and intonational elements. The results are interpreted as different acoustic manifestations of [stiff] in different pitch environments. The spreading of the vocal folds that occurs during voiceless stops in certain contexts in English is an enhancing gesture, which aids the resolution of the gestural conflict by allowing the defining segmental gesture to be weakened without losing perceptual salience.

from the Journal of the Acoustical Society of America

Modeling tone and intonation in Mandarin and English as a process of target approximation

This paper reports the development of a quantitative target approximation (qTA) model for generating F0 contours of speech. The qTA model simulates the production of tone and intonation as a process of syllable-synchronized sequential target approximation [Xu, Y. (2005). “Speech melody as articulatorily implemented communicative functions,” Speech Commun. 46, 220–251]. It adopts a set of biomechanical and linguistic assumptions about the mechanisms of speech production. The communicative functions directly modeled are lexical tone in Mandarin and lexical stress in English and focus in both languages. The qTA model is evaluated by extracting function-specific model parameters from natural speech via supervised learning (automatic analysis by synthesis) and comparing the F0 contours generated with the extracted parameters to those of natural utterances through numerical evaluation and perceptual testing. The F0 contours generated by the qTA model with the learned parameters were very close to the natural contours in terms of root mean square error, rate of human identification of tone, and focus and judgment of naturalness by human listeners. The results demonstrate that the qTA model is both an effective tool for research on tone and intonation and a potentially effective system for automatic synthesis of tone and intonation.

from the Journal of the Acoustical Society of America

Perception of rhythmic grouping depends on auditory experience

Many aspects of perception are known to be shaped by experience, but others are thought to be innate universal properties of the brain. A specific example comes from rhythm perception, where one of the fundamental perceptual operations is the grouping of successive events into higher-level patterns, an operation critical to the perception of language and music. Grouping has long been thought to be governed by innate perceptual principles established a century ago. The current work demonstrates instead that grouping can be strongly dependent on culture. Native English and Japanese speakers were tested for their perception of grouping of simple rhythmic sequences of tones. Members of the two cultures showed different patterns of perceptual grouping, demonstrating that these basic auditory processes are not universal but are shaped by experience. It is suggested that the observed perceptual differences reflect the rhythms of the two languages, and that native language can exert an influence on general auditory perception at a basic level.

from the Journal of the Acoustical Society of America

Differential levels of speech and manual dysfluency in adults who stutter during simultaneous drawing and speaking tasks

We examined the disruptive effects of stuttering on manual performance during simultaneous speaking and drawing tasks. Fifteen stuttering and fifteen non-stuttering participants drew continuous circles with a pen on a digitizer tablet under three conditions: silent (i.e., neither reading nor speaking), reading aloud, and choral reading (i.e., reading aloud in unison with another reader). We counted the frequency of stuttering events in the speaking tasks and measured pen stroke duration and pen stroke dysfluency (normalized jerk) in all three tasks. The control group was stutter-free and did not increase manual dysfluency in any condition. In the silent condition, the stuttering group performed pen movements without evidence of dysfluency, similar to the control group. However, in the reading aloud condition, the stuttering group stuttered on 12% of the syllables and showed increased manual dysfluency. In the choral reading condition, stuttering was virtually eliminated (reduced by 97%), but manual dysfluency was reduced by only 47% relative to the reading aloud condition. Trials with more stuttering events generally showed higher manual dysfluency. The results are consistent with a model in which episodes of stuttering and motor dysfluency are related to neural interconnectivity between manual and speech processes.

from Human Movement Science

Linguistic Competence in Aphasia

from Perspectives on Augmentative and Alternative Communication

Loss of implicit linguistic competence assumes a loss of linguistic rules, necessary linguistic computations, or representations. In aphasia, the inherent neurological damage is frequently assumed to be a loss of implicit linguistic competence that has damaged or wiped out neural centers or pathways that are necessary for maintenance of the language rules and representations needed to communicate. Not everyone agrees with this view of language use in aphasia. The measurement of implicit language competence, although apparently necessary and satisfying for theoretic linguistics, is complexly interwoven with performance factors. Transience, stimulability, and variability in aphasic language use provide evidence for an access deficit model that supports performance loss. Advances in understanding linguistic competence and performance may be informed by careful study of bilingual language acquisition and loss, the language of savants, the language of feral children, and advances in neuroimaging. Social models of aphasia treatment, coupled with an access deficit view of aphasia, can salve our restless minds and allow pursuit of maximum interactive communication goals even without a comfortable explanation of implicit linguistic competence in aphasia.

Linguistic Interactions: A Therapeutic Consideration for Adults With Aphasia

from Perspectives on Augmentative and Alternative Communication

Linguistic interaction models suggest that interrelationships arise between structural language components and between structural and pragmatic components when language is used in social contexts. The linguist David Crystal (1986, 1987) has proposed that these relationships are central, not peripheral, to achieving desired clinical outcomes. For individuals with severe communication challenges, erratic or unpredictable relationships between structural and pragmatic components can result in atypical patterns of interaction between them and members of their social communities, which may create a perception of disablement. This paper presents a case study of a woman with fluent Wernicke’s aphasia that illustrates how attention to patterns of linguistic interaction may enhance AAC intervention for adults with aphasia.

Naming action in Japanese: Effects of semantic similarity and grammatical class

from Language and Cognitive Processes

Abstract
This study investigated whether the semantic similarity and grammatical class of distracter words affect the naming of pictured actions (verbs) in Japanese. Three experiments used the picture-word interference paradigm with participants naming picturable actions while ignoring distracters. In all three experiments, we manipulated the semantic similarity between distracters and targets (similar vs. dissimilar verbs) and the grammatical class of semantically dissimilar distracters (verbs, verbal nouns, and also nouns in Experiment 3) in addition to task demands (single word naming vs. phrase/sentence generation). While Experiment 1 used visually presented distracters, Experiments 2 and 3 used auditory distracter words to rule out possible confounding factors of orthography (kanji vs. hiragana). We found the same results in all three experiments: robust semantic interference in the absence of any effects of grammatical class. We discuss the lack of grammatical class effects in terms of structural characteristics of the Japanese language.

American Sign Language syntactic and narrative comprehension in skilled and less skilled readers: Bilingual and bimodal evidence for the linguistic basis of reading

from Applied Psycholinguistics

We tested the hypothesis that syntactic and narrative comprehension of a natural sign language can serve as the linguistic basis for skilled reading. Thirty-one adults who were deaf from birth and used American Sign Language (ASL) were classified as skilled or less skilled readers using an eighth-grade criterion. Proficiency with ASL syntax, and narrative comprehension of ASL and Manually Coded English (MCE), were measured in conjunction with variables including exposure to print, nonverbal IQ, and hearing and speech ability. Skilled readers showed high levels of ASL syntactic ability and narrative comprehension, whereas less skilled readers did not. Regression analyses showed ASL syntactic ability to contribute unique variance in English reading performance when the effects of nonverbal IQ, exposure to print, and MCE comprehension were controlled. A reciprocal relationship between print exposure and sign language proficiency was further found. The results indicate that the linguistic basis of reading, and the reciprocal relationship between print exposure and “through the air” language, can be bimodal, as in being a sign language or a spoken language, and bilingual, as in being ASL and English.

Articulatory evidence for feedback and competition in speech production

from Language and Cognitive Processes

Abstract
We report an experimental investigation of slips of the tongue using a Word Order Competition (WOC) paradigm in which context (entirely non-lexical, mixed) and competitor (whether a possible phoneme substitution would result in a word or not) were crossed. Our primary analysis uses electropalatographic (EPG) records to measure articulatory variation, and reveals that the articulation of onset phonemes is affected by two factors. First, onsets with real word competitors are articulated more similarly to the competitor onset than when the competitor would result in a non-word. Second, onsets produced in a non-lexical context vary more from the intended onset than when the context contains real words. We propose an account for these findings that incorporates feedback between phonological and lexical representations in a cascading model of speech production, and argue that measuring articulatory variation can improve our understanding of the cognitive processes involved in speech production.

Communicating common ground: How mutually shared knowledge influences speech and gesture in a narrative task

from Language and Cognitive Processes

Abstract
Much research has been carried out into the effects of mutually shared knowledge (or common ground) on verbal language use. The present study investigates how common ground affects human communication when language is regarded as consisting of both speech and gesture. A semantic feature approach was used to capture the range of information represented in speech and gesture. Overall, utterances were found to contain less semantic information when interlocutors had mutually shared knowledge, even when the information represented in both modalities, speech and gesture, was considered. However, when considering the gestures on their own, it was found that they represented only marginally less information. The findings also show that speakers gesture at a higher rate when common ground exists. It appears, therefore, that gestures play an important communicational function, even when speakers convey information that is already known to their addressee.

Effects of different types of hand gestures in persuasive speech on receivers’ evaluations

from Language and Cognitive Processes

Abstract
Hand gestures have a close link with speech and with social perception and persuasion processes; however, to date the role of hand gestures alone in persuasive speech has not been experimentally investigated. An experiment with undergraduates was conducted using five video messages in which only the speaker’s hand gestures were manipulated across five types. ANOVAs revealed an effect of gesture type on receivers’ evaluations of message persuasiveness, the effectiveness of the speaker’s communication style, and the speaker’s composure and competence. A control study (Experiment 2) confirmed that these effects are due to visible gestures. Speech-accompanying gestures appear to play a causal role in social perception.