We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI data were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which phonological complexity, defined in terms of vowel duration and instability, was manipulated (viz., COMPLEX: /tii/ >> MID-COMPLEX: /tiye/ >> SIMPLE: /tii/). Articulation of COMPLEX sequences, relative to MID-COMPLEX, increased activity in the left inferior frontal gyrus (Brodmann Areas (BA) 44 and 47), the supplementary motor area, and the anterior insula; the same pattern held for MID-COMPLEX relative to SIMPLE, except that within Broca’s area the pars orbitalis (BA 47) was the dominant site. This differentiation indicates that phonological complexity is reflected in the neural processing of distinct phonemic representations, both through recruitment of brain regions associated with retrieving phonological information from memory and through articulatory rehearsal for the production of COMPLEX vowels. Moreover, the finding that greater complexity engages more of the brain suggests that brain activation can serve as a neurobiological measure of articulo-phonological complexity, complementing, if not substituting for, biomechanical measurements of speech motor activity.
from Brain and Language
In the assessment of human hearing, it is often important to determine whether hearing loss is organic or nonorganic in nature. Nonorganic, or functional, hearing loss is often associated with deceptive intention on the part of the listener. Over the past decade, functional neuroimaging has been used to study the neural correlates of deception, and studies have consistently highlighted the contribution of the prefrontal cortex in such behaviors. Can patterns of brain activity be similarly used to detect when an individual is feigning a hearing loss? To answer this question, 15 adult participants were requested to respond to pure tones and simple words correctly, incorrectly, randomly, or with the intent to feign a hearing loss. As predicted, more activity was observed in the prefrontal cortices (as measured by functional magnetic resonance imaging), and delayed behavioral reaction times were noted, when the participants feigned a hearing loss or responded randomly versus when they responded correctly or incorrectly. The results suggest that cortical imaging techniques could play a role in identifying individuals who are feigning hearing loss. Hum Brain Mapp, 2011. © 2011 Wiley-Liss, Inc.
from Human Brain Mapping
The signer and the sign: Cortical correlates of person identity and language processing from point-light displays
These findings suggest that the neural systems supporting the processing of SL from point-light displays rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole-body point-light actions (Vaina et al., 2001), it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (von Kriegstein et al., 2005). Inferior temporal regions associated with the visual recognition of a person appear to be required during SL processing, for both carrier and content information.
Using functional magnetic resonance imaging (fMRI), we neuroimaged deaf adults as they performed two linguistic tasks with sentences in American Sign Language, grammatical judgment and phonemic-hand judgment. Participants’ age of onset of sign language acquisition ranged from birth to 14 years; length of sign language experience was substantial and did not vary in relation to age of acquisition. For both tasks, a more left-lateralized pattern of activation was observed, with activity for grammatical judgment more anterior than that for phonemic-hand judgment. Age of acquisition was linearly and negatively related to activation levels in anterior language regions and positively related to activation levels in posterior visual regions for both tasks.
from Brain and Language
The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
from Human Brain Mapping
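The searchlight logic behind the decoding analysis above can be sketched in a few lines. This is a minimal toy illustration, not the authors' pipeline: the "spheres", voxel counts, and classifier (leave-one-out nearest-centroid) are all invented for the example. For each searchlight center, the voxels in its neighborhood are fed to a classifier, and cross-validated accuracy above chance marks regions carrying directional information.

```python
# Toy spherical-searchlight decoding sketch (illustrative only; all
# dimensions, neighborhoods, and the classifier are invented here).
import numpy as np

rng = np.random.default_rng(0)

def searchlight_accuracy(data, labels, centers, neighbors):
    """data: (n_trials, n_voxels); neighbors[c]: voxel indices in the
    sphere around center c. Leave-one-out nearest-centroid decoding."""
    accs = {}
    for c in centers:
        X = data[:, neighbors[c]]
        correct = 0
        for i in range(len(labels)):
            train = np.ones(len(labels), dtype=bool)
            train[i] = False  # hold out trial i
            # class centroids from the training trials only
            cents = {l: X[train & (labels == l)].mean(axis=0)
                     for l in np.unique(labels)}
            pred = min(cents, key=lambda l: np.linalg.norm(X[i] - cents[l]))
            correct += int(pred == labels[i])
        accs[c] = correct / len(labels)
    return accs

# Toy data: 40 trials x 50 voxels; voxels 0-4 carry a "direction" signal.
labels = np.repeat([0, 1], 20)
data = rng.normal(size=(40, 50))
data[labels == 1, :5] += 2.0
neighbors = {"informative": list(range(5)),    # sphere over signal voxels
             "control": list(range(40, 45))}   # sphere over noise voxels
accs = searchlight_accuracy(data, labels, ["informative", "control"], neighbors)
```

Run over every voxel's neighborhood in a real volume, this yields an accuracy map; the abstract's finding corresponds to above-chance spheres clustering in the planum temporale and right lateral occipital cortex.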
This study examined fMRI activation when perceivers either passively observed or observed and imitated matched or mismatched audiovisual (AV; “McGurk”) speech stimuli. Greater activation was observed in the inferior frontal gyrus (IFG) overall for imitation than for perception of audiovisual speech and for imitation of the McGurk-type mismatched stimuli than matched audiovisual stimuli. This unique activation in the IFG during imitation of incongruent audiovisual speech may reflect activation associated with direct matching of incongruent auditory and visual stimuli or conflict between category responses. This study provides novel data about the underlying neurobiology of imitation and integration of AV speech.
from the Journal of Neurolinguistics
Skilled reading depends upon successfully integrating orthographic, phonological, and semantic information; however, the process of becoming a skilled reader with efficient neural circuitry is not fully understood. Short-term learning paradigms can provide insight into learning mechanisms by revealing differential responses to training approaches. To date, neuroimaging studies have primarily focused on effects of teaching novel words either in isolation or in context, without directly comparing the two. The current study compared the behavioral and neurobiological effects of learning novel pseudowords (i.e., pronouncing and attaching meaning) trained either in isolation or in sentential context. Behavioral results showed generally comparable pseudoword learning for both conditions, but sentential context-trained pseudowords were spoken and comprehended slightly more quickly. Neurobiologically, fMRI activity for reading trained pseudowords was similar to real words; however, an interaction between training approach and reading proficiency was observed. Specifically, highly skilled readers showed similar levels of activity regardless of training approach. However, less skilled readers differentiated between training conditions, showing comparable activity to highly skilled readers only for isolation-trained pseudowords. Overall, behavioral and neurobiological findings suggest that training approach may affect rate of learning and neural circuitry, and that less skilled readers may need explicit training to develop optimal neural pathways.
Frontal cortical activation is elicited when subjects have been instructed not to initiate a sensorimotor task. The goal of this preliminary fMRI study was to examine BOLD response to a “Do Not Swallow” instruction (an intentional “off-state”) in the context of other swallowing tasks in 3 groups of participants (healthy young, healthy old, and early Alzheimer’s disease (AD)). Overall, the older group had larger, bilaterally active clusters in the cortex, including the dorsomedial prefrontal cortex, during the intentional swallowing off-state; this region is commonly active in response inhibition studies. Disease-related differences were also evident: the AD group had significantly greater BOLD response in the insula/operculum than the healthy old group. These findings have significant clinical implications for control of swallowing across the age span and in neurodegenerative disease. Greater activation in the insula/operculum for the AD group supports previous studies associating this region with initiating swallowing. The AD group may have required more effort to “turn off” swallowing centers to reach the intentional swallowing off-state.
from the Journal of Alzheimer’s Disease
Brain activation for language dual-tasking: Listening to two people speak at the same time and a change in network timing
The study used fMRI to investigate brain activation in participants who were able to listen to and successfully comprehend two people speaking at the same time (dual-tasking). The study identified brain mechanisms associated with high-level, concurrent dual-tasking, as compared with comprehending a single message. Results showed an increase in the functional connectivity among areas of the language network in the dual task. The increase in synchronization of brain activation for dual-tasking was brought about primarily by a change in the timing of left inferior frontal gyrus (LIFG) activation relative to posterior temporal activation, bringing the LIFG activation into closer correspondence with temporal activation. The results show that the change in LIFG timing was greater in participants with lower working memory capacity, and that recruitment of additional activation in the dual task occurred only in the areas adjacent to the language network that was activated in the single task. The shift in LIFG activation may be a brain marker of how the brain adapts to high-level dual-tasking.
from Human Brain Mapping
A primary focus within neuroimaging research on language comprehension is on the distribution of semantic knowledge in the brain. Studies have shown that the left posterior middle temporal gyrus (LPMT), a region just anterior to area MT/V5, is important for the processing of complex action knowledge. It has also been found that motion verbs cause activation in LPMT. In this experiment we investigated whether this effect could be replicated in a setting resembling real-life language comprehension, i.e., without any overt behavioral task during passive listening to a story. During fMRI, participants listened to a recording of the story “The Ugly Duckling”. We incorporated a nuisance elimination regression approach for factoring out known nuisance variables: physiological noise, sound intensity, linguistic variables, and emotional content. Compared to the remaining text, clauses containing motion verbs were accompanied by a robust activation of LPMT with no other significant effects, consistent with the hypothesis that this brain region is important for processing motion knowledge, even during naturalistic language comprehension conditions.
from Brain and Language
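The nuisance elimination regression described above can be illustrated with a minimal two-step least-squares sketch. This is not the authors' code; the covariates, effect sizes, and time-series lengths are invented. Known nuisance covariates are regressed out of the signal first, and the regressor of interest (here, a motion-verb clause indicator) is then fit to the residuals.

```python
# Toy two-step nuisance regression sketch (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)
n = 200                                         # time points
motion = (rng.random(n) < 0.3).astype(float)    # motion-verb clause on/off
sound = rng.normal(size=n)                      # sound-intensity nuisance
physio = np.sin(np.arange(n) / 7.0)             # physiological nuisance proxy
signal = 1.5 * motion + 0.8 * sound + 0.5 * physio + rng.normal(size=n)

# Step 1: project out the nuisance variables (plus an intercept).
N = np.column_stack([np.ones(n), sound, physio])
beta_n, *_ = np.linalg.lstsq(N, signal, rcond=None)
residual = signal - N @ beta_n

# Step 2: fit the regressor of interest to the cleaned signal.
X = np.column_stack([np.ones(n), motion])
beta, *_ = np.linalg.lstsq(X, residual, rcond=None)
motion_effect = beta[1]   # recovers the simulated effect (about 1.5)
```

One caveat worth noting: residualizing in two steps is unbiased only when the regressor of interest is uncorrelated with the nuisance covariates, as in this toy example; otherwise the nuisance and interest regressors belong in a single joint model.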
Cognitive models of reading all assume some division of labor among processing pathways in mapping among print, sound and meaning. Many studies of the neural basis of reading have used task manipulations such as rhyme or synonym judgment to tap these processes independently. Here we take advantage of specific properties of the Chinese writing system to test how differential availability of sublexical information about sound and meaning, as well as the orthographic structure of characters, pseudo-characters and “artificial” control stimuli influence brain activation in the context of the same one-back task. Analyses combine a data-driven approach that identifies temporally coherent patterns of activity over the course of the entire experiment with hypothesis-testing based on the correlation of these patterns with predictors for different stimulus classes. The results reveal a large network of task-related activity. Both the extent of this network and activity in regions commonly observed in studies of Chinese reading are apparently related to task difficulty. Other regions, including temporo-parietal cortex, were sensitive to particular sublexical functional units in mapping among print, sound, and meaning.
from Brain and Language
The identification of the first gene involved in a speech-language disorder was made possible through the study of a British multi-generational family (the “KE family”) in which half of the members have an inherited speech-language disorder caused by a FOXP2 mutation. Neuroimaging investigations in the affected members of the KE family have revealed structural and functional abnormalities in a wide cortical-subcortical network. Functional imaging studies have confirmed dysfunction of this network by revealing abnormal activation in several areas including Broca’s area and the putamen during language-related tasks, such as word repetition and generation. Repeating nonsense words is particularly challenging for the affected members of the family, as well as for other individuals with idiopathic developmental specific language impairments; yet, thus far the neural correlates of the nonword repetition task have not been examined in individuals with developmental speech and language disorders. Here, four affected members of the KE family and four unrelated age-matched healthy participants repeated nonsense words aloud during functional MRI scanning. Relative to control participants, repetition in the affected members was severely impaired, and brain activation was significantly reduced in the premotor, supplementary and primary motor cortices, as well as in the cerebellum and basal ganglia. We suggest that nonword repetition is the optimal endophenotype for FOXP2 disruption in humans because this task recruits brain regions involved in the imitation and vocal learning of novel sequences of speech sounds.
Previous literature in alphabetic languages suggests that the occipital-temporal region (the ventral pathway) is specialized for automatic parallel word recognition, whereas the parietal region (the dorsal pathway) is specialized for serial letter-by-letter reading (Cohen et al., 2008; Ho et al., 2002). However, few studies have directly examined the role of the ventral and dorsal pathways in Chinese reading compared to English reading. To investigate this issue, we adopted the degraded word processing paradigm used by Cohen et al. (2008) and compared brain regions involved in the processing of degraded Chinese characters and English words during lexical decision, using functional magnetic resonance imaging (fMRI). The degraded characters/words were created by inserting blank spaces between radicals of Chinese characters or syllables of English polysyllabic words. Generally, the current study replicated the effects of Cohen et al. (2008), showing that in Chinese – like in alphabetic languages – character spacing modulates both ventral (bilateral cuneus, left middle occipital gyrus) and dorsal (left superior parietal lobule and middle frontal gyrus) pathways. In addition, the current study showed greater activation in bilateral cuneus and right lingual gyrus for Chinese versus English when comparing spaced to normal stimuli, suggesting that Chinese character recognition relies more on ventral visual-spatial processing than English word recognition. Interestingly, bilateral cuneus showed monotonic patterns in response to increasing spacing, while the rest of the regions of interest showed non-monotonic patterns, indicating different profiles for these regions in visual-spatial processing.
from Brain and Language
The purpose of this study was to investigate whether brain activity related to the presence of stuttering can be identified with rapid functional MRI (fMRI) sequences that involved overt and covert speech processing tasks. The long-term goal is to develop sensitive fMRI approaches with developmentally appropriate tasks to identify deviant speech motor and auditory brain activity in children who stutter closer to the age at which recovery from stuttering is documented. Rapid sequences may be preferred for individuals or populations who do not tolerate long scanning sessions. In this report, we document the application of a picture naming and phoneme monitoring task in three-minute fMRI sequences with adults who stutter (AWS). If these approaches reveal brain differences in AWS that conform to previous reports, then they can be extended to younger populations. Pairwise contrasts of brain BOLD activity between AWS and normally fluent adults indicated the AWS showed higher BOLD activity in the right inferior frontal gyrus (IFG), right temporal lobe and sensorimotor cortices during picture naming, and higher activity in the right IFG during phoneme monitoring. The right-lateralized pattern of BOLD activity together with higher activity in sensorimotor cortices is consistent with previous reports, which indicates rapid fMRI sequences can be considered for investigating stuttering in younger participants.
Emotions influence our everyday life in several ways. With the present study, we wanted to examine the impact of emotional information on neural correlates of semantic priming, a well-established technique to investigate semantic processing. Stimuli were presented with a short SOA of 200 ms as subjects performed a lexical decision task during fMRI measurement. Seven experimental conditions were compared: positive/negative/neutral related, positive/negative/neutral unrelated, nonwords (all words were nouns). Behavioral data revealed a valence-specific semantic priming effect (i.e., unrelated > related) only for neutral and positive related word pairs. On a neural level, the comparison of emotional over neutral relations showed activation in left anterior medial frontal cortex, superior frontal gyrus, and posterior cingulate. Interactions for the different relations were located in the left anterior part of the medial frontal cortex, cingulate regions, and right hippocampus (positive > neutral + negative) and the left posterior part of the medial frontal cortex (negative > neutral + positive). The results showed that emotional information has an influence on semantic association processes. While positive and neutral information seem to share a semantic network, negative relations might induce compensatory mechanisms that inhibit the spread of activation between related concepts. The neural correlates highlighted a distributed neural network, primarily involving attention, memory and emotion related processing areas in medial fronto-parietal cortices. The differentiation between anterior (positive) and posterior (negative) parts of the medial frontal cortex was linked to the type of affective manipulation, with more cognitive demands being involved in the automatic processing of negative information.
from Human Brain Mapping
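The behavioral priming measure in the abstract above (unrelated > related) is simple arithmetic, which a toy computation makes concrete. The reaction times below are invented for illustration and are not the study's data; only the pattern (priming for neutral and positive pairs, none for negative) mirrors the reported result.

```python
# Toy semantic priming computation (hypothetical RTs, not study data).
# Priming effect = mean lexical-decision RT (unrelated) - RT (related),
# computed separately per valence; a positive difference is priming.
mean_rt = {                 # ms, invented numbers
    ("neutral", "related"): 580, ("neutral", "unrelated"): 615,
    ("positive", "related"): 575, ("positive", "unrelated"): 605,
    ("negative", "related"): 600, ("negative", "unrelated"): 602,
}
priming = {v: mean_rt[(v, "unrelated")] - mean_rt[(v, "related")]
           for v in ("neutral", "positive", "negative")}
# Sizable priming for neutral (35 ms) and positive (30 ms) pairs,
# essentially none for negative (2 ms), matching the reported pattern.
```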