Blog Archives

Spatiotemporal dynamics of speech sound perception in chronic developmental stuttering

The fronto-central N1 auditory wave was reduced in response to spoken vowels relative to heard vowels (auditory-vocal gating), but the extent of this modulation did not differ between the persistent developmental stuttering (PERS) group and the control (CONT) group. Abnormalities in the PERS group were restricted to the LISTEN condition, in the form of early N1 and late N3 amplitude changes. Voltage of the N1 wave was significantly reduced over the right inferior temporo-occipital scalp in the PERS group. A laterality index derived from N1 voltage correlated moderately with stuttering frequency assessed in the PERS group before the experiment. Source localization with sLORETA (Pascual-Marqui, R. D. (2002). Standardized low-resolution brain electromagnetic tomography (sLORETA): Technical details. Methods & Findings in Experimental & Clinical Pharmacology, 24, 5–12.) revealed that at the peak of the N1 the PERS group displayed significantly greater current density in right primary motor cortex than the CONT group, suggesting abnormal early speech-motor activation. Finally, the late N3 wave was reduced in amplitude over inferior temporo-occipital scalp, more so over the right hemisphere. sLORETA revealed that in the time window of the N3 the PERS group showed significantly less current density in right secondary auditory cortex than the CONT group, suggesting abnormal speech sound perception. These results point to a deficit in the auditory processing of speech sounds in persistent developmental stuttering, stemming from early increased activation of the right rolandic area and late reduced activation in the right auditory cortex.

from Brain and Language
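
The laterality index mentioned in this abstract is not spelled out; a common convention for such measures is LI = (R − L) / (R + L), which runs from −1 (fully left-lateralized) to +1 (fully right-lateralized). A minimal sketch under that assumption, with hypothetical N1 amplitude magnitudes, in Python:

def laterality_index(right, left):
    # Conventional index: +1 = fully right-lateralized, -1 = fully left-lateralized.
    # The abstract does not specify the exact formula; this is the common
    # (R - L) / (R + L) convention applied to amplitude magnitudes.
    return (right - left) / (right + left)

# Hypothetical N1 magnitudes (in microvolts) over right and left inferior
# temporo-occipital electrodes -- illustrative values only.
li = laterality_index(right=2.1, left=3.4)
print(f"Laterality index: {li:.2f}")  # negative = left-dominant N1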

Interactive Specialization: A domain-general framework for human functional brain development?

A domain-general framework for interpreting data on human functional brain development is presented. Assumptions underlying the general theory and predictions derived from it are discussed. Developmental functional neuroimaging data from the domains of face processing, social cognition, word learning and reading, executive control, and brain resting states are used to assess these predictions. Finally, potential criticisms of the framework are addressed and challenges for the future presented.

from Developmental Cognitive Neuroscience

Evolutionary Conservation and Neuronal Mechanisms of Auditory Perceptual Restoration

Auditory perceptual ‘restoration’ occurs when the auditory system fills in an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago, using restoration to model perception of a noisy environmental scene with competing sounds. It has since become clear that humans are not the only species to experience auditory restoration: the phenomenon is broadly conserved across many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The data aggregated from multiple approaches across species have begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, consistent with multiple mechanisms operating within a species. Yet a general principle has emerged: responses correlated with restoration mimic the response that would have been evoked by the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of ‘auditory scene analysis’ to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings.

from Hearing Research

Motion-onset auditory-evoked potentials critically depend on history

The aim of the present study was to determine whether motion history affects motion-onset auditory-evoked potentials (motion-onset AEPs). AEPs were recorded from 33 EEG channels in 16 subjects in response to the motion onset of a sound (white noise) virtually moving in the horizontal plane at a speed of 60 deg/s from straight ahead to the left (−30°). AEPs in a baseline condition and an adaptation condition were compared. A stimulus trial comprised three consecutive phases: a 2,000 ms adaptation phase, a 1,000 ms stationary phase, and a 500 ms test phase. During the adaptation phase of the adaptation condition, a sound source moved twice from +30° to −30° to top up preceding adaptation. In the baseline condition, neither top-up nor pre-adaptation was applied. For both conditions, a stationary sound was presented centrally during the stationary phase and then moved leftwards during the test phase. Typical motion-onset AEPs were obtained for the baseline condition, namely a fronto-central response complex dominated by a negative and a positive component, the so-called change-N1 and change-P2, at around 180 and 250 ms, respectively. For the adaptation condition, this complex was shifted significantly into the positive range, indicating that adaptation abolished a negativity within a time window of approximately 160 to 270 ms. A corresponding shift into the negative range was evident at occipito-parietal sites. In conclusion, while adaptation has to be taken into account as a potential confound in the design of motion-AEP studies, it might also be exploited to isolate AEP correlates of motion processing.

from Experimental Brain Research
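
For readers reconstructing the trial structure, here is a minimal sketch of the azimuth trajectory over one adaptation-condition trial. It assumes the two top-up sweeps fill the 2,000 ms adaptation phase back to back and that the test-phase motion starts at 0° (straight ahead); neither detail is stated explicitly in the abstract.

import numpy as np

FS = 1000  # samples per second (1 ms resolution)

def adaptation_trial_azimuth():
    # Azimuth in degrees (+ = right, - = left) over one adaptation-condition trial:
    # 2,000 ms adaptation (two +30 -> -30 deg sweeps at 60 deg/s, assumed back to back),
    # 1,000 ms stationary at 0 deg, and a 500 ms test sweep from 0 to -30 deg.
    sweep = np.linspace(30.0, -30.0, FS)          # one 1,000 ms top-up sweep
    adaptation = np.concatenate([sweep, sweep])   # two consecutive sweeps
    stationary = np.zeros(FS)                     # central, stationary sound
    test = np.linspace(0.0, -30.0, FS // 2)       # motion onset in the test phase
    return np.concatenate([adaptation, stationary, test])

trajectory = adaptation_trial_azimuth()
print(trajectory.shape)  # (3500,) -> 3.5 s total trial duration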

Information flow in the auditory cortical network

Auditory processing in the cerebral cortex is carried out by an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal, and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network.

from Hearing Research

Age-related changes in the functional neuroanatomy of overt speech production

Alterations of existing neural networks during healthy aging, resulting in behavioral deficits and changes in brain activity, have been described for cognitive, motor, and sensory functions. To investigate age-related changes in the neural circuitry underlying overt non-lexical speech production, functional MRI was performed in 14 healthy younger (21–32 years) and 14 healthy older individuals (62–84 years). The experimental task involved the acoustically cued overt production of the vowel /a/ and the polysyllabic utterance /pataka/. In younger and older individuals, overt speech production was associated with the activation of a widespread articulo-phonological network, including the primary motor cortex, the supplementary motor area, the cingulate motor areas, and the posterior superior temporal cortex, similar in the /a/ and /pataka/ conditions. An analysis of variance with the factors age and condition revealed a significant main effect of age. Irrespective of the experimental condition, significantly greater activation was found in the bilateral posterior superior temporal cortex, the posterior temporal plane, and the transverse temporal gyri in younger compared to older individuals, whereas significantly greater activation was found in the bilateral middle temporal gyri, medial frontal gyri, middle frontal gyri, and inferior frontal gyri in older compared to younger individuals. The analysis of variance revealed neither a significant main effect of condition nor a significant interaction of age and condition. These results suggest a complex reorganization of neural networks dedicated to the production of speech during healthy aging.

from Neurobiology of Aging

Responses to Interaural Time Delay in Human Cortex

from the Journal of Neurophysiology

Humans use differences in the timing of sounds at the two ears to determine the location of a sound source. Various models have been posited for the neural representation of these interaural time differences (ITDs). These models make opposing predictions about the lateralization of ITD processing in the human brain. The weighted-image model predicts that sounds leading in time at one ear maximally activate the opposite brain hemisphere for all values of ITD. In contrast, the π-limit model assumes that ITDs beyond half the period of the stimulus centre frequency are not explicitly encoded in the brain, and that such ‘long’ ITDs maximally activate the side of the brain to which the sound is heard. A previous neuroimaging study revealed activity in the human inferior colliculus consistent with the π-limit. Here we show that cortical responses to sounds with ITDs within the π-limit are in line with the predictions of both models. However, contrary to the immediate predictions of both models, neural activation is bilateral for ‘long’ ITDs, despite these being perceived as clearly lateralized. Furthermore, processing of long ITDs leads to higher activation in cortex than processing of short ITDs. These data show that coding of ITD in cortex is fundamentally different from coding of ITD in the brainstem. We discuss these results in the context of the two models.
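
The π-limit referred to above falls at half the period of the stimulus centre frequency: an ITD is ‘long’ once the interaural phase difference exceeds π radians, i.e. once ITD > 1/(2·f_c). A minimal sketch of that classification, using a hypothetical 500 Hz centre frequency, in Python:

def is_long_itd(itd_s, centre_freq_hz):
    # An ITD is 'long' under the pi-limit model when it exceeds half the period
    # of the stimulus centre frequency (interaural phase difference > pi radians).
    pi_limit_s = 1.0 / (2.0 * centre_freq_hz)
    return itd_s > pi_limit_s

# Hypothetical example: a 500 Hz centre frequency gives a pi-limit of 1.0 ms.
print(is_long_itd(0.0005, 500.0))  # False: 0.5 ms lies within the pi-limit
print(is_long_itd(0.0015, 500.0))  # True: 1.5 ms exceeds the pi-limit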

Cortical Sensorimotor Control in Vocalization: A Functional Magnetic Resonance Imaging Study

from Laryngoscope

Abstract:
Background: Verbal communication is a distinctively human ability, and volitional vocalization is its basis. However, little is known regarding the cortical areas involved in human vocalization.

Methods: Therefore, functional magnetic resonance imaging at 3 Tesla was performed in 16 healthy adults to evaluate brain activations related to voice production. The main experiments included tasks involving motor control of laryngeal muscles with and without intonation. In addition, reference mappings of the sensorimotor hand area and the auditory cortices were performed.

Results: Related to vocalization, in addition to activation of the most lateral aspect of the primary sensorimotor cortex close to the Sylvian fissure (M1c), we found activations medial (M1a) and lateral (M1b) to the well-known sensorimotor hand area. Moreover, the supplementary motor area and the anterior cingulate cortex were activated.

Conclusions: Whereas M1a could be ascribed to motor control of breathing, M1b has been associated with laryngeal motor control. Consequently, although M1c represents a laryngeal sensorimotor area, it could not be confirmed as the exclusive laryngeal representation, as suggested previously. Activations in the supplementary motor area and anterior cingulate cortex were ascribed to “vocal-motor planning.” The present data provide the basis for further functional magnetic resonance imaging studies in patients with neurological laryngeal disorders.