Blog Archives

Voices behind the left shoulder: Two patients with right-sided temporal lobe epilepsy

Auditory vocal hallucinations are sometimes observed in temporal-lobe epilepsy, but they are a frequent sign of psychosis, and the epileptic variety may in rare cases be mistaken for the psychotic one. Here we report two patients who suffered from auditory vocal hallucinations, described as unintelligible human voices perceived on their left side during epileptic seizures. MEG revealed interictal epileptic discharges within the anterior portion of the right superior temporal gyrus; the signal-to-noise ratio of these discharges was poor in the EEG. The findings suggest that auditory vocal hallucinations without verbal content can arise in the right hemisphere and are probably independent of language lateralization. This accords with evidence from functional imaging, even though most previous reports of seizures with auditory vocal hallucinations have been confined to the left hemisphere.

from Neurological Sciences

Before the N400: Effects of lexical–semantic violations in visual cortex

A growing body of research demonstrates that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This study asks whether violations of predictions based on lexical–semantic information might similarly generate early visual effects. In a picture–noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical–semantic predictions can affect early visual processing around 100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain may generate predictions about upcoming visual input whenever it can. However, visual effects of lexical–semantic violations occurred only when a single lexical item could be predicted. We argue that this may be because, in natural language processing, there is typically no straightforward mapping between lexical–semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical–semantic effects.

from Brain and Language

Speech perception in the child brain: Cortical timing and its relevance to literacy acquisition

Speech processing skills undergo intensive development during mid-childhood, providing a basis for literacy acquisition. The sequence of auditory cortical processing of speech has been characterized in adults, but very little is known about the neural representation of speech sound perception in the developing brain. We used whole-head magnetoencephalography (MEG) to record neural responses to speech and nonspeech sounds in first-graders (7–8 years old) and compared the activation sequence to that in adults. In children, the general location of neural activity in the superior temporal cortex was similar to that in adults, but in the time domain the sequence of activation was strikingly different. Cortical differentiation between sound types emerged in a prolonged response pattern at about 250 ms after sound onset, in both hemispheres, clearly later than the corresponding effect at about 100 ms in adults, which was detected specifically in the left hemisphere. Better reading skills were linked with shorter-lasting neural activation, pointing to an interdependence between the maturing neural processes of auditory perception and developing linguistic skills. This study uniquely utilized the potential of MEG to compare both spatial and temporal characteristics of neural activation between adults and children. Besides depicting the group-typical features of cortical auditory processing, the results revealed marked interindividual variability in children.

from Human Brain Mapping

Neuromagnetic evidence for a featural distinction of English consonants: Sensor- and source-space data

Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations have provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place-of-articulation distinctions. Here, we compare English consonants in a mismatch field design across two broad and distinct places of articulation, labial and coronal, and provide further evidence that early evoked auditory responses are sensitive to these features. We also add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension of auditory cortex, a dimension previously found to reflect place-of-articulation differences spatially. Our results are discussed with respect to acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.

from Brain and Language
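
For readers unfamiliar with the mismatch paradigm the abstract presupposes: deviants are rare tokens embedded in a train of frequent standards, and the mismatch field is the difference between the responses they evoke. A minimal sketch of a sequence generator for such a design, where the trial count, deviant probability, and spacing constraint are illustrative assumptions rather than the authors' parameters:

```python
import random

def oddball_sequence(n_trials=800, p_deviant=0.15, min_gap=2, seed=0):
    """Generate an oddball trial sequence for a mismatch paradigm.

    'standard' tokens are frequent (e.g., a coronal consonant) and
    'deviant' tokens rare (e.g., a labial one). At least `min_gap`
    standards separate successive deviants, so each deviant occurs
    against a re-established standard context.
    """
    rng = random.Random(seed)
    seq, since_last_deviant = [], min_gap  # permit an early deviant
    for _ in range(n_trials):
        if since_last_deviant >= min_gap and rng.random() < p_deviant:
            seq.append("deviant")
            since_last_deviant = 0
        else:
            seq.append("standard")
            since_last_deviant += 1
    return seq

trials = oddball_sequence()
print(trials[:20], trials.count("deviant") / len(trials))
```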

Auditory cortex tracks the temporal regularity of sustained noisy sounds

Neuroimaging studies have revealed dramatic asymmetries between the responses to temporally regular and irregular sounds in the antero-lateral part of Heschl's gyrus. For example, the magnetoencephalography (MEG) study of Krumbholz et al. [Cerebr. Cortex 13, 765-772 (2003)] showed that the transition from a noise to a similar noise with sufficient temporal regularity to evoke a pitch produced a pronounced temporal-regularity onset response (TRon response), whereas a comparable transition in the reverse direction produced essentially no temporal-regularity offset response (TRoff response). The current paper presents a follow-up study in which the asymmetry is examined with much greater power, and the results suggest an intriguing reinterpretation of the onset/offset asymmetry. The TR-related activity in auditory cortex appears to be composed of a transient response (TRon) and a TR-related sustained response (TRsus), with a highly variable TRon/TRsus amplitude ratio. The TRoff response is generally dominated by the breakdown of the TRsus activity, which occurs so rapidly as to preclude the involvement of higher-level cortical processing. The time course of the TR-related activity suggests that TR processing might be involved in monitoring the environment and alerting the brain to the onset and offset of behaviourally relevant, animate sources.

from Hearing Research
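
Temporally regular noises of the kind described here are commonly built by iterated delay-and-add, which concentrates regularity at 1/delay while the long-term spectrum stays noise-like. A minimal sketch of that construction, where the sample rate, delay, and iteration count are assumptions rather than this study's stimulus parameters:

```python
import numpy as np

def iterated_rippled_noise(duration=1.0, fs=44100, delay_ms=8.0,
                           n_iter=16, seed=0):
    """Impose temporal regularity on noise by iterated delay-and-add.

    Each iteration adds a copy of the signal delayed by `delay_ms`,
    building up regularity at 1/delay (~125 Hz here), which listeners
    hear as a pitch emerging from the noise.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration * fs))
    d = int(round(delay_ms * 1e-3 * fs))      # delay in samples
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + delayed                        # delay-and-add
    return x / np.max(np.abs(x))

regular = iterated_rippled_noise()             # regular: evokes a pitch
plain = np.random.default_rng(1).standard_normal(len(regular))  # control
```

Concatenating `plain` followed by `regular` gives a noise-to-regularity transition of the kind that elicits the TRon response; the reverse order probes the TRoff response.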

Responsiveness of the human auditory cortex to degraded speech sounds: Reduction of amplitude resolution vs. additive noise

The cortical mechanisms underlying human speech perception in acoustically adverse conditions remain largely unknown. Besides distortions from external sources, degradation of the acoustic structure of the sound itself poses further demands on perceptual mechanisms. We conducted a magnetoencephalography (MEG) study to reveal whether the perceptual differences between these distortions are reflected in cortically generated auditory evoked fields (AEFs). To mimic the degradation of the internal structure of sound and external distortion, we degraded speech sounds by reducing the amplitude resolution of the signal waveform and by using additive noise, respectively. Since both distortion types increase the relative strength of high frequencies in the signal spectrum, we also used versions of the stimuli which were low-pass filtered to match the tilted spectral envelope of the undistorted speech sound. This enabled us to examine whether the changes in the overall spectral shape of the stimuli affect the AEFs. We found that the auditory N1m response was substantially enhanced as the amplitude resolution was reduced. In contrast, the N1m was insensitive to distorted speech with additive noise. Changing the spectral envelope had no effect on the N1m. We propose that the observed amplitude enhancements are due to an increase in noisy spectral harmonics produced by the reduction of the amplitude resolution, which activates the periodicity-sensitive neuronal populations participating in pitch extraction processes. The current findings suggest that the auditory cortex processes speech sounds in a differential manner when the internal structure of sound is degraded compared with speech distorted by external noise.

from Brain Topography
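
Both degradations are easy to state concretely: requantizing the waveform degrades the sound's internal structure, while additive noise is an external distortion. A generic sketch, assuming a synthetic vowel-like test signal in place of the authors' speech stimuli:

```python
import numpy as np

def reduce_amplitude_resolution(x, n_bits):
    """Requantize the waveform to n_bits of amplitude resolution
    (degrades the internal structure of the sound itself)."""
    x = x / np.max(np.abs(x))                  # normalize to [-1, 1]
    half_range = 2 ** n_bits / 2 - 1
    return np.round(x * half_range) / half_range

def add_noise(x, snr_db, seed=0):
    """Add Gaussian noise at a given SNR (external distortion)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(x))
    # scale noise so that 10*log10(P_signal / P_noise) == snr_db
    p_signal, p_noise = np.mean(x ** 2), np.mean(noise ** 2)
    noise *= np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return x + noise

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
vowel_like = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
quantized = reduce_amplitude_resolution(vowel_like, n_bits=2)
noisy = add_noise(vowel_like, snr_db=5)
```

Both manipulations add high-frequency energy to the spectrum, which is why the study also included low-pass-filtered versions matched to the undistorted spectral envelope.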

Temporal dynamics of sinusoidal and non-sinusoidal amplitude modulation

Previous behavioural studies in human subjects have demonstrated the importance of amplitude modulation for intelligible speech perception. In functional neuroimaging studies of amplitude modulation processing, the inherent assumption is that all sounds are decomposed into simple building blocks, i.e. sinusoidal modulations. The encoding of complex and dynamic stimuli is often modelled as the linear addition of a number of sinusoidal modulations, so that, by investigating the response of the cortex to sinusoidal modulation, an experimenter can probe the same mechanisms used to encode speech. The experiment described in this paper used magnetoencephalography to measure the auditory steady-state response produced by six sounds, all modulated in amplitude at the same frequency but forming a continuum from sinusoidal to pulsatile modulation. Analysis of the evoked response shows that the magnitude of the envelope-following response is highly non-linear, with sinusoidal amplitude modulation producing the weakest steady-state response. Conversely, the phase of the steady-state response was related to the shape of the modulation waveform, with sinusoidal amplitude modulation producing the shortest latency relative to the other stimuli. A point in auditory cortex produces a strong envelope-following response to all stimuli on the continuum, but the timing of this response is related to the shape of the modulation waveform. The results suggest that steady-state response characteristics are determined by features of the waveform outside the modulation domain and that the use of purely sinusoidal amplitude modulations may be misleading, especially in the context of speech encoding.

from the European Journal of Neuroscience
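
One way to build such a continuum is to raise a raised-cosine envelope to increasing powers, which sharpens the modulation from sinusoidal toward pulsatile while keeping the rate fixed; the steady-state response is then characterized by the magnitude and phase of the Fourier component at the modulation frequency. A minimal sketch, where the rates, carrier, and exponent continuum are assumptions and not the study's stimuli:

```python
import numpy as np

fs, fm, fc, dur = 44100, 40.0, 1000.0, 1.0     # assumed parameter values
t = np.arange(int(fs * dur)) / fs

def am_envelope(exponent):
    """Raised cosine taken to a power: exponent 1 is sinusoidal AM;
    larger exponents sharpen the envelope toward pulsatile modulation
    while the modulation rate fm stays fixed."""
    return ((1 + np.cos(2 * np.pi * fm * t)) / 2) ** exponent

def mag_phase_at_fm(signal):
    """Magnitude and phase of the Fourier component at the modulation
    frequency; in the experiment this analysis is applied to the
    recorded steady-state response, not to the stimulus."""
    coeff = (np.exp(-2j * np.pi * fm * t) @ signal) / len(signal)
    return np.abs(coeff), np.angle(coeff)

# Six stimuli forming a sinusoidal-to-pulsatile continuum at one rate.
stimuli = [am_envelope(p) * np.sin(2 * np.pi * fc * t)
           for p in (1, 2, 4, 8, 16, 32)]

# The envelope component at fm shrinks as modulation grows pulsatile,
# even though every stimulus is modulated at exactly fm.
for p in (1, 2, 4, 8, 16, 32):
    m, ph = mag_phase_at_fm(am_envelope(p))
    print(f"exponent {p:2d}: |c| = {m:.3f}, phase = {ph:+.2f} rad")
```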

Explicit processing of verbal and spatial features during letter-location binding modulates oscillatory activity of a fronto-parietal network

The present study investigated the binding of verbal and spatial features in immediate memory. In a recent study, we demonstrated incidental and asymmetrical letter-location binding effects, associated with greater oscillatory activity over prefrontal and posterior regions during the retention period, when participants attended to letter features but not when they attended to location features. Here we asked whether the patterns of brain activity associated with the incidental binding of letters and locations observed when only the verbal feature is attended differ from those reflecting the binding that results from controlled/explicit processing of both verbal and spatial features. To this end, neural activity was recorded using magnetoencephalography (MEG) while participants performed two working memory tasks. The tasks were identical in their perceptual characteristics and differed only in their instructions: one required participants to process both letters and locations, whereas in the other participants were instructed to memorize only the letters, regardless of their location. Time-frequency representations of the MEG data, based on the wavelet transform of the signals, were calculated on a single-trial basis during the maintenance period of both tasks. Critically, despite equivalent behavioural binding effects in both tasks, single- and dual-feature encoding relied on different neuroanatomical and neural oscillatory correlates. We propose that the enhanced activation of an anterior–posterior dorsal network observed in the task requiring the processing of both features reflects the need to allocate greater resources to the intentional processing of verbal and spatial features in this task.

from Neuropsychologia
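
Single-trial time-frequency representations of the kind described are typically obtained by convolving each trial with complex Morlet wavelets. A minimal sketch, using simulated data in place of the MEG recordings:

```python
import numpy as np

def morlet_tfr(trial, fs, freqs, n_cycles=7):
    """Single-trial time-frequency representation: convolve the signal
    with complex Morlet wavelets and take squared magnitude as power."""
    tfr = np.empty((len(freqs), len(trial)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)          # temporal width
        tw = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = (np.exp(2j * np.pi * f * tw)
                   * np.exp(-tw ** 2 / (2 * sigma_t ** 2)))
        wavelet /= np.sum(np.abs(wavelet))            # rough normalization
        tfr[i] = np.abs(np.convolve(trial, wavelet, mode="same")) ** 2
    return tfr

fs = 600
trial = np.random.default_rng(0).standard_normal(2 * fs)  # 2 s of fake data
freqs = np.arange(4, 40, 2)                               # theta to beta
power = morlet_tfr(trial, fs, freqs)                      # (n_freqs, n_times)
```

MNE-Python ships a comparable routine, mne.time_frequency.tfr_array_morlet, for running this across trials and channels.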

Effects of language comprehension on visual processing – MEG dissociates early perceptual and late N400 effects

We investigated whether and when information conveyed by spoken language impacts the processing of visually presented objects. In contrast to traditional views, grounded cognition posits direct links between language comprehension and perceptual processing. We used a magnetoencephalographic cross-modal priming paradigm to disentangle these views. In a sentence-picture verification task, pictures (e.g. of a flying duck) were paired with three sentence conditions: a feature-matching sentence about a duck in the air, a feature-mismatching sentence about a duck in a lake, and an unrelated sentence. Brain responses to the pictures showed enhanced activity in the N400 time-window in the left temporal lobe for the unrelated condition compared to both related conditions. The M1 time-window revealed more activation in occipital cortex for the feature-matching condition than for the other two. These dissociable effects on early visual processing and semantic integration support models in which language comprehension engages two complementary systems, one perceptual and one abstract.

from Brain and Language

Reorganization of functional connectivity as a correlate of cognitive recovery in acquired brain injury

Cognitive processes require functional interactions among multiple specialized brain regions, both local and remote. Although these interactions can be strongly altered by an acquired brain injury, brain plasticity allows a reorganization of the networks that is principally responsible for recovery. The present work evaluates the impact of brain injury on functional connectivity patterns. Networks were calculated from resting-state magnetoencephalographic recordings of 15 brain-injured patients and 14 healthy controls by means of wavelet coherence in the standard frequency bands. We compared the parameters defining the networks, such as the number and strength of interactions as well as their topology, between controls and patients at two time points: following a traumatic brain injury and after rehabilitation treatment. We found a loss of delta- and theta-based connectivity and, conversely, an increase in alpha- and beta-based connectivity. Furthermore, connectivity parameters approached those of controls in all frequency bands, especially the slow-wave bands. Network reorganization correlated with cognitive recovery: the reduction of delta-based connections and the increase of alpha-based connections correlated with Verbal Fluency scores and with the Perceptual Organization and Working Memory Indexes, respectively. Additionally, changes in theta- and beta-based connectivity values correlated with the Patient Competency Rating Scale. The current study provides new evidence of the neurophysiological mechanisms underlying neuronal plasticity after brain injury, and suggests that these network changes are related to the observed changes at the behavioural level.

from Brain
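
Analyses of this kind reduce to estimating a sensor-by-sensor coherence matrix per frequency band and then summarizing it with graph measures such as node strength. The sketch below substitutes Welch-based spectral coherence for the paper's wavelet coherence and runs on simulated data; the band limits and array sizes are assumptions:

```python
import numpy as np
from scipy.signal import coherence

BANDS = {"delta": (1, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_network(data, fs, band, nperseg=512):
    """All-to-all coherence matrix for one frequency band.

    `data` is (n_sensors, n_samples) resting-state MEG. Welch-based
    spectral coherence is a simpler stand-in for wavelet coherence
    that yields the same kind of connectivity matrix.
    """
    lo, hi = BANDS[band]
    n = data.shape[0]
    net = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            f, coh = coherence(data[i], data[j], fs=fs, nperseg=nperseg)
            sel = (f >= lo) & (f < hi)
            net[i, j] = net[j, i] = coh[sel].mean()
    return net

def node_strength(net):
    """Strength of each node: its summed connection weights."""
    return net.sum(axis=1)

rng = np.random.default_rng(0)
meg = rng.standard_normal((10, 6000))      # 10 sensors, 10 s at 600 Hz
alpha_net = band_network(meg, fs=600, band="alpha")
print(node_strength(alpha_net))
```

Comparing such matrices and strength vectors between patients and controls, and between the two time points, is what the abstract's "number and strength of interactions" refers to.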

Effects of DBS on auditory and somatosensory processing in Parkinson’s disease

Motor symptoms of Parkinson's disease (PD) can be relieved by deep brain stimulation (DBS), but the mechanism of action of DBS remains largely unclear. Magnetoencephalography (MEG) studies of DBS patients have been unfeasible because of strong magnetic artifacts. An artifact suppression method known as spatiotemporal signal space separation (tSSS) has largely overcome these difficulties. We wanted to clarify whether tSSS enables noninvasive measurement of the modulation of cortical activity caused by DBS. We studied auditory and somatosensory evoked fields (AEFs and SEFs) of advanced PD patients with bilateral subthalamic nucleus (STN) DBS using MEG. AEFs were elicited by 1-kHz tones and SEFs by electrical pulses to the median nerve, with DBS on and off. Data could be successfully acquired and analyzed for 12 of the 16 patients measured. The motor symptoms were significantly relieved by DBS, which clearly enhanced the ipsilateral auditory N100m responses in the right hemisphere. Contralateral N100m responses and somatosensory P60m responses also tended to increase when bilateral DBS was on. MEG with tSSS offers a novel and powerful tool to investigate DBS modulation of evoked cortical activity in PD with high temporal and spatial resolution. The results suggest that STN-DBS modulates auditory processing in advanced PD.

from Human Brain Mapping
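
tSSS is available in open-source tooling: MNE-Python, for instance, implements it in mne.preprocessing.maxwell_filter when st_duration is set. A minimal sketch of such a pipeline, where the file name, window length, and epoching parameters are placeholders rather than the authors' settings:

```python
import mne

# Load a raw MEG recording (path is a placeholder).
raw = mne.io.read_raw_fif("dbs_patient_raw.fif", preload=True)

# Spatiotemporal SSS: the spatial step projects the data onto a
# multipole basis separating fields arising inside the head from
# external interference; the temporal extension (st_duration) also
# removes artifacts, such as DBS noise, that are correlated across
# the two subspaces over time.
raw_tsss = mne.preprocessing.maxwell_filter(
    raw,
    st_duration=10.0,      # length (s) of the temporal window
    st_correlation=0.98,   # subspace-correlation threshold
)

# Evoked fields (e.g., the N100m) can then be averaged as usual.
events = mne.find_events(raw_tsss)
epochs = mne.Epochs(raw_tsss, events, tmin=-0.1, tmax=0.4,
                    baseline=(None, 0))
evoked = epochs.average()
```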

Neural Dynamics of the Intention to Speak

When we talk, we communicate our intentions. Although the origin of intentional action is debated in cognitive neuroscience, the question of how the brain generates the intention to speak remains open. Using magnetoencephalography, we investigated the cortical dynamics engaged when healthy subjects attended either to their intention to speak or to their actual speech. We found that activity in the right and left parietal cortex increased before subjects became aware of intending to speak. Within the time window of parietal activation, we also observed transient left frontal activity in Broca's area, a crucial region for inner speech. During attention to speech, neural activity was detected in left prefrontal and temporal areas and in the temporoparietal junction. In agreement with previous results, our findings suggest that the parietal cortex plays a multimodal role in monitoring intentional mechanisms in both action and language. The coactivation of parietal regions and Broca's area may constitute the cortical circuit specific to controlling intentional processes during speech.

from Cerebral Cortex

The effects of cortical ischemic stroke on auditory processing in humans as indexed by transient brain responses

Left-hemispheric ischemic stroke impairs the processing of sinusoidal and speech sounds. This deficit seems to depend on the severity and location of the stroke.

from Clinical Neurophysiology

The effects of healthy aging on auditory processing in humans as indexed by transient brain responses

Aging seems to affect the temporal dynamics of cortical auditory processing. The transient brain response is sensitive both to spectral complexity and to aging-related changes in the timing of cortical activation.

from Clinical Neurophysiology

Processing of binaural spatial information in human auditory cortex: Neuromagnetic responses to interaural timing and level differences

This study was designed to test two hypotheses about binaural hearing: (1) that binaural cues are primarily processed in the hemisphere contralateral to the perceived location of a sound; and (2) that the two main binaural cues, interaural timing differences (ITDs) and interaural level differences (ILDs), are processed in separate channels in the auditory cortex. Magnetoencephalography was used to measure brain responses to dichotic pitches (a perception of pitch created by segregating a narrow band of noise from a wider band of noise) derived from interaural timing or level disparities. Our results show a strong modulation of interhemispheric M100 amplitudes by ITD cues: when these cues simulated unilateral source presentation from the right hemispace, the M100 amplitude changed from a predominantly right-hemisphere pattern to a bilateral pattern. In contrast, ILD cues lacked any capacity to alter the right-hemispheric distribution. These data indicate that intrinsic hemispheric biases are large in comparison with any contralaterality biases in the auditory system. Importantly, both types of binaural cue elicited an object-related negativity component at a latency of about 200 ms, believed to reflect automatic cortical processes involved in distinguishing concurrent auditory objects. These results support the conclusion that ITDs and ILDs are processed by distinct neuronal populations up to the relatively late stage of cortical processing indexed by the M100. However, information common to the two cues seems to be extracted for use in a subsequent stage of auditory scene segregation, indexed by the object-related negativity. This may place a new bound on the extent to which sound location cues are processed in separate channels of the auditory cortex.

from Neuropsychologia
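
The two binaural cues themselves are straightforward to impose on a stimulus: an ITD delays one ear's signal, and an ILD attenuates it. A minimal sketch of the cue manipulations only (the study's actual dichotic-pitch stimuli additionally segregate a narrow noise band from a wider band):

```python
import numpy as np

fs = 44100

def apply_itd(mono, itd_us, fs=fs):
    """Lateralize a sound by delaying the left channel by `itd_us`
    microseconds relative to the right (positive ITD: heard right)."""
    d = int(round(itd_us * 1e-6 * fs))
    left = np.concatenate([np.zeros(d), mono])
    right = np.concatenate([mono, np.zeros(d)])
    return np.stack([left, right], axis=1)

def apply_ild(mono, ild_db):
    """Lateralize a sound by attenuating the left channel by `ild_db`
    decibels relative to the right (positive ILD: heard right)."""
    left = mono * 10 ** (-ild_db / 20)
    return np.stack([left, mono], axis=1)

noise = np.random.default_rng(0).standard_normal(fs)  # 1 s of noise
right_by_itd = apply_itd(noise, itd_us=500)   # ~0.5 ms timing cue
right_by_ild = apply_ild(noise, ild_db=10)    # 10 dB level cue
```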