Blog Archives

Auditory repetition enhancement at short interstimulus intervals for frequency-modulated tones

Frequency-modulated (FM) sweeps are important components of most natural sounds. To examine the processing of these stimuli, we applied a two-tone paradigm. Repeated stimulus presentation usually leads to reduced neuronal responses; however, in a previous study, repetition enhancement was observed when FM tones were separated by short interstimulus intervals (ISIs) of ≤ 200 ms. To further investigate this repetition effect, we recorded the magnetoencephalogram (MEG) in humans during the presentation of consecutive FM sweep pairs separated by ISIs between 100 and 600 ms. We presented six experimental conditions: four pair conditions, namely a) two upward FM tones, b) two downward FM tones, c) an upward followed by a downward FM tone, and d) a downward followed by an upward FM tone, plus two control conditions consisting of sequences of single upward and single downward FM tones. The N1m amplitude was enhanced for same-direction compared with different-direction FM tone pairs. This effect was found at the shortest ISI of 100 ms and disappeared at longer ISIs. Furthermore, mean peak latencies in response to the second tone were prolonged in same-direction pairs at the 100-ms ISI. At ISIs ≥ 300 ms, slight enhancement effects occurred 180–400 ms after the second stimulus. This accords with a previous MEG study from our laboratory, which demonstrated an enhancement effect for sustained fields at latencies of 150–350 ms after the second stimulus for same-direction compared with different-direction FM tone pairs separated by an ISI of 200 ms.
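
For readers who want to prototype this kind of two-tone paradigm, here is a minimal numpy sketch of an FM sweep pair. The sampling rate, sweep duration, and frequency range are illustrative assumptions; the abstract does not specify the study's stimulus parameters.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumed, not taken from the study)

def fm_sweep(f_start, f_end, dur=0.1, fs=FS):
    """Linear FM sweep from f_start to f_end over dur seconds."""
    t = np.arange(int(dur * fs)) / fs
    # Phase is the integral of the linearly ramping instantaneous frequency.
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * dur))
    return np.sin(phase)

def tone_pair(first, second, isi, fs=FS):
    """Two sweeps separated by a silent inter-stimulus interval (seconds)."""
    gap = np.zeros(int(isi * fs))
    return np.concatenate([first, gap, second])

# Condition c): an upward sweep followed by a downward sweep, ISI = 100 ms
up = fm_sweep(500, 1500)    # illustrative frequency range
down = fm_sweep(1500, 500)
pair = tone_pair(up, down, isi=0.1)
```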

from Brain Research

Plasticity of the Human Auditory Cortex Related to Musical Training

Over recent decades, music neuroscience has become a rapidly growing field. Music is particularly well suited for studying neuronal plasticity in the human brain because musical training is more complex and multimodal than most other daily-life activities, and because prospective and professional musicians usually pursue their training with high and long-lasting commitment. Music has therefore increasingly been used as a tool for investigating human cognition and its underlying brain mechanisms. Because music engages many brain functions, including perception, action, cognition, emotion, learning, and memory, it is an ideal tool for investigating how the human brain works and how different brain functions interact. Novel findings have been obtained on cortical plasticity induced by musical training. The positive effects that music in its various forms has on the healthy human brain are important not only for basic neuroscience; they will also strongly influence practice in neurorehabilitation.

from Neuroscience & Biobehavioral Reviews

The adaptive pattern of the auditory N1 peak revealed by standardized low-resolution brain electromagnetic tomography

The N1 peak in the late auditory evoked potential (LAEP) decreases in amplitude with stimulus repetition, displaying an adaptive pattern. The present study explored the functional neural substrates that may underlie this pattern using standardized Low Resolution Electromagnetic Tomography (sLORETA). Fourteen young normal-hearing (NH) listeners participated. Tone bursts (80 dB SPL) were presented binaurally via insert earphones in trains of ten; the inter-stimulus interval was 0.7 s and the inter-train interval was 15 s. Current source density analysis was performed for the N1 evoked by the 1st, 2nd, and 10th stimuli (S1, S2, and S10) in three timeframes corresponding to the latency ranges of the N1 waveform subcomponents (70–100, 100–130, and 130–160 ms). S1 activated broad regions across cortical lobes, whereas activation was much smaller for S2 and S10. Response differences in the LAEP waveform and in sLORETA were observed between S1 and S2, but not between S2 and S10. The sLORETA comparison map between the S1 and S2 responses localized the activation difference to the parietal lobe in the 70–100 ms timeframe, the frontal and limbic lobes in the 100–130 ms timeframe, and the frontal lobe in the 130–160 ms timeframe. These comparison results suggest a parieto-frontal network that might help sensitize the brain to novel stimuli by filtering out repetitive, irrelevant input. The study demonstrates that sLORETA may be useful for identifying generators of scalp-recorded event-related potentials and for examining the physiological features of these generators. The technique could be especially useful for cortical source localization in individuals who cannot be examined with functional magnetic resonance imaging or magnetoencephalography (e.g., cochlear implant users).
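
As a concrete illustration of this timing protocol, the sketch below lays out the stimulus onsets and analysis windows. It assumes the stated intervals are measured onset to onset, which the abstract does not specify.

```python
import numpy as np

ISI, ITI, N_PER_TRAIN = 0.7, 15.0, 10  # seconds; values from the abstract

def burst_onsets(n_trains):
    """Onset time (s) of every tone burst, assuming onset-to-onset intervals."""
    onsets = []
    t = 0.0
    for _ in range(n_trains):
        onsets.extend(t + k * ISI for k in range(N_PER_TRAIN))
        t = onsets[-1] + ITI  # next train begins one inter-train interval later
    return np.array(onsets)

# N1 subcomponent windows (ms after S1, S2, and S10) used for source analysis
WINDOWS_MS = [(70, 100), (100, 130), (130, 160)]
```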

from Brain Research

Synaptic Short-term Plasticity in Auditory Cortical Circuits

The auditory system must adapt to a changing acoustic environment while still maintaining an accurate representation of signals. Mechanistically, this is a difficult task because the responsiveness of a large, heterogeneous population of interconnected neurons must be adjusted properly and precisely. Synaptic short-term plasticity (STP) is widely regarded as a viable mechanism for such adaptive processes. Although the cellular mechanism of STP is well characterized, its overall effect on information processing at the network level is poorly understood. The main challenge is that there are many cell types in auditory cortex, each of which exhibits different forms and degrees of STP. In this article, I will review the basic properties of STP in auditory cortical circuits and discuss its possible impact on signal processing.
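
As background for readers unfamiliar with STP modeling: a standard phenomenological description (not specific to this article) is the Tsodyks-Markram model, in which each spike consumes a fraction of available synaptic resources. The parameters below are illustrative; real auditory cortical synapses span depressing and facilitating regimes depending on cell type.

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.5, tau_rec=0.8, tau_fac=0.05):
    """Relative synaptic efficacy at each spike (Tsodyks-Markram model).

    x tracks available resources (depleted by release, recovering with tau_rec);
    u tracks release probability (boosted by each spike, decaying with tau_fac).
    Whether the synapse depresses or facilitates depends on U, tau_rec, and
    tau_fac, which vary across presynaptic cell types.
    """
    x, u = 1.0, 0.0
    last_t = None
    amps = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)  # resources recover toward 1
            u = u * np.exp(-dt / tau_fac)                # facilitation decays toward 0
        u = u + U * (1.0 - u)   # each spike boosts release probability
        amps.append(u * x)      # released fraction ~ postsynaptic amplitude
        x = x * (1.0 - u)       # release depletes available resources
        last_t = t
    return amps

# A 20-Hz spike train is strongly depressing with these parameters
print(tsodyks_markram(np.arange(0.0, 0.5, 0.05)))
```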

from Hearing Research

Voices behind the left shoulder: Two patients with right-sided temporal lobe epilepsy

Auditory vocal hallucinations are occasionally observed in temporal-lobe epilepsy but are a frequent sign of psychosis, so the former may in rare cases be mistaken for the latter. Here we report two patients who suffered from auditory vocal hallucinations during epileptic seizures, described as unintelligible human voices perceived on their left side. MEG revealed interictal epileptic discharges within the anterior portion of the right superior temporal gyrus; the signal-to-noise ratio of these discharges was poor overall in the EEG. The findings suggest that auditory vocal hallucinations without verbal content can arise in the right hemisphere and are probably independent of language lateralization. This is in accordance with evidence from functional imaging, whereas most previous reports of seizures with auditory vocal hallucinations were confined to the left hemisphere.

from Neurological Sciences

The modulatory influence of a predictive cue on the auditory steady-state response

Whether attention exerts an influence as early as primary sensory levels is still a matter of debate, and empirical evidence is particularly scarce in the auditory domain. Recently, noninvasive and invasive studies have shown attentional modulation of the auditory steady-state response (aSSR). This evoked oscillatory brain response is relevant to the issue because its main generators have been shown to lie in primary auditory cortex. So far, whether the aSSR is sensitive to the predictive value of a cue preceding a target has not been investigated. Participants in the present study had to indicate in which ear the faster amplitude-modulated (AM) sound of a compound sound (42 and 19 Hz AM frequencies) was presented. A preceding auditory cue was either informative (75%) or uninformative (50%) with regard to the location of the target. Behaviorally, we confirmed that the typical attentional modulation of performance was present when the cue was informative. With regard to the aSSR, we found differences between the informative and uninformative conditions only when the cue/target combination was presented to the right ear. Source analysis indicated that this difference was generated by a reduced 42 Hz aSSR in right primary auditory cortex. Our data, together with previous findings by others, show a default tendency for “40 Hz” AM sounds to be processed by the right auditory cortex. We interpret our results as active suppression of this automatic response pattern when attention must be allocated to right-ear input.
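
A minimal sketch of the dichotic AM stimulus described above follows; the carrier frequencies and sampling rate are assumptions, since the abstract specifies only the 42 and 19 Hz modulation rates.

```python
import numpy as np

FS = 44100  # assumed sampling rate in Hz

def am_tone(carrier_hz, am_hz, dur=1.0, depth=1.0, fs=FS):
    """Sinusoidally amplitude-modulated tone."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1 + depth * np.sin(2 * np.pi * am_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Compound sound: 42-Hz AM to one ear, 19-Hz AM to the other
# (500-Hz carriers are illustrative, not from the study).
left = am_tone(500, 42)
right = am_tone(500, 19)
stereo = np.stack([left, right], axis=1)  # samples x 2 channels
```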

from Human Brain Mapping

Intensity-Invariant Coding in the Auditory System

The auditory system represents sound sources faithfully enough that downstream cognitive processes can act on this information effectively, even in the face of signal uncertainty, degradation, or interference. This robust representation of sound sources yields a perceptual invariance that is vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, however, sound representations early in the auditory system vary substantially with stimulus intensity. In other words, changes in stimulus intensity, such as those produced by sound sources at differing distances, pose a distinct challenge for the auditory system: encoding sounds invariantly across the intensity dimension. This challenge, and some strategies available to sensory systems for eliminating intensity as an encoding variable, are discussed, with special emphasis on sound encoding.
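
As a toy illustration of one such strategy (gain normalization, a textbook idea rather than this review's specific proposal), dividing a spectral representation by its overall level removes intensity while preserving spectral shape:

```python
import numpy as np

def level_invariant_profile(signal):
    """Spectral shape with overall level factored out (unit-norm spectrum)."""
    spectrum = np.abs(np.fft.rfft(signal))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
quiet, loud = 0.01 * s, 10.0 * s  # the same source at two intensities
print(np.allclose(level_invariant_profile(quiet),
                  level_invariant_profile(loud)))  # True: shape is level-invariant
```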

from Neuroscience & Biobehavioral Reviews

Morphometric Differences in the Heschl’s Gyrus of Hearing Impaired and Normal Hearing Infants

This study investigates the morphometry of Heschl’s gyrus and the primary auditory cortex (PAC) it contains in hearing-impaired (HI) and normal-hearing (NH) infants. Forty-two infants aged 8–19 months, with normal hearing (n = 26) or hearing impairment (n = 16), were studied using high-resolution 3D magnetic resonance imaging. Gray matter (GM) and white matter (WM) volumes were obtained with automatic brain-image segmentation software, which estimated the volume of each tissue within manually defined regions covering the anterior portion of Heschl’s gyrus (aHG) in each subject, transformed to an infant brain template space. Interactions among group (HI, NH), tissue type (GM, WM), and hemisphere (left, right) were examined using analysis of variance, and whole-brain voxel-based morphometry was used to explore volume differences between groups across the entire brain. The HI group showed increased GM and decreased WM in aHG compared with the NH group, likely effects of auditory deprivation, and did not exhibit the typical L > R asymmetry pattern shown by the NH group. Increased GM in the aHG of HI infants may reflect abnormal cortical development in PAC, as seen in animal models of sensory deprivation, while the lower WM volume is consistent with findings in deaf adults.
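
For illustration, the group x tissue x hemisphere analysis described above could be set up as sketched below. The file name and column names are hypothetical, and a plain OLS ANOVA ignores the within-subject structure that a repeated-measures or mixed model would capture more faithfully.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format table: one aHG tissue volume per subject,
# tissue type, and hemisphere (column names are assumptions).
df = pd.read_csv("ahg_volumes.csv")  # columns: subject, group, tissue, hemisphere, volume

# Group (HI, NH) x tissue (GM, WM) x hemisphere (left, right) factorial ANOVA
model = smf.ols("volume ~ group * tissue * hemisphere", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```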

from Cerebral Cortex

Aided cortical auditory evoked potentials in response to changes in hearing aid gain

Objective: There is interest in using cortical auditory evoked potentials (CAEPs) to evaluate hearing aid fittings and the experience-related plasticity associated with amplification; however, little is known about how hearing aid signal processing affects these responses. The purpose of this study was to determine the effect of clinically relevant hearing aid gain settings, and the resulting in-the-canal signal-to-noise ratios (SNRs), on the latency and amplitude of the P1, N1, and P2 waves. Design & Sample: Evoked potentials and in-the-canal acoustic measures were recorded in nine normal-hearing adults in unaided and aided conditions. In the aided condition, a 40-dB signal was delivered to a hearing aid programmed to provide four levels of gain (0, 10, 20, and 30 dB). As a control, unaided stimulus levels were matched to the aided-condition outputs (i.e., 40, 50, 60, and 70 dB) for comparison. Results: When signal levels were defined in terms of output level, aided CAEPs were, surprisingly, smaller and delayed relative to unaided CAEPs, probably because the hearing aid raised noise levels. Discussion: These results reinforce the notion that hearing aids modify stimulus characteristics such as SNR, which in turn affect the CAEP in a way that does not reliably reflect hearing aid gain.

from the International Journal of Audiology

Developmental Plasticity Of Auditory Cortical Inhibitory Synapses

Functional inhibitory synapses form in auditory cortex well before the onset of normal hearing. However, their properties change dramatically during normal development, and many of these maturational events are delayed by hearing loss. Here, we review recent findings on the developmental plasticity of inhibitory synapse strength, kinetics, and GABAA receptor localization in auditory cortex. Although hearing loss generally leads to a reduction of inhibitory strength, this depends on the type of presynaptic interneuron. Furthermore, plasticity of inhibitory synapses also depends on the postsynaptic target: hearing loss leads to reduced GABAA receptor localization at the membrane of excitatory, but not inhibitory, neurons. A reduction in normal activity during development can also affect the use-dependent plasticity of inhibitory synapses, and even moderate hearing loss can disrupt inhibitory short- and long-term synaptic plasticity. Thus, the cortex does not compensate for the loss of inhibition in the brainstem but rather exacerbates the response to hearing loss by further reducing inhibitory drive. Together, these results demonstrate that inhibitory synapses are exceptionally dynamic during development, and that deafness-induced perturbation of inhibitory properties may have a profound impact on auditory processing.

from Hearing Research

Lateralization of Speech Production Starts in Sensory Cortices—A Possible Sensory Origin of Cerebral Left Dominance for Speech

Speech production is a left-lateralized brain function, and this could arise from left dominance in speech executive processes, in sensory processes, or in both. Using functional magnetic resonance imaging in healthy subjects, we show that sensory cortices already lateralize when speaking is merely intended, whereas the frontal cortex lateralizes only when speech is acted out. This sequence, temporal before frontal, suggests that functional lateralization of the auditory cortex could drive hemispheric specialization for speech production.

from Cerebral Cortex

The role of planum temporale in processing accent variation in spoken language comprehension

A repetition–suppression functional magnetic resonance imaging paradigm was used to explore the neuroanatomical substrates of processing two types of acoustic variation, speaker and accent, during spoken sentence comprehension. Recordings were made of two speakers and two accents, Standard Dutch and a novel accent of Dutch, with each speaker producing sentences in both accents. Participants listened to two sentences presented in quick succession while their haemodynamic responses were recorded in an MR scanner. The first sentence was spoken in Standard Dutch; the second was spoken by the same or a different speaker and produced in Standard Dutch or in the artificial accent. This design made it possible to identify neural responses to a switch in speaker and to a switch in accent independently. A switch in accent was associated with activations in predominantly left-lateralized posterior temporal regions, including the superior temporal gyrus, planum temporale (PT), and supramarginal gyrus, as well as in frontal regions, including the left pars opercularis of the inferior frontal gyrus (IFG). A switch in speaker recruited a predominantly right-lateralized network, including the middle frontal gyrus and precuneus. It is concluded that posterior temporal areas, including PT, and frontal areas, including IFG, are involved in processing accent variation during spoken sentence comprehension.

from Human Brain Mapping

Speech perception in the child brain: Cortical timing and its relevance to literacy acquisition

Speech processing skills develop intensively during mid-childhood, providing a basis for literacy acquisition as well. The sequence of auditory cortical processing of speech has been characterized in adults, but very little is known about the neural representation of speech sound perception in the developing brain. We used whole-head magnetoencephalography (MEG) to record neural responses to speech and nonspeech sounds in first-graders (7-8 years old) and compared their activation sequence to that of adults. In children, the general location of neural activity in the superior temporal cortex was similar to that in adults, but the temporal sequence of activation was strikingly different. Cortical differentiation between sound types emerged in a prolonged response pattern at about 250 ms after sound onset, in both hemispheres, clearly later than the corresponding effect at about 100 ms in adults, which was detected specifically in the left hemisphere. Better reading skills were linked with shorter-lasting neural activation, pointing to an interdependence between the maturing neural processes of auditory perception and developing linguistic skills. This study uniquely exploited the potential of MEG for comparing both spatial and temporal characteristics of neural activation between adults and children. Besides depicting group-typical features of cortical auditory processing, the results revealed marked interindividual variability among children.

from Human Brain Mapping

Synaptic Morphology and the Influence of Auditory Experience

Auditory experience is crucial for the normal development and maturation of brain structure and for the maintenance of the auditory pathways. The specific aims of this review are (i) to provide a brief background on the synaptic morphology of the endbulb of Held in hearing and deaf animals; (ii) to argue for the importance of this large synaptic ending in linking neural activity along ascending pathways to environmental acoustic events; (iii) to describe how the re-introduction of electrical activity changes this synapse; and (iv) to examine how changes at the endbulb synapse initiate trans-synaptic changes in ascending auditory projections to the superior olivary complex, the inferior colliculus, and the auditory cortex.

from Hearing Research

Relationship Between Behavioral and Physiological Spectral-Ripple Discrimination

Previous studies have found a significant correlation between spectral-ripple discrimination and speech and music perception in cochlear implant (CI) users. This relationship could be of use to clinicians and scientists interested in using spectral-ripple stimuli in the assessment and habilitation of CI users. However, the psychoacoustic tasks previously used to assess spectral discrimination are not suitable for all populations, and it would be beneficial to develop methods applicable to all age ranges, including pediatric implant users. Additionally, it is important to understand how ripple stimuli are processed in the central auditory system and how their neural representation contributes to behavioral performance. For these reasons, we developed a single-interval, yes/no paradigm that could potentially be used both behaviorally and electrophysiologically to estimate spectral-ripple thresholds. In experiment 1, behavioral thresholds obtained with the single-interval method were compared to thresholds obtained with a previously established three-alternative forced-choice method; a significant correlation was found (r = 0.84, p = 0.0002) in 14 adult CI users. The spectral-ripple threshold obtained with the new method also correlated with speech perception in quiet and in noise. In experiment 2, the effect of the number of vocoder-processing channels on behavioral and physiological thresholds in normal-hearing listeners was determined. Behavioral thresholds obtained with the new single-interval method, as well as cortical P1-N1-P2 responses, changed as a function of the number of channels: better behavioral and physiological performance (i.e., better discrimination at higher ripple densities) was observed as more channels were added. In experiment 3, the relationship between the behavioral and physiological data was examined. Amplitudes of the P1-N1-P2 “change” responses were significantly correlated with d′ values from the single-interval behavioral procedure. The results suggest that the single-interval procedure with spectral-ripple phase inversion in ongoing stimuli is a valid approach for measuring behavioral or physiological spectral resolution.
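
For reference, sensitivity in a single-interval yes/no task is conventionally summarized as d′ = z(hit rate) − z(false-alarm rate). The sketch below computes it from trial counts; the half-count correction for extreme rates is one common convention, not necessarily the one used in this study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' for a yes/no task: z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are nudged by half a count so the z-transform
    stays finite (one common convention among several).
    """
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    h = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    f = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical trial counts: hit rate 0.8, false-alarm rate 0.3 -> d' ~ 1.37
print(d_prime(hits=40, misses=10, false_alarms=15, correct_rejections=35))
```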

from JARO