Blog Archives

N1, P2 and T-complex of the auditory brain event-related potentials to tones with varying rise times in adults with and without dyslexia

Dyslexia is a learning difficulty affecting the acquisition of fluent reading and spelling skills due to poor phonological processing. Underlying deficits in processing sound rise time have also been found in children and adults with dyslexia; however, the neural basis for these deficits is unknown. In the present study, event-related potentials were used to index neural processing and to examine the effect of rise time manipulation on the obligatory N1, T-complex and P2 responses in English-speaking adults with and without dyslexia. The Tb wave of the T-complex showed differences between groups, with Tb amplitudes becoming less negative with increased rise time for the participants with dyslexia only. Frontocentral N1 and P2 did not show group effects. Enhanced Tb amplitude that is modulated by rise time could indicate altered neural networks at the lateral surface of the superior temporal gyrus in adults with dyslexia.

from the International Journal of Psychophysiology
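
If it helps to picture the stimulus manipulation, the short sketch below generates a 1 kHz tone whose onset ramp (rise time) can be lengthened. This is a minimal sketch in Python/NumPy; the sample rate, tone duration, ramp shape and the example rise-time values are illustrative assumptions rather than the study's parameters.

```python
# Sketch of a tone whose rise time (onset ramp duration) is varied.
# Frequency, duration, sample rate and rise times are illustrative only.
import numpy as np

def tone_with_rise_time(rise_ms, freq_hz=1000, dur_ms=200, fs=44100):
    """Return a sine tone with a linear onset ramp of `rise_ms` milliseconds."""
    n = int(fs * dur_ms / 1000)
    t = np.arange(n) / fs
    carrier = np.sin(2 * np.pi * freq_hz * t)
    envelope = np.ones(n)
    n_rise = int(fs * rise_ms / 1000)
    envelope[:n_rise] = np.linspace(0.0, 1.0, n_rise)   # slow vs. fast onset
    n_fall = int(fs * 0.010)                             # fixed 10 ms offset ramp
    envelope[-n_fall:] = np.linspace(1.0, 0.0, n_fall)
    return carrier * envelope

sharp_onset = tone_with_rise_time(rise_ms=15)   # short rise time (example value)
slow_onset = tone_with_rise_time(rise_ms=90)    # long rise time (example value)
```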

Aided cortical auditory evoked potentials in response to changes in hearing aid gain

Objective: There is interest in using cortical auditory evoked potentials (CAEPs) to evaluate hearing aid fittings and the experience-related plasticity associated with amplification; however, little is known about the effects of hearing aid signal processing on these responses. The purpose of this study was to determine the effect of clinically relevant hearing aid gain settings, and the resulting in-the-canal signal-to-noise ratios (SNRs), on the latency and amplitude of the P1, N1, and P2 waves. Design & Sample: Evoked potentials and in-the-canal acoustic measures were recorded in nine normal-hearing adults in unaided and aided conditions. In the aided condition, a 40-dB signal was delivered to a hearing aid programmed to provide four levels of gain (0, 10, 20, and 30 dB). As a control, unaided stimulus levels were matched to the aided-condition outputs (i.e., 40, 50, 60, and 70 dB). Results: When signal levels were defined in terms of output level, aided CAEPs were surprisingly smaller and later than unaided CAEPs, probably because of the increased noise levels introduced by the hearing aid. Discussion: These results reinforce the notion that hearing aids modify stimulus characteristics such as SNR, which in turn affects the CAEP in a way that does not reliably reflect hearing aid gain.

from the International Journal of Audiology
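
A rough way to see why aided responses can be smaller despite more gain is the level arithmetic implied by the abstract: the aid amplifies its input and its own noise floor together, so the in-the-canal SNR does not improve with gain, while an unaided tone at the matched output level carries no added aid noise. The sketch below only illustrates that reasoning; every level value in it is a hypothetical example, not a measurement from the study.

```python
# Illustrative level arithmetic for the aided condition described above.
# All numbers here are hypothetical examples, not values from the study.

INPUT_LEVEL_DB = 40          # stimulus presented to the hearing aid (dB SPL)
NOISE_FLOOR_DB = 30          # assumed noise entering/generated by the aid (dB SPL)

for gain_db in (0, 10, 20, 30):
    output_db = INPUT_LEVEL_DB + gain_db      # aided signal level in the canal
    noise_db = NOISE_FLOOR_DB + gain_db       # the aid amplifies the noise too
    snr_db = output_db - noise_db             # in-the-canal SNR does not improve
    print(f"gain {gain_db:2d} dB -> output {output_db} dB, "
          f"noise {noise_db} dB, SNR {snr_db} dB")

# An unaided control tone at 40-70 dB SPL has no added aid noise, so at the
# same output level its SNR is higher, which is one reason the unaided CAEPs
# can be larger and earlier than the aided ones.
```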

What subcortical–cortical relationships tell us about processing speech in noise

To advance our understanding of the biological basis of speech-in-noise perception, we investigated the effects of background noise on both subcortical- and cortical-evoked responses, and the relationships between them, in normal hearing young adults. The addition of background noise modulated subcortical and cortical response morphology. In noise, subcortical responses were later, smaller in amplitude and demonstrated decreased neural precision in encoding the speech sound. Cortical responses were also delayed by noise, yet the amplitudes of the major peaks (N1, P2) were affected differently, with N1 increasing and P2 decreasing. Relationships between neural measures and speech-in-noise ability were identified, with earlier subcortical responses, higher subcortical response fidelity and greater cortical N1 response magnitude all relating to better speech-in-noise perception. Furthermore, it was only with the addition of background noise that relationships between subcortical and cortical encoding of speech and the behavioral measures of speech in noise emerged. The results illustrate that human brainstem responses and N1 cortical response amplitude reflect coordinated processes with regard to the perception of speech in noise, thereby acting as a functional index of speech-in-noise perception.

from the European Journal of Neuroscience
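
The core analysis is correlational: individual differences in subcortical timing and fidelity and in cortical N1 magnitude are related to a behavioural speech-in-noise score. Below is a minimal sketch of that kind of brain-behaviour correlation in Python (NumPy/SciPy); the placeholder data, variable names and the generic "speech-in-noise score" are assumptions for illustration, not the paper's measures.

```python
# Minimal sketch of a brain-behaviour correlation analysis of the kind
# described above. All arrays are placeholders (one value per listener).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 30                                           # hypothetical sample size
brainstem_latency_ms = rng.normal(7.0, 0.4, n)   # subcortical peak latency in noise
n1_magnitude_uv = rng.normal(4.0, 1.0, n)        # cortical N1 magnitude in noise
sin_score = rng.normal(0.0, 1.0, n)              # behavioural speech-in-noise score

for name, measure in [("brainstem latency", brainstem_latency_ms),
                      ("N1 magnitude", n1_magnitude_uv)]:
    r, p = pearsonr(measure, sin_score)
    print(f"{name} vs. speech-in-noise score: r = {r:.2f}, p = {p:.3f}")
```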

Pre-attentive and attentive processing of French vowels

This study investigated the effects of acoustic distance and of speaker variability on the pre-attentive and attentive perception of French vowels by French adult speakers. The electroencephalogram (EEG) was recorded while participants watched a silent movie (Passive condition) and discriminated deviant vowels (Active condition). The auditory sequence included 4 French vowels, /u/ (standard) and /o/, /y/ and /ø/ as deviants, produced by 3 different speakers. As the vowel /o/ is closer to /u/ in acoustic distance than the other deviants, we predicted a smaller mismatch negativity (MMN) and a smaller N1 component for this deviant, as well as a higher error rate and longer reaction times. Results were in line with these predictions. Moreover, the MMN was elicited by all deviant vowels independently of speaker variability. By contrast, the Vowel by Speaker interaction was significant in the Active listening condition, showing that subtle within-category differences are processed at the attentive level. These results suggest that while vowels are categorized pre-attentively according to phonemic representations and independently of speaker variability, participants are sensitive to between-speaker differences when they focus attention on vowel processing.

from Brain Research
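
For a concrete picture of the passive oddball design, here is a small sketch that builds a pseudo-random sequence with a frequent standard (/u/) and rare deviants (/o/, /y/, /ø/), each token assigned to one of three speakers. The 20% deviant probability, the sequence length and the rule that deviants never follow one another are illustrative assumptions, not the study's actual parameters.

```python
# Sketch of an oddball sequence: frequent standard /u/, rare deviants,
# each token drawn from one of three speakers. Proportions are assumed.
import random

random.seed(1)
STANDARD = "u"
DEVIANTS = ["o", "y", "ø"]
SPEAKERS = ["sp1", "sp2", "sp3"]
N_TRIALS = 400
P_DEVIANT = 0.20                     # assumed deviant probability

sequence = []
previous_was_deviant = True          # forces the block to start with a standard
for _ in range(N_TRIALS):
    if not previous_was_deviant and random.random() < P_DEVIANT:
        vowel = random.choice(DEVIANTS)
        previous_was_deviant = True
    else:
        vowel = STANDARD
        previous_was_deviant = False
    sequence.append((vowel, random.choice(SPEAKERS)))

print(sequence[:10])
```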

Cognitive processing effects on auditory event-related potentials and the evoked cardiac response

The phasic evoked cardiac response (ECR) produced by innocuous stimuli requiring cognitive processing may be described as the sum of two independent response components: an initial heart rate (HR) deceleration (ECR1) and a slightly later HR acceleration (ECR2), hypothesised to reflect stimulus registration and cognitive processing load, respectively. This study investigated the effects of processing load on the ECR and the event-related potential (ERP), in an attempt to find similarities between measures considered important in the autonomic orienting-reflex and ERP literatures. We examined the effects of cognitive load within subjects, using a long inter-stimulus interval (ISI) ANS-style paradigm. Subjects (N = 40) were presented with 30-35 tones (80 dB, 1000 Hz) at a variable long ISI (7-9 s), and were required to silently count, or allowed to ignore, the tones in two counterbalanced stimulus blocks. The ECR showed a significant effect of counting, allowing separation of the two ECR components by subtracting the NoCount from the Count condition. The auditory ERP showed the expected obligatory processing effects in the N1, and substantial effects of cognitive load in the late positive complex (LPC). These data offer support for ANS-CNS connections worth pursuing further in future work.

from the International Journal of Psychophysiology
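
The separation of the two ECR components rests on a simple subtraction: the evoked heart-rate waveform from the NoCount block is subtracted from the Count block, and what remains is taken to reflect the load-related component. The sketch below illustrates only that subtraction step; the array shapes and the second-by-second heart-rate-change layout are assumptions about the data, not the paper's actual pipeline.

```python
# Sketch of isolating the load-related ECR component by subtraction.
# `hr_count` and `hr_nocount` are placeholders: trials x seconds arrays of
# heart-rate change (bpm) relative to a pre-stimulus baseline.
import numpy as np

rng = np.random.default_rng(0)
hr_count = rng.normal(0.0, 1.0, (30, 8))      # Count condition (hypothetical)
hr_nocount = rng.normal(0.0, 1.0, (30, 8))    # NoCount condition (hypothetical)

ecr_count = hr_count.mean(axis=0)             # average ECR, Count block
ecr_nocount = hr_nocount.mean(axis=0)         # average ECR, NoCount block

# NoCount is assumed to contain mainly the registration-related deceleration
# (ECR1); the Count-minus-NoCount difference then approximates the
# load-related acceleration (ECR2).
ecr2_estimate = ecr_count - ecr_nocount
print(ecr2_estimate.round(2))
```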

ERP evidence of hemispheric independence in visual word recognition

This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently, as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle letters) into man-made and natural categories while their performance and ERPs were recorded. The stimuli were presented parafoveally to the right and left visual fields. As predicted by the Direct Access Model, ERP data showed that both the LH and the RH were able to differentiate between words and non-words as early as 170 ms post-stimulus; these results were significant only for contralaterally presented stimuli. The N1 component, which is considered to reflect orthographic processing, was larger in both hemispheres in response to contralaterally than to ipsilaterally presented stimuli. This finding provides evidence for the RH's capability to access higher-level lexical information at the early stages of visual word recognition, thus lending weight to arguments for the relatively independent nature of this process.

from Brain and Language

Age trends in auditory oddball evoked potentials via component scoring and deconvolution

Age trends in component scores can be related to physiological changes in the brain. However, component scores show a high degree of redundancy, which limits their information content, and are often invalid when applied to young children. Deconvolution provides additional information on development not available through other methods.

from Clinical Neurophysiology
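
The abstract does not spell out the deconvolution procedure, so the sketch below shows just one common sense of deconvolution in evoked-potential work: recovering a transient response from a recording in which responses to successive stimuli overlap, via regularised division of spectra. Treat it as a generic illustration under those assumptions, not as this paper's specific method.

```python
# Generic frequency-domain deconvolution sketch (not necessarily the method
# used in the paper above): recover a transient response r from a recording
# y in which responses to successive stimuli overlap.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # assumed sampling rate (Hz)
n = fs * 40                                # 40 s of synthetic data
t = np.arange(n) / fs

# Hypothetical "true" transient response: a damped oscillation
true_r = np.exp(-t * 8) * np.sin(2 * np.pi * 7 * t)

# Stimulus train with jittered onsets (jitter keeps the problem well posed)
s = np.zeros(n)
onset = 0
while onset < n - fs:
    s[onset] = 1.0
    onset += int(fs * rng.uniform(0.4, 0.6))

y = np.convolve(s, true_r)[:n] + 0.01 * rng.normal(size=n)   # overlap + noise

# Regularised spectral division: R = Y * conj(S) / (|S|^2 + lambda)
S, Y = np.fft.rfft(s), np.fft.rfft(y)
r_est = np.fft.irfft(Y * np.conj(S) / (np.abs(S) ** 2 + 1e-3), n)

# The first half-second of the estimate should track the true response
print(np.corrcoef(true_r[:fs // 2], r_est[:fs // 2])[0, 1])
```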

A Method for Removing Cochlear Implant Artifact

When cortical auditory evoked potentials (CAEPs) are recorded in individuals with a cochlear implant (CI), electrical artifact can make the CAEP difficult or impossible to measure. Since increasing the interstimulus interval (ISI) increases the amplitude of physiological responses without changing the artifact, subtracting CAEPs recorded with a short ISI from those recorded with a longer ISI should reveal the physiological response without any artifact. In the first experiment, N1-P2 responses were recorded using a speech syllable and a tone, paired with ISIs that varied randomly between 0.5 and 4 seconds. In the second experiment, the same stimuli, at ISIs of either 500 or 3000 ms, were presented in blocks that were homogeneous or random with respect to the ISI or stimulus. In the third experiment, N1-P2 responses were recorded using pulse trains with 500 and 3000 ms ISIs in 4 CI listeners. The results demonstrated that: 1) N1-P2 response amplitudes generally increased with increasing ISI; 2) difference waveforms were larger for the homogeneous and random-stimulus blocks than for the random-ISI block; and 3) the subtraction technique almost completely eliminated the electrical artifact in individuals with cochlear implants. Therefore, the subtraction technique is a feasible method of removing from the N1-P2 response the electrical artifact generated by the cochlear implant.

from Hearing Research
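
The logic of the subtraction technique is straightforward to show in code: the implant artifact is assumed to be identical at both ISIs while the neural N1-P2 grows with ISI, so the long-ISI minus short-ISI difference cancels the artifact and keeps part of the neural response. The sketch below is a schematic of that logic on synthetic waveforms, not an analysis of the study's recordings.

```python
# Schematic of the long-ISI minus short-ISI subtraction described above.
# Synthetic data: the artifact is ISI-invariant, the neural response is not.
import numpy as np

fs = 1000
t = np.arange(0, 0.5, 1 / fs)                     # 0-500 ms epoch

def neural_n1_p2(scale):
    """Toy N1 (negative, ~100 ms) and P2 (positive, ~200 ms) response."""
    n1 = -scale * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))
    p2 = 0.8 * scale * np.exp(-((t - 0.20) ** 2) / (2 * 0.025 ** 2))
    return n1 + p2

artifact = 5.0 * (t < 0.05)                       # toy CI artifact, same at both ISIs

caep_short_isi = neural_n1_p2(1.0) + artifact     # smaller response at 500 ms ISI
caep_long_isi = neural_n1_p2(2.0) + artifact      # larger response at 3000 ms ISI

difference = caep_long_isi - caep_short_isi       # artifact cancels exactly here
assert np.allclose(difference, neural_n1_p2(1.0))  # what remains is neural
```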

Electrophysiological indices of spatial attention during global/local processing in good and poor phonological decoders

Previous research suggests a relationship between spatial attention and phonological decoding in developmental dyslexia. The aim of this study was to examine differences between good and poor phonological decoders in the allocation of spatial attention to the global and local levels of hierarchical stimuli. A further aim was to investigate the relationship between global/local processing and electrophysiological indices (N1, N2) of spatial attention in these groups. Good (n = 18) and poor (n = 16) phonological decoders were selected on the basis of non-word reading ability. Participants responded to either the global or the local level of hierarchical stimuli presented in the left or right visual field in a sustained attention task. Poor phonological decoders showed slower reaction times than good phonological decoders regardless of whether attention was directed to the global or the local processing level. This was accompanied by a lack of task-related modulation of the posterior N1 and N2 event-related potential (ERP) components, suggesting differences in the early allocation of spatial attention and in later perceptual processing, respectively. Poor decoders also showed greater N2 amplitude overall, suggestive of compensatory processing at later perceptual stages. There was preliminary evidence for sex differences in hemispheric lateralisation, with a reversal of lateralisation observed between male and female poor phonological decoders. These findings have important implications for understanding the relationship between spatial attention and phonological decoding in developmental dyslexia.

from Brain and Language

Right visual field advantage in parafoveal processing: Evidence from eye-fixation-related potentials

Readers acquire information outside the current eye fixation. Previous research indicates that having only the fixated word available slows reading, but when the next word is also visible, reading is almost as fast as when the whole line is seen. Parafoveal-on-foveal effects are interpreted as showing that the characteristics of a parafoveal word can influence the fixation on the current word. Prior studies also show that words presented to the right visual field (RVF) are processed faster and more accurately than words in the left visual field (LVF). This asymmetry has been attributed to an attentional bias, to reading direction, or to the cerebral asymmetry of language processing. We used eye-fixation-related potentials (EFRPs), a technique that combines eye tracking and electroencephalography, to investigate visual field differences in parafoveal-on-foveal effects. After a central fixation, a prime word appeared in the middle of the screen together with a parafoveal target presented either to the LVF or to the RVF. Both hemifield presentations included three semantic conditions: the words were either semantically associated or non-associated, or the target was a non-word. The participants began reading from the prime and then made a saccade towards the target; subsequently, they judged the semantic association. Between 200 and 280 ms from fixation onset, an occipital P2 EFRP component differentiated between parafoveal word and non-word stimuli when the parafoveal stimulus appeared in the RVF. The results suggest that the extraction of parafoveal information is affected by attention, which is oriented as a function of reading direction.

from Brain and Language
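
Eye-fixation-related potentials use the same epoching logic as ordinary ERPs, except that epochs are time-locked to fixation onsets taken from the eye tracker rather than to stimulus triggers. Below is a minimal sketch of that step using MNE-Python on simulated data; the channel names, sampling rate, fixation-onset samples and condition codes are placeholders, not values from the study.

```python
# Sketch: epoch EEG around fixation onsets (from an eye tracker) to get EFRPs.
# All data here are simulated placeholders.
import numpy as np
import mne

sfreq = 500
n_channels, n_samples = 4, sfreq * 60                  # 1 minute of fake EEG
rng = np.random.default_rng(0)
data = rng.normal(0, 1e-6, (n_channels, n_samples))    # volts

info = mne.create_info(["O1", "O2", "PO7", "PO8"], sfreq, ch_types="eeg")
raw = mne.io.RawArray(data, info)

# Fixation onsets (sample indices) as delivered by the eye tracker; the third
# column codes the condition (1 = target in RVF, 2 = target in LVF).
fix_samples = np.arange(sfreq, n_samples - sfreq, sfreq)       # hypothetical
conditions = rng.choice([1, 2], size=len(fix_samples))
events = np.column_stack([fix_samples, np.zeros_like(fix_samples), conditions])

epochs = mne.Epochs(raw, events.astype(int), event_id={"RVF": 1, "LVF": 2},
                    tmin=-0.1, tmax=0.4, baseline=(None, 0), preload=True)
efrp_rvf = epochs["RVF"].average()      # fixation-locked average (EFRP), RVF
efrp_lvf = epochs["LVF"].average()      # fixation-locked average (EFRP), LVF
```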

ERP Evaluation of Auditory Sensory Memory Systems in Adults with Intellectual Disability

Auditory sensory memory can be functionally divided into two subsystems: a transient-detector system and a permanent feature-detector system (Näätänen, 1992). We assessed these systems in persons with intellectual disability by measuring the event-related potentials (ERPs) N1 and mismatch negativity (MMN), which reflect the two auditory subsystems, respectively. In addition, P3a (an ERP reflecting a stage after sensory memory) was evaluated. Either synthesized vowels or simple tones were delivered during a passive oddball paradigm to adults with and without intellectual disability. ERPs were recorded from midline scalp sites (Fz, Cz, and Pz). Relative to the control group, participants with intellectual disability exhibited longer N1 latency and smaller MMN amplitude, whereas N1 amplitude and MMN latency were broadly comparable between the groups. IQ scores in participants with intellectual disability showed no significant relation with the N1 and MMN measures, whereas IQ tended to increase significantly as P3a latency decreased. These outcomes suggest that persons with intellectual disability may have distinct malfunctions of the two detector systems at the auditory sensory-memory stage. Moreover, the processes following sensory memory might be partly related to a determinant of mental development.

from the International Journal of Neuroscience
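
MMN amplitude and latency are typically read from a difference wave: the averaged response to standards is subtracted from the averaged response to deviants, and the most negative point in a post-stimulus window is taken as the MMN. The sketch below shows that computation on synthetic grand averages; the search window, sampling rate and waveform shapes are illustrative assumptions.

```python
# Sketch: MMN as the deviant-minus-standard difference wave, with peak
# amplitude and latency read from a post-stimulus search window.
import numpy as np

fs = 500
t = np.arange(-0.1, 0.5, 1 / fs)                 # epoch time axis (s)
rng = np.random.default_rng(0)

# Synthetic grand averages (uV): deviants carry an extra negativity ~150 ms
erp_standard = rng.normal(0, 0.2, t.size)
erp_deviant = erp_standard - 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))

mmn_wave = erp_deviant - erp_standard            # difference wave

window = (t >= 0.10) & (t <= 0.25)               # assumed MMN search window
idx = np.argmin(mmn_wave[window])                # most negative point
mmn_amplitude = mmn_wave[window][idx]            # uV
mmn_latency = t[window][idx] * 1000              # ms
print(f"MMN: {mmn_amplitude:.2f} uV at {mmn_latency:.0f} ms")
```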

Tuning of the visual word processing system: Distinct developmental ERP and fMRI effects

Visual tuning for words vs. symbol strings yields complementary increases of fast occipito-temporal activity (N1 or N170) in the event-related potential (ERP), and posterior-anterior gradients of increasing word-specific activity in functional magnetic resonance imaging (fMRI) within the visual word form system (VWFS). However, correlation of these coarse ERP and fMRI tuning responses seems limited to the most anterior part of the VWFS in adult and adolescent readers (Brem et al., 2006, NeuroImage 29:822-837). We therefore focused on the fMRI tuning gradients of young readers, with their more pronounced ERP print tuning, and compared developmental aspects of ERP and fMRI response tuning in the VWFS. Children (10.3 y, n = 19), adolescents (16.2 y, n = 13) and adults (25.2 y, n = 18) were tested with the same implicit reading paradigm using counterbalanced ERP and fMRI recordings. The word-specific occipito-temporal N1 specialization, its corresponding source activity, and the integrated source activity (0-700 ms) were most prominent in children and showed a marked decrease with age. The posterior-anterior fMRI gradient of word-specific activity, by contrast, was already fully established in children and did not develop further, but exhibited a dependence on reading skills independent of age. In conclusion, a prominent developmental dissociation of the ERP and fMRI tuning patterns emerged despite convergent VWFS localization. The ERP response may selectively reflect fast visual aspects of print specialization, which become less important with age, while the fMRI response seems dominated by integrated task- and reading-related activations in the same regions.

from Human Brain Mapping

The representation of voice onset time in the cortical auditory evoked potentials of young children

Our results demonstrate that a representation of voice onset time (VOT), as recorded by scalp electrodes, exists in the developing cortical evoked response, but that this representation differs from that in the adult response. The results describe developmental changes in the cortical representation of VOT in children ages 2–8 years.

Significance
The child's CAEP reflects physiological processes involved in the cortical encoding of VOT. Overall, the cortical representation of VOT in children ages 2–8 differs from that in adults.

from Clinical Neurophysiology

Alterations in Event Related Potentials (ERP) Associated with Tinnitus Distress and Attention

Tinnitus-related distress corresponds to different degrees of attention paid to the tinnitus. Shifting attention to a signal other than the tinnitus is therefore particularly difficult for patients with high tinnitus-related distress. As attention effects on event-related potentials (ERPs) are well established, this should be reflected in ERP measurements (N100, phase locking). To test this hypothesis, single-sweep ERP recordings were obtained from 41 tinnitus patients and 10 control subjects during a period when attention was directed to a tone (attended) and during a second phase (unattended) when they did not focus attention on the tone. Whereas tinnitus patients with low distress showed a significant reduction in both N100 amplitude and phase locking when comparing the attended and unattended conditions, patients with high tinnitus-related distress did not show such ERP alterations. Using single-sweep ERP measurements, our results show that attention in patients with high tinnitus-related distress is captured by their tinnitus significantly more than in low-distress patients. Furthermore, our results provide a basis for future neurofeedback-based tinnitus therapies aimed at maximizing the ability to shift attention away from the tinnitus.

from Applied Psychophysiology and Biofeedback
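
Phase locking in single-sweep ERP work is commonly quantified as inter-trial phase coherence: each sweep's instantaneous phase at the frequency band and latency of interest is extracted, and the length of the mean resultant vector across sweeps (0 = random phase, 1 = perfect locking) is the phase-locking value. The sketch below shows that generic computation on synthetic sweeps; the filter band, latency and sampling rate are assumptions, and this need not be the exact algorithm used in the paper.

```python
# Sketch: inter-trial phase-locking value (PLV) around the N100 latency.
# Synthetic single sweeps; filter band, latency and rates are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500
t = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(0)

# 60 hypothetical single sweeps: a ~100 ms evoked wave buried in noise
evoked = -3.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
sweeps = evoked + rng.normal(0, 5.0, (60, t.size))

# Band-pass in a low-frequency band where N100 energy sits (assumed 2-20 Hz)
sos = butter(4, [2, 20], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, sweeps, axis=1)

# Instantaneous phase of every sweep at the assumed N100 latency (100 ms)
phase = np.angle(hilbert(filtered, axis=1))
idx_100ms = np.argmin(np.abs(t - 0.10))
plv = np.abs(np.mean(np.exp(1j * phase[:, idx_100ms])))
print(f"phase-locking value at 100 ms: {plv:.2f}")   # 0 = random, 1 = perfect
```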