Blog Archives

Effects of the Epilarynx Area on Vocal Fold Dynamics and the Primary Voice Signal

We conclude that the size of the epilaryngeal area has a significant influence on vocal fold dynamics but does not significantly affect the resulting sound pressure level (SPL).

from the Journal of Voice

The cortical activation effect of phonation on a motor task: A functional MRI study

It is well known that sound production can affect the motor system. Using functional MRI, we investigated whether a short, loud phonation affected the cortical activation caused by a motor task. Fifteen right-handed healthy subjects were recruited for this study. We compared the cortical activation caused by performance of a motor task (right-hand grasp-release movements) with that caused by performance of the same motor task with phonation (an “ah” sound). We found that performing the motor task with phonation resulted in less activation in the primary sensorimotor cortex than performing the motor task alone, suggesting that phonation during the motor task enhanced the efficiency of cortical activation.

from Neurorehabilitation

The origin of the ocular vestibular evoked myogenic potential (OVEMP)

from Clinical Neurophysiology

Worldwide experience with sequential phase-shift sound cancellation treatment of predominant tone tinnitus

Results: A total of 493 patients were treated. A reduction in tinnitus volume (defined as ≥6 dB) was seen in 49–72 per cent of patients.
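
A quick arithmetic aside on that “≥6 dB” criterion (a back-of-the-envelope check, not part of the study): a 6 dB drop in sound pressure level corresponds to roughly halving the sound pressure amplitude.

```python
import math

# Change in sound pressure level (dB) for a pressure-amplitude ratio p2/p1:
#   delta_L = 20 * log10(p2 / p1)
change_db = 20.0 * math.log10(0.5)                               # amplitude cut in half
print(f"Halving the amplitude is a {change_db:.2f} dB change")   # about -6 dB
```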

from the Journal of Laryngology and Otology

A critical review of the neurophysiological evidence underlying clinical vestibular testing using sound, vibration and galvanic stimuli

In addition to activating cochlear receptors, air conducted sound (ACS) and bone conducted vibration (BCV) activate vestibular otolithic receptors, as shown by neurophysiological evidence from animal studies – evidence which is the foundation for using ACS and BCV in clinical vestibular testing by means of vestibular-evoked myogenic potentials (VEMPs). Recent research is clarifying how specifically ACS and BCV act on vestibular receptors. The evidence that saccular afferents can be activated by ACS has been mistakenly interpreted as showing that ACS activates only saccular afferents. That is not correct – ACS activates both saccular and utricular afferents, just as BCV does, although the patterns of activation for ACS and BCV do not appear to be identical. The otolithic input to the sternocleidomastoid muscle appears to originate predominantly from the saccular macula, whereas the otolithic input to the inferior oblique muscle appears to originate predominantly from the utricular macula. Galvanic stimulation by surface electrodes on the mastoids activates afferents from all vestibular sense organs rather indiscriminately. This review summarizes the physiological results, points out potential artifacts and errors of logic in this area, and reconciles apparent disagreements in the field. The neurophysiological results on BCV have led to a new clinical test of utricular function – the n10 of the oVEMP. The cVEMP tests saccular function, while the oVEMP tests utricular function.

from Clinical Neurophysiology

The relative effectiveness of different stimulus waveforms in evoking VEMPs: Significance of stimulus energy and frequency

We compared the effectiveness of a series of different sound stimulus waveforms in evoking VEMPs in normal volunteers. The waveforms were clicks (0.1–0.8 ms), biphasic clicks (0.8 ms) and sine waves (1250 Hz, 0.8 ms and 500 Hz, 2 ms) with different peak intensity and duration but similar root mean square area. VEMP amplitudes varied widely (corrected values 0.35 to 1.06), but when the amplitudes were plotted against the physical energy content and A-weighted intensity (L_{Aeq}: a measure of acoustic energy) of the waveforms, the relationship was found to be highly linear. However, when the stimuli were matched for their A-weighted energy, a 500 Hz 2 ms sine wave was the most effective waveform, suggesting that frequency tuning in the vestibular system is also an important factor. VEMP amplitude is thus determined by three stimulus-related factors: physical energy, transmission through the middle ear and vestibular frequency tuning. Use of a 500 Hz stimulus will maximise the prevalence and amplitude of the VEMP for a given sound exposure level.
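
To make the energy comparison concrete, here is a minimal sketch (not the paper's analysis; the sample rate, stimulus amplitudes, and the standard IEC 61672 A-weighting formula are assumptions for illustration) that builds a 500 Hz, 2 ms tone burst and a 0.1 ms click at equal peak amplitude and compares their A-weighted energy:

```python
import numpy as np

def a_weight_db(freq_hz):
    """Standard IEC 61672 A-weighting curve in dB (0 dB at 1 kHz)."""
    f = np.asarray(freq_hz, dtype=float)
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.00

fs = 48_000                                # assumed sample rate, Hz
t = np.arange(int(0.002 * fs)) / fs        # 2 ms time axis

# Two of the stimulus shapes compared in the abstract, at equal peak amplitude
tone_500 = np.sin(2 * np.pi * 500 * t)     # 500 Hz, 2 ms tone burst
click = np.zeros_like(t)
click[: int(0.0001 * fs)] = 1.0            # 0.1 ms rectangular click

def a_weighted_energy(x, fs):
    """Relative A-weighted energy of a short stimulus (arbitrary units)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = 10.0 ** (a_weight_db(freqs[1:]) / 20.0)   # skip the 0 Hz bin
    return float(np.sum(np.abs(spectrum[1:] * gain) ** 2)) / len(x)

print("500 Hz tone burst:", a_weighted_energy(tone_500, fs))
print("0.1 ms click:     ", a_weighted_energy(click, fs))
```

The abstract's point is that once stimuli are matched for A-weighted energy in this way, vestibular frequency tuning still favours the 500 Hz tone burst.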

from Vestibular Research

The human sound-evoked vestibulo-ocular reflex and its electromyographic correlate

Objective
Sound and vibration evoke a short-latency eye movement or “sound-evoked vestibulo-ocular reflex” (VOR) and an infraorbital surface potential: the “ocular vestibular-evoked myogenic potential” (OVEMP). We examined their relationship by measuring the modulation of both responses by gaze and stimulus parameters.

Methods
In seven subjects with superior semicircular-canal dehiscence (SCD) and six controls, the sound-evoked VOR was measured in 3D using scleral search coils. OVEMPs were recorded simultaneously, using surface electromyography.

Results
Eye movement onset (11.6 ± 0.8 ms) coincided with the OVEMP peak (12.1 ± 0.35 ms). OVEMP and VOR magnitudes were 5–15 times larger in SCD than in controls. OVEMP amplitudes were maximal on upgaze and abolished on downgaze; VOR magnitudes were unaffected by gaze. When the stimulus was changed from sound to vibration, OVEMP and VOR changed concordantly: increasing in controls and decreasing in SCD. OVEMP and VOR were tuned to identical stimulus frequencies, and their magnitudes on upgaze were significantly correlated (R = 0.83–0.97).

Conclusion
Selective decrease of the OVEMP on downgaze is consistent with relaxation or retraction of the inferior oblique muscles. The temporal relationship of the OVEMP and VOR and their identical modulation by external factors confirm a common origin.

Significance
Sound-evoked OVEMP and VOR represent the electrical and mechanical correlates of the same vestibulo-ocular response.

from Clinical Neurophysiology

Experimental investigation of the influence of a posterior gap on glottal flow and sound

from the Journal of the Acoustical Society of America

The influence of a posterior gap on the airflow through the human glottis was investigated using a driven synthetic model. The instantaneous orifice discharge coefficient of a glottis-shaped orifice was obtained from the time-varying orifice area and the velocity distribution of the pulsated jet, measured on the axial plane using a single hot-wire probe. The instantaneous discharge coefficient values were found to undergo a cyclic hysteresis loop when plotted against Reynolds number and time, indicating a pressure head increase and a net energy transfer from the airflow to the orifice wall. The net energy transferred was estimated to be around 10% of the value presumably required to achieve self-sustained oscillation. The radiated sound pressure was measured to characterize the influence of the minimum flow through the posterior gap on the broadband component of the radiated sound. The presence of a posterior gap was found to significantly increase the broadband sound level produced over the frequency range in which human hearing is most sensitive.
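
The discharge-coefficient and Reynolds-number quantities in this abstract have standard fluid-mechanics definitions; the sketch below (an illustration with made-up numbers, not the paper's measurement procedure or data) shows how they would be computed from a measured flow rate, orifice area, and transglottal pressure:

```python
import numpy as np

RHO_AIR = 1.2      # air density, kg/m^3
MU_AIR = 1.8e-5    # dynamic viscosity of air, Pa*s

def discharge_coefficient(flow, area, dp, rho=RHO_AIR):
    """Instantaneous Cd: measured flow divided by the ideal (Bernoulli) flow."""
    return flow / (area * np.sqrt(2.0 * dp / rho))

def reynolds_number(flow, area, hydraulic_diameter, rho=RHO_AIR, mu=MU_AIR):
    """Instantaneous Re based on the mean orifice velocity U = Q / A."""
    return rho * (flow / area) * hydraulic_diameter / mu

# Illustrative (made-up) time series over one 100 Hz oscillation cycle
t = np.linspace(0.0, 0.01, 200)                                   # 10 ms
area = 1e-5 * (1.0 - np.cos(2.0 * np.pi * 100.0 * t)) / 2.0       # orifice area, m^2
flow = 2e-4 * np.sin(np.pi * 100.0 * t) ** 2                      # volume flow, m^3/s
dp = np.full_like(t, 800.0)                                       # transglottal pressure, Pa

open_phase = area > 1e-7          # skip samples where the orifice is (nearly) closed
cd = discharge_coefficient(flow[open_phase], area[open_phase], dp[open_phase])
re = reynolds_number(flow[open_phase], area[open_phase], hydraulic_diameter=2e-3)
# Plotting cd against re over the cycle is what reveals the hysteresis loop
# described in the abstract.
```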

When fish talk, scientists listen

from EurekAlert.org

In a paper published this week in Science, three Marine Biological Laboratory (MBL) visiting investigators show that the sophisticated neural circuitry that midshipman fish use to vocalize develops in the same region of the central nervous system as the circuitry that allows a human to laugh or a frog to croak, evidence that the ability to make and respond to sound is an ancient part of the vertebrate success story.

CSHL scientists make progress in determining how the brain selectively interprets sound

from EurekAlert.org

The researchers used a new technique called “in vivo cell-attached patch clamp recording” which measures the reaction of individual neurons. This recording technique samples neurons in a fair and unbiased way, unlike traditional approaches, which favored the largest and most active neurons. Using this technique, the team found that only 5% of neurons in the auditory cortex had a “high firing rate” when receiving a range of sounds of varying length, frequency, and volume. The experiment included white noise and natural animal sounds.

Sparse Representation Of Sounds In The Unanesthetized Auditory Cortex

from Medical News Today.com

How do neuronal populations in the auditory cortex represent sounds? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties.

In a paper published this week in the open-access journal PLoS Biology, Tomas Hromadka, Anthony Zador, and colleagues show how they quantified the relative contributions of these different subpopulations in the auditory cortex of awake, head-fixed rats.

Lend me your ears — and the world will sound very different

from EurekAlert.org

Recognising people, objects or animals by the sound they make is an important survival skill and something most of us take for granted. But physically similar objects can make very dissimilar sounds, and we are still able to pick up subtle clues about the identity and source of a sound. Scientists funded by the Biotechnology and Biological Sciences Research Council (BBSRC) are working out how the human ear and the brain come together to help us understand our acoustic environment. They have found that the part of the brain that deals with sound, the auditory cortex, is adapted in each individual and tuned to the world around us. We learn throughout our lives how to localise and identify different sounds. It means that if you could hear the world through someone else’s ears, it would sound very different from what you are used to.

Auditory Neurons In Humans Far More Sensitive To Fine Sound Frequencies Than Most Mammals

from Medical News Today.com

The human ear is exquisitely tuned to discern different sound frequencies, whether such tones are high or low, near or far. But the ability of our ears pales in comparison with the remarkable knack of single neurons in the brain for distinguishing the very subtlest differences in sound frequency.

Learning and generalization on asynchrony and order tasks at sound offset: Implications for underlying neural circuitry

from Learning & Memory

Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel experiments at sound offset. Human adults practiced 1 h/d for 6–8 d on either asynchrony detection or order discrimination at sound offset with tones at 0.25 and 4.0 kHz. As at sound onset, learning on order-offset discrimination did not generalize to the other task (asynchrony), an untrained temporal position (onset), or untrained frequency pairs, indicating that this training affected a quite specialized neural circuit. In contrast, learning on asynchrony-offset detection generalized to the other task (order) and temporal position (onset), though not to untrained frequency pairs, implying that the training on this condition influenced a less specialized, or more interdependent, circuit. Finally, the learning patterns induced by single-session exposure to asynchrony and order tasks differed depending on whether these tasks were performed primarily at sound onset or offset, suggesting that this exposure modified circuitry specialized to separately process relative-timing tasks at these two temporal positions. Overall, it appears that the neural processes underlying relative-timing judgments are malleable, and that the nature of the affected circuitry depends on the duration of exposure (multihour or single-session) and the parameters of the judgment(s) made during that exposure.

Effects of binaural electronic hearing protectors on localization and response time to sounds in the horizontal plane

from Noise & Health

The effects of electronic hearing protector devices (HPDs) on localization and response time (RT) to stimuli were assessed at six locations in the horizontal plane. The stimuli included a firearm loading, a telephone ringing, and 0.5-kHz and 4-kHz tonebursts presented during continuous traffic noise. Eight normally hearing adult listeners were evaluated under two conditions: (a) ears unoccluded; (b) ears occluded with one of three amplitude-sensitive sound-transmission HPDs. All HPDs were found to affect localization, and performance depended on stimulus and location. For the broadband stimuli, RT was shorter in the unoccluded condition than in any of the HPD conditions. In the HPD conditions, RT to incorrect responses was significantly shorter than RT to correct responses at 120 degrees and 240 degrees, the two locations with the greatest number of errors. RTs to incorrect responses were significantly longer than to correct responses at 60 degrees and 300 degrees, the two locations with the fewest errors. The HPDs assessed in this study did not preserve localization ability under most stimulus conditions.