During the past ten years, research using near-infrared spectroscopy (NIRS) to study the developing brain has provided groundbreaking evidence of brain function in infants. This paper presents a theoretically oriented review of this wealth of evidence, summarizing recent NIRS data on language processing without neglecting other neuroimaging or behavioral studies in infancy and adulthood. We review three competing classes of hypotheses (i.e., signal-driven, domain-driven, and learning-biases hypotheses) regarding the causes of hemispheric specialization for speech processing. We assess the fit between each of these hypotheses and neuroimaging evidence on speech perception and show that none of the three hypotheses can account for the entire set of observations on its own. However, we argue that they provide a good fit when combined within a developmental perspective. According to our proposed scenario, lateralization for language emerges out of the interaction between pre-existing left-right biases in generic auditory processing (signal-driven hypothesis) and a left-hemisphere predominance of particular learning mechanisms (learning-biases hypothesis). Once this developmental process is complete, the native language is predominantly represented in the left hemisphere. The integrated scenario makes it possible to link infant and adult data, and points to many empirical avenues that need to be explored more systematically.
Heterogeneity of the Left Temporal Lobe in Semantic Representation and Control: Priming Multiple versus Single Meanings of Ambiguous Words
Semantic judgments involve both representations of meaning and executive mechanisms that guide knowledge retrieval in a task-appropriate way. These two components of semantic cognition—representation and control—are commonly linked to left temporal and prefrontal cortex, respectively. This simple proposal, however, remains contentious because in most functional neuroimaging studies to date, the number of concepts being activated and the involvement of executive processes during retrieval are confounded. Using functional magnetic resonance imaging, we examined a task in which semantic representation and control demands were dissociable. Words with multiple meanings like “bank” served as targets in a double-prime paradigm, in which multiple-meaning activation and maximal executive demands loaded onto different priming conditions. Anterior inferior temporal gyrus (ITG) was sensitive to the number of meanings that were retrieved, suggesting a role for this region in semantic representation, while posterior middle temporal gyrus (pMTG) and inferior frontal cortex showed greater activation in conditions that maximized executive demands. These results support a functional dissociation between left ITG and pMTG, consistent with a revised neural organization in which left prefrontal and posterior temporal areas work together to underpin aspects of semantic control.
from Cerebral Cortex
Distributed processing and cortical specialization for speech and environmental sounds in human temporal cortex
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds across a substantial extent of temporal cortex. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.
from Brain and Language
Neuromagnetic auditory steady-state responses to amplitude modulated sounds following dichotic or monaural presentation
Attending to carrier-frequency changes in the stimulation enhances the right-hemisphere ASSR amplitude under dichotic stimulation.
Superior temporal activation as a function of linguistic knowledge: Insights from deaf native signers who speechread
Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowledge irrespective of differences in language form. We used a novel cross-linguistic approach in two groups of volunteers who differed in their language experience. Using fMRI, we compared deaf native signers of British Sign Language (BSL), who were also proficient speechreaders of English (i.e., two languages) with hearing people who could speechread English, but knew no BSL (i.e., one language). Both groups were presented with BSL signs and silently spoken English words, and were required to respond to a signed or spoken target. The interaction of group and condition revealed activation in the superior temporal cortex, bilaterally, focused in the posterior superior temporal gyri (pSTG, BA 42/22). In hearing people, these regions were activated more by speech than by sign, but in deaf respondents they showed similar levels of activation for both language forms – suggesting that posterior superior temporal regions are highly sensitive to language knowledge irrespective of the mode of delivery of the stimulus material.
from Brain and Language
We report the case of patient M, who suffered unilateral damage to left posterior temporal and parietal cortex, brain regions typically associated with language processing. Language function has largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M’s unusual neuropsychological profile. We also examined the patient’s and controls’ neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient’s brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal, sounds, a pattern that stands in marked contrast to the controls’ data. This substantial reorganization of auditory processing likely supported the recovery of M’s speech processing.
In a single study, silent speechreading and signed language processing were investigated using fMRI. Deaf native signers of British Sign Language (BSL) who were also proficient speechreaders of English were the focus of the research. Separate analyses contrasted different aspects of the data. First, we found that the left superior temporal cortex, including auditory regions, was more strongly activated in the brains of deaf than of hearing participants when processing silently spoken (speechread) word lists. Second, we found that within the signed language, cortical activation patterns reflected the presence and type of mouth action that accompanied the manual sign. Signed items that incorporated oral as well as manual actions were distinguished from signs using only manual actions, and signs that used speechlike oral actions could be differentiated from those that did not. Thus, whether in speechreading or in sign language processing, speechlike mouth actions differentially activated regions of the superior temporal lobe that are considered auditory association cortex in hearing people.
One inference is that oral actions that are speechlike may have preferential access to ‘auditory speech’ parts of the left superior temporal cortex in deaf people. This could occur not only when deaf people were speechreading, but also when they were processing a signed language. For the deaf child, it is likely that observation of speech helps to construct and to constrain the parameters of spoken language acquisition. This has implications for programmes of intervention and therapy for cochlear implantation.
from the International Journal of Audiology