Blog Archives

Evaluating hemispheric divisions in processing fixated words: The evidence from Arabic

Some studies have claimed that, when fixating a word, a precise split in foveal processing produces substantial effects on word recognition because all letters to the left and right of fixation project to different, contralateral hemispheres. Recently in this Journal, Jordan, Paterson, Kurtev, and Xu (2010, Cortex, 46, 298–309) evaluated this claim using precisely-controlled procedures of fixation and stimulus presentation and found no evidence of split-foveal processing. However, in line with other research in this area, these findings were obtained using a Latinate alphabetic language (in this case English) which may induce specific effects on performance. Consequently, here we report a further study which used stimuli from a fundamentally different, non-Latinate alphabetic language (Arabic) with characteristics better suited to revealing effects of split-foveal processing, if they exist. Participants made lexical decisions to five-letter Arabic words (and nonwords) when fixations were made immediately to the right (location 1) or left (location 6) of each stimulus, or at one of the four possible inter-letter locations (locations 2–5). Fixation location was carefully controlled using an eye-tracker linked to a fixation-contingent display and all stimuli were presented within foveal vision to avoid confounding influences of extrafoveal projections. Performance was equally poorest when fixating locations 1 and 6 (i.e., when words were shown entirely to either the left or right of fixation), equally intermediate for locations 2 and 5, and equally superior for locations 3 and 4 (i.e., the centre of words). Moreover, additional, word-specific analyses showed no evidence of the effects of fixation location on individual word recognition also predicted by split-foveal processing. 
These findings from a non-Latinate language complement those reported previously for English to provide further evidence that while fixation location influences word recognition, these influences occur with no functional division in hemispheric processing at the point of fixation.

from Cortex


Evidence for right hemisphere phonology in a backward masking task

The extent to which orthographic and phonological processes are available during the initial moments of word recognition within each hemisphere is underspecified, particularly for the right hemisphere. Few studies have investigated whether each hemisphere uses orthography and phonology under constraints that restrict the viewing time of words and reduce overt phonological demands. The current study used backward masking in the divided visual field paradigm to explore hemispheric differences in the availability of orthographic and phonological word recognition processes. SOAs of 20 ms and 60 ms were used to track the time course of how these processes develop during pre-lexical moments of word recognition. Nonword masks varied in similarity to the target words such that there were four types: orthographically and phonologically similar, orthographically but not phonologically similar, phonologically but not orthographically similar, and unrelated. The results showed the left hemisphere has access to both orthography and phonology early in the word recognition process. With more time to process the stimulus, the left hemisphere is able to use phonology, which benefits word recognition to a larger extent than orthography. The right hemisphere also demonstrates access to both orthography and phonology in the initial moments of word recognition; however, orthographic similarity improves word recognition to a greater extent than phonological similarity.

from Brain and Language

Cues and cue interactions in segmenting words in fluent speech

Fluent speech does not contain obvious breaks at word boundaries, yet there are a number of cues that listeners can use to help them segment the speech stream. Most of these cues have been investigated in isolation from one another. In previous work, Norris, McQueen, Cutler, and Butterfield (1997) suggested that listeners use a Possible Word Constraint (PWC) when segmenting fluent speech into individual words. This constraint limits the word recognition system to consider only those parsings that could conceivably be words in the language (that is, those that do not strand illegal sequences). The present paper examines how this constraint interacts with other cues to segmentation, such as junctural and allophonic cues and neighborhood probabilities. Segmentation was influenced both by the PWC and by the presence of acoustic cues to juncture, such as the acoustic results of a speaker’s intention to produce a particular phoneme as the end of one syllable vs. as the start of another (vuff-apple vs. vuh-fapple). In contrast, segmentation was not affected by the legality of a syllable-final vowel (tense vs. lax), or by the similarity of a sequence to words. This suggests that acoustic cues in the signal play a far larger role in segmentation than do sources of bias from the lexicon, and that probabilistic lexical information from the lexicon (such as neighborhood information) is unlikely to be used in the process of word segmentation.

from the Journal of Memory and Language

Communication between the cerebral hemispheres in dyslexic and skilled adult readers

It has often been suggested that problems of communication between the cerebral hemispheres are part of the profile of impairments in dyslexia. Henderson, Barca and Ellis (2007) obtained evidence in support of that suggestion in a study which compared the “bilateral advantage” in adult dyslexics and good readers. The term “bilateral advantage” refers to the fact that if two copies of the same word are presented very briefly, one to the left of a central fixation point and one to the right, good adult readers identify the word more efficiently than if only a single copy is presented to the left or right of fixation. The bilateral advantage is thought to depend on effective and rapid communication of visual information between the hemispheres across the corpus callosum. Henderson et al. (2007) found that adult dyslexics do not show a bilateral advantage and concluded that interhemispheric communication of visual information about words is indeed impaired in adult dyslexics. Additional experiments provided further insights into the precise nature of the dyslexic deficit. Those findings, and the wider literature on interhemispheric communication and the bilateral advantage, are discussed. The possible consequences of impaired callosal transfer of visual information for reading are also considered.

from Revista de Logopedia, Foniatría y Audiología

Interplay between morphology and frequency in lexical access: The case of the base frequency effect

A major issue in lexical processing concerns storage and access of lexical items. Here we make use of the base frequency effect to examine this. Specifically, reaction time to morphologically complex words (words made up of base and suffix, e.g., agree+able) typically reflects frequency of the base element (i.e., total frequency of all words in which agree appears) rather than surface word frequency (i.e., frequency of agreeable itself). We term these complex words decomposable. However, a class of words termed whole-word do not show such sensitivity to base frequency (e.g., serenity).

Using an event-related fMRI design, we exploited the fact that processing low-frequency words increases BOLD activity relative to high-frequency ones, and examined effects of base frequency on brain activity for decomposable and whole-word items. Morphologically complex words, half with high and half with low base frequency, were compared to matched high- and low-frequency simple monomorphemic words using a lexical decision task.

Morphologically complex words increased activation in left inferior frontal and left superior temporal cortices versus simple words. The only area to mirror the behavioral distinction between decomposable and whole-word types was the thalamus. Surprisingly, most frequency-sensitive areas failed to show base frequency effects. This variety of responses to frequency and word type across brain areas supports an integrative view of multiple variables during lexical access, rather than a dichotomy between memory-based access and on-line computation. Lexical access appears best captured as interplay of several neural processes with different sensitivities to various linguistic factors including frequency and morphological complexity.

from Brain Research

When deaf signers read English: Do written words activate their sign translations?

Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar, whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.

from Cognition

Spoken Word Recognition in School-Age Children With SLI: Semantic, Phonological, and Repetition Priming

Conclusions: Although children with SLI have priming mechanisms similar to those of their age-matched peers, the absence of semantic and phonological priming suggests that these connections are not strong enough by themselves to yield priming effects. These findings are discussed in the context of semantic and phonological priming, representation, and generalized slowing.

from the Journal of Speech, Language, and Hearing Research

An examination of orthographic and phonological processing using the task-choice procedure

The task-choice procedure provides a way of assessing whether stimuli are processed immediately upon presentation and in parallel with other cognitive operations. In this procedure, the task changes on a trial-by-trial basis and the cue informing participants about the task appears either before or simultaneously with the target, which is either degraded or clear. Of interest is whether the effect of stimulus clarity will disappear when the cue is presented simultaneously with the target, suggesting capacity-free processing, or whether the effect of stimulus clarity will remain, suggesting target processing is delayed. Besner and Care developed this procedure using nonword targets and found that phonological information was not extracted in parallel with deciphering the task cue. The current experiment examined whether phonological and orthographic information could be extracted from word and nonword stimuli in a capacity-free manner. Results indicate that in both tasks some processing does occur in a capacity-free manner when words are used but not when nonwords are used. These data may be consistent with interactive activation models which posit top-down lexical connections that facilitate the extraction of sublexical codes.

from Language and Cognitive Processes

Spanish/English Bilingual Listeners on Clinical Word Recognition Tests: What to Expect and How to Predict

A Spanish word recognition test would likely yield more favorable results for S/E bilingual listeners who were Spanish-dominant or who acquired English at 10 years of age or older. It may be necessary for listeners who acquired English at 7–10 years of age to be evaluated in both English and Spanish.

from the Journal of Speech, Language, and Hearing Research

Faces are special but not too special: Spared face recognition in amnesia is based on familiarity

Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results suggest that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. 
The findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia, and that different types of materials may depend more or less on recollection and familiarity.

from Neuropsychologia

Impaired word recognition in Alzheimer’s disease: the role of age of acquisition

Studies of word production in patients with Alzheimer’s disease have identified the age of acquisition of words as an important predictor of retention or loss, with early acquired words remaining accessible for longer than later acquired words. If, as proposed by current theories, effects of age of acquisition reflect the involvement of semantic representations in task performance, then some aspects of word recognition in patients with Alzheimer’s disease should also be better for early than later acquired words. We employed a version of the lexical decision task which we term the lexical selection task. This required participants to indicate which of four items on a page was a real word (the three ‘foils’ being orthographically plausible nonwords). Twenty-two patients with probable Alzheimer’s disease were compared with an equal number of matched controls. The controls made few errors on the test, demonstrating that they were cognitively intact, and that the words were familiar to participants of their age and level of education. The Alzheimer patients were impaired overall, and recognized fewer late than early acquired words correctly. Performance of the Alzheimer patients on the lexical selection task correlated significantly with their scores on the Mini Mental State Examination. Word recognition becomes impaired as Alzheimer’s disease progresses, at which point effects of age of acquisition can be observed on the accuracy of performance.

from Neuropsychologia

Potent prosody: Comparing the effects of distal prosody, proximal prosody, and semantic context on word segmentation

Recent work shows that word segmentation is influenced by distal prosodic characteristics of the input several syllables from the segmentation point (Dilley & McAuley, 2008). Here, participants heard eight-syllable sequences with a lexically ambiguous four-syllable ending (e.g., crisis turnip vs. cry sister nip). The prosodic characteristics of the initial five syllables were resynthesized in a manner predicted to favor parsing of the final syllables as either a monosyllabic or a disyllabic word; the acoustic characteristics of the final three syllables were held constant. Experiments 1a–c replicated earlier results showing that utterance-initial prosody influences segmentation utterance-finally, even when lexical content is removed through low-pass filtering, and even when an on-line cross-modal paradigm is used. Experiments 2 and 3 pitted distal prosody against, respectively, distal semantic context and prosodic attributes of the test words themselves. Although these factors jointly affected which words participants heard, distal prosody remained an extremely robust segmentation cue. These findings suggest that distal prosody is a powerful factor for consideration in models of word segmentation and lexical access.

from the Journal of Memory and Language

The mechanisms underlying the interhemispheric integration of information in foveal word recognition: Evidence for transcortical inhibition

Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex into two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: a bilateral projection of visual input in foveal vision, or interhemispheric inhibition and integration as proposed by the SERIOL model of visual word recognition. Experiment 1 made use of pairs of words and nonwords with a length of four letters each. Participants had to name the word and ignore the nonword. The visual field in which the word was presented and the distance between the word and the nonword were manipulated. The results showed that the typical right visual field advantage was observed only when the word and the nonword were clearly separated. When the distance between them became smaller, the right visual field advantage turned into a left visual field advantage, in line with the interhemispheric inhibition mechanism postulated by the SERIOL model. Experiment 2, using five-letter stimuli, confirmed that this result was not due to the eccentricity of the word relative to the fixation location but to the distance between the word and the nonword.

from Brain and Language