Cognitive models of reading all assume some division of labor among processing pathways in mapping among print, sound, and meaning. Many studies of the neural basis of reading have used task manipulations such as rhyme or synonym judgment to tap these processes independently. Here we take advantage of specific properties of the Chinese writing system to test how the differential availability of sublexical information about sound and meaning, as well as the orthographic structure of characters, pseudo-characters, and “artificial” control stimuli, influences brain activation in the context of the same one-back task. Analyses combine a data-driven approach that identifies temporally coherent patterns of activity over the course of the entire experiment with hypothesis testing based on the correlation of these patterns with predictors for different stimulus classes. The results reveal a large network of task-related activity. Both the extent of this network and activity in regions commonly observed in studies of Chinese reading appear to be related to task difficulty. Other regions, including temporo-parietal cortex, were sensitive to particular sublexical functional units in mapping among print, sound, and meaning.
from Brain and Language
There is an ongoing debate in cognitive psychology as to whether syllables should be regarded as functional units not only for speech perception and production, but also for silent reading, or visual word recognition. For the present study, we used a perceptual identification task in which single disyllabic five-letter German words were briefly presented to the participants for 50 or 60 milliseconds. The percentage of errors in identifying these stimuli was the dependent variable. During presentation we manipulated the viewing position for these items, so that the initial fixation for each repeatedly presented word varied systematically across all five letter positions. Typically, for such manipulations, word recognition is best when the initial fixation is at a position slightly left of the word center — a finding referred to as the optimal viewing position effect. We found that the shape of the optimal viewing position function is sensitive to syllabic structure: the optimal viewing position shifted one letter position to the right with increasing initial syllable length (two vs. three letters in our stimulus material). This finding suggests that efficient reading benefits from very early processing of syllabic information. It corroborates other recent empirical findings suggesting that orthographic word forms are automatically segmented into their syllabic constituents during silent reading as well.
Reaction times in lexical decision are more sensitive to a word's length and orthographic-neighborhood density when the stimulus is presented to the left visual field (LVF) than to the right visual field (RVF). We claim that the length effect is equivalent to the neighborhood effect, and propose a novel explanation of why the LVF, but not the RVF, is sensitive to density, based on different firing rates of abstract-letter representations encoding letters falling in the LVF versus RVF. We support this proposal with a large-scale implemented model of lexical decision utilizing spiking units, which provides a reasonable fit to the data from the English Lexicon Project under simulated central presentation, while replicating the observed hemifield asymmetries under simulated lateralized presentation.
from Brain and Language
This investigation moves beyond the traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO’s ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO’s performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, and that these constraints are exaggerated when responses are derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.
from the Journal of Neurolinguistics
Effects of visual complexity and sublexical information in the occipitotemporal cortex in the reading of Chinese phonograms: A single-trial analysis with MEG
We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170 responses in functionally defined regions of interest. Both Chinese and English participants’ M100 responses tend to increase in response to characters with a high number of strokes. English participants’ M170 responses show a posterior distribution and only reflect the effect of the visual complexity of characters. On the other hand, the Chinese participants’ left hemisphere M170 is increased when reading characters with a high number of strokes, and their right hemisphere M170 is increased when reading characters with low combinability of semantic radicals. Our results suggest that expertise with words and the decomposition of word forms underlie processing in the left and right occipitotemporal regions in the reading of Chinese characters by Chinese speakers.
from Brain and Language
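The single-trial regression logic described in the abstract above can be sketched in miniature. The snippet below is a toy stand-in, not the paper's pipeline: the data are simulated, all variable names and parameter values are invented, and per-participant baselines are absorbed by simple within-subject centering rather than a full mixed-effects fit.

```python
# Toy sketch: regress a simulated trial-level M170 amplitude on stroke
# count and radical combinability, removing subject-level offsets by
# within-subject centering (a simple stand-in for a random-intercept
# mixed-effects model). All values here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_trials = 12, 80
subj = np.repeat(np.arange(n_subj), n_trials)
strokes = rng.integers(2, 20, size=n_subj * n_trials).astype(float)
combinability = rng.integers(1, 60, size=n_subj * n_trials).astype(float)
subj_offset = rng.normal(0, 2.0, size=n_subj)[subj]   # per-participant baseline
m170 = (10 + 0.4 * strokes - 0.05 * combinability
        + subj_offset + rng.normal(0, 1.5, size=n_subj * n_trials))

def demean_within(x, groups):
    """Subtract each participant's mean, removing subject-level offsets."""
    out = x.copy()
    for g in np.unique(groups):
        out[groups == g] -= x[groups == g].mean()
    return out

X = np.column_stack([demean_within(strokes, subj),
                     demean_within(combinability, subj)])
y = demean_within(m170, subj)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimated effects of strokes and combinability
```

With the subject offsets removed, the least-squares slopes recover the simulated effects: a positive stroke-count effect and a negative combinability effect, mirroring the direction of the reported M170 results.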
Research using the masked priming paradigm has suggested that there is a form of morphological decomposition that is robust to orthographic alterations, even when the words are not semantically related (e.g., badger/badge). In contrast, delayed priming is influenced by semantic relatedness, but it is not clear whether it can survive orthographic changes. In this paper, we ask whether morpho-orthographic segmentation breaks down in the presence of the extensive orthographic changes found in Greek morphology (orthographic opacity). The effects of semantic relatedness and orthographic opacity are examined in masked priming (Experiment 1) and delayed priming (Experiment 2). Significant masked priming was observed for pairs that shared orthography, irrespective of whether they shared meaning (mania/mana, “mania/mother”). Delayed priming was observed for pairs that were semantically related, irrespective of orthographic opacity (poto/pino, “drink/I drink”). The results are discussed in terms of theories of morphological processing in visual word recognition.
We used rapid event-related fMRI to explore factors modulating the activation of orthographic and phonological representations of print during a visual lexical decision task. Stimuli included homophonous word and nonword stimuli (MAID, BRANE), which have been shown behaviorally to produce longer response times due to phonological mediation effects. We also manipulated participants’ reliance on orthography by varying the extent to which nonword foils were orthographically typical (wordlike context) or atypical (non-wordlike context) of real words. Key findings showed that reading low-frequency homophones in the wordlike context produced activation in regions associated with phonological processing (i.e., the opercular region of the left inferior frontal gyrus [IFG; BA 44]), the integration of orthography and phonology (i.e., the inferior parietal lobule [IPL]), and lexicosemantic processing (i.e., the left middle temporal gyrus [MTG]). Pseudohomophones in the wordlike context produced greater activity relative to other nonword trials in regions engaged during both phonological processing (i.e., left IFG/precentral gyrus [BA 6/9]) and semantic processing (i.e., the triangular region of the left IFG [BA 47]). Homophone effects in the non-wordlike context were primarily isolated to medial extrastriate regions, hypothesized to be involved in low-level visual processing rather than reading-related processing per se. These findings demonstrate that the degree to which phonological and orthographic representations of print are activated depends not only on homophony, but also on the word-likeness of nonword stimuli. Implications for models of visual word recognition are discussed.
from Brain Research
Learning to assign lexical stress during reading aloud: Corpus, behavioral, and computational investigations
Models of reading aloud have tended to focus on the mapping between graphemes and phonemes in monosyllables. Critical adaptations of these models are required when considering the reading of polysyllables, which constitute over 90% of word types in English. In this paper, we examined one such adaptation – the process of stress assignment in learning to read. We used a triangulation of corpus, behavioral, and computational modeling techniques. A corpus analysis of age-appropriate reading materials for children aged 5–12 years revealed that the beginnings and endings of English bisyllabic words are highly predictive of stress position, but that endings are more reliable cues in texts for older children. Children aged 5–12 years showed sensitivity to both beginnings and endings when reading nonwords, but older children relied more on endings when determining stress assignment. A computational model that learned to map orthography onto stress showed the same age-related trajectory as the children when assigning stress to nonwords. These results reflect the gradual process of learning the statistical properties of written input and provide key constraints for adequate models of reading aloud.
from the Journal of Memory and Language
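The corpus-analysis step described in the abstract above can be illustrated with a toy computation of how reliably a word ending cues stress position. The mini-corpus, its stress labels, and the three-letter ending window are all invented for illustration; the actual study used age-graded corpora of children's reading materials.

```python
# Toy sketch: estimate how reliably word endings cue stress position in
# bisyllabic words. Reliability of an ending = proportion of words
# sharing it that take the ending's majority stress pattern.
# The mini-corpus and stress labels are invented.
from collections import defaultdict

# (word, stressed syllable: 1 = first, 2 = second) -- hypothetical entries
corpus = [
    ("table", 1), ("cable", 1), ("stable", 1),
    ("delete", 2), ("compete", 2), ("athlete", 1),
    ("happen", 1), ("open", 1), ("begin", 2),
]

by_ending = defaultdict(list)
for word, stress in corpus:
    by_ending[word[-3:]].append(stress)

reliability = {}
for ending, stresses in by_ending.items():
    majority = max(set(stresses), key=stresses.count)
    reliability[ending] = stresses.count(majority) / len(stresses)

print(reliability["ble"], reliability["ete"])
```

In this toy sample, "ble" is a perfectly reliable cue to first-syllable stress (3 of 3 words), while "ete" is only partially reliable (2 of 3), which is the kind of graded cue validity the corpus analysis quantifies.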
Early on during word recognition, letter positions are not accurately coded. Evidence for this comes from transposed-letter (TL) priming effects, in which letter strings generated by transposing two adjacent letters (e.g., jugde) produce large priming effects, larger than those produced by primes with the letters replaced in the corresponding positions (e.g., junpe). Dominant accounts of the TL priming effect, such as the Open Bigrams model (Grainger & van Heuven, 2003; Whitney & Cornelissen, 2008) and the SOLAR model (Davis & Bowers, 2006), explain this effect by proposing a level of representation above individual letter identities in which letter position is not coded accurately. An alternative is to assume that position coding itself is noisy (e.g., Gomez, Ratcliff, & Perea, 2008). We propose an extension to the Bayesian Reader (Norris, 2006) that incorporates letter position noise during sampling from the perceptual input. This model predicts “leakage” of letter identity to nearby positions, which is not expected from models incorporating alternative position-coding schemes. We report three masked priming experiments testing predictions from this model.
from the Journal of Memory and Language
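The core idea of noisy position coding can be illustrated outside the full Bayesian Reader: if each perceptual sample registers a letter at its true position plus Gaussian noise, identity information "leaks" into adjacent slots, so a transposed prime such as jugde remains a close match to judge. The simulation below is a toy sketch, not the proposed model, and the noise parameter is invented.

```python
# Toy illustration (not the Bayesian Reader itself) of position noise:
# each letter is perceived at its true slot plus Gaussian noise, rounded
# to the nearest slot, so identity mass spreads to neighbouring slots.
import numpy as np

rng = np.random.default_rng(1)
POS_SD = 0.6  # standard deviation of position noise, in letter slots (invented)

def sample_percept(word, n_samples=1000):
    """Accumulate perceived letter identities per slot over many samples."""
    counts = np.zeros((len(word), 26))
    for _ in range(n_samples):
        for true_pos, letter in enumerate(word):
            perceived = int(round(true_pos + rng.normal(0, POS_SD)))
            if 0 <= perceived < len(word):
                counts[perceived, ord(letter) - ord("a")] += 1
    return counts / counts.sum(axis=1, keepdims=True)

p = sample_percept("judge")
# In "judge", 'd' sits at slot 2 and 'g' at slot 3; with position noise,
# 'g' leaks into slot 2, so the transposed prime "jugde" stays a good match.
leak = p[2, ord("g") - ord("a")]
print(leak)
```

The leaked identity carries substantial but minority probability mass in the neighbouring slot, which is exactly the asymmetry that makes TL primes (jugde) more effective than substitution primes (junpe) under this kind of scheme.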
Consonants and Vowels Contribute Differently to Visual Word Recognition: ERPs of Relative Position Priming
This paper shows that the nature of letters—consonant versus vowel—modulates the process of letter position assignment during visual word recognition. We recorded event-related potentials while participants read words in a masked priming semantic categorization task. Half of the words included a vowel as the initial, third, and fifth letter (e.g., acero [steel]). The other half included a consonant as the initial, third, and fifth letter (e.g., farol [lantern]). Targets could be preceded 1) by their initial, third, and fifth letters (relative position; e.g., aeo—acero and frl—farol), 2) by 3 consonants or vowels that did not appear in the target word (control; e.g., iui—acero and tsb—farol), or 3) by the same words (identity; e.g., acero–acero and farol–farol). The results showed modulation in 2 time windows (175–250 and 350–450 ms). Relative position primes composed of consonants produced effects similar to the identity condition. These 2 differed from the unrelated control condition, which showed a larger negativity. In contrast, relative position primes composed of vowels produced effects similar to the unrelated control condition, and these 2 showed larger negativities as compared with the identity condition. This finding has important consequences for cracking the orthographic code and for developing computational models of visual word recognition.
from Cerebral Cortex
Individual differences in the joint effects of semantic priming and word frequency revealed by RT distributional analyses: The role of lexical integrity
Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has generally been assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants. Specifically, across two universities, additive effects of the two variables were observed in means, and in RT distributional analyses, in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge. These results are discussed with reference to Borowsky and Besner's multistage account [Borowsky, R., & Besner, D. (1993). Visual word recognition: A multistage activation model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 813–840] and Plaut and Booth's single-mechanism model [Plaut, D. C., & Booth, J. R. (2000). Individual and developmental differences in semantic priming: Empirical and computational support for a single-mechanism account of lexical processing. Psychological Review, 107, 786–823]. In general, the findings are also consistent with a flexible lexical processing system that optimizes performance based on processing fluency and task demands.
from the Journal of Memory and Language
The visual word recognition literature has been dominated by the study of monosyllabic words in factorial experiments, computational models, and megastudies. However, it is not yet clear whether the behavioral effects reported for monosyllabic words generalize reliably to multisyllabic words. Hierarchical regression techniques were used to examine the effects of standard variables (phonological onsets, stress pattern, length, orthographic N, phonological N, word frequency) and additional variables (number of syllables, feedforward and feedback phonological consistency, novel orthographic and phonological similarity measures, semantics) on the pronunciation and lexical decision latencies of 6115 monomorphemic multisyllabic words. These predictors accounted for 61.2% and 61.6% of the variance in pronunciation and lexical decision latencies, respectively, higher than the estimates reported by previous monosyllabic studies. The findings we report represent a well-specified set of benchmark phenomena for constraining nascent multisyllabic models of English word recognition.
from the Journal of Memory and Language
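The hierarchical-regression logic referred to above, entering blocks of predictors and examining the increment in variance explained, can be sketched as follows. The data are simulated rather than the study's 6115-item set, and the three stand-in predictors and their effect sizes are invented.

```python
# Sketch of hierarchical regression: fit nested OLS models and report
# R-squared as each block of predictors enters. Simulated data only.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
length = rng.normal(size=n)        # stand-in: word length (standardized)
freq = rng.normal(size=n)          # stand-in: log word frequency
consistency = rng.normal(size=n)   # stand-in: feedforward consistency
rt = 600 - 20 * freq + 10 * length - 8 * consistency + rng.normal(0, 30, n)

def r_squared(predictors, y):
    """Proportion of variance explained by an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_block1 = r_squared([length, freq], rt)
r2_block2 = r_squared([length, freq, consistency], rt)
print(r2_block1, r2_block2 - r2_block1)  # baseline R-squared and the increment
```

The increment in R-squared when the new block enters is the quantity of interest in such analyses; here it reflects the simulated consistency effect over and above length and frequency.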
Reading aloud: Qualitative differences in the relation between stimulus quality and word frequency as a function of context
Virtually all theories of visual word recognition assume (typically implicitly) that when a pathway is used, processing within that pathway always unfolds in the same way. This view is challenged by the observation that simple variations in list composition are associated with qualitative changes in performance. The present experiments demonstrate that when reading aloud, the joint effects of stimulus quality and word frequency on response time are driven by the presence/absence of nonwords in the list. Interacting effects of these factors are seen when only words appear in the experiment, whereas additive effects are seen when words and nonwords are randomly intermixed. One way to explain these and other data appeals to the distinction between cascaded processing (or interactive activation) on the one hand versus a thresholded mode of processing on the other, with contextual factors determining which mode of processing dominates.
Groups of Grade 3 children were tested on measures of word-level literacy and undertook tasks that required the ability to associate sounds with letter sequences and that involved visual, auditory, and phonological-processing skills. These groups came from different language backgrounds in which the language of instruction was Arabic, Chinese, English, Hungarian, or Portuguese. Similar measures were used across the groups, with tests adapted to be appropriate for the language of the children. Findings indicated that measures of decoding and phonological-processing skills were good predictors of word reading and spelling among Arabic- and English-speaking children, but were less able to predict variability in these same early literacy skills among Chinese- and Hungarian-speaking children, and were better at predicting variability in Portuguese word reading than spelling. Results are discussed with reference to the relative transparency of the script and issues of dyslexia assessment across languages. Overall, the findings argue for the need to take account of the features of the orthography used to represent a language when developing assessment procedures for that language, and suggest that assessment of word-level literacy skills and a phonological perspective on dyslexia may not be universally applicable across all language contexts.