While the vast majority of linguistic processes apply locally, consonant harmony appears to be an exception. In this phonological process, consonants must share the same value of a phonological feature, such as secondary place of articulation. In sibilant harmony, [s] and [ʃ] (‘sh’) alternate such that if a word contains the sound [ʃ], all [s] sounds become [ʃ]. This agreement can hold at first-order or at second-order non-local distance. In the first-order case, no consonants intervene between the two sibilants (e.g., [pisasu], [piʃaʃu]). In the second-order case, a consonant may intervene (e.g., [sipasu], [ʃipaʃu]). The fact that some languages allow second-order non-local agreement of consonant features has led some to question whether locality constraints apply to consonant harmony at all. This paper presents the results of two artificial grammar learning experiments that demonstrate the privileged role of locality constraints, even in patterns that allow second-order non-local interactions. In Experiment 1, we show that learners do not extend first-order non-local relationships in consonant harmony to second-order non-local relationships. In Experiment 2, we show that learners will extend a consonant harmony pattern with second-order long-distance relationships to one with first-order long-distance relationships. Because second-order non-local application implies first-order non-local application, but not vice versa, we establish that locality constraints are privileged even in consonant harmony.
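The harmony rule and the first-order/second-order distinction described above can be made concrete with a short sketch. This toy code is illustrative only (it is not from the paper); it uses the ASCII letter 'S' to stand for [ʃ] and assumes a five-vowel inventory:

```python
# Toy illustration of sibilant harmony. 'S' stands for the sound [S] (IPA esh);
# all other consonants and the vowels a, e, i, o, u are written as themselves.

VOWELS = set("aeiou")

def harmonize(word: str) -> str:
    """If the word contains [S] ('S'), every [s] assimilates to [S]."""
    return word.replace("s", "S") if "S" in word else word

def order(word: str) -> str:
    """Classify the distance between the first two sibilants:
    'first' if only vowels intervene, 'second' if a consonant does."""
    positions = [i for i, c in enumerate(word) if c in "sS"]
    if len(positions) < 2:
        return "none"
    between = word[positions[0] + 1 : positions[1]]
    return "second" if any(c not in VOWELS for c in between) else "first"

print(harmonize("piSasu"))  # -> piSaSu (first-order: only a vowel intervenes)
print(harmonize("Sipasu"))  # -> SipaSu (second-order: the consonant [p] intervenes)
```

In [piSaSu] only the vowel [a] separates the sibilants (first-order), whereas in [SipaSu] the consonant [p] intervenes (second-order), matching the paper's examples.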
from the Journal of Memory and Language
It is often assumed that language is supported by domain-specific neural mechanisms, in part based on neuropsychological data from aphasia. If, however, language relies on domain-general mechanisms, it would be expected that deficits in non-linguistic cognitive processing should co-occur with aphasia. In this paper, we report a study of sequential learning by agrammatic aphasic patients and control participants matched for age, socio-economic status and non-verbal intelligence. Participants were first exposed to strings derived from an artificial grammar after which they were asked to classify a set of new strings, some of which were generated by the same grammar whereas others were not. Although both groups of participants performed well in the training phase of the experiment, only the control participants were able to classify novel test items better than chance. The results show that breakdown of language in agrammatic aphasia is associated with an impairment in artificial grammar learning, indicating damage to domain-general neural mechanisms subserving both language and sequential learning.
It is commonly held that implicit knowledge expresses itself as fluency. A perceptual clarification task was used to examine the relationship between perceptual processing fluency, subjective familiarity, and grammaticality judgments in a task frequently used to produce implicit knowledge: artificial grammar learning (AGL). Four experiments examined the effects of naturally occurring and manipulated differences in perceptual fluency, where decisions were based on either a brief exposure to test strings (during the clarification task only) or normal exposure. When perceptual fluency was not manipulated, it was weakly related to familiarity and grammaticality judgments but unrelated to grammatical status, and hence not a source of accuracy. Counterbalanced grammatical and ungrammatical strings did not differ in perceptual fluency but differed substantially in subjective familiarity. When fluency was manipulated, faster-clarifying strings were rated as more familiar and were more often endorsed as grammatical, but only when exposure was brief. The results indicate that subjective familiarity, derived from a source other than perceptual fluency, is the primary basis for accuracy in AGL. Perceptual fluency is a dumb heuristic that influences responding only in the absence of actual implicit knowledge.