Monthly Archives: October 2009
The recognition that contentful universals are rare and often “banal” does not undermine the fact that most non-universal but recurring patterns of language are amenable to explanation. These patterns are sensible, motivated solutions to interacting and often conflicting factors. As Evans & Levinson’s (E&L’s) article implies, linguistics would be well served to move beyond the essentialist bias that seeks universal, innate, unchanging categories with rigid boundaries.
Evans & Levinson’s (E&L’s) major point is that human languages are intriguingly diverse rather than (like animal communication systems) uniform within the species. This does not establish a “myth” about language universals, or advance the ill-framed pseudo-debate over universal grammar. The target article does, however, repeat a troublesome myth about Fitch and Hauser’s (2004) work on pattern learning in cotton-top tamarins.
Evans & Levinson (E&L) focus on differences between languages at a superficial level, rather than examining common processes. Their emphasis on trivial details conceals uniform design features and universally shared strategies. Lexical category distinctions between nouns and verbs are probably universal. Non-local dependencies are a general property of languages, not merely non-configurational languages. Even the latter class exhibits constituency.
This commentary argues that Evans & Levinson (E&L) should expand their two-track model to a three-track model in which biological and cultural evolution interact with the evolution of an individual’s language repertoires in ontogeny. It also comments on the relevance of the argument from the poverty of the stimulus and offers a caveat, based on analogous issues in biology, on the metaphor of language as a container, whether of meanings or of other content.
I present the so-called Verb-Object Constraint as a serious proposal for a true linguistic universal. It provides an example of the kind of abstraction in linguistic analysis that seems warranted, of how different languages can confirm such a universal in different ways, and why approaches that avoid all abstractness miss important linguistic generalizations.
Understanding the universal aspects of human language structure requires comparison at multiple levels of analysis. While Evans & Levinson (E&L) focus mostly on substantive variation in language, equally revealing insights can come from studying formal universals. I first discuss how artificial grammar experiments can test universal preferences for certain types of abstract phonological generalizations over others. I then discuss moraic onsets in the language Arrernte, and how its apparent substantive variation ultimately rests on a formal universal regarding syllable-weight sensitivity.
Conflation of our unique human endowment for language with innate, so-called universal, grammar has banished language from its biological home. The facts reviewed by Evans & Levinson (E&L) fit the biology of cultural transmission. My commentary highlights our dedicated learning capacity for vocal production learning as the form of our language endowment compatible with those facts.
Evans & Levinson (E&L) perform a major service for cognitive science. The assumption of Chomskyan generative linguistics – that there are absolute unrestricted universals of grammatical structure – is empirically untenable. However, E&L are too reluctant to abandon word classes and grammatical relations in syntax. Also, a cognitive scientist can already draw on a substantial linguistics literature on variationist, evolutionary models of language.
Conditional universals have always interested linguists more than unrestricted universals, which are often impossible to demonstrate empirically because categories cannot be defined in a cross-linguistically meaningful way. But deep conditional dependencies have not been confirmed by more recent empirical research, and those universals with solid empirical support mostly relate to scalar patterns that can plausibly be related to processing cost.
Modern linguistics has highlighted the fundamental invariance of human language: A rich invariant structure has emerged from comparative studies nourished by sophisticated formal models; languages also differ along important dimensions, but variation is constrained in severe and systematic ways. I illustrate this research direction in the domains of island constraints, word order restrictions, and the expression of referential dependencies. Both language invariance and language variability within systematic limits are highly relevant for the cognitive sciences.
Evans & Levinson (E&L) argue that language universals are a myth. Christiansen and Chater (2008) have recently suggested that innate universal grammar is also a myth. This commentary explores the connection between these two theses, and draws wider implications for the cognitive science of language.
Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world’s 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
Linguistic diversity then becomes the crucial datum for cognitive science: we are the only species with a communication system that is fundamentally variable at all levels. Recognizing the true extent of structural diversity in human language opens up exciting new research directions for cognitive scientists, offering thousands of different natural experiments given by different languages, with new opportunities for dialogue with biological paradigms concerned with change and diversity, and confronting us with the extraordinary plasticity of the highest human skills.
Converging findings from English, Mandarin, and other languages suggest that observed “universals” may be algorithmic. First, computational principles behind recently developed algorithms that acquire productive constructions from raw texts or transcribed child-directed speech impose family resemblance on learnable languages. Second, child-directed speech is particularly rich in statistical (and social) cues that facilitate learning of certain types of structures.
While endorsing Evans & Levinson’s (E&L’s) call for rigorous documentation of variation, we defend the idea of Universal Grammar as a toolkit of language acquisition mechanisms. The authors exaggerate diversity by ignoring the space of conceivable but nonexistent languages, trivializing major design universals, conflating quantitative with qualitative variation, and assuming that the utility of a linguistic feature suffices to explain how children acquire it.
Evans & Levinson (E&L) claim Kiowa number as a prime example of the semantically unexpected, threatening both Universal Grammar and Linguistic Universals. This commentary, besides correcting factual errors, shows that the primitives required for Kiowa also explain two unrelated semantically unexpected patterns and derive two robust Linguistic Universals. Consequently, such apparent exceptionality argues strongly for Universal Grammar and against E&L.