Generics are statements such as “tigers are striped” and “ducks lay eggs”. They express general, though not universal or exceptionless, claims about kinds (Carlson & Pelletier, 1995). For example, the generic “ducks lay eggs” seems true even though many ducks (e.g., the males) do not lay eggs. The universally quantified version of the statement should be rejected, however: it is incorrect to say “all ducks lay eggs”, since many ducks do not lay eggs. We found that adults nonetheless often judged such universal statements true, despite knowing that only one gender had the relevant property (Experiment 1). The effect was not due to participants interpreting the universals as quantifying over subkinds, or as applying to only a subset of the kind (e.g., only the females) (Experiment 2), and it persisted even when people had judged that male ducks do not lay eggs only moments before (Experiment 3). It also persisted when people were presented with correct alternatives such as “some ducks do not lay eggs” (Experiment 4). Our findings reveal a robust generic overgeneralization effect, predicted by the hypothesis that generics express primitive, default generalizations.
from the Journal of Memory and Language
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not to any of a series of processing-related confounds. The time intervals, frequency bands, and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after the appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing and with distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon.
from Brain and Language
The modifier effect is the reduction in perceived likelihood of a generic property sentence when the head noun is modified. We investigated the prediction that the modifier effect would be stronger for mutable than for central properties, but found no evidence for this predicted interaction over the course of five experiments. However, Experiment 6, which provided a brief context for the modified concepts to lend them greater credibility, did reveal the predicted interaction. It is argued that the modifier effect arises primarily from a general lack of confidence in generic statements about the typical properties of unfamiliar concepts. Neither prototype nor classical models of concept combination receive support from the phenomenon.
from the Journal of Memory and Language
Recently, three accounts have emerged of the role of the anterior temporal lobes (ATLs) in semantic memory. One account claims that the ATLs are domain-general semantic hubs, another claims that they underlie knowledge of unique entities specifically, and yet another claims that they support social conceptual knowledge generally. Here, we review neuropsychological and neuroimaging studies that bear on these three accounts and offer suggestions for future research to elucidate the roles of the ATLs in semantic memory. (JINS, 2009, 15, 645–649.)
In this précis of our recent book, Semantic Cognition: A Parallel Distributed Processing Approach (Rogers & McClelland 2004), we present a parallel distributed processing theory of the acquisition, representation, and use of human semantic knowledge. The theory proposes that semantic abilities arise from the flow of activation among simple, neuron-like processing units, as governed by the strengths of interconnecting weights; and that acquisition of new semantic information involves the gradual adjustment of weights in the system in response to experience. These simple ideas explain a wide range of empirical phenomena from studies of categorization, lexical acquisition, and disordered semantic cognition. In this précis we focus on phenomena central to the reaction against similarity-based theories that arose in the 1980s and that subsequently motivated the “theory-theory” approach to semantic knowledge. Specifically, we consider (1) how concepts differentiate in early development, (2) why some groupings of items seem to form “good” or coherent categories while others do not, (3) why different properties seem central or important to different concepts, (4) why children and adults sometimes attest to beliefs that seem to contradict their direct experience, (5) how concepts reorganize between the ages of 4 and 10, and (6) the relationship between causal knowledge and semantic knowledge. The explanations our theory offers for these phenomena are illustrated with reference to a simple feed-forward connectionist model. The relationships between this simple model, the broader theory, and more general issues in cognitive science are discussed.