Cognitive Psychology

Published by Elsevier
Online ISSN: 1095-5623
Print ISSN: 0010-0285
Publications
Recent research has documented specific linkages between language and conceptual organization in the developing child. However, most of the evidence for these linkages derives from children who have made significant linguistic and conceptual advances. We therefore focus on the emergence of one particular linkage--the noun-category linkage--in infants at the early stages of lexical acquisition. We propose that when infants embark upon the process of lexical acquisition, they are initially biased to interpret a word applied to an object as referring to that object and to other members of its kind. We further propose that this initial expectation will become increasingly specific over development, as infants begin to distinguish among the grammatical categories as they are marked in their native language and assign them more specific types of meaning. To test this hypothesis, we conducted three experiments using a modified novelty-preference paradigm to reveal whether and how novel words influence object categorization in 12- to 13-month-old infants. The data reveal that a linkage between words and object categories emerges early enough to serve as a guide in infants' efforts to map words to meanings. Both nouns and adjectives focused infants' attention on object categories, particularly at the superordinate level. Further, infants' progress in early word learning was associated with their appreciation of this linkage between words and object categories. These results are interpreted within a developmental and cross-linguistic account of the emergence of linkages between linguistic and conceptual organization.
 
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows complete analytic solutions for choices between any number of alternatives. These solutions (and freely available computer code) make the model easy to apply to both binary and multiple choice situations. Using data from five previously published experiments, we demonstrate that the LBA model successfully accommodates empirical phenomena from binary and multiple choice tasks that have proven difficult for other theoretical accounts. Our results are encouraging in a field beset by the tradeoff between complexity and completeness.
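A minimal simulation sketch of the race just described (the parameter values, seed, and truncation of negative drift rates are illustrative assumptions, not the paper's estimates):

    import numpy as np

    def simulate_lba(n_trials, b=1.0, A=0.5, drift_means=(0.7, 0.3),
                     drift_sd=0.25, t0=0.2, seed=0):
        """Simulate choices and RTs from a linear ballistic accumulator.

        Each accumulator starts at a point drawn uniformly from [0, A]
        and rises linearly and deterministically toward the common
        threshold b at a rate drawn per trial from a normal
        distribution; the first accumulator to reach b gives the choice.
        """
        rng = np.random.default_rng(seed)
        n_acc = len(drift_means)
        starts = rng.uniform(0, A, size=(n_trials, n_acc))
        drifts = rng.normal(drift_means, drift_sd, size=(n_trials, n_acc))
        drifts = np.maximum(drifts, 1e-6)  # crude guard; the published model
                                           # treats negative rates differently
        times = (b - starts) / drifts      # linear rise: analytic finish time
        choices = times.argmin(axis=1)
        rts = times.min(axis=1) + t0       # add non-decision time t0
        return choices, rts

    choices, rts = simulate_lba(10_000)
    print((choices == 0).mean(), rts.mean())

Because activation grows linearly and deterministically, each accumulator's finishing time has a closed form, which is what makes complete analytic solutions for the model possible.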
 
Schematic drawing of the events shown in the first two and last two familiarization trials in the left-right condition of Experiment 1.
Schematic drawing of the transparent- and opaque-cover test events in the true-belief condition of Experiment 1.
Mean looking times at the test events in the false- and true-belief conditions of Experiment 1, in the false- and no-key conditions of Experiment 2, and in the ignorance condition of Experiment 3. Error bars represent standard errors.
Schematic drawing of the final phases of the correct- and incorrect-cover test events in the transparent- and opaque-covers ignorance conditions of Experiment 3.
Reports that infants in the second year of life can attribute false beliefs to others have all used a search paradigm in which an agent with a false belief about an object's location searches for the object. The present research asked whether 18-month-olds would still demonstrate false-belief understanding when tested with a novel non-search paradigm. An experimenter shook an object, demonstrating that it rattled, and then asked an agent, "Can you do it?" In response to this prompt, the agent selected one of two test objects. Infants realized that the agent could be led through inference (Experiment 1) or memory (Experiment 2) to hold a false belief about which of the two test objects rattled. These results suggest that 18-month-olds can attribute false beliefs about non-obvious properties to others, and can do so in a non-search paradigm. These and additional results (Experiment 3) help address several alternative interpretations of false-belief findings with infants.
 
Episodic memories for autobiographical events that happen in unique spatiotemporal contexts are central to defining who we are. Yet, before 2 years of age, children are unable to form or store episodic memories for recall later in life, a phenomenon known as infantile amnesia. Here, we studied the development of allocentric spatial memory, a fundamental component of episodic memory, in two versions of a real-world memory task requiring 18-month- to 5-year-old children to search for rewards hidden beneath cups distributed in an open-field arena. Whereas children 25 to 42 months old were not capable of discriminating three reward locations among 18 possible locations in the absence of local cues marking these locations, children older than 43 months found the reward locations reliably. These results support previous findings suggesting that allocentric spatial memory, if present, is only rudimentary in children under 3.5 years of age. However, when tested with only one reward location among four possible locations, children 25 to 39 months old found the reward reliably in the absence of local cues, whereas 18- to 23-month-olds did not. Our findings thus show that the ability to form a basic allocentric representation of the environment is present by 2 years of age, and its emergence coincides temporally with the offset of infantile amnesia. However, the ability of children to distinguish and remember closely related spatial locations improves from 2 to 3.5 years of age, a developmental period marked by persistent deficits in long-term episodic memory known as childhood amnesia. These findings support the hypothesis that the differential maturation of distinct hippocampal circuits contributes to the emergence of specific memory processes during early childhood.
 
The impact of four long-term knowledge variables on serial recall accuracy was investigated. Serial recall was tested for high and low frequency words and high and low phonotactic frequency nonwords in 2 groups: monolingual English speakers and French-English bilinguals. For both groups the recall advantage for words over nonwords reflected more fully correct recalls with fewer recall attempts that consisted of fragments of the target memory items (one or two of the three target phonemes recalled correctly); completely incorrect recalls were equivalent for the 2 list types. However, word frequency (for both groups), nonword phonotactic frequency (for the monolingual group), and language familiarity all influenced the proportions of completely incorrect recalls that were made. These results are not consistent with the view that long-term knowledge influences on immediate recall accuracy can be exclusively attributed to a redintegration process of the type specified in a multinomial processing tree model of immediate recall. The finding of a differential influence on completely incorrect recalls of these four long-term knowledge variables suggests instead that the beneficial effects of long-term knowledge on short-term recall accuracy are mediated by more than one mechanism.
 
The present research examined 2.5-month-old infants' reasoning about occlusion events. Three experiments investigated infants' ability to predict whether an object should remain continuously hidden or become temporarily visible when passing behind an occluder with an opening in its midsection. In Experiment 1, the infants were habituated to a short toy mouse that moved back and forth behind a screen. Next, the infants saw two test events that were identical to the habituation event except that a portion of the screen's midsection was removed to create a large window. In one event (high-window event), the window extended from the screen's upper edge; the mouse was shorter than the bottom of the window and thus did not become visible when passing behind the screen. In the other event (low-window event), the window extended from the screen's lower edge; although the mouse was shorter than the top of the window and hence should have become fully visible when passing behind the screen, it never appeared in the window. The infants tended to look equally at the high- and low-window events, suggesting that they were not surprised when the mouse failed to appear in the low window. However, positive results were obtained in Experiment 2 when the low-window event was modified: a portion of the screen above the window was removed so that the left and right sections of the screen were no longer connected (two-screens event). The infants looked reliably longer at the two-screens than at the high-window event. Together, the results of Experiments 1 and 2 suggested that, at 2.5 months of age, infants possess only very limited expectations about when objects should and should not be occluded. Specifically, infants expect objects (1) to become visible when passing between occluders and (2) to remain hidden when passing behind occluders, irrespective of whether these have openings extending from their upper or lower edges. Experiment 3 provided support for this interpretation. The implications of these findings for models of the origins and development of infants' knowledge about occlusion events are discussed.
 
A series of experiments was conducted in which a word initially appeared in parafoveal vision, followed by the subject's eye movement to the stimulus. During the eye movement, the initially displayed word was replaced by a word which the subject read. Under certain conditions, the prior parafoveal word facilitated naming the foveal word. Three alternative hypotheses were explored concerning the nature of the facilitation. The verbalization hypothesis suggests that information acquired from the parafoveal word permits the subject to begin to form the speech musculature properly for saying the word. The visual features integration hypothesis suggests that visual information obtained from the parafoveal word is integrated with foveal information after the saccade. The preliminary letter identification hypothesis suggests that some abstract code about the letters of the parafoveal word is stored and integrated with information available in the fovea after the saccade. The results of the experiments supported the latter hypothesis in that information about the beginning letters of words was facilitatory in the task. The other two hypotheses were disconfirmed by the results of the experiments.
 
The visual world contains more information than can be perceived in a single glance. Consequently, one's perceptual representation of the environment is built up via the integration of information across saccadic eye movements. The properties of transsaccadic integration were investigated in six experiments. Subjects viewed a random-dot pattern in one fixation, then judged whether a second dot pattern viewed in a subsequent fixation was identical to or different from the first. Interpattern interval, pattern complexity, and pattern displacement were varied in order to determine the duration, capacity, and representational format of transsaccadic memory. The experimental results indicated that transsaccadic memory is an undetailed, limited-capacity, long-lasting memory that is not strictly tied to absolute spatial position. In all these respects it is similar to, and perhaps identical with, visual short-term memory. The implications of these results for theories of perceptual stability across saccades are discussed.
 
Given that there are no spaces between words in Chinese, how words are segmented when reading is something of a mystery. Four Chinese characters, which either constituted one 4-character word or two 2-character words, were shown briefly to subjects. Subjects were quite accurate in reporting the 4-character word, but could usually only report the first 2-character word, demonstrating that word segmentation influences character recognition. The results suggest that even with these simple 4-character strings, there is an element of seriality in reading Chinese words: processing is initially focused at least to some extent on the first word. We also found that the processing of characters that are not consistent with the context is inhibited, suggesting inhibition from word representations to character representations. A simple model of Chinese word segmentation and word recognition is presented to account for the data.
 
The current research investigates infants' perception of a novel object from a category that is familiar to young infants: key rings. We ask whether experiences obtained outside the lab would allow young infants to parse the visible portions of a partly occluded key ring display into a single unit, presumably as a result of having categorized it as a key ring. This categorization was marked by infants' perception of the keys and ring as a single unit that should move together, despite their attribute differences. We showed infants a novel key ring display in which the keys and ring moved together as one rigid unit (Move-together event) or the ring moved but the keys remained stationary throughout the event (Move-apart event). Our results showed that 8.5-month-old infants perceived the keys and ring as connected despite their attribute differences, and that their perception of object unity was eliminated as the distinctive attributes of the key ring were removed. When all of the distinctive attributes of the key ring were removed, the 8.5-month-old infants perceived the display as two separate units, which is how younger infants (7-month-olds) perceived the key ring display with all its distinctive attributes unaltered. These results suggest that on the basis of extensive experience with an object category, infants come to identify novel members of that category and expect them to possess the attributes typical of that category.
 
Studies of speech perception first revealed a surprising discontinuity in the way in which stimulus values on a physical continuum are perceived. Data which demonstrate the effect in nonspeech modes have challenged the contention that categorical perception is a hallmark of the speech mode, but the psychophysical models that have been proposed have not resolved the issues raised by empirical findings. This study provides data from judgments of four sensory continua, two visual and two tactual-kinesthetic, which show that the adaptation level for a set of stimuli serves as a category boundary whether stimuli on the continuum differ by linear or logarithmic increments. For all sensory continua studied, discrimination of stimuli belonging to different perceptual categories was more accurate than discrimination of stimuli belonging to the same perceptual category. Moreover, shifts in the adaptation level produced shifts in the location of the category boundary. The concept of Adaptation-level Based Categorization (ABC) provides a unified account of judgmental processes in categorical perception without recourse to post hoc constructs such as implicit anchors or external referents.
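The core claim lends itself to a compact sketch. Here the adaptation level is taken to be the geometric mean of the stimulus set (a standard Helson-style pooling rule, assumed for illustration rather than taken from the paper), and the category boundary sits at that level:

    import numpy as np

    def adaptation_level(stimuli):
        # Geometric mean of the stimulus set (an assumed pooling rule;
        # the paper may weight stimuli differently).
        return np.exp(np.log(stimuli).mean())

    def categorize(stimulus, al):
        # ABC: the adaptation level itself serves as the category boundary.
        return "above" if stimulus > al else "below"

    # Stimuli differing by logarithmic increments (illustrative values)
    stimuli = np.array([1.0, 1.5, 2.25, 3.375, 5.0625])
    al = adaptation_level(stimuli)
    print(round(al, 3), [categorize(s, al) for s in stimuli])

On this account, changing the stimulus set shifts the adaptation level and therefore the predicted category boundary, with no implicit anchors or external referents required.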
 
Visual spatial memory was investigated in Australian Aboriginal children of desert origin. The investigation arose from an environmental pressures hypothesis relating particular skills to survival requirements in a particular habitat, and follows one of a series of suggestions made by R. B. Lockard (American Psychologist, 1971, 26, 168–179) for research in the related field of comparative psychology. Aboriginal children, from 6 to 17 years, performed at significantly higher levels than white Australian children on the tasks. Item type did not affect scores of Aboriginal children, while for white Australian children familiar items were easier than less familiar, which, if potentially nameable, were easier than items unable to be differentiated by name. These indications of strategy difference between the groups were supported by overt differences in task behavior. Aboriginal children appeared to use visual strategies, while most white Australian children probably attempted verbal strategies. Extent of traditional orientation of their group of origin had little effect on the scores of Aboriginal children, who were superior performers whether they came from traditional or nontraditional backgrounds. The likely effects of differential child-rearing practices and interactions between learning and natural endowment are discussed.
 
Wheel-generated motions have served as a touchstone for discussion of the perception of wholes and parts since the beginning of Gestalt psychology. The reason is that perceived common motions of the whole and the perceived relative motions of the parts are not obviously found in the absolute motion paths of points on a rolling wheel. In general, two types of theories have been proposed as to how common and relative motions are derived from absolute motions: one is that the common motions are extracted from the display first, leaving relative motions as the residual; the other is that relative motions are extracted first leaving common motions as the residual. A minimum principle can be used to defend both positions, but application of the principle seems contingent on the particular class of stimuli chosen. We propose a third view. It seems that there are at least two simultaneous processes—one for common motions and one for relative motions—involved in the perception of these and other stimuli and that a minimum principle is involved in both. However, for stimuli in many domains the minimization of relative motion dominates the perception. In general, we propose that any given stimulus can be organized to minimize the complexity of either its common motions or its relative motions; that which component is minimized depends on which of two processes reaches completion first (that for common or that for relative motions); and that the similarity of any two displays depends on whether common or relative motions are minimized.
 
Four experiments explored participants' understanding of the abstract principles governing computer simulations of complex adaptive systems. Experiments 1, 2, and 3 showed better transfer of abstract principles across simulations that were relatively dissimilar, and that this effect was due to participants who performed relatively poorly on the initial simulation. In Experiment 4, participants showed better abstract understanding of a simulation when it was depicted with concrete rather than idealized graphical elements. However, for poor performers, the idealized version of the simulation transferred better to a new simulation governed by the same abstraction. The results are interpreted in terms of competition between abstract and concrete construals of the simulations. Individuals prone toward concrete construals tend to overlook abstractions when concrete properties or superficial similarities are salient.
 
The purpose of this article is threefold: (a) to introduce a new paradigm for investigating how intervening concepts are learned, (b) to report four new experiments that provide converging evidence for the acquisition of intervening concepts, and (c) to propose a simple associative learning mechanism to account for the results. The new paradigm utilizes a stimulus-response-feedback task in which subjects learn trial by trial how a multivariate set of inputs maps into a multivariate set of outputs. The first two experiments use evidence based on a principal component analysis to replicate the finding that intervening-concept learning occurs spontaneously, but only in environments that contain an intervening factor. The next experiment provides a second converging line of evidence for this conclusion by showing that subjects can use an intervening concept to make accurate inferences to a new fourth output during a transfer test. The last experiment provides a third line of evidence by showing that subjects can use an intervening concept to make accurate inferences from a new fourth input. The results are explained by a hidden-unit connectionist learning mechanism that includes both accuracy and parsimony as learning objectives.
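A toy illustration of a learning mechanism that combines accuracy and parsimony objectives, using weight decay as one common way to implement parsimony (the architecture, data, and penalty are assumptions for illustration, not the paper's implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy mapping: 3 inputs -> 3 outputs through a single intervening
    # factor (hidden unit); inputs and generating weights are made up.
    X = rng.normal(size=(200, 3))
    hidden_true = X @ np.array([0.5, -0.3, 0.8])
    Y = np.outer(hidden_true, np.array([1.0, 0.6, -0.4]))

    W1 = rng.normal(scale=0.1, size=(3, 1))  # inputs -> hidden unit
    W2 = rng.normal(scale=0.1, size=(1, 3))  # hidden unit -> outputs
    lr, lam = 0.01, 0.001                    # learning rate, parsimony weight

    for _ in range(2000):
        H = X @ W1                 # linear hidden unit, for simplicity
        err = H @ W2 - Y
        # Accuracy objective: squared prediction error.
        # Parsimony objective: weight decay penalizing large weights.
        grad_W2 = H.T @ err / len(X) + lam * W2
        grad_W1 = X.T @ (err @ W2.T) / len(X) + lam * W1
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2

    print(float(np.mean((X @ W1 @ W2 - Y) ** 2)))

The hidden unit plays the role of the intervening concept: the parsimony pressure encourages the network to route the input-output mapping through a single shared factor rather than memorizing separate input-output associations.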
 
Five experiments evaluated the contributions of rule, exemplar, fragment, and episodic knowledge in artificial grammar learning using memorization versus hypothesis-testing training tasks. Strings of letters were generated from a biconditional grammar that allows different sources of responding to be unconfounded. There was no evidence that memorization led to passive abstraction of rules or encoding of whole training exemplars. Memorizers instead used explicit fragment knowledge to identify the grammatical status of test items, although this led to chance performance. Successful hypothesis-testers classified at near-perfect levels by processing training and test stimuli according to their rule structure. The results support the episodic-processing account of implicit and explicit learning.
 
Several tasks from the heuristics and biases literature were examined in light of Slovic and Tversky's (1974) understanding/acceptance principle: that the deeper the understanding of a normative principle, the greater the tendency to respond in accord with it. The principle was instantiated both correlationally and experimentally. An individual differences version was used to examine whether individuals higher in tendencies toward reflective thought and in cognitive ability would be more likely to behave normatively. In a second application of the understanding/acceptance principle, subjects were presented with arguments both for and against normative choices and it was observed whether, on a readministration of the task, performance was more likely to move in a normative direction. Several discrepancies between performance and normative models could be explained by the understanding/acceptance principle. However, several gaps between descriptive and normative models (particularly those deriving from some noncausal base rate problems) were not clarified by the understanding/acceptance principle; they could not be explained in terms of varying task understanding or tendencies toward reflective thought. The results demonstrate how the variation and instability in responses can be analyzed to yield inferences about why descriptive and normative models of human reasoning and decision making sometimes do not coincide.
 
Much of the research on mathematical cognition has focused on the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9, with considerably less attention paid to more abstract number classes. The current research investigated how people understand decimal proportions: rational numbers between 0 and 1 expressed in the place-value symbol system. The results demonstrate that proportions are represented as discrete structures and processed in parallel. There was a semantic interference effect: When understanding a proportion expression (e.g., "0.29"), both the correct proportion referent (e.g., 0.29) and the incorrect natural number referent (e.g., 29) corresponding to the visually similar natural number expression (e.g., "29") are accessed in parallel, and when these referents lead to conflicting judgments, performance slows. There was also a syntactic interference effect, generalizing the unit-decade compatibility effect for natural numbers: When comparing two proportions, their tenths and hundredths components are processed in parallel, and when the different components lead to conflicting judgments, performance slows. The results also reveal that zero decimals (proportions ending in zero) serve multiple cognitive functions, including eliminating semantic interference and speeding processing. The current research also extends the distance, semantic congruence, and SNARC effects from natural numbers to decimal proportions. These findings inform how people understand the place-value symbol system, and the mental implementation of mathematical symbol systems more generally.
 
A basic but unresolved issue in the study of memory retrieval is whether multiple independent cues can be used concurrently (i.e., in parallel) to recall a single, common response. A number of empirical results, as well as potentially applicable theories, suggest that retrieval can proceed in parallel, though a model has been set forth that denies that possibility. In this paper, five quantitative models are developed to test broad candidate principles. In multiple experiments, subjects were trained to retrieve a vocal digit response for each member of a set of letter or color cues. In subsequent test and transfer phases, single cue trials were randomly mixed with dual cue trials on which the two cues always required the same response. For the first few repetitions of each new set of dual cue items, there was no evidence of parallel retrieval over any part of the RT distribution. After more repetitions, dual cue trials were performed faster than single cue trials, but only under conditions that were favorable to development of a "chunked" dual cue representation. These results indicate that associative independence is an important modulating variable that must be heeded in any general model of attention and memory retrieval. Further, the results are most consistent with a model that places the performance bottleneck prior to the retrieval stage of processing.
 
The human ability to focus memory retrieval operations on a particular list, episode or memory structure has not been fully appreciated or documented. In Experiments 1–3, we make it increasingly difficult for participants to switch between a less recent list (multiple study opportunities) and a more recent list (single study opportunity). Task performance was good, although there was a cost associated with switching. In Experiment 4, list-specific learning experiences were used to create a generalized memory as a step towards semantic memory. List-specific memories intruded during attempts to retrieve the generalized memory and the generalized memory enhanced list-specific performance. The generalized memory also intruded in a free-association task. We propose that a hierarchy of contexts and control operations underlie the human ability to access different memory structures and that there is no sharp discontinuity in the control operations needed to access list-specific, generalized, and semantic memories.
 
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system.
 
The order of processing, whether global forms are processed prior to local forms or vice versa, has been of considerable interest. Many current theories hold that the more perceptually conspicuous form is identified first. An alternative view is presented here in which the structural relations among elements are an important factor in explaining the relative speeds of global and local processing. We equated the conspicuity of the global and local forms in three experiments and still found advantages in the processing of global forms. Subjects were able to process the relations among the elements quickly, even before the elements themselves were identified. According to our alternative view, subjects created equivalence classes of similar and proximate local elements before identifying the constituent elements. The experiments required subjects to decide whether two displays were the same or different, and consequently, the results are relevant to work in higher-level cognition that stresses the importance of comparison processes (e.g., analogy and conceptual combination). We conclude by evaluating related work in higher-level cognition in light of our findings.
 
A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the orthographic level. This analysis implies that it should be possible to negate the LVF/RH N effect and create an RVF/LH N effect by manipulating contrast levels in specific ways. In Experiment 1, these predictions were confirmed. In Experiment 2, we eliminated the N effect for both LVF/RH and central presentation. These results indicate that the letter level is the primary locus of the N effect under lexical decision, and that the hemispheric specificity of the N effect does not reflect differential processing at the lexical level.
 
Much of learning and reasoning occurs in pedagogical situations: situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning.
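A sketch of the mutual reasoning such a model describes: the teacher samples examples in proportion to the learner's posterior, and the learner updates assuming a helpful teacher. The fixed-point recursion below is a standard formalization of this idea; treating it as this paper's exact model is an assumption:

    import numpy as np

    def pedagogical_fixed_point(prior, consistent, n_iter=100):
        """Solve P_teacher(d|h) proportional to P_learner(h|d) and
        P_learner(h|d) proportional to P_teacher(d|h) P(h) by iteration.

        prior:      P(h), shape (H,)
        consistent: 1 if example d is consistent with hypothesis h,
                    shape (H, D); assumes every row and column has at
                    least one consistent entry
        """
        teacher = consistent / consistent.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            learner = teacher * prior[:, None]
            learner /= learner.sum(axis=0, keepdims=True)
            teacher = learner * consistent
            teacher /= teacher.sum(axis=1, keepdims=True)
        return teacher, learner

    # Toy usage: 3 nested hypotheses, uniform prior; example d is
    # consistent with hypothesis h when d <= h (a made-up rule).
    prior = np.ones(3) / 3
    consistent = np.tril(np.ones((3, 3)))
    teacher, learner = pedagogical_fixed_point(prior, consistent)
    print(teacher.round(2))

At the fixed point, the teacher concentrates on examples that best discriminate the target hypothesis, and the learner draws correspondingly stronger inferences from sparse data than a learner who assumes random sampling.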
 
When processing sequences of rapidly varying stimuli, the visual system must satisfy two conflicting requirements. To maintain perceptual continuity, sequential stimuli must be integrated into a single, unified percept. On the other hand, to detect rapid changes, sequential stimuli must be segregated from each other. We propose that these conflicting demands are reconciled by a process that codes the temporal relationship between contiguous stimuli: Stimuli that are coded as co-extensive are integrated and those that are coded as disjoint are segregated. This approach represents a conceptual departure from the more traditional "intrinsic persistence" view of temporal integration. The approach provides a parsimonious account of the results of two temporal-integration tasks in which the durations of the leading and trailing displays were varied over a broad range. The data were accurately fit by a quantitative model in which temporal codes were determined by the correlation in time between the visual responses to the leading and trailing displays.
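One way to make the proposal concrete: model the visual responses to the leading and trailing displays as low-pass-filtered versions of the stimulus time courses, and let their correlation in time decide integration versus segregation. The response shape and decision threshold below are assumptions for illustration, not the paper's fitted model:

    import numpy as np

    def visual_response(onset, duration, t, tau=20.0):
        # Stimulus-driven drive smoothed by an exponential low-pass
        # filter with time constant tau (ms); an assumed response shape.
        drive = ((t >= onset) & (t < onset + duration)).astype(float)
        kernel = np.exp(-np.arange(200.0) / tau)
        return np.convolve(drive, kernel)[: len(t)]

    t = np.arange(600.0)                   # time axis in ms
    lead = visual_response(0, 40, t)       # leading display
    trail = visual_response(60, 40, t)     # trailing display after an ISI
    r = np.corrcoef(lead, trail)[0, 1]
    # Responses coded as co-extensive (high correlation) integrate;
    # responses coded as disjoint (low correlation) segregate.
    print("integrate" if r > 0.5 else "segregate", round(r, 2))

Lengthening the gap between the displays lowers the correlation between the two responses, so the same rule predicts a shift from integration to segregation as the interstimulus interval grows.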
 
Preschoolers' success on the appearance-reality task is a milestone in theory-of-mind development. On the standard task children see a deceptive object, such as a sponge that looks like a rock, and are asked, "What is this really?" and "What does this look like?" Children below 4.5 years of age fail, saying that the object not only is a sponge but also looks like a sponge. We propose that young children's difficulty stems from ambiguity in the meaning of "looks like." This locution can refer to outward appearance ("Peter looks like Paul") but in fact often refers to likely reality ("That looks like Jim"). We propose that "looks like" is taken to refer to likely reality unless the reality is already part of the common ground of the conversation. Because this joint knowledge is unclear to young children on the appearance-reality task, they mistakenly think the appearance question is about likely reality. Study 1 analyzed everyday conversations from the CHILDES database and documented that 2- and 3-year-olds are familiar with these two different uses of the locution. To disambiguate the meaning of "looks like," Study 2 clarified that reality was shared knowledge as part of the appearance question, e.g., "What does the sponge look like?" Study 3 used a non-linguistic measure to emphasize the shared knowledge of the reality in the appearance question. Study 4 asked children on their own to articulate the contrast between appearance and reality. At 91%, 85%, and 81% correct responses, children were at near-ceiling levels in each of our manipulations while they failed the standard versions of the tasks. Moreover, we show how this discourse-based explanation accounts for findings in the literature. Thus children master the appearance-reality distinction by the age of 3 but the standard task masks this understanding because of the discourse structure involved in talking about appearances.
 
A new general explanation for U-shaped backward masking is analyzed and found to predict shifts in the interstimulus interval (ISI) that produces strongest masking. This predicted shift is then compared to six sets of masking data. The resulting comparisons force the general explanation to make certain assumptions to account for the data. In this way, the experimental data promote the development of a new theory of backward masking. The new theory suggests interpretations of the data that are sometimes novel, often more precise, and sometimes contrary to interpretations that are prevalent in the literature.
 
People are capable of imagining and generating new category exemplars and categories. This ability has not been addressed by previous models of categorization, most of which focus on classifying category exemplars rather than generating them. We develop a formal account of exemplar and category generation which proposes that category knowledge is represented by probability distributions over exemplars and categories, and that new exemplars and categories are generated by sampling from these distributions. This sampling account of generation is evaluated in two pairs of behavioral experiments. In the first pair of experiments, participants were asked to generate novel exemplars of a category. In the second pair of experiments, participants were asked to generate a novel category after observing exemplars from several related categories. The results suggest that generation is influenced by both structural and distributional properties of the observed categories, and we argue that our data are better explained by the sampling account than by several alternative approaches.
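A minimal sketch of the sampling account for exemplar generation, with category knowledge represented here as a Gaussian over a two-dimensional feature space (the distributional form and the feature values are assumptions for illustration):

    import numpy as np

    # Observed exemplars of a category as points in a 2-D feature space
    # (made-up feature values).
    exemplars = np.array([[1.0, 2.0], [1.2, 2.5], [0.8, 1.9],
                          [1.1, 1.7], [0.9, 2.3]])

    # Represent category knowledge as a probability distribution over
    # exemplars; a Gaussian fit is one simple choice.
    mu = exemplars.mean(axis=0)
    cov = np.cov(exemplars, rowvar=False)

    # Generate novel exemplars by sampling from the learned distribution.
    rng = np.random.default_rng(0)
    print(rng.multivariate_normal(mu, cov, size=5))

Generating a novel category could proceed analogously, by sampling new distribution parameters from knowledge abstracted across the observed categories rather than sampling exemplars from a single category's distribution.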
 
Recent studies of redundancy gain indicate that it is especially large when redundant stimuli are presented to different hemispheres of an individual without a functioning corpus callosum. This suggests the hypothesis that responses to redundant stimuli are speeded partly because both hemispheres are involved in the activation of the response. A simple formal model incorporating this idea is developed and then elaborated to account for additional related findings. Predictions of the latter model are in good qualitative agreement with data from a number of sources, and there is neuroanatomic and psychophysiological support for its underlying structure.
 
Humans routinely make inductive generalizations about unobserved features of objects. Previous accounts of inductive reasoning often focus on inferences about a single object or feature: accounts of causal reasoning often focus on a single object with one or more unobserved features, and accounts of property induction often focus on a single feature that is unobserved for one or more objects. We explore problems where people must make inferences about multiple objects and features, and propose that people solve these problems by integrating knowledge about features with knowledge about objects. We evaluate three computational methods for integrating multiple systems of knowledge: the output combination approach combines the outputs produced by these systems, the distribution combination approach combines the probability distributions captured by these systems, and the structure combination approach combines a graph structure over features with a graph structure over objects. Three experiments explore problems where participants make inferences that draw on causal relationships between features and taxonomic relationships between animals, and we find that the structure combination approach provides the best account of our data.
 
Experimental designs of Experiments 1–3. This figure provides examples of a single block of 20 trials, although only eight trials are shown. The second column is the repetition status of each trial, where R and N stand for the repeated and nonrepeated conditions, respectively. The third column is the match status for cue and target on that trial, where S stands for 'yes' responses (same = match) and D stands for 'no' responses (different = mismatch). Two words are presented in each trial, as shown in the last three columns. First, the cue word appeared in the center of the screen above the middle line for 1000 ms (the cue is the upper of the two words shown), followed by the target word below the middle line. At that point, both words remained on the screen until participants responded. In Experiment 1, the cue word was always a category label and the second word was always a new exemplar. In Experiment 2, both words were always new exemplars (category repetition but no repeated words). In Experiment 3, exemplars were used as both cues and targets, with the cue and target presenting the same word twice on match trials (repeated cue words, but no need to access the category). The matching task in each experiment is therefore slightly different: in Experiment 1 the cue provides the name of the category, in Experiment 2 the category must be inferred from the cue, and in Experiment 3 the match is of the word rather than the category.
Reaction time results as a function of within-condition trial number (position) in Experiments 1–3. All results are collapsed over match versus mismatch trials, which did not interact with the position by repetition status interaction in any experiment. Trial number is not list position. Instead, trial number is the nth occurrence of the repeated or nonrepeated condition within the list of 20 total trials, where n can take on values 1–10. Trial number 1 is not shown because it is not yet known at that point which category is repeating within the list (thus, there is no difference between the conditions). The remaining nine trial numbers are broken into thirds. These results show the difference between correct median RT in the repeated condition and correct median RT in the nonrepeated condition. Experiments 1a and 1b both show a transition from benefits to deficits for the repeated condition as a function of increasing trial number. In contrast, Experiments 2 and 3 show only benefits for the repeated condition regardless of trial number.
Reaction times for correct responses (in ms).
Accuracy (proportion correct).
How is the meaning of a word retrieved without interference from recently viewed words? The ROUSE theory of priming assumes a discounting process to reduce source confusion between subsequently presented words. As applied to semantic satiation, this theory predicted a loss of association between the lexical item and meaning. Four experiments tested this explanation in a speeded category-matching task. All experiments used lists of 20 trials that presented a cue word for 1 s followed by a target word. Randomly mixed across the list, 10 trials used cues drawn from the same category whereas the other 10 trials used cues from 10 other categories. In Experiments 1a and 1b, the cues were repeated category labels (FRUIT-APPLE) and responses gradually slowed for the repeated category. In Experiment 2, the cues were nonrepeated exemplars (PEAR-APPLE) and responses remained faster for the repeated category. In Experiment 3, the cues were repeated exemplars in a word matching task (APPLE-APPLE) and responses again remained faster for the repeated category.
 
In a recent paper, Hu, Ericsson, Yang, and Lu (2009) found that an ability to memorize very long lists of digits is not mediated by the same mechanisms as exceptional memory for rapidly presented lists, which has been the traditional focus of laboratory research. Chao Lu is the holder of the Guinness World Record for reciting the most decimal positions of pi, yet he lacks an exceptional memory span for digits. In the first part of this paper we analyzed the reliability and structure of his reported encodings for lists of 300 digits and his application of the story mnemonic. Next, his study and recall times for lists of digits were analyzed to test hypotheses about his detailed encoding processes, and cued-recall performance was used to assess the structure of his encodings. Three experiments were then designed to interfere with the uniqueness of Chao Lu's story encodings, and evidence was found for his remarkable ability to adapt his encoding processes to reduce the interference. Finally, we show how his skills for encoding and recalling long lists can be accounted for within the theoretical framework of Ericsson and Kintsch's (1995) Long-Term Working Memory.
 
Imaginal perspective switches are often considered to be difficult, because they call for additional cognitive transformations of object coordinates (transformation hypothesis). Recent research suggests that problems can also result from conflicts between incompatible sensorimotor and cognitive object location codes during response specification and selection (interference hypothesis). Three experiments tested contrasting predictions of both accounts. Volunteers had to point to unseen object locations after imagined self-rotations and self-translations. Results revealed larger pointing latencies and errors for rotations as compared to translations, and monotonic latency and error increases for both tasks as a function of the disparity of object directions between real and imagined perspective. Provision of advance information about the to-be-imagined perspective left both effects unchanged. These results, together with those from a systematic error analysis, deliver clear support for an interference account of imaginal perspective switches in remembered surroundings.
 
A model of cue-based probability judgment is developed within the framework of support theory. Cue diagnosticity is evaluated from experience as represented by error-free frequency counts. When presented with a pattern of cues, the diagnostic implications of each cue are assessed independently and then summed to arrive at an assessment of the support for a hypothesis, with greater weight placed on present than on absent cues. The model can also accommodate adjustment of support in light of the base rate or prior probability of a hypothesis. Support for alternatives packed together in a "residual" hypothesis is discounted; fewer cues are consulted in assessing support for alternatives as support for the focal hypothesis increases. Results of fitting this and several alternative models to data from four new multiple-cue probability learning experiments are reported.
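A sketch of the judgment rule described above (the cue weights and residual discount are illustrative values, not the fitted parameters from the four experiments):

    import numpy as np

    def support(implications, present, w_present=1.0, w_absent=0.4):
        # Sum each cue's diagnostic implication for the hypothesis,
        # weighting present cues more heavily than absent cues.
        w = np.where(present, w_present, w_absent)
        return float((w * implications).sum())

    def judged_probability(s_focal, s_residual, discount=0.8):
        # Support-theory form: s(focal) / (s(focal) + s(residual)),
        # with support for the packed residual hypothesis discounted.
        return s_focal / (s_focal + discount * s_residual)

    implications = np.array([0.9, 0.2, 0.5])   # from frequency counts
    present = np.array([True, False, True])
    s_focal = support(implications, present)
    print(round(judged_probability(s_focal, s_residual=0.8), 3))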
 
We present a new model for lexical decision, REM-LD, that is based on REM theory (e.g., Shiffrin & Steyvers, 1997). REM-LD uses a principled (i.e., Bayes' rule) decision process that simultaneously considers the diagnosticity of the evidence for the 'WORD' response and the 'NONWORD' response. The model calculates the odds ratio that the presented stimulus is a word or a nonword by averaging likelihood ratios for lexical entries from a small neighborhood of similar words. We report two experiments that used a signal-to-respond paradigm to obtain information about the time course of lexical processing. Experiment 1 verified the prediction of the model that the frequency of the word stimuli affects performance for nonword stimuli. Experiment 2 was done to study the effects of nonword lexicality, word frequency, and repetition priming and to demonstrate how REM-LD can account for the observed results. We discuss how REM-LD could be extended to account for effects of phonology such as the pseudohomophone effect, and how REM-LD can predict response times in the traditional 'respond-when-ready' paradigm.
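As the abstract states it, the decision rule reduces to a small computation: average the likelihood ratios over a neighborhood of similar lexical entries and compare the resulting odds to a criterion. In this sketch the example values and the criterion of 1.0 are assumptions; computing the likelihood ratios themselves requires REM's feature-matching machinery, not reproduced here:

    import numpy as np

    def remld_odds(likelihood_ratios):
        # Odds that the stimulus is a word: the mean likelihood ratio
        # across lexical entries in the neighborhood of similar words.
        return float(np.mean(likelihood_ratios))

    lams = np.array([0.2, 8.5, 0.7, 0.4])  # made-up neighborhood ratios
    odds = remld_odds(lams)
    print("WORD" if odds > 1.0 else "NONWORD", round(odds, 2))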
 
When participants use cues to prepare for a likely stimulus or a likely response, reaction times are facilitated by valid cues but prolonged by invalid cues. In studies on combined expectancy effects, two cues give information regarding two dimensions of the forthcoming task. When the two cues consist of two separable stimuli their effects are approximately additive. When cues are presented as an integrated stimulus, cueing effects interact. A model is presented that simulates effects like these. The model assumes that cues affect different processing stages. When implicit information suggests that expectancies are unrelated, as for instance with separated cues, cueing effects at early and late levels of processing remain independent. When implicit information suggests that expectancies are related, as with integrated cues, however, a mechanism that is sensitive to the validity of the early-stage cue leads to an adjustment of the cueing effect at the late stage. The model is based on neurophysiologically plausible assumptions, is given explicitly in mathematical terms, and provides a good fit to a large body of empirical data.
 
Four experiments addressing the role of attention in phonetic perception are reported. The first experiment shows that the relative importance of two cues to the voicing distinction changes when subjects must perform an arithmetic distractor task at the same time as identifying a speech stimulus. The contribution of voice onset time to phonetic labeling decreases when subjects are distracted, while that of F0 onset frequency increases. The second experiment shows a similar pattern for two cues to the distinction between the vowels /i/ (as in "beat") and /I/ (as in "bit"). Under low attention conditions, formant pattern has a smaller effect on phonetic labeling while vowel duration has a larger effect. Together these experiments indicate that careful attention to speech perception is necessary for strong acoustic cues (voice-onset time and formant patterns) to achieve their full impact on phonetic labeling, while weaker acoustic cues (F0 onset frequency and vowel duration) achieve their full impact on phonetic labeling without close attention. Experiment 3 shows that this pattern is obtained when the distractor task places little demand on verbal short-term memory. Experiment 4 provides a data set for testing formal models of the role of attention in speech perception. Attention is shown to influence the signal-to-noise ratio in the phonetic encoding of acoustic cues; the sustained phonetic contribution of weak cues without close attention stems from reduced competition from strong cues. This principle is instantiated in a network model in which the role of attention is to reduce noise in the phonetic encoding of acoustic cues. Implications of this work for understanding speech perception and general theories of the role of attention in perception are discussed.
 
How might young learners parse speech into linguistically relevant units? Sensitivity to prosodic markers of these segments is one possibility. Seven experiments examined infants' sensitivity to acoustic correlates of phrasal units in English. The results suggest that: (a) 9-month-olds, but not 6-month-olds, are attuned to cues that differentially mark speech that is artificially segmented at linguistically COINCIDENT as opposed to NONCOINCIDENT boundaries (Experiments 1 and 2); (b) the pattern holds across both subject phrases and predicate phrases and across samples of both Child- and Adult-directed speech (Experiments 3, 4, and 7); and (c) both 9-month-olds and adults show the sensitivity even when most phonetic information is removed by low-pass filtering (Experiments 5 and 6). Acoustic analyses suggest that pitch changes and, in some cases, durational changes are potential cues that infants might be using to make their discriminations. These findings are discussed with respect to their implications for theories of language acquisition.
 
Models of the spatial knowledge people acquire from maps and navigation and the procedures required for spatial judgments using this knowledge are proposed. From a map, people acquire survey knowledge encoding global spatial relations. This knowledge resides in memory in images that can be scanned and measured like a physical map. From navigation, people acquire procedural knowledge of the routes connecting diverse locations. People combine mental simulation of travel through the environment and informal algebra to compute spatial judgments. An experiment in which subjects learned an environment from navigation or from a map evaluates predictions of these models. With moderate exposure, map learning is superior for judgments of relative location and straight-line distances among objects. Learning from navigation is superior for orienting oneself with respect to unseen objects and estimating route distances. With extensive exposure, the performance superiority of maps over navigation vanishes. These and other results are consonant with the proposed mechanisms.
 
There are certain simple rotations of objects that most people cannot reason about accurately. Reliable gaps in the understanding of a fundamental physical domain raise the question of how learning to reason in that domain might proceed. Using virtual reality techniques, this project investigated the nature of learning to reason across the domain of simple rotations. Learning consisted of the acquisition of spatial intuitions: there was encoding of useful spatiotemporal information in specific problem types and a gradual accumulation of this understanding across the domain. This pattern of learning through the accumulation of intuitions is especially interesting for rotational motion, in which an elegant domain-wide kinematics is available to support insightful learning. Individual ability to reason about rotations correlated highly with mastery motivation, skill in fluid reasoning, and skill in reasoning about spatial transformations. Thus, general cognitive advantages aided the understanding of individual rotations without guaranteeing immediate generalization across the domain.
 
Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.
 
Two implications of best-example theory for category acquisition were considered. The first is that categories which people acquire based on initial exposure to good exemplars should be learned more easily and (at first) more accurately than categories based on initial exposure to poor exemplars. The second is that people should generally learn that the best exemplars are category members, before learning that the poor exemplars are category members. These implications are based on the premise that people generalize based on similarity, and that the best example has maximal within-category similarity and minimal extra-category similarity, while the poor examples have minimal within-category similarity and relatively high extra-category similarity. Both implications were strongly supported by the present research. It was also found that when pressure to communicate was removed, comprehension and production of category names were virtually identical. The predictions of best-example theory concerning the conceptual structures underlying the words used by children who are just beginning to talk were discussed briefly. This research also allowed the replication of several important categorization results which had previously been found with real-world categories, with a set of artificial concrete object categories.
 
This study investigated the procedures subjects use to acquire knowledge from maps. In Experiment 1, three experienced and five novice map users provided verbal protocols while attempting to learn a map. The protocols suggested four categories of processes that subjects invoked during learning: attention, encoding, evaluation, and control. Good learners differed from poor learners primarily in their techniques for and success at encoding spatial information, their ability to accurately evaluate their learning progress, and their ability to focus attention on unlearned information. An analysis of the performance of experienced map users suggested that learning depended on particular procedures and not on familiarity with the task. In Experiment 2, subjects were instructed to use (a) six of the effective learning procedures from Experiment 1, (b) six procedures unrelated to learning success, or (c) their own techniques. The effective procedures set comprised three techniques for learning spatial information, two techniques for using self-generated feedback to guide subsequent study behaviors, and a procedure for partitioning the map into sections. Subjects using these procedures performed better than subjects in the other groups. In addition, subjects' visual memory ability predicted the magnitude of the performance differential.
 
The linguistic input to language learning is usually thought to consist of simple strings of words. We argue that input must also include information about how words group into syntactic phrases. Natural languages regularly incorporate correlated cues to phrase structure, such as prosody, function words, and concord morphology. The claim that such cues are necessary for successful acquisition of syntax was tested in a series of miniature language learning experiments with adult subjects. In each experiment, when input included some cue marking the phrase structure of sentences, subjects were entirely successful in learning syntax; in contrast, when input lacked such a cue (but was otherwise identical), subjects failed to learn significant portions of syntax. Cues to phrase structure appear to facilitate learning by indicating to the learner those domains within which distributional analyses may be most efficiently pursued, thereby reducing the amount and complexity of required input data. More complex target systems place greater premiums on efficient analysis; hence, such cues may be even more crucial for acquisition of natural language syntax. We suggest that the finding that phrase structure cues are a necessary aspect of language input reflects the limited capacities of human language learners; languages may incorporate structural cues in part to circumvent such limitations and ensure successful acquisition.
 
Several phonological and prosodic properties of words have been shown to relate to differences between grammatical categories. Distributional information about grammatical categories is also a rich source in the child's language environment. In this paper we hypothesise that such cues operate in tandem for developing the child's knowledge about grammatical categories. We term this the Phonological-Distributional Coherence Hypothesis (PDCH). We tested the PDCH by analysing phonological and distributional information in distinguishing open from closed class words and nouns from verbs in four languages: English, Dutch, French, and Japanese. We found an interaction between phonological and distributional cues for all four languages indicating that when distributional cues were less reliable, phonological cues were stronger. This provides converging evidence that language is structured such that language learning benefits from the integration of information about category from contextual and sound-based sources, and that the child's language environment is less impoverished than we might suspect.
 
This research demonstrates a process of nonconscious acquisition of information about a pattern of stimuli and the facilitating influence of this knowledge on subjects' subsequent performance. Subjects were exposed to a sequence of frames containing a target, and their task was to search for the target in each frame. The sequence of target locations followed a complex pattern. The specific sample of subjects was selected to ensure that they would be sufficiently motivated and that they would have appropriate analytical and verbal skills to report whatever they experienced while participating in the task: All subjects were faculty members of a psychology department. Extensive postexperimental interviews with subjects indicated that none of them noticed anything even remotely similar to the actual nature of the manipulation (i.e., the pattern). However, the accuracy and latency of their responses indicated that, in fact, they had acquired a specific working knowledge about the pattern, and that this knowledge facilitated their performance. The results demonstrate that nonconsciously acquired knowledge can automatically be utilized to facilitate performance, without requiring conscious awareness or control over this knowledge. This phenomenon is discussed as a ubiquitous process involved in the development of both elementary and high-level cognitive skills.
 
Learning sequential structures is of fundamental importance for a wide variety of human skills. While it has long been debated whether implicit sequence learning is perceptual or response-based, here we propose an alternative framework that cuts across this dichotomy and assumes that sequence learning rests on associative changes that can occur concurrently in distinct processing systems and support the parallel acquisition of multiple uncorrelated sequences. In three experiments we used a serial search task to test critical predictions of this framework. Experiments 1 and 2 showed that participants learnt uncorrelated sequences of auditory letters and manual responses, as well as sequences of visual letters, spatial locations, and manual responses simultaneously, as indicated by a reliable response time (RT) cost incurred by occasional deviants violating either of the sequences. This RT cost was reliable even when participants showing explicit knowledge were excluded. In Experiment 3 learning of spatial and nonspatial sequences was functionally dissociated: whereas a spatio-motor distractor task disrupted learning of location but not of letter sequences, a phonological distractor task had the reverse effect. The distractor tasks thus did not reduce unspecific attentional resources, but selectively disrupted the formation of sequential associations within spatial and nonspatial processing dimensions. These results support the view that implicit sequence learning rests on experience-dependent changes that can occur in parallel in multiple processing systems involved in spatial attention, object recognition, phonological processing, and manual response selection. The resulting dimension-specific sequence representations support independent predictions of what will appear next, where it will appear, and how one will have to respond to it.
 
Top-cited authors
Andrew Heathcote
  • University of Newcastle
Scott Brown
  • The University of Newcastle, Australia
Kenneth R. Paap
  • San Francisco State University
Larry V Hedges
  • Northwestern University
Jack L. Vevea
  • University of California, Merced