This paper considers the role of comparison in the development of knowledge. Results show that comparing similar objects makes them appear more similar. Comparing dissimilar objects, on the other hand, does not make them appear more similar, and in some circumstances may make them appear less similar. The effect of comparison on similar items was especially striking since participants judged items to be more similar after comparison even if the comparison task was to list differences between the two items. Further, this effect appears specific to comparison and does not appear to be simply due to a "fleshing out" of object representations (listing properties of two objects without comparing the objects themselves served to increase the objects' similarity regardless of whether the objects were similar or dissimilar to start). This suggests that comparison may play a special role in partitioning bits of experience into categories, sharpening categorical boundaries, and otherwise helping us create conceptual structure above and beyond that offered by the world.
A critical question in Cognitive Science concerns how knowledge of specific domains emerges during development. Here we examined how limitations of the visual system during the first days of life may shape subsequent development of face processing abilities. By manipulating the bands of spatial frequencies of face images, we investigated the nature of the visual information that newborn infants rely on to perform face recognition. Newborns were able to extract from a face the visual information ranging from 0 to 1 cpd (Experiment 1), but only a narrower 0-0.5 cpd spatial frequency range sufficed to accomplish face recognition (Experiment 2). These results provide the first empirical support for a low spatial frequency advantage in individual face recognition at birth and suggest that early in life low-level, non-specific perceptual constraints affect the development of the face processing system.
This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
Previous research indicates that, when a moving object collides with a stationary object, infants expect the stationary object to be displaced. The present experiment examined whether infants believe that the size of the moving object affects how far the stationary object is displaced. In the experiment, 11-month-old infants sat in front of a horizontal track; to the left of the track was an inclined ramp. A wheeled toy bug rested on the track at the bottom of the ramp. The infants in the midpoint condition were first familiarized with an event in which a medium-sized cylinder rolled down the ramp and hit the bug, causing it to roll to the middle of the track. Next, the infants saw one of two test events. In both events, novel cylinders were introduced, and the bug now rolled to the end of the track. The two test cylinders were identical to the familiarization cylinder in material but not in size: one was larger (large-cylinder event) and one was smaller (small-cylinder event) than the familiarization cylinder. The infants in the endpoint condition saw the same familiarization and test events as the infants in the midpoint condition except that the bug rolled to the end rather than to the middle of the track in the familiarization event. The infants in the midpoint condition looked reliably longer at the small- than at the large-cylinder event, whereas the infants in the endpoint condition tended to look equally at the two events. These results indicated that the infants (a) believed that the size of the cylinder affected the length of the bug's displacement and (b) used the familiarization event to calibrate their predictions about the test events. After watching the bug roll to the middle of the track when hit by the medium cylinder, the infants were surprised to see the bug roll to the end of the track with the small but not the large cylinder.
After watching the bug roll to the end of the track when hit by the medium cylinder, however, the infants were not surprised to see the bug do the same with either the small or the large cylinder. Parallel results were obtained with adult subjects. The present findings have implications for research on the nature and development of infants' physical reasoning as well as for assessments of causal reasoning in infancy.
Recent research has suggested that consonants and vowels serve different roles during language processing. While statistical computations are preferentially made over consonants but not over vowels, simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we tested whether a lifelong experience with language is necessary for vowels to become the preferred target for structural generalizations. We presented 11-month-old infants with a series of CVCVCV nonsense words in which all vowels were arranged according to an AAB rule (first and second vowels were the same, while the third vowel was different). During the test, we presented infants with new words whose vowels either did or did not follow the aforementioned rule. We found that infants readily generalized this rule when implemented over the vowels. However, when the same rule was implemented over the consonants, infants could not generalize it to new instances. These results parallel those found with adult participants and demonstrate that several years of experience learning a language are not necessary for functional asymmetries between consonants and vowels to appear.
Research on initial conceptual knowledge and research on early statistical learning mechanisms have been, for the most part, two separate enterprises. We report a study with 11-month-old infants investigating whether they are sensitive to sampling conditions and whether they can integrate intentional information in a statistical inference task. Previous studies found that infants were able to make inferences from samples to populations, and vice versa [Xu, F., & Garcia, V. (2008). Intuitive statistics by 8-month-old infants. Proceedings of the National Academy of Sciences of the United States of America, 105, 5012-5015]. We found that when employing this statistical inference mechanism, infants are sensitive to whether a sample was randomly drawn from a population or not, and they take into account intentional information (e.g., explicitly expressed preference, visual access) when computing the relationship between samples and populations. Our results suggest that domain-specific knowledge is integrated with statistical inference mechanisms early in development.
One of the most intriguing findings on language comprehension is that violations of syntactic predictions can affect event-related potentials as early as 120 ms, in the same time-window as early sensory processing. This effect, the so-called early left-anterior negativity (ELAN), has been argued to reflect word category access and initial syntactic structure building (Friederici, 2002). In two experiments, we used magnetoencephalography to investigate whether (a) rapid word category identification relies on overt category-marking closed-class morphemes and (b) whether violations of word category predictions affect modality-specific sensory responses. Participants read sentences containing violations of word category predictions. Unexpected items varied in whether or not their word category was marked by an overt function morpheme. In Experiment 1, the amplitude of the visual evoked M100 component was increased for unexpected items, but only when word category was overtly marked by a function morpheme. Dipole modeling localized the generator of this effect to the occipital cortex. Experiment 2 replicated the main results of Experiment 1 and eliminated two non-morphology-related explanations of the M100 contrast we observed between targets containing overt category-marking and targets that lacked such morphology. Our results show that during reading, syntactically relevant cues in the input can affect activity in occipital regions at around 125 ms, a finding that may shed new light on the remarkable rapidity of language processing.
The present research examined whether 12.5-month-old infants take into account what objects an agent knows to be present in a scene when interpreting the agent's actions. In two experiments, the infants watched a female human agent repeatedly reach for and grasp object-A as opposed to object-B on an apparatus floor. Object-B was either (1) visible to the agent through a transparent screen; (2) hidden from the agent (but not the infants) by an opaque screen; or (3) placed by the agent herself behind the opaque screen, so that even though she could no longer see object-B, she knew of its presence there. The infants interpreted the agent's repeated actions toward object-A as revealing a preference for object-A over object-B only when she could see object-B (1) or was aware of its presence in the scene (3). These results indicate that, when watching an agent act on objects in a scene, 12.5-month-old infants keep track of the agent's representation of the physical setting in which these actions occur. If the agent's representation is incomplete, because the agent is ignorant about some aspect of the setting, infants use the agent's representation, rather than their own more complete representation, to interpret the agent's actions.
Osherson and Smith (1981, Cognition, 11, 237-262) discuss a number of problems which arise for a prototype-based account of the meanings of simple and complex concepts. Assuming that concept combination in such a theory is to be analyzed in terms of fuzzy logic, they show that some complex concepts inevitably get assigned the wrong meanings. In the present paper we argue that many of the problems O&S discovered are due to difficulties that are intrinsic to fuzzy set theory, and that most of them disappear when fuzzy logic is replaced by supervaluation theory. However, even after this replacement one of O&S's central problems remains: the theory still predicts that the degree to which an object is an instance of, say, "striped apple" must be less than or equal to both the degree to which it is an instance of "striped" and the degree to which it is an instance of "apple", but this constraint conflicts with O&S's experimental results. The second part of the paper explores ways of solving this and related problems. This leads us to suggest a number of distinctions and principles concerning how prototypicality and other mechanisms interact and which seem important for semantics generally. Prominent among these are (i) the distinction between, on the one hand, the logical and semantic properties of concepts and, on the other, their linguistic properties; and (ii) that between concepts for which the extension is determined by their prototype and concepts for which extension and prototypicality are independent.
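The constraint at issue can be stated compactly. In standard fuzzy set theory (the framework O&S assume), the membership degree of a conjunctive concept is defined as the minimum of the component membership degrees, from which the inequality follows immediately (this is the standard Zadeh definition of fuzzy intersection, not a quotation from either paper):

$$\mu_{\text{striped apple}}(x) \;=\; \min\bigl(\mu_{\text{striped}}(x),\, \mu_{\text{apple}}(x)\bigr) \;\le\; \mu_{\text{apple}}(x).$$

The experimental conflict arises because participants rate objects such as a striped apple as better instances of the conjunction "striped apple" than of "apple" alone, reversing the predicted inequality.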
Recent research has documented that for infants as young as 12-13 months of age, novel words (both count nouns and adjectives) highlight commonalities among objects and, in this way, foster the formation of object categories. The current experiment was designed to capture more precisely the scope of this phenomenon. We asked whether novel words (count nouns; adjectives) are linked specifically to category-based commonalities from the start, or whether they also direct infants' attention to a wider range of commonalities, including property-based commonalities among objects (e.g. color, texture). The results indicate that by 12-13 months, (1) infants have begun to distinguish between novel words presented as count nouns versus adjectives in fluent, infant-directed speech, and (2) infants' expectations for novel words accord with this emerging sensitivity.
In a word learning experiment, 14- and 18-month-old infants are tested on their perceptual sensitivity to coda-consonant omissions. The results indicate that 14-month-olds are not sensitive to coda-consonant omissions, showing a parallel with the omission of target coda consonants in early child language productions. At 18 months, infants are sensitive to coda omissions. The study strengthens the hypothesis that phonological wellformedness constraints influence infants' speech processing in general, and might restrict what is stored in their initial lexical representations. A lexical representation lacking information on the target coda consonant is, in turn, a likely source for coda omissions in production.
Recent work on children's inferences concerning biological and chemical categories has suggested that children (and perhaps adults) are essentialists - a view known as psychological essentialism. I distinguish three varieties of psychological essentialism and investigate the ways in which essentialism explains the inferences for which it is supposed to account. Essentialism succeeds in explaining the inferences, I argue, because it attributes to the child belief in causal laws connecting category membership and the possession of certain characteristic appearances and behavior. This suggests that the data will be equally well explained by a non-essentialist hypothesis that attributes belief in the appropriate causal laws to the child, but makes no claim as to whether or not the child represents essences. I provide several reasons to think that this non-essentialist hypothesis is in fact superior to any version of the essentialist hypothesis.
Do 18-month-olds understand that an agent's false belief can be corrected by an appropriate, though not an inappropriate, communication? In Experiment 1, infants watched a series of events involving two agents, a ball, and two containers: a box and a cup. To start, agent1 played with the ball and then hid it in the box, while agent2 looked on. Next, in agent1's absence, agent2 moved the ball from the box to the cup. When agent1 returned, agent2 told her "The ball is in the cup!" (informative-intervention condition) or "I like the cup!" (uninformative-intervention condition). During test, agent1 reached for either the box (box event) or the cup (cup event). In the informative-intervention condition, infants who saw the box event looked reliably longer than those who saw the cup event; in the uninformative-intervention condition, the reverse pattern was found. These results suggest that infants expected agent1's false belief about the ball's location to be corrected when she was told "The ball is in the cup!", but not "I like the cup!". In Experiment 2, agent2 simply pointed to the ball's new location, and infants again expected agent1's false belief to be corrected. These and control results provide additional evidence that infants in the second year of life can attribute false beliefs to agents. In addition, the results suggest that by 18 months of age infants expect agents' false beliefs to be corrected by relevant communications involving words or gestures.
Generative linguistic theory stands on the hypothesis that grammar cannot be acquired solely on the basis of an analysis of the input, but depends, in addition, on innate structure within the learner to guide the process of acquisition. This hypothesis derives from a logical argument, however, and its consequences have never been examined experimentally with infant learners. Challenges to this hypothesis, claiming that an analysis of the input is indeed sufficient to explain grammatical acquisition, have recently gained attention. We demonstrate with novel experimentation the insufficiency of this countervailing view. Focusing on the syntactic structures required to determine the antecedent for the pronoun one, we demonstrate that the input to children does not contain sufficient information to support unaided learning. Nonetheless, we show that 18-month-old infants do have command of the syntax of one. Because this syntactic knowledge could not have been gleaned exclusively from the input, infants' mastery of this aspect of syntax constitutes evidence for the contribution of innate structure within the learner in acquiring a grammar.
In previous studies, children disoriented in small enclosures used room shape, but not wall colours, to find hidden objects. Their reorientation was said to depend solely on a "geometric module" informationally encapsulated with respect to colour. We argue that previous studies did not fully evaluate children's use of colour owing to a bias in the enclosures' design. In this study, disoriented 18-24 month olds searched for toys in small square enclosures with two blue and two white walls. Children successfully reoriented using wall colour. This shows that they can make location judgments based on the left/right sense of the colours of adjoining landmarks. Performance was no different when symmetric colourful shapes were added to walls, but improved with asymmetric shapes which could be used without left/right judgments. The relatively poor use of colour in previous studies may be explained partly by a bias in their design, and partly by children's limited ability to discriminate the left/right sense of nongeometric features.
Fluent speakers' representations of verbs include semantic knowledge about the nouns that can serve as their arguments. These "selectional restrictions" of a verb can in principle be recruited to learn the meaning of a novel noun. For example, the sentence He ate the carambola licenses the inference that carambola refers to something edible. We ask whether 15- and 19-month-old infants can recruit their nascent verb lexicon to identify the referents of novel nouns that appear as the verbs' subjects. We compared infants' interpretation of a novel noun (e.g., the dax) in two conditions: one in which dax is presented as the subject of an animate-selecting construction (e.g., The dax is crying), and the other in which dax is the subject of an animacy-neutral construction (e.g., The dax is right here). Results indicate that by 19 months, infants use their representations of known verbs to inform the meaning of a novel noun that appears as its argument.
Argues that A. F. Jorm's (see record 1980-25944-001) reasons—(1) impairment of grapheme–phoneme correspondence, (2) the effect of imageability on word reading, (3) patterns of errors made, and (4) short-term memory impairment—for regarding symptoms of developmental dyslexia as similar to those of deep dyslexia are unwarranted. Evidence supporting J. M. Holmes' (1973, 1978) position of comparing developmental dyslexia to surface dyslexia is discussed.
In this paper, I do not claim that any particular parameter-setting approach is correct, or even provide a characterization of subjectless sentences in children's speech. The only point of this paper is to show that Valian's argument that single-value solutions for setting the null subject parameter have insoluble problems is incorrect. Valian is correct that being able to analyze and interpret triggering data is a prerequisite for setting parameters, but a single-value solution of the sort described in this paper (and implicitly assumed in parameter-setting acquisition theories) is sufficient to do so; there is no need to invoke the dual-value solution that Valian argues is necessary. Furthermore, I argue that the single-value solution should be preferred on the grounds that (i) the mechanism I propose maintains many of the niceties of idealized parameter-setting acquisition theories whereas Valian's approach explicitly gives up on these attractive features of standard parameter-setting models, and (ii) it follows from dual-value theories but not single-value theories (depending on the nature of parameters and how many parameters there are) that parsing and speech production involve overwhelmingly difficult computations for children.
The author denounces the inconsistency in the experimental logic employed by L. Frazier, G. B. Flores d'Arcais, and R. Coolen in their article on separable complex verbs in Dutch. The author shows that the data do not support the theoretical proposals the authors develop concerning morphological integration and the relations between lexical/morphological processes, on the one hand, and syntactic processes, on the other.
Druks and Marshall (1995) argue that aphasic comprehension problems can be accounted for as the consequence of a disrupted Case assignment module. But the analysis they present does not support their argument, and it provides a distorted view of grammatically based analyses of brain-language relations.
Noël et al. (Noël, M.-P., Fias, W., Brysbaert, M., 1997. About the influence of the presentation format on arithmetical-fact retrieval processes. Cognition 63, 335-374) examined the simple-multiplication errors of 24 Dutch- and 24 French-speaking adults for evidence that number reading interferes with language-specific, number-fact retrieval processes. They concluded that arithmetic memory is not influenced by reading-based interference and is based on a notation and language-independent mental representation. Alternative analyses of their error data, however, provide strong evidence that arithmetic performance is subject to reading-based interference and provide some support for the language-specificity of number-fact memory.
In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
Campbell (1998) has questioned the conclusion of Noël et al. (1997) and has argued that alternative analyses of their data provide strong evidence that arithmetic performance is subject to reading-based interference and provide some support for the language-specificity of number-fact memory. We consider that Campbell reached conclusions different from those we had obtained because (1) he performed his analyses on a different data set (i.e. including also the table-unrelated errors), (2) he gave a double weight to the naming errors and (3) he multiplied the analyses without correcting the corresponding P values. We thus consider that there exist interactions between language and performance in simple multiplication tasks, but that the current data can easily be explained without postulating that such interactions operate at the level of the retrieval stage. In other words, we consider that there are no definitive arguments, as yet, in favour of the hypothesis of modality-specific arithmetical-fact networks.
A great deal of psycholinguistic research has focused on the question of how adults interpret language in real time. This work has revealed a complex and interactive language processing system capable of rapidly coordinating linguistic properties of the message with information from the context or situation (e.g. Altmann & Steedman, 1988; Britt, 1994; Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995; Trueswell & Tanenhaus, 1991). In the study of language acquisition, however, surprisingly little is known about how children process language in real time and whether they coordinate multiple sources of information during interpretation. The lack of child research is due in part to the fact that most existing techniques for studying language processing have relied upon the skill of reading, an ability that young children do not have or are only beginning to acquire. We present here results from a new method for studying children's moment-by-moment language processing abilities, in which a head-mounted eye-tracking system was used to monitor eye movements as participants responded to spoken instructions. The results revealed systematic differences in how children and adults process spoken language: five-year-olds did not take into account relevant discourse/pragmatic principles when resolving temporary syntactic ambiguities, and showed little or no ability to revise initial parsing commitments. Adults showed sensitivity to these discourse constraints at the earliest possible stages of processing, and were capable of revising incorrect parsing commitments. Implications for current models of sentence processing are discussed.
Recent evidence shows that children can use cross-situational statistics to learn new object labels under referential ambiguity (e.g., Smith & Yu, 2008). Such evidence has been interpreted as support for proposals that statistical information about word-referent co-occurrence plays a powerful role in word learning. But object labels represent only a fraction of the vocabulary children acquire, and arguably represent the simplest case of word learning based on observations of world scenes. Here we extended the study of cross-situational word learning to a new segment of the vocabulary, action verbs, to permit a stronger test of the role of statistical information in word learning. In two experiments, on each trial 2.5-year-olds encountered two novel intransitive (e.g., "She's pimming!"; Experiment 1) or transitive verbs (e.g., "She's pimming her toy!"; Experiment 2) while viewing two action events. The consistency with which each verb accompanied each action provided the only source of information about the intended referent of each verb. The 2.5-year-olds used cross-situational consistency in verb learning, but also showed significant limits on their ability to do so as the sentences and scenes became slightly more complex. These findings help to define the role of cross-situational observation in word learning.
In his paper "Do young children have adult syntactic competence?" Tomasello (Cognition 74 (2000) 209) interprets young children's conservatism in language production as evidence that early language use, and verb use in particular, are based entirely on concrete lexical representations, showing no evidence of abstract syntactic categories such as "verb" or "transitive sentence". In this reply, I argue that Tomasello's interpretation depends on three questionable premises: (a) that anyone with a robust grammatical category of verbs would use new verbs in unattested sentence constructions; (b) that there are no reasons other than lack of syntactic competence for lexical effects in language use; and (c) that children always interpret a new verb presented in the context of an action on an object as a causal action verb, and therefore as one they should use transitively. I review evidence against all of these assumptions. Tomasello's data, among others', show that children indeed learn item-specific facts about verbs and other lexical items - as they must, to become competent speakers of their native language. However, other data suggest that more abstract descriptions of linguistic input also play a role in early language use. To achieve a complete picture of how children learn their native languages, we must explore the interactions of lexical and more abstract syntactic knowledge in language acquisition.
Caramazza and Costa (2000) (Cognition 75, B51-B64) report three picture-word interference experiments testing the response set mechanism of the WEAVER++ model of spoken word production. They argue that their findings are problematic for WEAVER++ and that the model's architecture needs to be changed. I show that there is no need to fundamentally modify the model. Instead, the findings of Caramazza and Costa, and all previous findings, are explained by assuming that only a limited number of responses can be kept in short-term memory and that memory improves with response repetition.
Six unsuccessful attempts at replicating a key finding in the linguistic relativity literature [Boroditsky, L. (2001). Does language shape thought?: Mandarin and English speakers' conceptions of time. Cognitive Psychology, 43, 1-22] are reported. In addition to these empirical issues in replicating the original finding, theoretical issues present in the original report are discussed. In sum, we conclude that Boroditsky (2001) provides no support for the Whorfian hypothesis.
English uses horizontal spatial metaphors to express time (e.g., the good days ahead of us). Chinese also uses vertical metaphors (e.g., 'the month above' to mean last month). Do Chinese speakers, then, think about time in a different way than English speakers? Boroditsky [Boroditsky, L. (2001). Does language shape thought? Mandarin and English speakers' conceptions of time. Cognitive Psychology, 43(1), 1-22] claimed that they do, and went on to conclude that 'language is a powerful tool in shaping habitual thought about abstract domains' (such as time). By estimating the frequency of usage, we found that Chinese speakers actually use horizontal spatial metaphors more often than vertical metaphors. This offered no logical ground for Boroditsky's claim. We were also unable to replicate her experiments in four different attempts. We conclude that Chinese speakers do not think about time in a different way than English speakers just because Chinese also uses vertical spatial metaphors to express time.
In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim "show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing" (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
Do different L1 (first language) writing systems differentially affect word identification in English as a second language (ESL)? Wang, Koda, and Perfetti [Cognition 87 (2003) 129] answered yes by examining Chinese students with a logographic L1 background and Korean students with an alphabetic L1 background for their phonological and orthographic processing skills on English word identification. Such a conclusion is premature, however. We propose that the L1 phonological system (rather than the L1 writing system) of the learner largely accounts for cognitive processes in learning to read a second language (L2).
Recently, [Kunde, W., Kiesel, A., & Hoffmann, J. (2003). Conscious control over the content of unconscious cognition. Cognition, 88, 223-242] used a masked priming paradigm to argue that neither the 'elaborate processing' nor the 'evolving automaticity' view can account for the processing of unconscious numerical stimuli. In our Experiment 1 we replicated Kunde et al.'s (2003) Experiment 4 and show that, with a less demanding mask than that used by Kunde et al., 'elaborate processing' can explain priming results, provided that side conditions trigger elaborate processing of unconscious stimuli. The second experiment further explores this influence of the masks by increasing the relevance of the symbols of which the mask is composed. The results show that an increase in the relevance of the mask is accompanied by a decrease in the priming effect, though there was no significant change in conscious awareness of the prime.
In this paper we question the theoretical tenability of Hertwig, Benz, and Krauss's (2008) (HBK) argument that responses commonly taken as manifestations of the conjunction fallacy should instead be considered as reflecting "reasonable pragmatic and semantic inferences" because the meaning of and does not always coincide with that of the logical operator ∧. We also question the relevance of the experimental evidence that HBK provide in support of their argument, as well as their account of the pertinent literature. Finally, we report two novel experiments in which we employed HBK's procedure to control for the interpretation of and. The results obtained overtly contradict HBK's data and claims. We conclude with a discussion of the alleged feebleness of the conjunction fallacy, and suggest directions that future research on this topic might pursue.
DeCaro et al. [DeCaro, M. S., Thomas, R. D., & Beilock, S. L. (2008). Individual differences in category learning: Sometimes less working memory capacity is better than more. Cognition, 107(1), 284-294] explored how individual differences in working memory capacity differentially mediate the learning of distinct category structures. Specifically, their results showed that greater working memory capacity facilitates the learning of novel category structures that are verbalisable and discoverable through logical reasoning processes. Conversely, however, greater working memory was shown to impede the learning of novel category structures thought to be non-verbalisable, inaccessible to conscious reasoning and discoverable only through implicit (procedural) learning of appropriate stimulus-category responses. The present paper calls into question the specific nature of the category learning tasks used, in particular their ability to discriminate between different modes of category learning.
de Hevia and Spelke [de Hevia, M. D., & Spelke, E. S. (2009). Spontaneous mapping of number and space in adults and young children. Cognition, 110, 198-207] investigated the mapping of number onto space. To this end, they introduced a non-symbolic flanker task, in which subjects have to bisect a line that is flanked by a 2-dot array and a 9-dot array. Similar to the symbolic line bisection task, a bias towards the larger numerosity was observed. We re-investigated these results both by creating new flanker stimuli that controlled for different (non-numerical) stimulus properties and by developing a new measurement tool. We demonstrate that the bisection bias was caused by the larger area subtended by the 9-dot array compared to the 2-dot array, not by numerosity. Our study thus places constraints on the interpretation of de Hevia and Spelke's results. The role of visual cues in numerosity processing in general is discussed.
In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye movement control in these tasks. The present comment argues that the gaze step distribution is predictable based on the fact that the eyes alternate between a fixation state in which gaze is steady and a saccade state in which gaze position changes rapidly. By fitting a simple mixture model to Stephen and Mirman's gaze step data we reveal a fixation distribution and a saccade distribution. This mixture model captures the shape of the gaze step distribution in detail, unlike the lognormal model, and provides a better quantitative fit to the data. We conclude that the gaze step distribution does not directly suggest processing interaction, and we emphasize some important limits on the utility of fitting theoretical distributions to data.
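The two-state logic of the abstract above, in which small within-fixation drift and large saccadic jumps come from distinct distributions, can be illustrated with a minimal EM sketch. This is not the authors' model or data: the step magnitudes, component shapes (Gaussian), and all parameters below are invented for illustration only.

```python
import math
import random

random.seed(0)

# Synthetic "gaze step" magnitudes in pixels: 90% of samples from a
# fixation state (tiny drift), 10% from a saccade state (large jumps).
# These generating parameters are hypothetical, chosen only to show the idea.
steps = [abs(random.gauss(1.0, 0.5)) for _ in range(900)] + \
        [random.gauss(80.0, 20.0) for _ in range(100)]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mean_sd(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, max(math.sqrt(v), 1e-3)

def fit_two_state_mixture(data, iters=100):
    """EM for a two-component Gaussian mixture over step magnitudes."""
    # Crude initialisation: split the data at its overall mean.
    m = sum(data) / len(data)
    low = [x for x in data if x < m] or [m]
    high = [x for x in data if x >= m] or [m]
    mu0, sd0 = mean_sd(low)
    mu1, sd1 = mean_sd(high)
    mu, sd, w = [mu0, mu1], [sd0, sd1], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and standard deviations.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-3)
    return w, mu, sd

w, mu, sd = fit_two_state_mixture(steps)
print(f"fixation component: weight={w[0]:.2f}, mean step={mu[0]:.1f}px")
print(f"saccade component:  weight={w[1]:.2f}, mean step={mu[1]:.1f}px")
```

Because the two generating distributions barely overlap, EM recovers a small-step component with weight near 0.9 and a large-step component near 0.1, mirroring how a simple mixture can reproduce a heavy-tailed aggregate distribution without invoking interactive processes.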