ABSTRACT: Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether, these findings show that iconicity is a graded quality that pervades vocabularies of even the most "arbitrary" spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.
PLoS ONE 09/2015; 10(9). DOI:10.1371/journal.pone.0137147
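The negative relationship between iconicity ratings and age of acquisition reported above amounts to a negative Pearson correlation across words. A minimal sketch of that computation, using invented ratings purely for illustration (the variable values are not from the study):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: earlier-learned words rated as more iconic.
age_of_acquisition = [16, 18, 20, 24, 28, 30]        # months
iconicity_rating   = [4.5, 4.1, 3.8, 3.0, 2.6, 2.2]  # e.g. a 1-5 scale

r = pearson_r(age_of_acquisition, iconicity_rating)
print(round(r, 2))  # strongly negative, close to -1 for this toy data
```

In the published analyses such a correlation would be computed over hundreds of words with AoA norms, not six toy points; the sketch only shows the shape of the measure.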
ABSTRACT: Visual imagery and the making of visual judgments involve activation of cortical regions that underlie visual perception (e.g., reporting that taxicabs are yellow recruits color sensitive regions of cortex, Simmons et al., 2007). Such results, however, leave open the critical question of whether perceptual representations are constitutive of visual knowledge (Barsalou, Simmons, Barbey, & Wilson, 2003; Mahon & Caramazza, 2008). We report evidence that visual interference disrupts the activation of visual and only visual knowledge. Recognizing an upright object next to a rotated picture of the same object is known to be aided by cueing; hearing word cues that match the subsequently presented pictures (e.g., hearing "alligator" prior to seeing pictures of alligators) improves performance, whereas hearing invalid cues (e.g., hearing "alligator" prior to seeing pictures of dogs) impairs it. We show that we can reduce this cueing effect by 46% by presenting a visual mask during or after the auditory word cue. The mask did not affect performance on no-cue trials, showing that the visual interference disrupts the knowledge activated by the word. In subsequent studies we show that the same type of visual interference affects knowledge probed by verbal propositions. For example, hearing the word "table" while viewing visual noise patterns made participants 1.4 times more likely to make an error in affirming the visual property that tables have flat surfaces, but not the more general (and equally difficult) property that tables are furniture. These results provide a convincing resolution of a longstanding debate in cognitive psychology and neuroscience about the format of visual knowledge. Although much of our knowledge abstracts away from perceptual details, knowledge of what things look like appears to be represented in a visual format. Meeting abstract presented at VSS 2015.
Journal of Vision 09/2015; 15(12):10. DOI:10.1167/15.12.10
ABSTRACT: Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative 'vocal' charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic 'meaning template' we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
Royal Society Open Science 08/2015; 2(8). DOI:10.1098/rsos.150152
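Scoring a vocalization by "how close it was to an iconic meaning template" can be illustrated as a distance in acoustic feature space. The feature set, the normalization, and all values below are invented for illustration; the published analysis derived its templates from the production data rather than stipulating them:

```python
from math import sqrt

def template_distance(vocalization, template):
    """Euclidean distance between a vocalization's acoustic feature vector
    and a meaning template's feature vector (smaller = closer match)."""
    return sqrt(sum((v - t) ** 2 for v, t in zip(vocalization, template)))

# Hypothetical normalized features: [duration, pitch, loudness]
template_big = [0.9, 0.2, 0.8]   # "big": long, low, loud
produced     = [0.8, 0.3, 0.7]   # a vocalization intended to mean "big"
rival        = [0.2, 0.9, 0.3]   # a vocalization closer to a "small" profile

d_match = template_distance(produced, template_big)
d_rival = template_distance(rival, template_big)
print(d_match < d_rival)  # the intended vocalization sits nearer the template
```

Under this kind of scheme, listeners' guessing accuracy can then be regressed on the distance score, which is the relationship the abstract reports.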
ABSTRACT: My reply to Macpherson begins by addressing whether it is effects of cognition on early vision or perceptual performance that I am interested in. I proceed to address Macpherson's comments on evidence from cross-modal effects, interpretations of linguistic effects on image detection, evidence from illusions, and the usefulness of predictive coding for understanding cognitive penetration. By stressing the interactive and distributed nature of neural processing, I am committing to a collapse between perception and cognition. Following such a collapse, the very question of whether cognition affects perception becomes ill-posed, but this may be for the best.
ABSTRACT: Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found that English, contrary to common belief, exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought.
Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015); 01/2015
ABSTRACT: From an early age, people exhibit strong links between certain visual (e.g. size) and acoustic (e.g. duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch (higher pitch for greater magnitude), which counters the hypothesized, innate size "frequency code", but fits with Mandarin language and culture. Thus, our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence.
Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015); 01/2015
ABSTRACT: Human concepts differ in their dimensionality. Some, like green-things, require representing one dimension while abstracting over many others. Others, like bird, have higher dimensionality due to numerous category-relevant properties (feathers, two-legs). Converging evidence points to the importance of verbal labels for forming low-dimensional categories. We examined the role of verbal labels in categorization by (1) using transcranial direct current stimulation over Wernicke's area and (2) providing explicit verbal labels during a category learning task. We trained participants on a novel perceptual categorization task in which categories could be distinguished by either a uni- or bi-dimensional criterion. Cathodal stimulation over Wernicke's area reduced reliance on single-dimensional solutions, while presenting informationally redundant novel labels reduced reliance on the dimension that is normally incidental in the real world. These results provide further evidence that implicit and explicit verbal labels support the process of human categorization.
Brain and Language 06/2014; 135C:66-72. DOI:10.1016/j.bandl.2014.05.005
ABSTRACT: On traditional accounts, word meanings are entries in a mental lexicon. Nonsense words lack such entries, and are therefore meaningless. Here, we show that under some circumstances nonsense words function indistinguishably from conventional words. The 'nonsense' words led participants to select systematically different clusters of adjectives and were reliably matched to different species of alien creatures (e.g., 'crelches' were pointy and narrow and 'fooves' were large and fat). In a categorization task in which participants learned to group two species of aliens primarily on the basis of roundness/pointiness, these novel labels facilitated performance as much as conventional words. The results expand the scope of research on sound symbolism and support a non-traditional view of word meaning according to which words do not have meanings by virtue of a conventionalized form-meaning pairing. Rather, the 'meaning' of a word is the effect that the word form has on the user's mental activity.
Language and Cognition 06/2014; 7(02):1-27. DOI:10.1017/langcog.2014.21
ABSTRACT: We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a "+" shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., "above" and "left"). In easier-to-name trials, both crosses were rotated 45° to form an "×" shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., "above" or "left"). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.
PLoS ONE 05/2014; 9(5):e98604. DOI:10.1371/journal.pone.0098604
ABSTRACT: In any act of communication, a sender's signal influences the behavior of the receiver. In linguistic communication, signals have a productive capacity for communicating about new topics in new ways, allowing senders to influence receivers in potentially limitless ways. The central question addressed by this work is: What kinds of signals are words? By examining some of the design features of words as distinct from other cues, we can better understand how linguistic signals shape human behavior, shedding light on possible causes for the evolutionary divergence between language and nonverbal communication systems…
ABSTRACT: In recent years, a number of studies have looked at the properties of emerging structure in novel communication systems. These studies use different paradigms and experimental designs (see Scott-Phillips & Kirby, 2010), and have made important contributions to identifying and analyzing the factors responsible for communicative change. One emphasis in these studies has been the emergence of combinatorial structure. For example, using entropy measures, Verhoef et al. (2013) showed that combinatoriality emerges in iterated learning of an artificial whistled language. In the current paper, we explore a large dataset of communicative symbols using an array of methods that reveal different structural and statistical properties of the communication system. We find that these measures are sensitive to the environment that the communication system is being used in. This lends further support to the Linguistic Niche Hypothesis, which supposes that language structure is dependent on the social environment in which it is learned (Lupyan & Dale, 2010)…
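The entropy measures mentioned above rest on Shannon entropy over a symbol distribution: heavy reuse of a few combinatorial units yields low entropy, while a holistic system where every signal is distinct yields high entropy. A minimal sketch (the example sequences are invented; the studies applied such measures to recorded communicative signals, not character strings):

```python
from collections import Counter
from math import log2

def symbol_entropy(sequence):
    """Shannon entropy, in bits per symbol, of a sequence of symbols."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

combinatorial = list("ababababab")  # two units recombined throughout
holistic      = list("abcdefghij")  # every symbol used exactly once

print(symbol_entropy(combinatorial))  # 1.0 bit/symbol
print(symbol_entropy(holistic))       # ~3.32 bits/symbol
```

Comparing such values across communication systems used in different environments is one way the sensitivity to environment described in the abstract could be quantified.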
ABSTRACT: Evidence suggests that signed languages emerge from communication through spontaneously created, motivated gestures. Yet it is often argued that the vocal modality does not afford this same opportunity, and thus it is reasoned that language must have evolved from manual gestures. This paper presents findings from an iterative vocal charades game that shows that under some circumstances, nonverbal vocalizations can convey sufficiently precise information without prior conventionalization to ground the emergence of a spoken communication system.
ABSTRACT: This issue of the AMD Newsletter is very special: we are celebrating its 10 years of biannual publication! It has progressively become a place of lively and inspiring scientific dialogues spanning an incredible network of topics, with the contributions of many key actors of computational developmental sciences. In this volume, a new dialogue has been initiated by Katerina Pastra: "Autonomous Acquisition of Sensorimotor Experiences: Any Role for Language?". This new dialogue formulates a bold hypothesis: language as a communication system may have evolved as a byproduct of language as a tool for (self-)organizing conceptual structures. A number of replies have been generated by R. Dale, K. Rohlfing et al., G. Lupyan, C. Silvey, K. Fischer, and E. Dupoux. The replies are synthesised in K. Pastra's "Beware of the...Label" opinion piece.
ABSTRACT: It is shown that educated adults routinely make errors in placing stimuli into familiar, well-defined categories such as triangle and odd number. Scalene triangles are often rejected as instances of triangles, and 798 is categorized by some as an odd number. These patterns are observed both in timed and untimed tasks, hold for people who can fully express the necessary and sufficient conditions for category membership, and for individuals with varying levels of education. A sizeable minority of people believe that 400 is more even than 798 and that an equilateral triangle is the most "trianglest" of triangles. Such beliefs predict how people instantiate other categories with necessary and sufficient conditions, e.g., grandmother. I argue that the distributed and graded nature of mental representations means that human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input. This input-sensitivity is critical to obtaining the kind of cognitive flexibility at which humans excel, but comes at the cost of generally poor abilities to perform context-free computations. If human algorithms cannot be trusted to produce unfuzzy representations of odd numbers, triangles, and grandmothers, then the idea that they can be trusted to do the heavy lifting of moment-to-moment cognition, inherent in the metaphor of mind as digital computer that is still common in cognitive science, needs to be seriously reconsidered.