ABSTRACT: Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual (and not just conceptual) processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
ABSTRACT: The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure. (Download: http://is.gd/tics_arbicosys)
ABSTRACT: According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models.
ABSTRACT: Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Developmental Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades vocabularies of even the most "arbitrary" spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.
PLoS ONE 09/2015; 10(9). DOI:10.1371/journal.pone.0137147
ABSTRACT: Visual imagery and the making of visual judgments involve activation of cortical regions that underlie visual perception (e.g., reporting that taxicabs are yellow recruits color-sensitive regions of cortex; Simmons et al., 2007). Such results, however, leave open the critical question of whether perceptual representations are constitutive of visual knowledge (Barsalou, Simmons, Barbey, & Wilson, 2003; Mahon & Caramazza, 2008). We report evidence that visual interference disrupts the activation of visual, and only visual, knowledge. Recognizing an upright object next to a rotated picture of the same object is known to be aided by cueing: hearing word cues that match the subsequently presented pictures (e.g., hearing "alligator" prior to seeing pictures of alligators) improves performance, whereas hearing invalid cues (e.g., hearing "alligator" prior to seeing pictures of dogs) impairs it. We show that this cueing effect can be reduced by 46% by presenting a visual mask during or after the auditory word cue. The mask did not affect performance on no-cue trials, showing that the visual interference disrupts the knowledge activated by the word. In subsequent studies we show that the same type of visual interference affects knowledge probed by verbal propositions. For example, hearing the word "table" while viewing visual noise patterns made participants 1.4 times more likely to make an error in affirming the visual property that tables have flat surfaces, but not the more general (and equally difficult) property that tables are furniture. These results provide a convincing resolution of a longstanding debate in cognitive psychology and neuroscience about the format of visual knowledge: although much of our knowledge abstracts away from perceptual details, knowledge of what things look like appears to be represented in a visual format. Meeting abstract presented at VSS 2015.
Journal of Vision 09/2015; 15(12):10. DOI:10.1167/15.12.10
ABSTRACT: Can what we know change what we see? Does language affect cognition and perception? The last few years have seen increased attention to these seemingly disparate questions, but with little theoretical advance. We argue that substantial clarity can be gained by considering these questions through the lens of predictive processing, a framework in which mental representations—from the perceptual to the cognitive—reflect an interplay between downward-flowing predictions and upward-flowing sensory signals. This framework provides a parsimonious account of how (and when) what we know ought to change what we see and helps us understand how a putatively high-level trait such as language can impact putatively low-level processes such as perception. Within this framework, language begins to take on a surprisingly central role in cognition by providing a uniquely focused and flexible means of constructing predictions against which sensory signals can be evaluated. Predictive processing thus provides a plausible mechanism for many of the reported effects of language on perception, thought, and action, and new insights on how and when speakers of different languages construct the same “reality” in alternate ways.
Current Directions in Psychological Science 08/2015; 24(4):279-284. DOI:10.1177/0963721415570732
ABSTRACT: Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative 'vocal' charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic 'meaning template' we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
Royal Society Open Science 08/2015; 2(8). DOI:10.1098/rsos.150152
ABSTRACT: My reply to Macpherson begins by addressing whether it is effects of cognition on early vision or perceptual performance that I am interested in. I proceed to address Macpherson’s comments on evidence from cross-modal effects, interpretations of linguistic effects on image detection, evidence from illusions, and the usefulness of predictive coding for understanding cognitive penetration. By stressing the interactive and distributed nature of neural processing, I am committing to a collapse between perception and cognition. Following such a collapse, the very question of whether cognition affects perception becomes ill-posed, but this may be for the best.
ABSTRACT: From an early age, people exhibit strong links between certain visual (e.g., size) and acoustic (e.g., duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch (higher pitch for greater magnitude), which counters the hypothesized innate size "frequency code" but fits with Mandarin language and culture. Thus, our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence.
Proceedings of The 37th annual meeting of the Cognitive Science Society (CogSci 2015); 01/2015
ABSTRACT: Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Developmental Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought.
Proceedings of The 37th annual meeting of the Cognitive Science Society (CogSci 2015); 01/2015
ABSTRACT: Human concepts differ in their dimensionality. Some, like green-things, require representing one dimension while abstracting over many others. Others, like bird, have higher dimensionality due to numerous category-relevant properties (feathers, two legs). Converging evidence points to the importance of verbal labels for forming low-dimensional categories. We examined the role of verbal labels in categorization by (1) using transcranial direct current stimulation over Wernicke's area and (2) providing explicit verbal labels during a category learning task. We trained participants on a novel perceptual categorization task in which categories could be distinguished by either a uni- or bi-dimensional criterion. Cathodal stimulation over Wernicke's area reduced reliance on single-dimensional solutions, while presenting informationally redundant novel labels reduced reliance on the dimension that is normally incidental in the real world. These results provide further evidence that implicit and explicit verbal labels support the process of human categorization.
Brain and Language 06/2014; 135C:66-72. DOI:10.1016/j.bandl.2014.05.005
ABSTRACT: On traditional accounts, word meanings are entries in a mental lexicon. Nonsense words lack such entries, and are therefore meaningless. Here, we show that under some circumstances nonsense words function indistinguishably from conventional words. The 'nonsense' words led participants to select systematically different clusters of adjectives and were reliably matched to different species of alien creatures (e.g., 'crelches' were pointy and narrow and 'fooves' were large and fat). In a categorization task in which participants learned to group two species of aliens primarily on the basis of roundness/pointiness, these novel labels facilitated performance as much as conventional words (e.g., ). The results expand the scope of research on sound symbolism and support a non-traditional view of word meaning according to which words do not have meanings by virtue of a conventionalized form-meaning pairing. Rather, the 'meaning' of a word is the effect that the word form has on the user's mental activity.
Language and Cognition 06/2014; 7(02):1-27. DOI:10.1017/langcog.2014.21
ABSTRACT: We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a "+" shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., "above" and "left"). In easier-to-name trials, both crosses were rotated 45° to form an "×" shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., "above" or "left"). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.
PLoS ONE 05/2014; 9(5):e98604. DOI:10.1371/journal.pone.0098604
ABSTRACT: In any act of communication, a sender's signal influences the behavior of the receiver. In linguistic communication, signals have a productive capacity for communicating about new topics in new ways, allowing senders to influence receivers in potentially limitless ways. The central question addressed by this work is: What kinds of signals are words? By examining some of the design features of words as distinct from other cues, we can better understand how linguistic signals shape human behavior, shedding light on possible causes for the evolutionary divergence between language and nonverbal communication systems…
ABSTRACT: In recent years, a number of studies have looked at the properties of emerging structure in novel communication systems. These studies use different paradigms and experimental designs (see Scott-Phillips & Kirby, 2010), and have made important contributions to identifying and analyzing the factors responsible for communicative change. One emphasis in these studies has been the emergence of combinatorial structure. For example, using entropy measures, Verhoef et al. (2013) showed that combinatoriality emerges in iterated learning of an artificial whistled language. In the current paper, we explore a large dataset of communicative symbols using an array of methods that reveal different structural and statistical properties of the communication system. We find that these measures are sensitive to the environment in which the communication system is used. This lends further support to the Linguistic Niche Hypothesis, which holds that language structure depends on the social environment in which the language is learned (Lupyan & Dale, 2010)…
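The entropy measures mentioned above can be illustrated with a small sketch. This is not the specific formulation used by Verhoef et al. (2013) or in the abstract's own analyses; it is a generic Shannon-entropy measure over the elementary units of a signal set, with the function name and example data invented for illustration. The intuition: as a vocabulary becomes combinatorial, many signals are built by recombining a small inventory of units, so the entropy of the pooled unit distribution drops relative to a vocabulary of holistically distinct signals.

```python
from collections import Counter
from math import log2

def shannon_entropy(signals):
    """Shannon entropy (in bits) of the distribution of elementary
    units pooled across a set of signals. Lower entropy for a
    same-sized vocabulary suggests reuse of a small unit inventory,
    i.e., combinatorial structure."""
    units = [u for signal in signals for u in signal]
    counts = Counter(units)
    total = len(units)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical transcriptions: each letter stands for one discrete
# signal element (e.g., a pitch movement in a whistled signal).
holistic      = ["abc", "def", "ghi"]   # nine distinct units, used once each
combinatorial = ["aba", "bab", "abb"]   # two units, recombined

print(shannon_entropy(holistic))        # ~3.17 bits
print(shannon_entropy(combinatorial))   # ~0.99 bits
```

One design choice worth noting: pooling units across the whole signal set ignores unit *order*; measures sensitive to sequential structure (e.g., conditional entropy over unit bigrams) would capture a complementary aspect of combinatoriality.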