Gary Lupyan

University of Wisconsin–Madison, Madison, Wisconsin, United States

Publications (68) · 243.51 Total Impact Points

  • Gary Lupyan
    ABSTRACT: The emergence of language, a productive and combinatorial system of communication, has been hailed as one of the major transitions in evolution. By enabling symbolic culture, language allows humans to draw on and expand on the knowledge of their ancestors and peers. A common assumption among linguists and psychologists is that although language is critical to our ability to share our thoughts, it plays a minor role, if any, in generating, controlling, and structuring them. I examine some assumptions that led to this view of language and discuss an alternative according to which normal human cognition is language-augmented cognition. I focus on one of the fundamental design features of language, the use of words as symbolic cues, and argue that language acts as a high-level control system for the mind, allowing individuals to sculpt the mental representations of others as well as their own.
    No preview · Article · Dec 2015
  • Source
    ABSTRACT: Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
    Full-text · Article · Nov 2015 · Psychonomic Bulletin & Review
  • Source
    ABSTRACT: DOWNLOAD: http://is.gd/tics_arbicosys The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure.
    Full-text · Article · Oct 2015 · Trends in Cognitive Sciences
  • Gary Lupyan
    ABSTRACT: According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models.
    No preview · Article · Sep 2015 · Acta Psychologica
  • Source
    Lynn K Perry · Marcus Perlman · Gary Lupyan
    ABSTRACT: Signed languages exhibit iconicity (resemblance between form and meaning) across their vocabulary, and many non-Indo-European spoken languages feature sizable classes of iconic words known as ideophones. In comparison, Indo-European languages like English and Spanish are believed to be arbitrary outside of a small number of onomatopoeic words. In three experiments with English and two with Spanish, we asked native speakers to rate the iconicity of ~600 words from the English and Spanish MacArthur-Bates Communicative Development Inventories. We found that iconicity in the words of both languages varied in a theoretically meaningful way with lexical category. In both languages, adjectives were rated as more iconic than nouns and function words, and corresponding to typological differences between English and Spanish in verb semantics, English verbs were rated as relatively iconic compared to Spanish verbs. We also found that both languages exhibited a negative relationship between iconicity ratings and age of acquisition. Words learned earlier tended to be more iconic, suggesting that iconicity in early vocabulary may aid word learning. Altogether these findings show that iconicity is a graded quality that pervades the vocabularies of even the most "arbitrary" spoken languages. The findings provide compelling evidence that iconicity is an important property of all languages, signed and spoken, including Indo-European languages.
    Full-text · Article · Sep 2015 · PLoS ONE
  • Pierce Edmiston · Gary Lupyan
    ABSTRACT: Visual imagery and the making of visual judgments involve activation of cortical regions that underlie visual perception (e.g., reporting that taxicabs are yellow recruits color-sensitive regions of cortex; Simmons et al., 2007). Such results, however, leave open the critical question of whether perceptual representations are constitutive of visual knowledge (Barsalou, Simmons, Barbey, & Wilson, 2003; Mahon & Caramazza, 2008). We report evidence that visual interference disrupts the activation of visual, and only visual, knowledge. Recognizing an upright object next to a rotated picture of the same object is known to be aided by cueing; hearing word cues that match the subsequently presented pictures (e.g., hearing "alligator" prior to seeing pictures of alligators) improves performance, whereas hearing invalid cues (e.g., hearing "alligator" prior to seeing pictures of dogs) impairs it. We show that we can reduce this cueing effect by 46% by presenting a visual mask during or after the auditory word cue. The mask did not affect performance on no-cue trials, showing that the visual interference disrupts the knowledge activated by the word. In subsequent studies, we show that the same type of visual interference affects knowledge probed by verbal propositions. For example, hearing the word "table" while viewing visual noise patterns made participants 1.4 times more likely to make an error in affirming the visual property that tables have flat surfaces, but not the more general (and equally difficult) property that tables are furniture. These results provide a convincing resolution of a longstanding debate in cognitive psychology and neuroscience about the format of visual knowledge. Although much of our knowledge abstracts away from perceptual details, knowledge of what things look like appears to be represented in a visual format. Meeting abstract presented at VSS 2015.
    No preview · Article · Sep 2015 · Journal of Vision
  • Gary Lupyan · Andy Clark
    ABSTRACT: Can what we know change what we see? Does language affect cognition and perception? The last few years have seen increased attention to these seemingly disparate questions, but with little theoretical advance. We argue that substantial clarity can be gained by considering these questions through the lens of predictive processing, a framework in which mental representations—from the perceptual to the cognitive—reflect an interplay between downward-flowing predictions and upward-flowing sensory signals. This framework provides a parsimonious account of how (and when) what we know ought to change what we see and helps us understand how a putatively high-level trait such as language can impact putatively low-level processes such as perception. Within this framework, language begins to take on a surprisingly central role in cognition by providing a uniquely focused and flexible means of constructing predictions against which sensory signals can be evaluated. Predictive processing thus provides a plausible mechanism for many of the reported effects of language on perception, thought, and action, and new insights on how and when speakers of different languages construct the same “reality” in alternate ways.
    No preview · Article · Aug 2015 · Current Directions in Psychological Science
  • Source
    Marcus Perlman · Rick Dale · Gary Lupyan
    ABSTRACT: Studies of gestural communication systems find that they originate from spontaneously created iconic gestures. Yet, we know little about how people create vocal communication systems, and many have suggested that vocalizations do not afford iconicity beyond trivial instances of onomatopoeia. It is unknown whether people can generate vocal communication systems through a process of iconic creation similar to gestural systems. Here, we examine the creation and development of a rudimentary vocal symbol system in a laboratory setting. Pairs of participants generated novel vocalizations for 18 different meanings in an iterative 'vocal' charades communication game. The communicators quickly converged on stable vocalizations, and naive listeners could correctly infer their meanings in subsequent playback experiments. People's ability to guess the meanings of these novel vocalizations was predicted by how close the vocalization was to an iconic 'meaning template' we derived from the production data. These results strongly suggest that the meaningfulness of these vocalizations derived from iconicity. Our findings illuminate a mechanism by which iconicity can ground the creation of vocal symbols, analogous to the function of iconicity in gestural communication systems.
    Full-text · Article · Aug 2015 · Royal Society Open Science
  • Source
    Gary Lupyan · Benjamin Bergen
    ABSTRACT: Many animals can be trained to perform novel tasks. People, too, can be trained, but sometime in early childhood, people transition from being trainable to something qualitatively more powerful: being programmable. We argue that such programmability constitutes a leap in the way that organisms learn, interact, and transmit knowledge, and that what facilitates or enables this programmability is the learning and use of language. We then examine how language programs the mind and argue that it does so through the manipulation of embodied, sensorimotor representations. The role language plays in controlling mental representations offers important insights for understanding its origin and evolution.
    Full-text · Article · Jul 2015 · Topics in Cognitive Science
  • Gary Lupyan
    ABSTRACT: The goal of perceptual systems is to allow organisms to adaptively respond to ecologically relevant stimuli. Because all perceptual inputs are ambiguous, perception needs to rely on prior knowledge accumulated over evolutionary and developmental time to turn sensory energy into information useful for guiding behavior. It remains controversial whether the guidance of perception extends to cognitive states or is locked up in a “cognitively impenetrable” part of perception. I argue that expectations, knowledge, and task demands can shape perception at multiple levels, leaving no part untouched. The position advocated here is broadly consistent with the notion that perceptual systems strive to minimize prediction error en route to globally optimal solutions (Clark, Behavioral and Brain Sciences 36(3):181–204, 2013). On this view, penetrability should be expected whenever constraining lower-level processes by higher-level knowledge minimizes global prediction error. Just as Fodor feared (e.g., Fodor, Philosophy of Science 51:23–43, 1984; 1988), cognitive penetration of perception threatens theory-neutral observation and the distinction between observation and inference. However, because theories themselves are constrained by the task of minimizing prediction error, theory-laden observation turns out to be superior to theory-free observation in turning sensory energy into useful information.
    No preview · Article · Jul 2015
  • Source
    Bastien Boutonnet · Gary Lupyan
    ABSTRACT: People use language to shape each other's behavior in highly flexible ways. Effects of language are often assumed to be "high-level" in that, whereas language clearly influences reasoning, decision making, and memory, it does not influence low-level visual processes. Here, we test the prediction that words are able to provide top-down guidance at the very earliest stages of visual processing by acting as powerful categorical cues. We investigated whether visual processing of images of familiar animals and artifacts was enhanced after hearing their name (e.g., "dog") compared with hearing an equally familiar and unambiguous nonverbal sound (e.g., a dog bark) in 14 English monolingual speakers. Because the relationship between words and their referents is categorical, we expected words to deploy more effective categorical templates, allowing for more rapid visual recognition. By recording EEGs, we were able to determine whether this label advantage stemmed from changes to early visual processing or later semantic decision processes. The results showed that hearing a word affected early visual processes and that this modulation was specific to the named category. An analysis of ERPs showed that the P1 was larger when people were cued by labels compared with equally informative nonverbal cues, an enhancement occurring within 100 ms of image onset, which also predicted behavioral responses occurring almost 500 ms later. Hearing labels modulated the P1 such that it distinguished between target and nontarget images, showing that words rapidly guide early visual processing.
    Full-text · Article · Jun 2015 · The Journal of Neuroscience
  • Pierce Edmiston · Gary Lupyan
    ABSTRACT: Verbal labels, such as the words "dog" and "guitar," activate conceptual knowledge more effectively than corresponding environmental sounds, such as a dog bark or a guitar strum, even though both are unambiguous cues to the categories of dogs and guitars (Lupyan & Thompson-Schill, 2012). We hypothesize that this advantage of labels emerges because word-forms, unlike other cues, do not vary in a motivated way with their referent. The sound of a guitar cannot help but inform a listener of the type of guitar making it (electric, acoustic, etc.). The word "guitar," on the other hand, can leave the type of guitar unspecified. We argue that as a result, labels gain the ability to cue a more abstract mental representation, promoting efficient processing of category members. In contrast, environmental sounds activate representations that are more tightly linked to the specific cause of the sound. Our results show that upon hearing environmental sounds such as a dog bark or guitar strum, people cannot help but activate a particular instance of a category, in a particular state, at a particular time, as measured by patterns of response times on cue-picture matching tasks (Exps. 1-2) and eye movements in a task where the cues are task-irrelevant (Exp. 3). In comparison, labels activate concepts in a more abstract, decontextualized way, a difference that we argue can be explained by labels acting as "unmotivated cues".
    No preview · Article · Jun 2015 · Cognition
  • Gary Lupyan
    ABSTRACT: My reply to Macpherson begins by addressing whether it is effects of cognition on early vision or perceptual performance that I am interested in. I proceed to address Macpherson’s comments on evidence from cross-modal effects, interpretations of linguistic effects on image detection, evidence from illusions, and the usefulness of predictive coding for understanding cognitive penetration. By stressing the interactive and distributed nature of neural processing, I am committing to a collapse between perception and cognition. Following such a collapse, the very question of whether cognition affects perception becomes ill-posed, but this may be for the best.
    No preview · Article · Jun 2015
  • Source
    Bastien Boutonnet · Gary Lupyan

    Full-text · Conference Paper · May 2015
  • Source
    Marcus Perlman · Jing Z. Paul · Gary Lupyan
    ABSTRACT: From an early age, people exhibit strong links between certain visual (e.g., size) and acoustic (e.g., duration) dimensions. Do people instinctively extend these crossmodal correspondences to vocalization? We examine the ability of congenitally deaf Chinese children and young adults (age M = 12.4 years, SD = 3.7 years) to generate iconic vocalizations to distinguish items with contrasting magnitude (e.g., big vs. small ball). Both deaf and hearing (M = 10.1 years, SD = 0.83 years) participants produced longer, louder vocalizations for greater magnitude items. However, only hearing participants used pitch (higher pitch for greater magnitude), which counters the hypothesized innate size "frequency code" but fits with Mandarin language and culture. Thus our results show that the translation of visible magnitude into the duration and intensity of vocalization transcends auditory experience, whereas the use of pitch appears more malleable to linguistic and cultural influence.
    Full-text · Conference Paper · Jan 2015
  • Source
    Lynn K Perry · Marcus Perlman · Gary Lupyan
    ABSTRACT: Scholars have documented substantial classes of iconic vocabulary in many non-Indo-European languages. In comparison, Indo-European languages like English are assumed to be arbitrary outside of a small number of onomatopoeic words. In three experiments, we asked English speakers to rate the iconicity of words from the MacArthur-Bates Communicative Development Inventory. We found English—contrary to common belief—exhibits iconicity that correlates with age of acquisition and differs across lexical classes. Words judged as most iconic are learned earlier, in accord with findings that iconic words are easier to learn. We also find that adjectives and verbs are more iconic than nouns, supporting the idea that iconicity provides an extra cue in learning more difficult abstract meanings. Our results provide new evidence for a relationship between iconicity and word learning and suggest iconicity may be a more pervasive property of spoken languages than previously thought.
    Full-text · Conference Paper · Jan 2015
  • Lynn K Perry · Gary Lupyan
    ABSTRACT: Human concepts differ in their dimensionality. Some, like green-things, require representing one dimension while abstracting over many others. Others, like bird, have higher dimensionality due to numerous category-relevant properties (feathers, two-legs). Converging evidence points to the importance of verbal labels for forming low-dimensional categories. We examined the role of verbal labels in categorization by (1) using transcranial direct current stimulation over Wernicke's area and (2) providing explicit verbal labels during a category learning task. We trained participants on a novel perceptual categorization task in which categories could be distinguished by either a uni- or bi-dimensional criterion. Cathodal stimulation over Wernicke's area reduced reliance on single-dimensional solutions, while presenting informationally redundant novel labels reduced reliance on the dimension that is normally incidental in the real world. These results provide further evidence that implicit and explicit verbal labels support the process of human categorization.
    No preview · Article · Jun 2014 · Brain and Language
  • Gary Lupyan · Daniel Casasanto
    ABSTRACT: On traditional accounts, word meanings are entries in a mental lexicon. Nonsense words lack such entries, and are therefore meaningless. Here, we show that under some circumstances nonsense words function indistinguishably from conventional words. The ‘nonsense’ words foove and crelch led participants to select systematically different clusters of adjectives and were reliably matched to different species of alien creatures (e.g., ‘crelches’ were pointy and narrow and ‘fooves’ were large and fat). In a categorization task in which participants learned to group two species of aliens primarily on the basis of roundness/pointiness, these novel labels facilitated performance as much as conventional words (e.g., round, pointy). The results expand the scope of research on sound symbolism and support a non-traditional view of word meaning according to which words do not have meanings by virtue of a conventionalized form-meaning pairing. Rather, the ‘meaning’ of a word is the effect that the word form has on the user's mental activity.
    No preview · Article · Jun 2014 · Language and Cognition
  • Source
    ABSTRACT: We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a "+" shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., "above" and "left"). In easier-to-name trials, both crosses were rotated 45° to form an "×" shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., "above" or "left"). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.
    Full-text · Article · May 2014 · PLoS ONE
  • Conference Paper: Words as Unmotivated Cues
    Pierce Edmiston · Gary Lupyan
    ABSTRACT: In any act of communication, a sender's signal influences the behavior of the receiver. In linguistic communication, signals have a productive capacity for communicating about new topics in new ways, allowing senders to influence receivers in potentially limitless ways. The central question addressed by this work is: What kinds of signals are words? By examining some of the design features of words as distinct from other cues, we can better understand how linguistic signals shape human behavior, shedding light on possible causes for the evolutionary divergence between language and nonverbal communication systems…
    No preview · Conference Paper · May 2014

Publication Stats

766 Citations
243.51 Total Impact Points

Institutions

  • 2010-2015
    • University of Wisconsin–Madison
      • Department of Psychology
      Madison, Wisconsin, United States
  • 2009-2010
    • University of Pennsylvania
      • Department of Psychology
      Philadelphia, Pennsylvania, United States
  • 2008
    • Cornell University
      • Department of Psychology
      Ithaca, New York, United States
  • 2002-2008
    • Carnegie Mellon University
      • Department of Psychology
      • Center for the Neural Basis of Cognition
      Pittsburgh, Pennsylvania, United States