Gary Lupyan

University of Wisconsin–Madison, Madison, Wisconsin, United States

Publications (42) · 177.82 Total Impact Points

  • Lynn K Perry, Gary Lupyan
    ABSTRACT: Human concepts differ in their dimensionality. Some, like green-things, require representing one dimension while abstracting over many others. Others, like bird, have higher dimensionality due to numerous category-relevant properties (feathers, two-legs). Converging evidence points to the importance of verbal labels for forming low-dimensional categories. We examined the role of verbal labels in categorization by (1) using transcranial direct current stimulation over Wernicke's area and (2) providing explicit verbal labels during a category learning task. We trained participants on a novel perceptual categorization task in which categories could be distinguished by either a uni- or bi-dimensional criterion. Cathodal stimulation over Wernicke's area reduced reliance on single-dimensional solutions, while presenting informationally redundant novel labels reduced reliance on the dimension that is normally incidental in the real world. These results provide further evidence that implicit and explicit verbal labels support the process of human categorization.
    Brain and Language 06/2014; 135:66-72. · 3.39 Impact Factor
  • Frontiers in Psychology 01/2014; 5:187. · 2.80 Impact Factor
  • Marcus Perlman, Rick Dale, Gary Lupyan
    ABSTRACT: Evidence suggests that signed languages emerge from communication through spontaneously created, motivated gestures. Yet it is often argued that the vocal modality does not afford this same opportunity, and thus it is reasoned that language must have evolved from manual gestures. This paper presents findings from an iterative vocal charades game showing that, under some circumstances, nonverbal vocalizations can convey sufficiently precise information without prior conventionalization to ground the emergence of a spoken communication system.
    EVOLANG10; 01/2014
  • ABSTRACT: We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a "+" shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., "above" and "left"). In easier-to-name trials, both crosses were rotated 45° to form an "×" shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., "above" or "left"). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.
    PLoS ONE 01/2014; 9(5):e98604. · 3.73 Impact Factor
  • Gary Lupyan
    ABSTRACT: It is shown that educated adults routinely make errors in placing stimuli into familiar, well-defined categories such as triangle and odd number. Scalene triangles are often rejected as instances of triangles, and 798 is categorized by some as an odd number. These patterns are observed in both timed and untimed tasks, hold for people who can fully express the necessary and sufficient conditions for category membership, and hold for individuals with varying levels of education. A sizeable minority of people believe that 400 is more even than 798 and that an equilateral triangle is the most "trianglest" of triangles. Such beliefs predict how people instantiate other categories with necessary and sufficient conditions, e.g., grandmother. I argue that the distributed and graded nature of mental representations means that human algorithms, unlike conventional computer algorithms, only approximate rule-based classification and never fully abstract from the specifics of the input. This input-sensitivity is critical to obtaining the kind of cognitive flexibility at which humans excel, but comes at the cost of generally poor abilities to perform context-free computations. If human algorithms cannot be trusted to produce unfuzzy representations of odd numbers, triangles, and grandmothers, then the idea that they can be trusted to do the heavy lifting of moment-to-moment cognition, an idea inherent in the metaphor of mind as digital computer still common in cognitive science, needs to be seriously reconsidered.
    Cognition 12/2013; 129(3):615-36. · 3.16 Impact Factor
  • Gary Lupyan, Emily J Ward
    ABSTRACT: Linguistic labels (e.g., "chair") seem to activate visual properties of the objects to which they refer. Here we investigated whether language-based activation of visual representations can affect the ability to simply detect the presence of an object. We used continuous flash suppression to suppress visual awareness of familiar objects while they were continuously presented to one eye. Participants made simple detection decisions, indicating whether they saw any image. Hearing a verbal label before the simple detection task changed performance relative to an uninformative-cue baseline: valid labels improved performance relative to no-label baseline trials, while invalid labels decreased performance. Labels affected both sensitivity (d') and response times (see the note on d' after this entry). In addition, we found that the effectiveness of labels varied predictably as a function of the match between the shape of the stimulus and the shape denoted by the label. Together, the findings suggest that facilitated detection of invisible objects due to language occurs at a perceptual rather than semantic locus. We hypothesize that when information associated with verbal labels matches stimulus-driven activity, language can provide a boost to perception, propelling an otherwise invisible image into awareness.
    Proceedings of the National Academy of Sciences 08/2013; · 9.74 Impact Factor
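    A note on the sensitivity measure: the d' reported above (and in the 2010 PLoS ONE study below) is the standard signal detection statistic, computed from the hit rate H and the false-alarm rate F. This is textbook signal detection theory, not a detail specific to these papers:

        d' = \Phi^{-1}(H) - \Phi^{-1}(F)

    where \Phi^{-1} is the inverse of the standard normal cumulative distribution function. Larger d' indicates better discrimination of signal from noise, independently of response bias.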
  • Lynn K Perry, Gary Lupyan
    Frontiers in Behavioral Neuroscience 01/2013; 7:122. · 4.76 Impact Factor
  • Gary Lupyan, Daniel Mirman
    ABSTRACT: In addition to its use in communication, language appears to have a variety of extra-communicative functions; disrupting language disrupts performance in seemingly non-linguistic tasks. Previous work has specifically linked linguistic impairments to categorization impairments. Here, we systematically tested this link by comparing categorization performance in a group of 12 participants with aphasia and 12 age- and education-matched control participants. Participants were asked to choose all of the objects that fit a specified criterion from sets of 20 pictured objects. The criterion was either "high-dimensional" (i.e., the objects shared many features, such as "farm animals") or "low-dimensional" (i.e., the objects shared one or a few features, such as "things that are green"). Participants with aphasia were selectively impaired on low-dimensional categorization. This selective impairment was correlated with the severity of their naming impairment and not with the overall severity of their aphasia, semantic impairment, lesion size, or lesion location. These results indicate that linguistic impairment impacts categorization specifically when that categorization requires focusing attention and isolating individual features - a task that requires a larger degree of cognitive control than high-dimensional categorization. The results offer some support for the hypothesis that language supports cognitive functioning, particularly the ability to select task-relevant stimulus features.
    Cortex 06/2012; · 6.16 Impact Factor
  • ABSTRACT: Humans have an unparalleled ability to represent objects as members of multiple categories. A given object, such as a pillow, may be represented, depending on current task demands, as an instance of something that is soft, as something that contains feathers, as something that is found in bedrooms, or as something that is larger than a toaster. This type of processing requires the individual to dynamically highlight task-relevant properties and abstract over or suppress object properties that, although salient, are not relevant to the task at hand. Neuroimaging and neuropsychological evidence suggests that this ability may depend on cognitive control processes associated with the left inferior prefrontal gyrus. Here, we show that stimulating the left inferior frontal cortex using transcranial direct current stimulation alters performance of healthy subjects on a simple categorization task. Our task required subjects to select pictures matching a description, e.g., "click on all the round things." Cathodal stimulation led to poorer performance on classification trials requiring attention to specific dimensions such as color or shape as opposed to trials that required selecting items belonging to a more thematic category such as objects that hold water. A polarity reversal (anodal stimulation) lowered the threshold for selecting items that were more weakly associated with the target category. These results illustrate the role of frontally-mediated control processes in categorization and suggest potential interactions between categorization, cognitive control, and language.
    Cognition 05/2012; 124(1):36-49. · 3.16 Impact Factor
  • Gary Lupyan
    ABSTRACT: How does language impact cognition and perception? A growing number of studies show that language, and specifically the practice of labeling, can exert extremely rapid and pervasive effects on putatively non-verbal processes such as categorization, visual discrimination, and even simply detecting the presence of a stimulus. Progress on the empirical front, however, has not been accompanied by progress in understanding the mechanisms by which language affects these processes. One puzzle is how effects of language can be both deep, in the sense of affecting even basic visual processes, and yet vulnerable to manipulations such as verbal interference, which can sometimes nullify effects of language. In this paper, I review some of the evidence for effects of language on cognition and perception, showing that performance on tasks that have been presumed to be non-verbal is rapidly modulated by language. I argue that a clearer understanding of the relationship between language and cognition can be achieved by rejecting the distinction between verbal and non-verbal representations and by adopting a framework in which language modulates ongoing cognitive and perceptual processing in a flexible and task-dependent manner.
    Frontiers in Psychology 01/2012; 3:54. · 2.80 Impact Factor
  • Gary Lupyan
    Frontiers in Psychology 01/2012; 3:422. · 2.80 Impact Factor
  • Rick Dale, Gary Lupyan
    ABSTRACT: Human language is unparalleled in both its expressive capacity and its diversity. What accounts for the enormous diversity of human languages [13]? Recent evidence suggests that the structure of languages may be shaped by the social and demographic environment in which the languages are learned and used. In an analysis of over 2,000 languages, Lupyan and Dale [25] demonstrated that socio-demographic variables, such as population size, significantly predicted the complexity of inflectional morphology. Languages spoken by smaller populations tend to employ more complex inflectional systems. Languages spoken by larger populations tend to avoid complex morphological paradigms, employing lexical constructions instead. This relationship may exist because of how language learning takes place in these different social contexts [44, 45]. In a smaller population, a tightly-knit social group combined with exclusive or almost exclusive language acquisition by infants permits accumulation of complex inflectional forms. In larger populations, adult language learning and more extensive cross-group interactions produce pressures that lead to morphological simplification. In the current paper, we explore this learning-based hypothesis in two ways. First, we develop an agent-based simulation that serves as a simple existence proof: as adult interaction increases, languages lose inflections. Second, we carry out a correlational study showing that English-speaking adults who had more interaction with non-native speakers as children showed a relative preference for over-regularized (i.e., morphologically simpler) forms. The results of the simulation and experiment lend support to the linguistic niche hypothesis: languages may vary in the ways they do in part due to the different social environments in which they are learned and used. In short, languages adapt to the learning constraints and biases of their learners. (A toy simulation in this spirit is sketched after this entry.)
    Advances in Complex Systems 01/2012; 15(3-4). · 0.65 Impact Factor
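    The agent-based simulation is only summarized above; its implementation is not reproduced in this listing. The following is a minimal illustrative sketch in Python of the general mechanism the abstract describes (children acquire inflected forms nearly faithfully; adult learners often simplify). Every name and parameter value here is an assumption for illustration, not taken from the paper:

        import random

        def run_generations(adult_fraction, n_agents=200, n_generations=50,
                            child_fidelity=0.99, adult_fidelity=0.70, seed=0):
            # Returns the fraction of agents still using the inflected form
            # after n_generations of iterated learning. Each learner samples
            # one cultural parent from the previous generation; adult learners
            # acquire the inflected form less reliably than child learners.
            rng = random.Random(seed)
            population = [True] * n_agents  # True = uses the inflected form
            for _ in range(n_generations):
                new_population = []
                for _ in range(n_agents):
                    model = rng.choice(population)
                    is_adult = rng.random() < adult_fraction
                    fidelity = adult_fidelity if is_adult else child_fidelity
                    # Inflected input is learned faithfully with probability
                    # `fidelity`, otherwise simplified; simplified (lexical)
                    # forms are assumed easy for everyone and are never undone.
                    new_population.append(model and rng.random() < fidelity)
                population = new_population
            return sum(population) / n_agents

        for adult_fraction in (0.0, 0.1, 0.3, 0.5):
            share = run_generations(adult_fraction)
            print(f"adult learners: {adult_fraction:.0%} -> inflected forms kept: {share:.0%}")

    Raising adult_fraction lowers the expected per-generation retention of inflected forms, so they die out faster: the qualitative existence-proof pattern the abstract describes, not a fit to any data.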
  • Gary Lupyan, Daniel Swingley
    ABSTRACT: People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore effects of self-directed speech on visual processing using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing; for example, actually hearing "chair" compared to simply thinking about a chair can temporarily make the visual system a better "chair detector". Participants searched for common objects while sometimes being asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.
    Quarterly Journal of Experimental Psychology 12/2011; 65(6):1068-85. · 1.82 Impact Factor
  • ABSTRACT: A major part of learning a language is learning to map spoken words onto objects in the environment. An open question is: what are the consequences of this learning for cognition and perception? Here, we present a series of experiments that examine effects of verbal labels on the activation of conceptual information as measured through picture verification tasks. We find that verbal cues, such as the word "cat," lead to faster and more accurate verification of congruent objects and rejection of incongruent objects than do either nonverbal cues, such as the sound of a cat meowing, or words that do not directly refer to the object, such as the word "meowing." This label advantage does not arise from verbal labels being more familiar or easier to process than other cues, and it extends to newly learned labels and sounds. Despite having equivalent facility in learning associations between novel objects and labels or sounds, conceptual information is activated more effectively through verbal means than through nonverbal means. Thus, rather than simply accessing nonverbal concepts, language activates aspects of a conceptual representation in a particularly effective way. We offer preliminary support that representations activated via verbal means are more categorical and show greater consistency between subjects. These results inform the understanding of how human cognition is shaped by language and hint at effects that different patterns of naming can have on conceptual structure.
    Journal of Experimental Psychology General 09/2011; 141(1):170-86. · 3.99 Impact Factor
  • Gary Lupyan, Michael J Spivey
    ABSTRACT: Because of the strong associations between verbal labels and the visual objects that they denote, hearing a word may quickly guide the deployment of visual attention to the named objects. We report six experiments in which we investigated the effect of hearing redundant (noninformative) object labels on the visual processing of multiple objects from the named category. Even though the word cues did not provide additional information to the participants, hearing a label resulted in faster detection of attention probes appearing near the objects denoted by the label. For example, hearing the word chair resulted in more effective visual processing of all of the chairs in a scene relative to trials in which the participants attended to the chairs without actually hearing the label. This facilitation was mediated by stimulus typicality. Transformations of the stimuli that disrupted their association with the label while preserving the low-level visual features eliminated the facilitative effect of the labels. In the final experiment, we show that hearing a label improves the accuracy of locating multiple items matching the label, even when eye movements are restricted. We posit that verbal labels dynamically modulate visual processing via top-down feedback, an instance of linguistic labels greasing the wheels of perception.
    Attention Perception & Psychophysics 11/2010; 72(8):2236-53. · 1.97 Impact Factor
  • ABSTRACT: Why are people more irritated by nearby cell-phone conversations than by conversations between two people who are physically present? Overhearing someone on a cell phone means hearing only half of a conversation, a "halfalogue." We show that merely overhearing a halfalogue results in decreased performance on cognitive tasks designed to reflect the attentional demands of daily activities. By contrast, overhearing both sides of a cell-phone conversation or a monologue does not result in decreased performance. This may be because the content of a halfalogue is less predictable than both sides of a conversation. In a second experiment, we controlled for differences in acoustic factors between these types of overheard speech, establishing that it is the unpredictable informational content of halfalogues that results in distraction. Thus, we provide a cognitive explanation for why overheard cell-phone conversations are especially irritating: less-predictable speech results in more distraction for a listener engaged in other tasks.
    Psychological Science 10/2010; 21(10):1383-8. · 4.43 Impact Factor
  • ABSTRACT: In traditional hierarchical models of information processing, visual representations feed into conceptual systems, but conceptual categories do not exert an influence on visual processing. We provide evidence, across four experiments, that conceptual information can in fact penetrate early visual processing, rather than merely biasing the output of perceptual systems. Participants performed physical-identity judgments on visually equidistant pairs of letter stimuli that were either in the same conceptual category (Bb) or in different categories (Bp). In the case of nonidentical letters, response times were longer when the stimuli were from the same conceptual category, but only when the letters were presented sequentially. The difference in effect size between simultaneous and sequential trials rules out a decision-level account. An additional experiment using animal silhouettes replicated the major effects found with letters. Thus, performance on an explicitly visual task was influenced by conceptual categories. This effect depended on processing time, immediately preceding experience, and stimulus typicality, which suggests that it was produced by the direct influence of category knowledge on perception, rather than by a postperceptual decision bias.
    Psychological Science 05/2010; 21(5):682-91. · 4.43 Impact Factor
  • Gary Lupyan, Michael J Spivey
    ABSTRACT: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'); a visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception.
    PLoS ONE 01/2010; 5(7):e11452. · 3.73 Impact Factor
  • Journal of Vision 01/2010; 9(8):802.

Publication Stats

328 Citations
177.82 Total Impact Points

Institutions

  • 2010–2014
    • University of Wisconsin–Madison
      • Department of Psychology
      Madison, Wisconsin, United States
  • 2009–2010
    • University of Pennsylvania
      Philadelphia, Pennsylvania, United States
  • 2008–2010
    • Cornell University
      • Department of Psychology
      Ithaca, New York, United States
  • 2002–2008
    • Carnegie Mellon University
      • Department of Psychology
      • Center for the Neural Basis of Cognition
      Pittsburgh, Pennsylvania, United States