Article

Abstract

Questions about the relationship between language and thought have long fascinated psychologists, philosophers, and the general public. One specific question concerns the extent to which verbal labels causally affect cognitive processes: how does calling an object by a particular name influence the way people categorize it? How does knowing words for mental states influence our reasoning about the minds of others? How does learning and using words like left influence our navigation behavior? One way to learn how the words we use to label objects, mental states, or locations affect our thoughts is to increase or decrease the ease with which we can use these words and observe the effects of these manipulations on “non-linguistic” tasks. For example, if the word left helps us remember which way to turn, preventing its activation might be expected to disrupt navigation. Manipulating the labeling process (and the engagement of language more broadly) is therefore very useful for exploring how language influences cognition. In this paper, we review two methodologies for implementing linguistic manipulations, verbal interference and transcranial direct current stimulation (tDCS), and discuss what this line of research can teach us about the role of language in cognitive processes.
... These classic questions, debated in philosophy and psychology for more than a century (Fodor, 1975; Müller, 1978; Sokolov, 1968; Vygotsky, 1962; Watson, 1913), have been increasingly tackled using various empirical and modelling methods (Baldo et al., 2005; Coetzee et al., 2019; Feinmann, 2020; Gilbert et al., 2006; Luo et al., 2021; Romano et al., 2018). One widely used method is verbal interference, or articulatory suppression (Perry & Lupyan, 2013). In studies using this method, participants are asked to perform some task that may or may not require linguistic processing while at the same time performing a clearly linguistic task, such as repeating a word. ...
... Second, it is often difficult to assess performance on the interference task. Third, the verbal and non-verbal interference tasks do not always live up to the dual constraints of being (a) equally demanding and (b) different in only the presence or absence of "verbality" (Perry & Lupyan, 2013). We address these issues here. ...
... Interfering with language reduced categorical biases in color memory even when interference did not target color words. Converging evidence for effects of language on color memory comes from a study by Souza and Skóra (2017), who had participants remember colors while doing several tasks, among them, verbal interference and explicit color labeling (a form of upregulation of language, see Perry & Lupyan, 2013). Unlike Roberson and Davidoff (2000), Souza and Skóra tested color memory by having participants select colors from a continuous distribution rather than through two-alternative forced choice. ...
Article
This paper presents a systematic review of the empirical literature that uses dual-task interference methods for investigating the on-line involvement of language in various cognitive tasks. In these studies, participants perform some primary task X putatively recruiting linguistic resources while also engaging in a secondary, concurrent task. If performance on the primary task decreases under interference, there is evidence for language involvement in the primary task. We assessed studies (N = 101) reporting at least one experiment with verbal interference and at least one control task (either primary or secondary). We excluded papers with an explicitly clinical, neurological, or developmental focus. The primary tasks identified include categorization, memory, mental arithmetic, motor control, reasoning (verbal and visuospatial), task switching, theory of mind, visual change, and visuospatial integration and wayfinding. Overall, the present review found that covert language is likely to play a facilitative role in memory and categorization when items to be remembered or categorized have readily available labels, when inner speech can act as a form of behavioral self-cuing (inhibitory control, task set reminders, verbal strategy), and when inner speech is plausibly useful as “workspace,” for example, for mental arithmetic. There is less evidence for the role of covert language in cross-modal integration, reasoning relying on a high degree of visual detail or items low on nameability, and theory of mind. We discuss potential pitfalls and suggestions for streamlining and improving the methodology.
... It is difficult to choose a third concurrent task on which to validate the equivalence of the interference tasks, because the literature is so divided on which tasks involve internal language and which do not. Another approach is to find a verbal and a nonverbal interference task that are in theory equivalent in every respect but their 'verbality' (Perry & Lupyan, 2013), including performance. This approach faces challenges because tasks that are equivalent in everything but their verbality may still place different demands on attention and executive function. ...
... To better understand the mechanisms underlying categorical perception effects in color discrimination or memory, researchers usually rely on behavioral manipulations, such as verbal interference (e.g., Winawer et al., 2007), and neuroscientific methods, such as functional neuroimaging (e.g., Bird, Berens, Horner, & Franklin, 2014; Siok et al., 2009), electrophysiological recordings (e.g., He, Witzel, Forder, Clifford, & Franklin, 2014; Thierry, Athanasopoulos, Wiggett, Dering, & Kuipers, 2009), or neuromodulation (Akbiyik et al., 2020; Lupyan, 2008; Lupyan, Mirman, Hamilton, & Thompson-Schill, 2012; Perry & Lupyan, 2013). There are task-dependent mixed findings regarding the behavioral effects of color categories on categorical color perception. ...
Article
Cross‐category hues are differentiated more easily than otherwise equidistant hues that belong to the same linguistic category. This effect typically manifests through both accuracy and response time gains in tasks with a memory component, whereas only response times are affected when there is no memory component. This raises the question of whether a common generative process underlies the differential behavioral manifestations of the category advantage in color perception. For instance, within the framework of noisy evidence accumulation models, changes in accuracy can be readily attributed to an increase in the efficacy of perceptual evidence integration (after controlling for threshold setting), whereas changes in response time can also be attributed to shorter nondecisional delays (e.g., due to facilitated signal detection). To address the latent decision processes underlying the category advantage across different behavioral demands, we introduce a decision‐theoretic perspective (i.e., the diffusion decision model) to categorical color perception in three complementary experiments. In Experiment 1, we collected data from a binary color naming task (1) to determine the green–blue boundary in our sample and (2) to trace how parameter estimates of interest in the model output change as a function of color typicality. In Experiments 2 and 3, we used same‐different task paradigms (with and without a memory component, respectively) and traced the category advantage in color discrimination in two parameters of the diffusion decision model: nondecision time and drift rate. An increase in drift rate predominantly characterized the category advantage in both tasks. Our results show that improved efficiency in perceptual evidence integration is a common driving force behind different manifestations of the category advantage.
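The drift rate versus nondecision time distinction above can be illustrated with a toy simulation. The following is a minimal sketch of a symmetric two-boundary diffusion decision model, not the authors' fitting procedure; the parameter values and function names are illustrative. Raising the drift rate improves both accuracy and speed, whereas lengthening the nondecision time shifts response times without changing accuracy:

```python
import random

def simulate_ddm(drift, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0, rng=None):
    """Simulate one trial: evidence starts at 0 and accumulates with the
    given drift plus Gaussian noise until it hits +boundary (correct) or
    -boundary (error). Returns (response_time_in_seconds, correct)."""
    rng = rng or random
    x, t = 0.0, 0.0
    step_sd = noise * dt ** 0.5  # noise scales with sqrt(dt)
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ndt + t, x > 0

def summarize(drift, ndt, n=800, seed=1):
    """Mean RT and accuracy over n simulated trials (fixed seed)."""
    rng = random.Random(seed)
    trials = [simulate_ddm(drift, ndt=ndt, rng=rng) for _ in range(n)]
    mean_rt = sum(rt for rt, _ in trials) / n
    accuracy = sum(correct for _, correct in trials) / n
    return mean_rt, accuracy

low  = summarize(drift=1.0, ndt=0.3)   # e.g., a within-category pair
high = summarize(drift=2.5, ndt=0.3)   # e.g., a cross-category pair
slow = summarize(drift=1.0, ndt=0.45)  # same drift, longer nondecision time
```

With these illustrative parameters, `high` yields both higher accuracy and a shorter mean RT than `low`, while `slow` matches `low` in accuracy and differs only by the added nondecision delay, which is why the two parameters can, in principle, be teased apart from joint RT and accuracy data.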
... We cannot pause being English speakers or switch on-and-off our knowledge that a face being shown is our mother's. As experimenters, we can manipulate top-down influences somewhat, for example, by testing bilinguals in different languages [45], by downregulating impacts of language through verbal interference or noninvasive neural stimulation, or by upregulating them through overt presentation of labels [107]. When these manipulations do lead to differences in behavior, we can infer that our normal experience must be, to some extent, influenced by the linguistic factor being manipulated. ...
Article
Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, we discuss how effects of language on perception naturally arise from the interactive and predictive nature of perception.
... Neuromodulation, on the other hand, offers a better alternative: it manipulates behavior without the need for a dual-task paradigm and allows one to ask more specific questions about the neural and cognitive mechanisms underlying behaviors of interest (for a similar discussion, see [23]). For instance, to test whether the categorical perception of color arises from a top-down modulation of perceptual decisions by semantic processing, one can modulate the processing of regions with known importance in such processes and test the effects on performance. ...
Article
The linguistic category-advantage in color perception refers to better discrimination performance for stimuli that belong to different categories (e.g., green vs. blue) than for equidistant stimuli from the same category (e.g., blue). Despite the robust nature of the category-advantage in color perception, the related cognitive and neural mechanisms are not fully understood. Some views attribute this effect to early alteration of the visual processing of color, while others attribute it to post-perceptual conceptual processing. The current study investigated the causal role of the left anterior temporal lobe (ATL), as a post-perceptual semantic hub, in categorical color perception. We modulated the activity of the left ATL via cathodal tDCS or sham stimulation (within-subject) while participants discriminated between successive presentations of color patches. Without stimulation, we found a category-advantage effect in both accuracy and response times. Inhibition of the left ATL eliminated the category-advantage effect in terms of RTs but not accuracies. Our results point to a causal role of the ATL in categorical color perception and provide indirect support for a post-perceptual processing account of this robust phenomenon.
... These findings indicate that language processes can mediate performance on perceptual tasks that are ostensibly not linguistic in nature, and a secondary verbal task that prevents task-incidental language use can disrupt the mediating influence of language. Similar influences of language on ostensibly non-linguistic processes, and the disruption thereof by verbal interference tasks, have been found for spatial memory (Hermer-Vazquez, Spelke, & Katsnelson, 1999), event perception (Trueswell & Papafragou, 2010), categorization (Lupyan, 2009), and numerical representations (Frank, Fedorenko, Lai, Saxe, & Gibson, 2012), to name a few (see Lupyan, 2012;Perry & Lupyan, 2013;Ünal & Papafragou, 2016 for discussion). ...
Article
Full-text available
The complexity of the visual world requires that we constrain visual attention and prioritize some regions of the scene for attention over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in attentional guidance is modulated by verbal encoding, given that we often use language to process information. In two experiments, 60 subjects studied scenes (N1 = 30 and N2 = 60) for 12 s each in preparation for a scene-recognition task. Half of the time, subjects engaged in a secondary articulatory suppression task concurrent with scene viewing. Meaning and saliency maps were quantified for each of the experimental scenes. In both experiments, we found that meaning explained more of the variance in visual attention than image salience did, particularly when we controlled for the overlap between meaning and salience, with and without the suppression task. Based on these results, verbal encoding processes do not appear to modulate the relationship between scene meaning and visual attention. Our findings suggest that semantic information in the scene steers the attentional ship, consistent with cognitive guidance theory.
Article
Many languages assign nouns to grammatical gender categories (e.g., masculine and feminine), and inanimate objects often have different genders in different languages. In a seminal study, Phillips and Boroditsky (2003) provided evidence that such “quirks of grammar” influence how people conceptualize objects. Spanish and German speakers judged person-object picture pairs as more similar when their biological and grammatical genders matched than when they did not, and English speakers showed the same pattern of similarity judgments after learning gender-like categories. These widely cited findings were instrumental in vindicating the Whorfian hypothesis that language shapes thought, yet neither the original study nor any direct replications have appeared in a peer-reviewed journal. To examine the reliability of Phillips and Boroditsky’s findings, we conducted a high-powered replication of two of their key experiments (total N = 375). Our results only partially replicated the original findings: Spanish and German speakers’ similarity judgments exhibited no effect of grammatical gender when accounting for key sources of error variance, but English speakers trained on gender-like categories rated same-gender pairs more similar than different-gender pairs. These results provide insight into the contexts in which grammatical gender effects occur and the mechanisms driving them.
Article
Images depict specific objects (e.g., a specific dog), yet are named with categorical labels (e.g., "dog"). We examined how semantic representations activated by images may be influenced by implicit labelling. Participants saw images of familiar objects and generated words associated with each image while undergoing transcranial direct current stimulation over the posterior superior temporal gyrus. Additional participants judged how representative the generated associates were of the picture category and guessed the category based on the associates. Anodal stimulation was predicted to up-regulate labelling and thereby increase the extent to which participants produced associates that were more representative of the pictured category. Associates generated by anodally stimulated subjects were indeed more representative and enabled more accurate guessing of the category from which they were generated. The general pattern of results was replicated in a follow-up study using words rather than pictures as cues. Together, these results suggest that labelling may help stabilise semantic representations, leading to more robust representation of category-relevant information.
Article
Full-text available
Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive from aspects of lexical representation and are resolved by the same processing mechanisms. Reinterpreting syntactic ambiguity resolution as a form of lexical ambiguity resolution obviates the need for special parsing principles to account for syntactic interpretation preferences, reconciles a number of apparently conflicting results concerning the roles of lexical and contextual information in sentence processing, explains differences among ambiguities in terms of ease of resolution, and provides a more unified account of language comprehension than was previously available.
Article
A series of 7 experiments used dual-task methodology to investigate the role of working memory in the operation of a simple action-control plan or program involving regular switching between addition and subtraction. Lists requiring switching were slower than blocked lists and showed 2 concurrent task effects. Demanding executive tasks impaired performance on both blocked and switched lists, whereas articulatory suppression impaired principally the switched condition. Implications for models of task switching and working memory and for the Vygotskian concept of verbal control of action are discussed.
Article
Color perception can be categorical: Between-category discriminations are more accurate than equivalent within-category discriminations. The effects could be inherited, learned, or both. The authors provide evidence that supports the possibility of learned categorical perception (CP). Experiment 1 demonstrated that observers' color discrimination is flexible and improves through repeated practice. Experiment 2 demonstrated that category learning simulates effects of "natural" color categories on color discrimination. Experiment 3 investigated the time course of acquired CP. Experiment 4 found that CP effects are acquired through hue- and lightness-based category learning and obtained interesting data on the dimensional perception of color. The data are consistent with the possibility that language may shape color perception and suggest a plausible mechanism for the linguistic relativity hypothesis.
Chapter
Much of human communication involves language, a system of communication qualitatively different from those used by other animals. In this chapter, I focus on a fundamental property of language: referring to objects with labels (e.g., using the word "chair" to refer to a chair). What consequences does such labeling have on cognitive and perceptual processes? I review evidence indicating that verbal labels do not simply point or refer to nonlinguistic concepts, but rather actively modulate object representations that are brought on-line during "nonverbal" tasks. Using words to refer to concrete objects affects the learning of new categories, memory for and reasoning about familiar object categories, and even basic visual processing. Object representations activated by verbal means appear to be different, and specifically more categorical, than ostensibly the same object representations activated by nonverbal means. A connectionist model of "language augmented thought" provides a computational account of how labels may augment cognitive and perceptual processing.
Article
Ten expert abacus operators were given various restrictions and distractions during addition of ten numbers of 3-5 figures. All subjects except one could calculate very rapidly without an abacus, probably relying upon its mental representation. Some of those at an intermediate level of mastery moved their fingers as if they had been manipulating a real abacus, and prohibition of this movement or interfering finger-tapping reduced their performance. All the subjects could answer simple non-mathematical questions during abacus calculation without increasing time or errors, but answering extraneous mathematical questions was very hard.
Article
The idea that language shapes the way we think, often associated with Benjamin Whorf, has long been decried as not only wrong but also fundamentally wrong-headed. Yet, experimental evidence has reopened debate about the extent to which language influences nonlinguistic cognition, particularly in the domain of time. In this article, I will first analyze an influential argument against the Whorfian hypothesis and show that its anti-Whorfian conclusion is in part an artifact of conflating two distinct questions: Do we think in language? and Does language shape thought? Next, I will discuss crosslinguistic differences in spatial metaphors for time and describe experiments that demonstrate corresponding differences in nonlinguistic mental representations. Finally, I will sketch a simple learning mechanism by which some linguistic relativity effects appear to arise. Although people may not think in language, speakers of different languages develop distinctive conceptual repertoires as a consequence of ordinary and presumably universal neural and cognitive processes.
Article
Humans have an unparalleled ability to represent objects as members of multiple categories. A given object, such as a pillow, may be represented (depending on current task demands) as an instance of something that is soft, something that contains feathers, something that is found in bedrooms, or something that is larger than a toaster. This type of processing requires the individual to dynamically highlight task-relevant properties and abstract over or suppress object properties that, although salient, are not relevant to the task at hand. Neuroimaging and neuropsychological evidence suggests that this ability may depend on cognitive control processes associated with the left inferior prefrontal gyrus. Here, we show that stimulating the left inferior frontal cortex using transcranial direct current stimulation alters the performance of healthy subjects on a simple categorization task. Our task required subjects to select pictures matching a description, e.g., "click on all the round things." Cathodal stimulation led to poorer performance on classification trials requiring attention to specific dimensions such as color or shape, as opposed to trials that required selecting items belonging to a more thematic category, such as objects that hold water. A polarity reversal (anodal stimulation) lowered the threshold for selecting items that were more weakly associated with the target category. These results illustrate the role of frontally mediated control processes in categorization and suggest potential interactions between categorization, cognitive control, and language.
Article
People often talk to themselves, yet very little is known about the functions of this self-directed speech. We explore the effects of self-directed speech on visual processing using a visual search task. According to the label feedback hypothesis (Lupyan, 2007a), verbal labels can change ongoing perceptual processing: for example, actually hearing "chair", compared to simply thinking about a chair, can temporarily make the visual system a better "chair detector". Participants searched for common objects while sometimes being asked to speak the target's name aloud. Speaking facilitated search, particularly when there was a strong association between the name and the visual target. As the discrepancy between the name and the target increased, speaking began to impair performance. Together, these results speak to the power of words to modulate ongoing visual processing.
Article
Language has been shown to play a key role in the development of a child's theory of mind, but its role in adult belief reasoning remains unclear. One recent study used verbal and nonverbal interference during a false-belief task to argue that accurate belief reasoning in adults necessarily requires language (Newton & de Villiers, 2007). The strength of this inference depends on the cognitive processes that are matched between the verbal and nonverbal interference tasks. Here, we matched the two interference tasks in terms of their effects on spatial working memory. We found equal success on false-belief reasoning during both verbal and nonverbal interference, suggesting that language is not specifically necessary for adult theory of mind.