Retracted Article

Local and Global Cross-Modal Influences Between Vision and Hearing, Tasting, Smelling, or Touching

American Psychological Association
Journal of Experimental Psychology: General

Abstract

It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory, or olfactory global versus local processing was induced, and participants were tested with a measure of their global versus local visual attention; the content of this measure was unrelated to the inductions. In a different set of 4 studies, the effect of local versus global visual processing on the way people listen to a poem or touch, taste, and smell objects was examined. In all experiments, global/local processing in 1 modality shifted to global/local processing in the other modality. A final study found more pronounced shifts when compatible processing styles were induced in 2 rather than 1 modality. Moreover, the study explored mediation by relative right versus left hemisphere activation, as measured with the line bisection task, and by the accessibility of semantic associations. It is concluded that the effects reflect procedural rather than semantic priming effects that occurred outside of participants' awareness. Because global/local processing has been shown to affect higher order processing, future research may activate processing styles in other sensory modalities to produce similar effects. Furthermore, because global/local processing is triggered by a variety of real-world variables, one may explore effects on sensory modalities other than vision. The results are consistent with the global versus local processing model, a systems account (GLOMOsys; Förster & Dannenberg, 2010).
Jens Förster
University of Amsterdam
Keywords: processing styles, procedural priming, sense modalities, multisensory interactions
People can attend to the same event in different ways. They can, for example, attend to an object by zooming out and paying attention to its entire figure or by zooming in and paying attention to its details. As the proverb says, people can look at the forest or at the trees. In psychological terms, people can use different processing styles (Schooler, 2002). When using a global processing style, people attend to the whole, or the gestalt, of a stimulus set, whereas when using a local processing style, they attend to its details (Navon, 1977). The distinction between holistic (i.e., global) and elemental (i.e., local) approaches was captured long ago in philosophy (Kant, 1781/1969) and psychology (N. H. Anderson, 1981; Asch, 1946; Witkin, Dyk, & Faterson, 1962).
In recent times, a more systematic investigation of global versus local perception started with Navon's (1977) letter task. To examine the global dominance hypothesis (predicting that people, by default, look for the forest rather than the trees), he presented participants with large letters that were made up of small letters (see Figure 1) and showed that participants were faster to identify the global target letters than the local target letters. While researchers have challenged the global dominance hypothesis and offered various theoretical models explaining the effect (Kimchi, 1992; Kinchla & Wolfe, 1979; Lamb & Robertson, 1990; Love, Rouder, & Wisniewski, 1999), global versus local processing has generated an abundance of research investigating both its moderators and its effects (see Förster & Dannenberg, 2010, for a recent overview).
Global and local processing are basic types of perception that can be triggered by a variety of variables. Those variables have been identified across disciplines. Within personality and clinical psychology, for example, pronounced global processing (as, e.g., measured with Navon's, 1977, letter task) has been shown when people have lower levels of chronic obsessionality (Yovel, Revelle, & Mineka, 2005), lower levels of autism (Wang, Mottron, Peng, Berthiaume, & Dawson, 2007), or lower levels of anxiety (Mikulincer, Kedem, & Paz, 1990). Brain research suggests that global processing is enhanced when participants' right cerebral hemisphere is more activated than their left hemisphere (Derryberry & Tucker, 1994; Mihov, Denzler, & Förster, 2010; Tucker & Williamson, 1984; see also Ivry & Robertson, 1998). Social psychologists focusing on self-regulation have shown pronounced global processing when people think of psychologically distal versus proximal events (Liberman & Förster, 2009), when they think about their ideals compared with their duties or security-related issues (Förster & Higgins, 2005), when they are exposed to unfamiliar as opposed to familiar events (Förster, 2009a; Förster, Liberman, & Shapira, 2009), or when obstacles are standing in the way of pursuing a goal (Marguc, Förster, & van Kleef, 2010).
This article was published Online First April 11, 2011.
I thank Jasmien Khattab, Radboud Dam, Pieter Verhoeven, Aga Bojarska, Alexandra Vulpe, Anna Rebecca Sukkau, Basia Pietrawska, Elena Tsankova, Gosia Skorek, Hana Fleissigova, Inga Schulte-Bahrenberg, Kira Grabner, Konstantin Mihov, Laura Dannenberg, Maria Kordonowska, Nika Yugay, Regina Bode, Rodica Damian, Rytis Vitkauskas, Janina Marguc, Kevin Meier, and Sarah Horn, who served as experimenters and raters in the different experiments. Special thanks go to Markus Denzler for invaluable discussions and to Sarah Horn for editing a draft of this article.
Correspondence concerning this article should be addressed to Jens
Förster, Department of Social Psychology, University of Amsterdam,
Roetersstraat 15, 1018 WB Amsterdam, The Netherlands. E-mail:
j.a.forster@uva.nl
Journal of Experimental Psychology: General © 2011 American Psychological Association
2011, Vol. 140, No. 3, 364–389 0096-3445/11/$12.00 DOI: 10.1037/a0023175
... Because affect or emotion is at the core of aesthetic appraisal, perceiving aesthetics or beauty of an object or scene perhaps requires deployment of attention to the selective local features which appear to be pleasing or displeasing at a glance. Research has shown that people can pay attention to the same object or stimulus in two different ways: (1) by zooming out and deploying attention to the whole or (2) by zooming in and deploying attention to the details (Förster, 2011). The former way is known as global-to-local (or simply global) processing strategy and the latter way is known as local-to-global (or simply local) processing strategy (Love et al., 1999). ...
... We conjecture that the proposed dual-channel model for visual aesthetics can also be generalized to these nonvisual modalities to some extent, as research has suggested that humans are likely to use similar cognitive styles, global and local, across sensory modalities (Bouvet et al., 2011). Although this suggestion was based on findings from the visual and auditory modalities, a second study systematically examined global versus local processing across all major sensory modalities (visual, auditory, tactile, gustatory, and olfactory) and demonstrated cross-modal processing shifts not only between the visual, tactile, and auditory modalities but between the visual and other nonvisual modalities as well (Förster, 2011). For example, in a set of experiments, Förster's (2011) study showed that global visual perception was enhanced when participants focused on the global composition of food or aromas, and local visual perception was enhanced when they focused on the ingredients of food or aromas. ...
... In a different set of experiments, the same study further demonstrated that visually priming the global/local processing styles carried over to respective tasting and smelling. ...
Article
Full-text available
This integrative review rearticulates the notion of human aesthetics by critically appraising the conventional definitions, offering a new, more comprehensive definition, and identifying the fundamental components associated with it. It intends to advance a holistic understanding of the notion by differentiating aesthetic perception from basic perceptual recognition, and by characterizing these concepts from the perspective of information processing in both visual and nonvisual modalities. To this end, we analyze the dissociative nature of information processing in the brain, introducing a novel local-global integrative model that differentiates aesthetic processing from basic perceptual processing. This model builds on the current state of the art in visual aesthetics as well as newer propositions about nonvisual aesthetics. The model comprises two analytic channels: an aesthetics-only channel and a perception-to-aesthetics channel. The aesthetics-only channel primarily involves restricted local processing for quality or richness (e.g., attractiveness, beauty/prettiness, elegance, sublimeness, catchiness, hedonic value) analysis, whereas the perception-to-aesthetics channel involves global/extended local processing for basic feature analysis, followed by restricted local processing for quality or richness analysis. We contend that aesthetic processing operates independently of basic perceptual processing, but not independently of cognitive processing. We further conjecture that there might be a common faculty, labeled the aesthetic cognition faculty, in the human brain for all sensory aesthetics, although other parts of the brain can also be activated because of basic sensory processing prior to aesthetic processing, particularly during the operation of the second channel. This generalized model can account not only for simple and pure aesthetic experiences but for partial and complex aesthetic experiences as well.
... Moreover, neurological studies on cranial nerves indirectly support the relationship between temporal framing and processing style. Studies on the brain's neural mechanisms reveal that temporal framing and processing style activate the same regions of the brain (Addis et al., 2007; Förster, 2011). Addis et al. (2007) performed brain imaging on sixteen participants using functional magnetic resonance imaging while they described past and future events. ...
... Other studies also found that when participants' right brain hemisphere was activated, the effect of global processing was enhanced (Tucker and Williamson, 1984; Derryberry and Tucker, 1994; Mihov et al., 2010). Through the line bisection task, Förster (2011) and Förster et al. (2008) found that global (local) processing was more associated with right (left) hemisphere activity. As nerve conduction velocity within the same hemisphere of the brain is higher (Gazzaniga et al., 2013), temporal framing is likely to affect the selection of processing styles. ...
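In the line bisection task mentioned here, participants mark the perceived midpoint of a horizontal line, and a leftward error is commonly read as relatively stronger right-hemisphere activation. The sketch below shows one common scoring convention (an assumption for illustration; the exact formula used in the cited work is not given in these excerpts).

```python
def bisection_bias(line_length_mm: float, mark_from_left_mm: float) -> float:
    """Signed deviation of the mark from the true midpoint, expressed as a
    percentage of half the line length; negative values mean a leftward bias."""
    half = line_length_mm / 2
    return 100 * (mark_from_left_mm - half) / half

# Marking 95 mm from the left end of a 200 mm line:
# 100 * (95 - 100) / 100 = -5.0, i.e., a 5% leftward bias.
print(bisection_bias(200, 95))
```

Averaging this signed bias over many trials yields a per-participant index of relative hemispheric activation.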
... Based on the above discussion, this research posits that, when presented with campaigns within different temporal frames, the global processing style triggered by temporal framing influences the type of sponsorship information people focus on. According to Förster (2011), when applying a global (local) processing style, individuals tend to adopt convergent (divergent) thinking to seek (discover) similarities (differences) across (between) objects. This notion is justified by the fact that global (local) processing and broad (narrow) conceptual domains are associated with the acceptance (rejection) of information (Bless, 1992, 2007; Förster, 2011; Liu et al., 2018). ...
Article
Full-text available
Time, an important, yet scarce resource in daily living, affects cognition, decision-making, and behavior in various ways. For instance, in marketing practice, time-bound strategies are often employed to influence consumer behavior. Thus, understanding and mastering a target market from a temporal perspective can contribute to the ease with which marketers and businesses formulate marketing strategies. Accordingly, this research conducts three studies to explore the influence of temporal framing as an external time cue on the evaluation of sponsorship-linked marketing campaigns. The studies show that future-framed participants adopted a global processing style. In this context, providing information about the sponsoring brand and sponsored event induced a more positive evaluation of future campaigns. However, in a past-frame context, participants were less likely to adopt a global processing style. Here, providing brand sponsor information alone increased the likelihood of a positive evaluation of past campaigns. Ultimately, the findings provide a theoretical basis for decision-making utilizing the influence of activities and events to enhance brand image.
... Studies comparing global and local processing suggest that psychological distance might interact with processing style (Förster, 2011;Förster & Dannenberg, 2010;Fujita, Henderson, Eng, Trope, & Liberman, 2006;Liberman & Förster, 2009b). In the spatial domain, participants primed to process globally tended to estimate distances as farther than those primed to process locally (Förster, 2011;Liberman & Förster, 2009b). ...
... Studies comparing global and local processing suggest that psychological distance might interact with processing style (Förster, 2011;Förster & Dannenberg, 2010;Fujita, Henderson, Eng, Trope, & Liberman, 2006;Liberman & Förster, 2009b). In the spatial domain, participants primed to process globally tended to estimate distances as farther than those primed to process locally (Förster, 2011;Liberman & Förster, 2009b). This is consistent with the more general finding that people represent psychologically distant (e.g., between-category) events using a more abstract and global perspective (Trope & Liberman, 2003;Fujita et al., 2007). ...
... Local and global processing approaches may not be limited to the task at hand. Recent work suggests that these approaches can carry over to subsequent, even unrelated, tasks (Förster, 2011;Förster & Dannenberg, 2010;Marguc, Förster, & van Kleef, 2011). Förster (2011) found that a global or local focus carried over to a later task in a different sensory modality. ...
Article
Full-text available
People use spatial and nonspatial information to structure memory for an environment. Two experiments explored interactions between spatial and social categories on map memory when mediated by retrieval (Experiment 1) and encoding (Experiment 2) demands. Participants studied a map depicting business locations (including proprietors' race). In Experiment 1, participants completed two memory tasks, one globally focused and the other locally focused. The global task compressed, while the local task expanded, within-category similarity. Furthermore, processing styles carried over to the subsequent task. Experiment 2 emphasized either the spatial or social category during encoding, which increased that category's weighting in memory. These results extend the work of Maddox, Rapp, Brion, and Taylor, suggesting that retrieval and encoding demands can shift how these categories affect spatial memory.
... Human perception is multi-dimensional, including vision, hearing, touch, taste, and smell [2]. Among them, audio and visual are very important perceptual modalities in our daily life, and their correspondences provide rich semantics for humans [3]. ...
Preprint
Full-text available
Given an audio-visual pair, audio-visual segmentation (AVS) aims to locate sounding sources by predicting pixel-wise maps. Previous methods assume that each sound component in an audio signal always has a visual counterpart in the image. However, this assumption overlooks that off-screen sounds and background noise often contaminate the audio recordings in real-world scenarios. They impose significant challenges on building a consistent semantic mapping between audio and visual signals for AVS models and thus impede precise sound localization. In this work, we propose a two-stage bootstrapping audio-visual segmentation framework by incorporating multi-modal foundation knowledge. In a nutshell, our BAVS is designed to eliminate the interference of background noise or off-screen sounds in segmentation by establishing the audio-visual correspondences in an explicit manner. In the first stage, we employ a segmentation model to localize potential sounding objects from visual data without being affected by contaminated audio signals. Meanwhile, we also utilize a foundation audio classification model to discern audio semantics. Considering the audio tags provided by the audio foundation model are noisy, associating object masks with audio tags is not trivial. Thus, in the second stage, we develop an audio-visual semantic integration strategy (AVIS) to localize the authentic-sounding objects. Here, we construct an audio-visual tree based on the hierarchical correspondence between sounds and object categories. We then examine the label concurrency between the localized objects and classified audio tags by tracing the audio-visual tree. With AVIS, we can effectively segment real-sounding objects. Extensive experiments demonstrate the superiority of our method on AVS datasets, particularly in scenarios involving background noise. Our project website is https://yenanliu.github.io/AVSS.github.io/.
... Taken together, we present here a theory that humans, and perhaps other vertebrates, have, for reasons that are not entirely clear, very good access to visual and auditory memories but very limited access to taste and olfactory memories. The same holds for somatosensory memory (pain, heat, cold, touch, and vibration) [12]. It is almost impossible to recall a feeling of pain voluntarily (except in pathological states or after recent trauma). ...
Article
Full-text available
We present a significant difference in evocation capability between sensory memories (visual, taste, and olfactory) across certain categories of the population. As the object for this memory recall we selected French fries, which are simple and generally known. From daily life we may intuitively feel that visual and auditory memories are recalled much better than taste and olfactory ones. Our results in young participants (age 12–21 years), mostly female and some male, show low capacity for smell and taste memory recall compared with far greater visual memory recall. This raises the question of whether smell and taste memory recall could be trained to become similar to visual or auditory recall. In our article we outline a training technique for volunteers that could potentially increase the capacity of their taste and olfactory memory recollection.
Article
New food product development involves explicit sensory test phases, though these tests cannot identify consumers' automatic processes and do not consider influential links between perception and memory, so they often lead to biased responses. Rigorous implicit testing can reflect consumer decision making more accurately by assessing automatic reactions, especially in new food product development, where affective reactions are one of the main drivers. Two studies demonstrate the feasibility (Study 1) and accuracy (Study 2) of an implicit sensory test involving the gustatory modality. A gustatory priming protocol with a lexical decision task demonstrates that different textures are associated with different flavours in memory. An investigation of consumers' preferences for products that match the strongest associations further reveals that implicit protocols can inform new product development. Implicit measures of associations are more predictive than explicit measures and can be used upstream during new product development.
Article
Full-text available
Quickly and accurately recognizing emotional cues in a collective, referred to as emotional aperture, has been posited to be important for navigating social contexts. This ability, therefore, may be particularly strong among those who live within culturally situated collectivist contexts. In this research, we examined evidence for this variability in recognizing collective emotions across cultures by comparing Chinese and Americans’ performance on an emotional aperture task. We found that Chinese were indeed more accurate in recognizing collective emotions as compared with Americans. This was mediated by cultural variability in global (vs. local) processing. We discuss how these findings contribute to our understanding of culture and collective emotion perception.
Research
Full-text available
Drawn from the theoretical part of my doctoral thesis, supervised by A. Didierjean and F. Maquestiaux (Université Franche-Comté).
Conference Paper
If you want to find something, you need to know what you are looking for and where it was last located. Successful visuospatial working memory (VSWM) requires that a stimulus identity be combined with information about its location. How identity and location information interact during binding presents an interesting question because of 1) asymmetries in cognitive demands required by location and identity processing and 2) the fact that the two types of information are processed in different neural streams. The current studies explore how global and local processing approaches impact binding in VSWM. Experiment 1 explores effects of global spatial organization. Experiment 2 induces local processing demands through memory updating. Results show better location memory with both global and local processing, but also suggest that the processing focus (global or local) affects the interaction of location and identity processing in VSWM.
Article
Full-text available
Six experiments with 53 college students explored the effect of a previous level of processing on current processing of visual scenes. A robust level-readiness effect supported the contention that processing at a given level of detail biases the distribution of attention so that more is allocated to that level for future processing. This effect, whereby processing is faster at a given level if previous processing has been at that level, occurred regardless of the conspicuity of features at the various levels. Conflict between levels depended on the imbalance of conspicuity and attention allocation between levels, with the more conspicuous level interfering more. A model of two-state attention switching similar to that of G. Sperling and M. J. Melchner (1978) is proposed. It is concluded that processing under focused-attention conditions can sometimes be slower than that under divided attention.
Chapter
Full-text available
Presents an integrative model of the emergence, direction (assimilation vs. contrast), and size of context effects in social judgment.
Book
Anatomically, the central nervous system looks remarkably symmetrical—from the relatively simple structures of the spinal cord to the extensively convoluted folds of the cerebral hemispheres. At the functional level, however, there are striking differences between the left and right hemispheres. Although popular writings attribute language abilities to the left hemisphere and spatial abilities to the right, differences in hemispheric function appear to be more subtle. According to Ivry and Robertson, asymmetries over a wide range of perceptual tasks reflect a difference in strength rather than kind, with both hemispheres contributing to the performance of complex tasks, whether linguistic or spatial. After an historical introduction, the authors offer a cognitive neuroscience perspective on hemispheric specialization in perception. They propose that the two hemispheres differ in how they filter task-relevant sensory information. Building on the idea that the hemispheres construct asymmetric representations, the hypothesis provides a novel account of many laterality effects. A notable feature of the authors' work is their attempt to incorporate hemispheric specialization in vision, audition, music, and language within a common framework. In support of their theory, they review studies involving both healthy and neurologically impaired individuals. They also provide a series of simulations to demonstrate the underlying computational principles of their theory. Their work thus describes both the cognitive and neurological architecture of hemispheric asymmetries in perception. Bradford Books imprint
Book
A biologically oriented introduction to synesthesia by the leading authority on the subject. For decades, scientists who heard about synesthesia hearing colors, tasting words, seeing colored pain just shrugged their shoulders or rolled their eyes. Now, as irrefutable evidence mounts that some healthy brains really do this, we are forced to ask how this squares with some cherished conceptions of neuroscience. These include binding, modularity, functionalism, blindsight, and consciousness. The good news is that when old theoretical structures fall, new light may flood in. Far from a mere curiosity, synesthesia illuminates a wide swath of mental life. In this classic text, Richard Cytowic quickly disposes of earlier criticisms that the phenomenon cannot be "real," demonstrating that it is indeed brain-based. Following a historical introduction, he lays out the phenomenology of synesthesia in detail and gives criteria for clinical diagnosis and an objective "test of genuineness." He reviews theories and experimental procedures to localize the plausible level of the neuraxis at which synesthesia operates. In a discussion of brain development and neural plasticity, he addresses the possible ubiquity of neonatal synesthesia, the construction of metaphor, and whether everyone is unconsciously synesthetic. In the closing chapters, Cytowic considers synesthetes' personalities, the apparent frequency of the trait among artists, and the subjective and illusory nature of what we take to be objective reality, particularly in the visual realm. The second edition has been extensively revised, reflecting the recent flood of interest in synesthesia and new knowledge of human brain function and development. More than two-thirds of the material is new. Bradford Books imprint