Conference Paper

How do warm colors affect visual attention?


Abstract

Red, orange, and yellow are termed warm colors. This paper reports a study on the role of warm colors in visual attention. The distributions of chromatic features (hue and saturation) are found to differ between warm colors that draw attention and those that do not. It is observed that the likelihood of a warm color drawing attention depends on both its hue and its saturation. Interestingly, this dependency is not on the absolute hue and saturation, but on these chromatic components relative to those of other warm-color pixels. Warm colors with a hue relatively closer to red and/or higher saturation are more likely to guide attention.
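For illustration only, the sketch below (not the authors' algorithm; the warm-hue band, the equal weighting, and the scoring function are assumptions) ranks warm-colored pixels by how close their hue is to red and how saturated they are relative to the other warm pixels in the same image, mirroring the relative dependence described above.

```python
# Toy illustration (not the paper's algorithm): score warm-colored pixels by how
# red-shifted and how saturated they are *relative to the other warm pixels* in
# the same image, echoing the reported dependence on relative hue and saturation.
import colorsys
import numpy as np

def warm_color_attention_scores(rgb):                  # rgb: (H, W, 3) floats in [0, 1]
    h, w, _ = rgb.shape
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb.reshape(-1, 3)])
    hue, sat = hsv[:, 0], hsv[:, 1]                    # hue in [0, 1); red = 0, yellow ~ 1/6
    warm = (hue <= 1 / 6) | (hue >= 11 / 12)           # crude red-orange-yellow band (assumption)
    scores = np.zeros(h * w)
    if warm.any():
        red_dist = np.minimum(hue[warm], 1.0 - hue[warm])        # hue distance from red, wrapped
        hue_score = 1.0 - red_dist / (red_dist.max() + 1e-9)     # relative to other warm pixels
        sat_score = sat[warm] / (sat[warm].max() + 1e-9)         # relative saturation
        scores[warm] = 0.5 * hue_score + 0.5 * sat_score         # equal weights: arbitrary choice
    return scores.reshape(h, w)

demo = np.random.rand(32, 32, 3)                       # random image, just to exercise the code
print(warm_color_attention_scores(demo).max())
```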


... Several studies have shown that warm colors such as red, and cool colors such as blue, may affect attention and arousal to different degrees (Yoto, Katsuura, Iwanaga, & Shimomura, 2007). Warm colors, particularly red, have been found to increase attention even when saturation and brightness are held constant between warm and cool colors (Aziz & Mertsching, 2008; Osberger & Maeder, 1998; Pal, Mukherjee, & Mitra, 2012). If red has higher attentional saliency and red stimuli enable better memory performance than blue stimuli, this could support the attention and arousal theory, because the higher attentional saliency would have had a more beneficial effect. ...
... The increased memory performance for red objects in a neutral context supports the theory of attention and arousal. Many studies discussed earlier found that red increased attention (Fortier-Gauthier, Dell'Acqua, & Jolicoeur, 2013; Pal, Mukherjee, & Mitra, 2012) and arousal (Jacobs & Suess, 1975; Stone, 2003) over other colors such as blue. These studies indicated that red would yield higher memory performance than blue if the attention and arousal theory were correct. ...
Thesis
Full-text available
Color has been shown to have a beneficial effect on memory in the general population and in some clinical populations, such as those with dementia and autism. Many factors within the effect of color on memory have been tested; however, the differences among colors have been largely ignored. This study aimed to close this gap by comparing the effects of red, blue, and black on recognition memory. In addition, this study was designed in a way that could help test one of the main theories on how color enhances memory, the attention and arousal theory. This theory posits that the effect of color is due to increased attention and arousal, and would predict a larger beneficial effect when using red due to higher amounts of attention and arousal. The experiment included 48 male and female participants, who viewed object pictures in luminance-matched red, blue, and black. The participants were later tested on their memory for the objects in a recognition memory test. Results indicated that participants had significantly higher accuracy for red objects than for blue objects, and also had a more conservative response bias for red objects than for blue or black objects. These results indicate that there is a difference in the effect on memory depending on the color. The results also supported the theory of attention and arousal, providing further understanding of how this process works. This knowledge can help researchers and clinicians by enabling them to take better advantage of the effect of color on memory.
... To bring people into a closer interactive relationship with the waterscape, we designed viewing platforms with varying sight heights and angles (Figure 4). Warm colors with greater saturation are used on the platform to draw the eye [74] and to help reduce stress and emotions [75]. The platform is arranged in a "line shape" alongside the riverbank. ...
Article
Full-text available
The formal beauty of “objects” is the main focus of modern rural landscapes, ignoring human interaction with the environment and the emotional reflection in this behavioral process. This focus cannot satisfy the emotional needs of younger people who aspire to a high-quality life in the rural environment. The research idea of this paper is ‘first assessment—then design—then validation’. First, a 5-point Likert scale was used to investigate differences in contemporary young people’s emotional perceptions of the four rural natural landscapes in terms of instinct, behavior, and reflection. Then, using architectural design methods, a visual attraction element (a viewing platform) was added to selected samples that varied in all three dimensions (visual richness, behavioral attraction, and depth of thought). After that, a desktop eye tracker was used to record the eye characteristics of participants viewing images of the current natural landscapes and images of the modified natural landscapes (pupil diameter, fixation duration, gaze points, etc.), and these data were combined with the subjective psychological perception scale scores to determine whether the subjects’ positive emotions were evoked by the modified natural environment. The findings indicate that placing visually attractive elements between people and the natural world can cause subjects to feel good, think deeply, and feel more a part of the surroundings. Furthermore, we confirmed that subjects’ emotions can be evoked by 2D pictures of natural environments and that the length of time subjects gaze at a picture is unaffected by the size of any individual element.
... According to the extent element of restoration theory, environmental components can easily occupy an occupant's mind (Herzog et al., 2003). Warm colors exhibited excellent attention-guiding qualities (Pal et al., 2012), which may explain their use in learning materials designed to evoke positive emotions and enhance understanding during media learning (Plass et al., 2014;Münchow et al., 2017). Color and mood are interconnected concepts (Valdez and Mehrabian, 1994), and even colors within the same scheme can elicit different emotional responses (Naz and Epps, 2004). ...
Article
Full-text available
As society and the economy have advanced, the focus of architectural and interior environment design has shifted from practicality to eliciting emotional responses, such as stimulating environments and innovative inclusive designs. Of particular interest is the home environment, as it is best suited for achieving restorative effects, leading to a debate between interior qualities and restorative impact. This study explored the relationships between home characteristics, restorative potential, and neural activities using the Neu-VR. The results of the regression analysis revealed statistically significant relationships between interior properties and restorative potential. We examined each potential characteristic of the home environment that could have a restorative impact and elucidated the environmental characteristics that should be emphasized in residential interior design. These findings contribute evidence-based knowledge for designing therapeutic indoor environments. Finally, by combining environments of differing restorative potential with neural activity, we discussed new neural activities that may predict restorativeness and decoded new neural-activity indicators for environmental design.
... Different colors elicited different amounts of arousal and attention (Pal, Mukherjee, & Mitra, 2012), and color presentation had a systematic effect on a subject's emotional state (Wilms & Oberfeld, 2018). ...
Article
Full-text available
Will the emotional context in which a painting is placed impact how well adolescents remember the colors in it? Will there be differences in color perception and recognition based on age and gender? Specific materials had to be designed to answer this question: (a) an abstract, themeless painting composed of twelve colors, each covering the same area with the same number of pixels, and (b) four texts with different emotional connotations (fear, anger, happiness, and sadness) describing the lives of fictitious painters. After being pretested, the material was presented to 142 seventh-grade students and 71 tenth-grade students. Each subject studied a painting and heard one of the four texts. Next, the painting was taken away; after this, the subjects ordered the twelve colors based on how much area they thought each one covered in the painting. It was hypothesized that the subjects would give greater importance to specific colors depending on the emotional context induced by the associated text. The results confirmed this hypothesis, although the tenth graders were less affected by the emotional context than the seventh graders. There was no statistically significant effect of gender in either population.
... It is, however, possible that if there were fewer coloured icons, colour might be a more effective landmark; in studies of artificial landmarks, for example, having only grey-coloured obvious landmarks significantly improved performance [78] in a similar selection task. It is also possible that colour interfered with participants' ability to see differences in the abstract shapes used in Study 1; that is, the colours used in the Abstract+Colour condition may have reduced contrast and thus reduced any potential effect of shape distinctiveness [44,55]. ...
Conference Paper
Full-text available
Learnability is important in graphical interfaces because it supports the user's transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable, but current "flat" and "subtle" designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. Little is known, however, about the effects of visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons' representations.
... Colors with longer wavelengths, such as color 7 (orange) and color 8 (red), appear to have higher means compared to other colors, although the differences are not significant. Since looking at a flickering stimulus and generating SSVEPs requires high visual attention, it is recommended to use colors that attract attention. The findings of [16] are coherent with our results that warm colors such as red and orange are more likely to guide attention. There is no doubt that red grabs attention and stimulates the visual sense, considering its usage on symbolic signs like "danger" and "attention" to draw people's attention. ...
Conference Paper
Full-text available
Due to their low training time and high time resolution, steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have been widely studied in recent years. Stimulus properties such as frequency, color, and shape can greatly affect the performance, comfort, and safety of these brain-computer interfaces. Despite this fact, stimulation properties have received fairly little attention. This study aims to investigate the influence of stimulus color on SSVEPs. We tested 10 colors and evaluated the responses using three different methods: power amplitude, the multivariate synchronization index, and canonical correlation analysis. The results showed that violet had the least influence on SSVEPs, while red tended to have a stronger impact.
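One of the three evaluation methods named above, canonical correlation analysis (CCA), is commonly applied to SSVEP data by correlating the multichannel EEG with sine and cosine references at the flicker frequency and its harmonics. The sketch below shows that generic procedure as a per-trial SSVEP-strength score that could be compared across stimulus colors; the sampling rate, channel count, and 12 Hz flicker are illustrative assumptions, not values from the paper.

```python
# Generic CCA-based SSVEP strength score (a sketch, not the paper's exact pipeline):
# the canonical correlation between multichannel EEG and sine/cosine references at
# the flicker frequency and its harmonics.
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    t = np.arange(n_samples) / fs
    refs = []
    for k in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * k * freq * t))
        refs.append(np.cos(2 * np.pi * k * freq * t))
    return np.column_stack(refs)                       # (n_samples, 2 * n_harmonics)

def ssvep_cca_score(eeg, flicker_freq, fs):            # eeg: (n_samples, n_channels)
    refs = reference_signals(flicker_freq, eeg.shape[0], fs)
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])    # larger = stronger SSVEP

fs, n_samples = 250, 1000                              # 4 s of simulated 8-channel EEG (assumed values)
t = np.arange(n_samples) / fs
eeg = 0.5 * np.random.randn(n_samples, 8) + np.sin(2 * np.pi * 12 * t)[:, None]
print(ssvep_cca_score(eeg, 12.0, fs))                  # score for one simulated trial
```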
Chapter
Even the enormous processing capacity of the human brain is not enough to handle all the visual sensory information that falls upon the retina. Still, human beings can efficiently respond to external stimuli. Selective attention plays an important role here: it helps to select only the pertinent portions of the scene being viewed for further processing deeper in the brain. Computational modeling of this neuro-psychological phenomenon has the potential to enrich many computer vision tasks. An enormous amount of research involving psychovisual experiments and computational models of attention has been and is being carried out, all within the past few decades. This article compiles a good volume of these research efforts. It also discusses various aspects related to computational modeling of attention, such as the choice of features, the evaluation of these models, and so forth.
Conference Paper
Visual attention is an indispensable component of complex vision tasks. When looking at a complex scene, our ocular perception is confronted with a large amount of data that needs to be broken down for processing by our psychovisual system. Selective visual attention provides a mechanism for serializing the visual data, allowing for sequential processing of the content of the scene. A Bottom-Up computational model is described that simulates the psycho-visual model of saliency based on features of intensity and color. The method gives sequential priorities to objects which other computational models cannot account for. The results demonstrate a fast execution time, full resolution maps and high detection accuracy. The model is applicable on both natural and artificial images.
Article
Full-text available
Bottom-up or saliency-based visual attention allows primates to detect nonspecific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable "spotlight." We use a model that reproduces the attentional scan paths of this spotlight. Simple multi-scale "feature maps" detect local spatial discontinuities in intensity, color, and orientation, and are combined into a unique "master" or "saliency" map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We here study the problem of combining feature maps, from different visual modalities (such as color and orientation), into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) simple normalized summation, (2) linear combination with learned weights, (3) global nonlinear normalization followed by summation, and (4) local nonlinear competition between salient locations followed by summation. Performance was measured as the number of false detections before the most salient target was found. Strategy (1) always yielded the poorest performance and (2) the best performance, with a threefold to eightfold improvement in time to find a salient target. However, (2) yielded specialized systems with poor generalization. Interestingly, strategy (4) and its simplified, computationally efficient approximation (3) yielded significantly better performance than (1), with up to fourfold improvement, while preserving generality. © 2001 SPIE and IS&T. (DOI: 10.1117/1.1333677)
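Strategy (3), global nonlinear normalization followed by summation, can be sketched as follows. Each feature map is rescaled to [0, 1] and weighted by (M - m_bar)^2, where M is its global maximum and m_bar is the mean of its other local maxima, so maps with a single strong peak are promoted and maps with many comparable peaks are suppressed. This is a simplified rendering of the commonly cited normalization operator, not the exact implementation evaluated in the paper.

```python
# Simplified sketch of strategy (3): rescale each feature map to [0, 1], weight it
# by (M - m_bar)^2 with M the global maximum and m_bar the mean of its other local
# maxima, then sum the weighted maps into a single saliency map. Details (peak
# detection window, epsilon terms) are simplifications.
import numpy as np
from scipy.ndimage import maximum_filter

def nonlinear_normalize(fmap, neighborhood=7):
    fmap = (fmap - fmap.min()) / (np.ptp(fmap) + 1e-9)            # rescale to [0, 1]
    is_peak = (maximum_filter(fmap, neighborhood) == fmap) & (fmap > 0)
    peaks = fmap[is_peak]
    M = fmap.max()
    m_bar = peaks[peaks < M].mean() if (peaks < M).any() else 0.0
    return fmap * (M - m_bar) ** 2                                # promote maps with one strong peak

def combine(feature_maps):
    return sum(nonlinear_normalize(m) for m in feature_maps)      # master saliency map

maps = [np.random.rand(64, 64) for _ in range(3)]                 # stand-ins for intensity/color/orientation
print(combine(maps).shape)
```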
Conference Paper
Full-text available
Visual attention has been a hot research topic for many years, and many new applications are emerging, especially for wireless multimedia services. In this paper, a novel region-based visual attention model is proposed to detect the regions of interest (ROIs) of images. In the proposed method, density-based image segmentation is first performed, treating the region as the perceptive unit, which makes the model robust to the scale of ROIs and preserves more perceptive information. To generate the region saliency map for ROI detection, global effect and contextual difference are captured in the form of a distance factor and an adjacency factor, respectively. Since different ROIs may have different importance for different purposes, a ROI ranking algorithm is designed for browsing large images on small displays. Experimental results and evaluation reveal that our method works effectively to detect ROIs in images and that users are satisfied with the browsing sequence on small displays.
Article
Full-text available
A proposal for saliency computation within the visual cortex is put forth, based on the premise that localized saliency computation serves to maximize the information sampled from one's environment. The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of those appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and, therefore, of basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.
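The information-maximization premise can be illustrated with a deliberately simplified sketch: score each location by the self-information, -log p(value), of its local value, with p estimated from the whole image. The real model operates on learned sparse features over local patches; using raw intensity bins here is an assumption made only to keep the example short.

```python
# Deliberately simplified self-information saliency: rare intensity values are
# treated as informative, hence salient. Raw intensity bins stand in for the
# learned sparse features used by the actual model.
import numpy as np

def self_information_saliency(gray, n_bins=32):        # gray: (H, W) floats in [0, 1]
    bins = np.clip((gray * n_bins).astype(int), 0, n_bins - 1)
    hist = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
    p = hist / hist.sum()
    return -np.log(p[bins] + 1e-12)                    # self-information per pixel

print(self_information_saliency(np.random.rand(64, 64)).max())
```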
Article
Full-text available
When processing complex visual input, human observers sequentially allocate their attention to different subsets of the stimulus. What are the mechanisms and strategies that guide this selection process? We investigated the influence of various stimulus features on human overt attention (that is, attention related to shifts of gaze) with natural color images and modified versions thereof. Our experimental modifications, systematic changes of hue across the entire image, influenced only the global appearance of the stimuli, leaving the local features under investigation unaffected. We demonstrated that these modifications consistently reduce the subjective interpretation of a stimulus as "natural" across observers. By analyzing fixations, we found that first-order features, such as luminance contrast, saturation, and color contrast along either of the cardinal axes, correlated to overt attention in the modified images. In contrast, no such correlation was found in unmodified outdoor images. Second-order luminance contrast ("texture contrast") correlated to overt attention in all conditions. However, although none of the second-order color contrasts were correlated to overt attention in unmodified images, one of the second-order color contrasts did exhibit a significant correlation in the modified images. These findings imply, on the one hand, that higher-order bottom-up effects, namely those of second-order luminance contrast, may partially account for human overt attention. On the other hand, these results also demonstrate that global image properties, which correlate to the subjective impression of a scene being "natural," affect the guidance of human overt attention.
Article
Full-text available
A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail. Index terms: visual attention, scene analysis, feature extraction, target detection, visual search.
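The scanning behavior of the dynamical neural network can be approximated, for illustration, by a plain winner-take-all loop with inhibition of return: repeatedly pick the most salient location, then suppress a disc around it so the focus of attention moves on. The radius and fixation count below are arbitrary choices, and the loop is a toy stand-in for the network described in the abstract.

```python
# Toy winner-take-all scan with inhibition of return: repeatedly select the most
# salient location, then zero a disc around it so attention shifts onward.
import numpy as np

def scan_saliency(saliency, n_fixations=5, inhibition_radius=8):
    s = saliency.copy()
    ys, xs = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)                     # winner-take-all
        fixations.append((int(y), int(x)))
        s[(ys - y) ** 2 + (xs - x) ** 2 <= inhibition_radius ** 2] = 0.0   # inhibition of return
    return fixations

print(scan_saliency(np.random.rand(64, 64)))
```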
Article
Full-text available
We present a method for automatically determining the perceptual importance of different regions in an image. The algorithm is based on human visual attention and eye movement characteristics. Several features known to influence human visual attention are evaluated for each region of a segmented image to produce an importance value for each factor and region. These are combined to produce an Importance Map, which classifies each region of the image in relation to its perceptual importance. The results shown indicate that the calculated Importance Maps correlate well with human perception of visually important regions. The Importance Maps can be used in a variety of applications, including compression, machine vision, and image databases. Our technique is computationally efficient and flexible, and can easily be extended to specific applications.
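As a hedged sketch of the Importance Map idea, the snippet below scores each region of a pre-segmented grayscale image on a few factors (contrast with the global mean, relative size, and closeness to the image center; these factor choices and the equal weighting are assumptions, not the paper's exact factor set) and averages them into one importance value per region.

```python
# Hedged sketch of an Importance Map: each region of a pre-segmented grayscale
# image gets a few per-region factor scores, averaged into one value per region.
import numpy as np

def importance_map(image, labels):                     # image: (H, W) in [0, 1]; labels: (H, W) int ids
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    imp = np.zeros((h, w), dtype=float)
    global_mean = image.mean()
    for r in np.unique(labels):
        mask = labels == r
        contrast = abs(image[mask].mean() - global_mean)             # contrast with the rest of the image
        size = 1.0 - mask.mean()                                     # smaller regions score higher (assumption)
        centrality = 1.0 - np.hypot(ys[mask].mean() / h - 0.5,
                                    xs[mask].mean() / w - 0.5)       # closeness to the image centre
        imp[mask] = (contrast + size + centrality) / 3.0             # equal-weight combination
    return imp

img = np.random.rand(48, 48)
seg = (np.arange(48)[:, None] // 16) * 3 + (np.arange(48)[None, :] // 16)   # toy 3x3 segmentation
print(importance_map(img, seg).shape)
```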
Conference Paper
In evaluating a model of a system, one should have the ideal response of the system as a reference. To evaluate visual saliency models, one needs to obtain the salient locations/objects in an image according to the human visual system. In many experiments, eye-trackers are used for this purpose, but this technology is unavailable to many researchers. This paper deals with an alternative method of collecting groundtruth data. Salient locations are identified using a hand-eye coordination based approach, which does not require any instruments other than the usual computing resources. Then, image-segment-based analysis generates groundtruth data that serves as the reference for evaluating visual saliency models. The generated groundtruth data fits the size and shape of the underlying objects.
Conference Paper
Visual attention region determination simulates the behavior of the human visual system and determines visual attention regions in an image. In this study, a visual attention region determination approach using low-level features, including luminance, color, and region information, is proposed. First, the contrast map is attained by computing the contrast between each pixel and its "thresholding" neighboring pixels in eight directions, based on the just noticeable difference (JND) model. Then, the block-based edge map and standard deviation representation are used to generate the region mask. The saliency map is obtained by using the contrast map and the region mask. Finally, a visual attention region determination scheme is presented to determine visual attention region(s) in the image. Based on the experimental results obtained in this study, the proposed approach has better performance than two comparison approaches.
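The pixel-contrast step can be sketched as follows: compare each pixel with its neighbors in eight directions and accumulate only the differences that exceed a just-noticeable-difference (JND) threshold. The constant threshold and the wrap-around handling of image borders are simplifications; the paper derives the threshold from a luminance-adaptive JND model.

```python
# Sketch of the supra-threshold pixel contrast idea: compare each pixel with its
# eight neighbours and accumulate only differences above a JND threshold.
import numpy as np

def contrast_map(gray, jnd=0.04):                      # gray: (H, W) floats in [0, 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    cmap = np.zeros_like(gray)
    for dy, dx in offsets:
        shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)   # wrap-around borders (simplification)
        diff = np.abs(gray - shifted)
        cmap += np.where(diff > jnd, diff, 0.0)        # keep only supra-threshold contrast
    return cmap / len(offsets)

print(contrast_map(np.random.rand(64, 64)).max())
```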
Article
In the macaque monkey, a dozen distinct visual areas have been identified in the cerebral cortex. These areas can be arranged in a well-defined hierarchy on the basis of their pattern of interconnections. Physiological recordings suggest that there are at least two major functional streams in this hierarchy, one related to the analysis of motion and the other to the analysis of form and color.
Conference Paper
Evaluation is a key part of proposing a new model. To evaluate models of visual saliency, one needs to compare the model's output with the salient locations in an image. This paper proposes an approach to find these salient locations, i.e., the groundtruth for experiments with visual saliency models. It is found that the proposed human hand-eye coordination based technique can be an alternative to costly pupil-tracking based systems. Moreover, an evaluation metric is also proposed that suits the needs of saliency models.
Conference Paper
In this paper, we propose an algorithm for identifying regions of interest (ROIs) in video, particularly for the keyframes extracted from a home video. The camera motion is introduced as a new factor that can influence the visual saliency. The global motion parameters are used to generate location-based importance maps. These maps can be combined with other saliency maps calculated using other visual and high-level features. Here, we employed the contrast-based saliency as an important low level factor along with face detection as a high level feature in our approach.
Conference Paper
Standardised image databases, or rather the lack of them, are one of the main weaknesses in the field of content based image retrieval (CBIR). Authors often use their own images or do not specify the source of their datasets. Naturally, this makes comparison of results somewhat difficult. While a first approach towards a common colour image set has been taken by the MPEG-7 committee, their database does not cater for all strands of research in the CBIR community. In particular, as the MPEG-7 images only exist in compressed form, they do not allow for an objective evaluation of image retrieval algorithms that operate in the compressed domain, or for judging the influence image compression has on the performance of CBIR algorithms. In this paper we introduce a new dataset, UCID (pronounced "use it"), an Uncompressed Colour Image Dataset which tries to bridge this gap. The UCID dataset currently consists of 1338 uncompressed images together with a ground truth of a series of query images with corresponding models that an ideal CBIR algorithm would retrieve. While its initial intention was to provide a dataset for the evaluation of compressed domain algorithms, the UCID database also represents a good benchmark set for the evaluation of any kind of CBIR method as well as an image set that can be used to evaluate image compression and colour quantisation algorithms.
Article
The mixture of a few horizontal and vertical line segments embedded in an aggregate of diagonal line segments can be rapidly counted, and their positions rapidly determined, by a parallel (preattentive) process. However, the discrimination between horizontal and vertical orientation (that is, discrimination of a single conspicuous feature) requires serial search by focal attention. Under recent theories of attention, focal attention has been assumed to be required for the recognition of different combinations of features. According to the findings of this experiment, knowing "what" even a single feature is requires time-consuming search by focal attention. Only knowing "where" a target is can be mediated by a parallel process.
Article
A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.
Article
Visual attention is one of the important phenomena in biological vision, and it can be emulated to achieve more efficiency, intelligence, and robustness in artificial vision systems. This paper investigates a region-based approach that performs pixel clustering prior to the processes of attention, in contrast to the late clustering done by contemporary methods. The foundation steps of feature map construction for the region-based attention model are proposed here. The color contrast map is generated based upon extended findings from color theory, the symmetry map is constructed using a novel scanning-based method, and a new algorithm is proposed to compute a size contrast map as a formal feature channel. Eccentricity and orientation are computed using the moments of the obtained regions, and then saliency is evaluated using rarity criteria. The efficient design of the proposed algorithms allows incorporating five feature channels while maintaining a processing rate of multiple frames per second. Another salient advantage over existing techniques is the reusability of the salient regions in high-level machine vision procedures due to the preservation of their shapes and precise locations. The results indicate that the proposed model has the potential to efficiently integrate the phenomenon of attention into the mainstream of machine vision, and systems with restricted computing resources, such as mobile robots, can benefit from its advantages.
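The rarity criterion can be illustrated with a small sketch: a region is considered salient when its feature vector (for example mean color, size, and eccentricity) lies far from those of the other regions. The feature normalization and the mean-distance rarity measure below are illustrative assumptions rather than the authors' exact formulation.

```python
# Toy rarity-based saliency over region features: a region is salient when its
# normalised feature vector is far, on average, from those of the other regions.
import numpy as np

def rarity_saliency(region_features):                  # (n_regions, n_features)
    f = np.asarray(region_features, dtype=float)
    f = (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-9)  # normalise each feature channel
    dists = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=-1)
    return dists.mean(axis=1)                          # mean distance to all regions (self-distance is 0)

features = np.random.rand(10, 5)                       # e.g. 10 regions, 5 feature channels
print(rarity_saliency(features).round(2))
```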
Conference Paper
Measuring the perceptual saliency of regions in a scene is important for determining regions of interest. Color, texture, and shape cues are good low-level features for detecting saliency. While self saliency refers to intrinsic attributes of a region, relative saliency is used to measure how salient a region is relative to its surroundings and, thus, needs to be defined within a spatial context. A few spatial context models are investigated in this study. In particular, we propose an auto-scaled, amorphous neighborhood as the context model to obtain reliable measurements of relative saliency features. Comparison of three context models has shown that the proposed model is capable of generating predicates more consistent with perceived saliency.
Conference Paper
Active and selective perception seeks regions of interest in an image in order to reduce the computational complexity associated with time-consuming processes such as object recognition. We describe in this paper a visual attention system that extracts regions of interest by integrating multiple image cues. Bottom-up cues are detected by decomposing the image into a number of feature and conspicuity maps, while a-priori knowledge (i.e., models) about objects is used to generate top-down attention cues. Bottom-up and top-down information is combined through a non-linear relaxation process using energy-minimization-like procedures. The functionality of the attention system is expanded by the introduction of an alerting (motion-based) system able to explore and avoid obstacles. Experimental results are reported, using cluttered and noisy scenes.
Article
Models of visual attention provide a general approach to controlling the activities of active vision systems. We introduce a new model of attentional control that differs in important aspects from conventional ones. We divide the selection into two stages, which is more suitable for the system and also explains different phenomena found in natural visual attention, such as the dispute between early and late selection. The proposed model is especially designed for use in dynamic scenes. Our approach aims at modeling as much of a general active vision system as possible and at designing clean interfaces for the integration of the remaining specific aspects needed to solve specific problems.
Article
The analyses reported in this paper were based on the 69 natural images; analysis of the remaining 8 synthetic images is not included.