Article

Changing Faces: A Detection Advantage in the Flicker Paradigm


Abstract

Observers seem surprisingly poor at detecting changes in images following a large transient or flicker. In this study, we compared this change blindness phenomenon between human faces and other common objects (e.g., clothes). We found that changes were detected far more rapidly and accurately in faces than in other objects. This advantage for faces, however, was found only for upright faces in multiple-object arrays, and was completely eliminated when displays showed one photograph only or when the pictures were inverted. These results suggest a special role for faces in competition for visual attention, and provide support for previous claims that human faces are processed differently than stimuli that may be of less biological significance.
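The flicker paradigm referenced in the abstract alternates an original display with a slightly changed one, separated by brief blanks that mask the local motion signals that would otherwise reveal the change. As a rough illustration only (the frame durations below are assumptions, not the study's actual parameters), the trial structure can be sketched as:

```python
# Minimal sketch of a flicker-paradigm frame sequence: the original display (A)
# and the changed display (A') alternate, separated by brief blanks that
# mask local motion cues. Durations are illustrative assumptions, in ms.

def flicker_sequence(n_cycles):
    """Yield (frame, duration_ms) pairs for n_cycles of A/A' alternation."""
    for _ in range(n_cycles):
        yield ("A", 240)        # original array (e.g., a face among objects)
        yield ("blank", 80)     # blank transient
        yield ("A_prime", 240)  # same array with one item changed
        yield ("blank", 80)

frames = list(flicker_sequence(2))
# the cycle repeats A -> blank -> A' -> blank until the observer responds
```

In an actual experiment the sequence loops until a response; detection latency is then the time from sequence onset to the observer's report of the changed item.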


... We further draw on theory surrounding change detection to explain how the frequency with which entrepreneurs change from one facial expression to another influences funding. This work argues that people automatically detect changes in others' facial expressions at a preattentive level (Ro et al., 2001). Changes in expression increase observer attention (Eastwood et al., 2001; Frischen et al., 2008). ...
... Our contribution enhances the generalizability of the dual threshold model itself and opens a new line of research concerning display rules in entrepreneurship. Finally, by incorporating insights from basic emotion theory (e.g., Ekman, 1992; Keltner et al., 2019) and theory surrounding change detection (Ro et al., 2001), we explain why the frequency of changes in entrepreneurs' facial expressions of emotion promotes funding. Our theorizing and findings suggest potential boundary conditions for emotional expressiveness in funding pitches, such that expressiveness is more likely to be beneficial if an entrepreneur expresses a variety of emotions. ...
... Changes without emotional significance require substantial attentional resources to be noticed (Rensink, 2002), whereas those with emotional significance, like emotionally expressive human faces, are processed quickly and preattentively. As such, people are generally well-equipped to quickly detect changes in others' facial expressions (Ro et al., 2001), which has been established in multiple lab experiments (e.g., Kovarski et al., 2017; Niedenthal et al., 2001). ...
Article
Full-text available
We build upon theory from evolutionary psychology and emotional expression, including basic emotion theory and the dual threshold model of anger in organizations, to extend knowledge about the influence of facial expressions of emotion in entrepreneurial fundraising. First, we conduct a qualitative analysis to understand the objects of entrepreneurs' facial expressions of four basic emotions in their pitches: happiness, anger, fear, and sadness. This provides a base for our theorizing that the frequency of entrepreneurs' facial expression of each of these emotions exhibits an inverted U-shaped relationship with funding. We also argue that the frequency of changes in entrepreneurs' facial expressions is positively related to funding. We test our predictions with a sample of 489 funding pitches using computer-aided facial expression analysis. Results support inverted U-shaped relationships of the frequency of facial expression of happiness, anger, and fear with funding, but show a negative relationship of sadness with funding. Results further support that the frequency of change in entrepreneurs' facial expressions promotes funding.
... Newborns will also visually track a schematic face farther into the periphery than a scrambled face (Goren et al., 1975), and prefer to look at upright rather than inverted schematic faces (Mondloch et al., 1999). Because human faces are biologically and socially significant, the brain may have evolved to be more responsive to faces, which can preferentially capture and engage attention compared to other stimuli (Bindemann et al., 2005; Langton et al., 2008; Palermo & Rhodes, 2007; Ro et al., 2001). ...
... In these paradigms, both types of stimuli should capture attention and lead to an IOR response. However, faces should initially capture more reflexive attention than other stimuli (Bindemann et al., 2005; Langton et al., 2008; Palermo & Rhodes, 2007; Ro et al., 2001), since faces are processed faster and in more depth than other stimuli. This means that after attention is disengaged from the stimulus cue (and brought back to fixation), inhibition of return to the previously attended location should be stronger when a face occupied it than when another stimulus did, because the face was more thoroughly attended, and there should be a stronger bias to search for novel locations (perhaps for new faces). ...
... That is, an IOR response (longer RTs to cued than uncued targets) is still expected to occur at the 150 ms SOA, but overall RTs should be faster for face trials than for house trials, regardless of cued or uncued positions. Since faces should initially capture a faster shift of attention than houses (Bindemann et al., 2005; Langton et al., 2008; Palermo & Rhodes, 2007; Ro et al., 2001), it is then predicted that at the later SOAs, a greater IOR response should be seen for faces in general compared to houses. ...
... Attention is a critical gateway to information processing, but various demonstrations have accumulated to suggest that faces may have prioritized access to attention, allowing them to be perceived even when not intentionally attended. For example, faces have been shown to be less prone to inattentional blindness and change blindness than other common objects such as clothes (Mack & Rock, 1998; Devue et al., 2009a; Ro et al., 2001). Face distractors have shown an enhanced ability to capture attention when appearing as a singleton distractor during visual search (Ro, Friggel & Lavie, 2007), and to disrupt target detection in an attentional blink task, relative to other abruptly onsetting distractors (Sato & Kawahara, 2015). ...
... Face distractors have shown an enhanced ability to capture attention when appearing as a singleton distractor during visual search (Ro, Friggel & Lavie, 2007), and to disrupt target detection in an attentional blink task, relative to other abruptly onsetting distractors (Sato & Kawahara, 2015). However, with the exception of change blindness (Ro et al., 2001), none of these lines of evidence has considered perceptual load, a major determinant of selective attention. This research has thus left open the possibility that the special attentional status of faces is limited to situations in which the attended task does not demand full attention. ...
... Thus, while both face and non-face distractors received the 'spill over' of attentional resources under low perceptual load, only face distractors could capture attention under high perceptual load. Indeed, the aforementioned reduced vulnerability to change blindness for faces compared to other non-face objects (Ro et al., 2001) has also been shown to hold when the displays were of high load, including multiple objects competing for attention, while no difference was found under low load, further confirming the superior ability of faces to capture attention when competing with other objects. ...
Article
Research over the past 25 years indicates that stimulus processing is diminished when attention is engaged in a perceptually demanding task of high ‘perceptual load’. These results have generalized across a variety of stimulus categories, but a controversy evolved over the question of whether perception of distractor faces (or other categories of perceptual expertise) can proceed irrespective of the level of perceptual load in the attended task. Here we identify task-relevance, and in particular identity-relevance, as a potentially important factor in explaining prior inconsistencies. In four experiments, we tested whether perceptual load in an attended letter or word task modulates the processing of famous face distractors, while varying their task-relevance. Distractor interference effects on task RTs were reduced by perceptual load not only when the faces were entirely task-irrelevant, but also when the face gender was task-relevant, within a name gender classification response-competition task using famous female or male distractor faces. However, when the identity associated with the famous faces was primed by the task using their names, as in prior demonstrations that face distractors are immune to the effects of perceptual load, we were able to replicate these prior findings. Our findings demonstrate a role for identity-priming by the relevant task in determining attentional capture by faces under high perceptual load. Our results also highlight the importance of considering even relatively subtle forms of task-relevance in selective attention research.
... Even before the foundation of empirical aesthetics as an academic discipline, the English painter William Hogarth argued that S-shaped, or serpentine, lines, which he called "the lines of beauty", are more productive of beauty and lively ornamentation, because they can vary both in length and in degree of curvature, whereas straight lines vary only in length (Corradi & Munar, 2019; Hogarth, 1753). The study of how more complex stimuli (composed of combinations of straight lines, curved lines, and angles) are perceived from an affective point of view has been deepened two and a half centuries after Hogarth: several studies have shown that biological and affective cues, such as emotional faces (Gronau et al., 2003; Sui & Liu, 2009; Vuilleumier, 2005), capture attention more than do most common stimuli without biological or affective relevance (e.g., Ro et al., 2001). For example, emotionally charged expressions and baby faces draw attention more than neutral faces (Brosch et al., 2007; Palermo & Rhodes, 2007), whereas other stimuli can also strongly grasp our attention when present in a crowd, including knives, guns, syringes and dangerous animals (e.g., snakes, spiders), namely negative/threatening stimuli which require a rapid response. ...
... We point out that we preferred to use an upward-pointing equilateral triangle instead of a downward-pointing one, to be consistent with the other two classical geometric shapes, since the typicality of a form can also play a role in its associated valence (Reber et al., 2004). Despite that, previous research has often demonstrated that, although angular shapes are liked less than curved ones, the downward-pointing triangle, perhaps due to its link with the shape drawn by the eyebrows when we are angry, is disliked even further and associated with threat (e.g., Larson et al., 2012; Ro et al., 2001). Hence the equilateral triangle, although negatively valenced, may not be so "threatening" as to become associated with the self-reported aggressivity of our sample. ...
Article
Full-text available
For more than a century, psychologists have been interested in how visual information can arouse emotions. Several studies have shown that rounded shapes evoke positive feelings due to their link with happy/baby-like expressions, compared with sharp angular shapes, usually associated with anger and threatening objects having negative valence. However, to date, no one has investigated the preference to associate simple geometric shapes with personal identities, including one’s own, that of a close acquaintance, or that of a stranger. Through two online surveys we asked participants to associate a geometric shape, chosen among a circle, a square and a triangle, with each of three identities, namely “you” (the self), “your best friend” or “a stranger”. We hypothesized that the circle would be most associated with the self, the square with the friend, and the triangle with the stranger. Moreover, we investigated whether these associations are modulated by three personality traits: aggressivity, social fear and empathy. As predicted, we found that participants more often associated the circle with the self, and both the circle and the square with the best friend, whereas they matched angular shapes (both the triangle and the square) to the stranger. On the other hand, the possibility that personality traits can modulate such associations was not confirmed. The study of how people associate geometric figures with the self or with other identities, giving them an implicit socio-affective connotation, is of interest for all disciplines concerned with the automatic affective processes activated by visual stimuli.
... While the presented literature centers on depictions of people as a whole, important characteristics needed to recognize a person relate to their faces. Faces are more likely to attract our attention than common objects (Ro et al. 2001). The preference to look at human faces is already established in infants (Johnson et al. 1991). ...
... Our findings are in line with research on depictions of people in marketing. The attention-attracting effect of faces (Haist and Anzures 2017; Ju and Johnson 2010; Ro et al. 2001) can also be linked to our study, which provides initial evidence for this kind of effect. This effect furthermore seems to be a factor in consumer music-selection behavior. ...
Conference Paper
Full-text available
Streaming services are becoming the primary source for media consumption. Platforms like SoundCloud, where users can disseminate user-generated content (UGC), are gaining particular relevance. To shed light on the drivers that positively influence the number of listeners, we draw from marketing literature related to depictions of people, which suggests that human faces can contribute to a higher degree of brand liking or brand identification. Thereupon, we propose the hypothesis that human faces on cover arts likewise generate more plays. We follow a data science approach using 1754 observations from SoundCloud and apply Google's facial recognition API (Vision AI) to examine the impact of human faces on music's success. We provide initial evidence that tracks with a human-face cover art yield a higher number of plays compared to tracks with a cover art without a human face.
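The analysis in this abstract boils down to a simple pipeline: run face detection on each track's cover art, split tracks by the result, and compare play counts. The sketch below uses a hypothetical `has_face` predicate standing in for a real detector call (the study used Google's Vision AI) and invented sample records:

```python
# Sketch of the cover-art analysis: split tracks by whether their cover art
# contains a detected face, then compare average play counts per group.
# `has_face` is a hypothetical stand-in for a real face-detection call;
# the sample records below are invented for illustration.
from statistics import mean

def compare_play_counts(tracks, has_face):
    """Return (mean plays for face covers, mean plays for non-face covers)."""
    with_face = [t["plays"] for t in tracks if has_face(t["cover_art"])]
    without_face = [t["plays"] for t in tracks if not has_face(t["cover_art"])]
    return mean(with_face), mean(without_face)

tracks = [
    {"cover_art": "a.jpg", "plays": 1200},
    {"cover_art": "b.jpg", "plays": 300},
    {"cover_art": "c.jpg", "plays": 900},
]
face_set = {"a.jpg", "c.jpg"}  # pretend the detector flagged these covers
m_face, m_none = compare_play_counts(tracks, lambda img: img in face_set)
# -> (1050, 300): face covers average more plays in this toy sample
```

In the actual study the group comparison would of course use the full set of 1754 observations and an appropriate statistical test rather than a raw mean difference.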
... Several lines of evidence have shown that positive stimuli with biological relevance, such as food and faces (Gronau et al., 2003; Sui & Liu, 2009; Vuilleumier, 2005), capture attention more than do most common objects with lower salience (e.g., Ro et al., 2001). Furthermore, emotionally charged expressions and baby faces draw attention more than neutral faces (Brosch et al., 2007; Palermo & Rhodes, 2007), whereas other stimuli can also strongly grasp our attention, including knives, guns, syringes and dangerous animals (e.g., snakes, spiders), namely threatening stimuli which require a rapid response. ...
... We point out that we preferred to use an upward-pointing equilateral triangle instead of a downward-pointing one, to be consistent with the other two classical geometrical shapes, since the typicality of a figure can also play a role in its associated valence (Reber et al., 2004). Despite that, research has often demonstrated that, although angular figures are liked less than curved ones, the downward-pointing triangle, perhaps due to its link with the shape drawn by the eyebrows when we are angry, is disliked even further and associated with threat (e.g., Larson et al., 2012; Ro et al., 2001). Hence the equilateral triangle, although negatively valenced, may not be so "threatening" as to become associated with aggressivity. ...
Preprint
Full-text available
For more than a century, psychologists have been interested in how visual information can arouse emotion. Several studies have shown that rounded figures evoke positive feelings due to their link with happy/baby-like expressions, compared with sharp angular figures, usually associated with anger and threatening objects having negative valence. However, to date, no one has investigated the preference to associate a simple geometrical shape with one’s own identity, with a close and positive person like the best friend, or with a potentially dangerous one such as a stranger. Through two online surveys we asked participants to associate a geometric shape, chosen among a circle, a square and a triangle, with each of three identities, namely “you” (the self), “a friend” or “a stranger”. We hypothesized that the circle would be most associated with the self, the square with the friend and the triangle with the stranger. Moreover, we investigated whether these associations are modulated by three personality traits: aggressivity, empathy and social fear. As predicted, we found that participants more often associated the circle with the self, and the circle and the square with the best friend, whereas they matched the angular shapes (both the triangle and the square) to the stranger. On the other hand, the possibility that personality traits can modulate such associations was not confirmed. The study of how people associate geometrical figures with the self or with other identities, giving them an implicit socio-affective connotation, is of interest for all disciplines concerned with the automatic affective processes activated by visual stimuli.
... The question of whether persons with autism spectrum disorder (ASD) experience social stimuli differently than control persons without ASD (CON) has been widely debated. In non-autistic persons there is a very stable preference for processing social stimuli (depicting humans, i.e., faces and body parts), reflected either in shorter reaction times in a detection task for faces and body parts (Ro et al., 2007) or in better detection or discrimination of social stimuli (Bruce et al., 1991; Kikuchi et al., 2009; Lehky, 2000; Ro et al., 2001). Briefly presented faces are detected faster and more accurately than objects (Purcell & Stewart, 1988) and salient social stimuli (i.e. ...
... Social stimuli were recognized significantly faster than non-social stimuli by both groups, in line with previous reports of preferential processing of social over non-social stimuli (Bruce et al., 1991; Kikuchi et al., 2009; Lehky, 2000; Ro et al., 2001, 2007). This is also in accordance with neuroscientific evidence of pathways specialized for the processing of socially relevant information (Alcalá-López et al., 2018; Nummenmaa & Calder, 2009). ...
Article
Full-text available
In this study we investigate whether persons with autism spectrum disorder (ASD) perceive social images differently than control participants (CON) in a graded perception task in which stimuli emerged from noise before dissipating into noise again. We presented either social stimuli (humans) or non-social stimuli (objects or animals). ASD were slower to recognize images during their emergence, but as fast as CON when indicating the dissipation of the image irrespective of its content. Social stimuli were recognized faster and remained discernable longer in both diagnostic groups. Thus, ASD participants show a largely intact preference for the processing of social images. An exploratory analysis of response subsets reveals subtle differences between groups that could be investigated in future studies.
... 1) In terms of generalized IQA, GFIQA can be used to improve model predictions. It has been demonstrated that the human visual system (HVS) is extremely sensitive to faces [9], [10]. An accurate face IQA metric could benefit the generalized IQA task. ...
Preprint
Full-text available
Computer vision models for image quality assessment (IQA) predict the subjective effect of generic image degradation, such as artefacts, blurs, bad exposure, or colors. The scarcity of face images in existing IQA datasets (below 10%) limits the precision of IQA required for accurately filtering low-quality face images or guiding CV models for face image processing, such as super-resolution, image enhancement, and generation. In this paper, we first introduce the largest annotated IQA database to date, containing 20,000 human faces (an order of magnitude larger than all existing rated datasets of faces) of diverse individuals, in highly varied circumstances, quality levels, and distortion types. Based on the database, we further propose a novel deep learning model, which re-purposes generative prior features for predicting subjective face quality. By exploiting rich statistics encoded in well-trained generative models, we obtain generative prior information of the images and use it as latent references to facilitate the blind IQA task. Experimental results demonstrate the superior prediction accuracy of the proposed model on the face IQA task.
... We numbered the images in random order and displayed the even-numbered images on the left side of the screen, and odd-numbered images on the right side of the screen. This was to minimize bias caused by change detection in the flicker paradigm [18][19][20]. ...
Article
Full-text available
The performance of a deep learning algorithm (DLA) was compared with that of radiologists in detecting low-contrast objects in CT phantom images under various imaging conditions. For training, 10,000 images were created using the American College of Radiology CT phantom as the background. In half of the images, objects of 3–20 mm size and 5–30 HU contrast difference were generated in random locations. Binary responses were used as the ground truth. For testing, 640 images of the Catphan® phantom were used, half of which had objects of either 5 or 9 mm size with 10 HU contrast difference. Twelve radiologists evaluated the presence of objects on a five-point scale. The performances of the DLA and radiologists were compared across different imaging conditions in terms of area under the receiver operating characteristic curve (AUC). Multi-reader multi-case AUC and Hanley and McNeil tests were used. We performed a post-hoc analysis using bootstrapping and verified that the DLA is less affected by changing imaging conditions. The AUC of the DLA was consistently higher than those of the radiologists across different imaging conditions (p < 0.0001), and it was less affected by varying imaging conditions. The DLA outperformed the radiologists and showed more robust performance under varying imaging conditions.
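The headline metric in the abstract above, AUC, has a convenient probabilistic reading: it is the probability that a randomly chosen positive case (object present) receives a higher score than a randomly chosen negative one, with ties counted as half. A minimal sketch of that computation, on invented labels and scores:

```python
# AUC via the Mann-Whitney formulation: the probability that a positive
# case scores above a negative one, counting ties as half a win.
# Labels and scores below are invented for illustration.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]           # ground truth: object present / absent
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]  # e.g., rater or model confidence
result = auc(labels, scores)
# 8 of the 9 positive/negative pairs are ranked correctly, so AUC = 8/9
```

This pairwise formulation is equivalent to the area under the empirical ROC curve; production analyses (and multi-reader multi-case comparisons like the one in the abstract) would use dedicated statistical tooling rather than this toy function.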
... We suggest that this interaction stems from the relative attentional distribution pattern among the two paired categories. As faces are laden with social and biological significance, and as they may easily bias and/or engage attentional resources (e.g., Langton et al., 2008; Ro et al., 2001, 2007), it is not surprising that a large mixed-category advantage is observed among such stimuli when paired with other non-facial categories. The fact that highway images additionally demonstrated a mixed-category advantage when paired with non-facial categories further lends support to this hypothesis. ...
Article
Full-text available
The mixed-category advantage in visual working memory refers to improved memory for an image in a display containing two different categories relative to a display containing only one category (Cohen et al., 2014). Jiang, Remington, et al. (2016) found that this advantage characterizes mainly faces and suggested that face-only displays suffer from enhanced interference due to the unique configural nature of faces. Faces, however, possess social and emotional significance that may bias attention toward them in mixed-category displays at the expense of their counterpart category. Consequently, the counterpart category may suffer from little/no advantage, or even an inversed effect. Using a change-detection task, we showed that a category that demonstrated a mixed-category disadvantage when paired with faces, demonstrated a mixed-category advantage when paired with other non-facial categories. Furthermore, manipulating the likelihood of testing a specific category (i.e., changing its task-relevance) in mixed-category trials, altered its advantaged/disadvantaged status, suggesting that the effect may be mediated by attention. Finally, to control for perceptual exposure factors, a sequential presentation experimental version was conducted. Whereas faces showed a typical mixed-category advantage, this pattern was again modulated (yielding an advantage for a non-facial category) when inserting a task-relevance manipulation. Taken together, our findings support a central resource allocation account, according to which the asymmetric mixed-category effect likely stems from an attentional bias to one of the two categories. This attentional bias is not necessarily spatial in its nature, and it presumably affects processing stages subsequent to the initial perceptual encoding phase in working memory.
... A third possibility is that, at least in some contexts, neutral faces communicate sufficient emotion to serve the same function as more robustly defined faces. More specifically, it may be that individuals with AUD are more susceptible to failures of inhibitory function when confronted with stimuli that strongly capture attention and convey emotional information, such as faces (e.g., Theeuwes & Van der Stigchel, 2006; Ro, Russell, & Lavie, 2001), regardless of specific emotion or intensity. We should note that in the current context, the utility of the neutral faces lies primarily in providing a comparator for evaluating emotion-specific effects. ...
Article
Background Individuals with alcohol use disorder (AUD) often display compromise in emotional processing and non-affective neurocognitive functions. However, relatively little empirical work explores their intersection. In this study, we examined working memory performance when attending to and ignoring facial stimuli among adults with and without AUD. We anticipated poorer performance in the AUD group, particularly when task demands involved ignoring facial stimuli. Whether this relationship was moderated by facial emotion or participant sex were explored as empirical questions. Methods Fifty-six controls (30 women) and 56 treatment-seekers with AUD (14 women) completed task conditions in which performance was advantaged by either attending to or ignoring facial stimuli, including happy, neutral, or fearful faces. Group, sex, and their interaction were independent factors in all models. Efficiency (accuracy/response time) was the primary outcome of interest. Results An interaction between group and condition (F1,107 = 6.03, p < .02) was detected. Individual comparisons suggested this interaction was driven by AUD-associated performance deficits when ignoring faces, whereas performance was equivalent between groups when faces were attended. Secondary analyses suggested little influence of specific facial emotions on these effects. Conclusions These data provide partial support for initial hypotheses, with the AUD group demonstrating poorer working memory performance conditioned on the inability to ignore irrelevant emotional face stimuli. The absence of group differences when scenes were to be ignored (faces remembered) suggests the AUD-associated inability to ignore irrelevance is influenced by specific stimulus qualities.
... Compared to other visual objects, faces more easily capture our attention, and humans are experts at face perception (Ro et al., 2001; Vuilleumier, 2000; Young & Burton, 2018). Different expressions have different processing advantages (Xu et al., 2020); for example, angry faces convey threatening information that is particularly efficient at drawing attention (Fox et al., 2000; Hansen & Hansen, 1988; Pinkham et al., 2010). ...
Book
Visual working memory (VWM) is a system to actively maintain visual information to meet the needs of ongoing cognitive tasks. There is a trade-off between the precision of each representation stored in VWM and the number of representations due to the VWM resource limit. VWM resource allocation can be studied in two ways: one way is to investigate the ability to voluntarily trade off VWM precision and representation number stored in VWM; the other way is to investigate the ability to filter task-irrelevant information. The factors that influence these two aspects remain unclear. I investigated the influence of stimulus presentation time, VWM capacity, and emotional state on this trade-off ability attributed to VWM (Study I and Study II). In addition, I investigated the influence of facial expression of distractor stimuli, VWM capacity, and depressive symptoms on filtering ability (Study III and Study IV). Study I demonstrated that there is a positive relationship between VWM capacity and voluntary trade-off ability only when stimulus presentation time is long. Study II found that participants can improve VWM precision in a negative emotional state by reducing the number of representations stored in VWM when the stimulus presentation time is long. Study III found that face distractors could be filtered by participants with high VWM capacity, while low capacity participants had difficulties in filtering both angry and neutral face distractors. Study IV found that dysphoric participants could filter both sad and fearful face distractors. In contrast, non-dysphoric participants failed to filter fearful face distractors, but they could filter sad face distractors efficiently. Overall, the results of these studies suggest that VWM resource allocation is affected by stimulus-related and individuals' state- and trait-related factors (i.e., stimulus presentation time, VWM capacity, emotional state, facial expression, and depressive symptoms). 
These findings provide a better understanding of VWM resource allocation, which can possibly be applied in the future when developing methods for cognitive training and clinical purposes.
... While numerous studies (e.g., Bindemann et al., 2005; Gamer & Büchel, 2009; Mack et al., 2002; Ro et al., 2001; Shelley-Tremblay & Mack, 1999; Theeuwes & Van der Stigchel, 2006; Vuilleumier, 2000) have shown that humans display an attentional bias towards faces or other human features, these studies typically employ a highly controlled design consisting of simplified social stimuli (e.g., schematic or isolated real faces). In the past decade, an increasing number of researchers began questioning the assumption that we can generalize findings from such controlled settings to gaze behavior in real social situations (Kingstone, 2009; Risko et al., 2012). ...
Thesis
Full-text available
Humans in our environment are of special importance to us. Even if our minds are fixated on tasks unrelated to their presence, our attention will likely be drawn towards other people’s appearances and their actions. While we might remain unaware of this attentional bias at times, various studies have demonstrated the preferred visual scanning of other humans by recording eye movements in laboratory settings. The present thesis aims to investigate the circumstances under and the mechanisms by which this so-called social attention operates. The first study demonstrates that social features in complex naturalistic scenes are prioritized in an automatic fashion. After 200 milliseconds of stimulus presentation, which is too brief for top-down processing to intervene, participants targeted image areas depicting humans significantly more often than would be expected from a chance distribution of saccades. Additionally, saccades towards these areas occurred earlier in time than saccades towards non-social image regions. In the second study, we show that human features receive most fixations even when bottom-up information is restricted; that is, even when only the fixated region was visible and the remaining parts of the image masked, participants still fixated on social image regions longer than on regions without social cues. The third study compares the influence of real and artificial faces on gaze patterns during the observation of dynamic naturalistic videos. Here we find that artificial faces, belonging to humanlike statues or machines, significantly predicted gaze allocation but to a lesser extent than real faces. In the fourth study, we employed functional magnetic resonance imaging to investigate the neural correlates of reflexive social attention. Analyses of the evoked blood-oxygenation level dependent responses pointed to an involvement of striate and extrastriate visual cortices in the encoding of social feature space. 
Collectively, these studies help to elucidate under which circumstances social features are prioritized in a laboratory setting and how this prioritization might be achieved on a neuronal level. The final experimental chapter addresses the question of whether these laboratory findings can be generalized to the real world. In this study, participants were introduced to a waiting room scenario in which they interacted with a confederate. Eye movement analyses revealed that gaze behavior heavily depended on the social context and was influenced by whether an interaction was currently desired. We further did not find any evidence for altered gaze behavior in socially anxious participants. Alleged gaze avoidance or hypervigilance in social anxiety might thus represent a laboratory phenomenon that occurs only under very specific real-life conditions. Altogether, the experiments described in the present thesis refine our understanding of social attention and simultaneously challenge the inferences we can draw from laboratory research.
... Our results are consistent with previous studies reporting that participants are faster to attend to target probes on the side of previously presented human face cues than to target probes on the side of previously presented object cues (Bindemann et al., 2007; Ro, Russell, & Lavie, 2001). Interestingly, this was the case only for the 100-ms and 1,000-ms cue displays. ...
Article
Humans demonstrate enhanced processing of human faces compared with animal faces, known as own-species bias. This bias is important for identifying people who may cause harm, as well as for recognizing friends and kin. However, growing evidence also indicates a more general face bias. Faces have high evolutionary importance beyond conspecific interactions, as they aid in detecting predators and prey. Few studies have explored the interaction of these biases together. In three experiments, we explored processing of human and animal faces, compared with each other and with nonface objects, which allowed us to examine both own-species and broader face biases. We used a dot-probe paradigm to examine human adults’ covert attentional biases for task-irrelevant human faces, animal faces, and objects. We replicated the own-species attentional bias for human faces relative to animal faces. We also found an attentional bias for animal faces relative to objects, consistent with the proposal that faces broadly receive privileged processing. Our findings suggest that humans may be attracted to a broad class of faces. Further, we found that while participants rapidly attended to human faces across all cue display durations, they attended to animal faces only when they had sufficient time to process them. Our findings reveal that the dot-probe paradigm is sensitive for capturing both own-species and more general face biases, and that each has a different attentional signature, possibly reflecting their unique but overlapping evolutionary importance.
... Humans are experts in face processing (Gauthier, Skudlarski, Gore, & Anderson, 2000). Human faces are biologically and socially significant stimuli that are potentially difficult to ignore, even when ignoring them would benefit task performance (Hansen & Hansen, 1988; Ro, Russell, & Lavie, 2001). Furthermore, emotional faces can interrupt an ongoing memory task. ...
Preprint
Previous studies conducted in healthy humans using event-related potentials have shown that task-irrelevant fearful faces are difficult to filter from visual working memory (VWM), and that anxiety symptoms increase this difficulty. It is not known, however, whether non-threatening faces are also difficult to filter and whether depression symptoms affect this. We tested whether task-irrelevant sad and fearful faces are stored differently by dysphoric (elevated depressive symptoms) and control participants who performed a VWM task related to objects' colors. We found that although the groups differed neither in VWM capacity nor in behavioral distractibility, they differed in filtering ability as indexed by the contralateral delay activity, a specific index of the maintenance phase of VWM. Control participants unnecessarily stored fearful faces in memory, but they were able to filter sad faces, suggesting that specifically threatening faces are difficult to filter from VWM in healthy individuals. Dysphoric participants filtered both fearful and sad face distractors efficiently. Thus, a depression-related attentional bias toward sad faces, if present here, does not seem to result in unnecessary storage of sad faces. Our results suggest a threat-related filtering difficulty, along with an unexpected absence of this difficulty for negative faces in participants with depression symptoms.
... A second explanation posits that the intuitive accessibility of trait impressions from faces can account for their persistent effects (Jaeger, Evans, Stel, & van Beest, 2019a). Faces attract attention (Ro, Russell, & Lavie, 2001;Theeuwes & Van der Stigchel, 2006) and are processed quickly and efficiently (Stewart et al., 2012;Willis & Todorov, 2006). This processing advantage leads to an intuitive accessibility of trait impressions from faces. ...
Article
Full-text available
Trait impressions from faces influence many consequential decisions even in situations in which decisions should not be based on a person's appearance. Here, we test (a) whether people rely on trait impressions when making legal sentencing decisions and (b) whether two types of interventions-educating decision-makers and changing the accessibility of facial information-reduce the influence of facial stereotypes. We first introduced a novel legal decision-making paradigm. Results of a pretest (n = 320) showed that defendants with an untrustworthy (vs. trustworthy) facial appearance were found guilty more often. We then tested the effectiveness of different interventions in reducing the influence of facial stereotypes. Educating participants about the biasing effects of facial stereotypes reduced explicit beliefs that personality is reflected in facial features, but did not reduce the influence of facial stereotypes on verdicts (Study 1, n = 979). In Study 2 (n = 975), we presented information sequentially to disrupt the intuitive accessibility of trait impressions. Participants indicated an initial verdict based on case-relevant information and a final verdict based on all information (including facial photographs). The majority of initial sentences were not revised and therefore unbiased. However, most revised sentences were in line with facial stereotypes (e.g., a guilty verdict for an untrustworthy-looking defendant). On average, this actually increased facial bias in verdicts. Together, our findings highlight the persistent influence of trait impressions from faces on legal sentencing decisions.
... Even under conditions of complete inattention, individuals still implicitly process and encode facial stimuli (Mack & Rock, 1998). In change-detection paradigms, in which participants search for changes in a visual scene, changes in faces capture attention more efficiently than changes in other objects (Ro, Russell, & Lavie, 2001). Furthermore, changes in people and animals are detected faster than changes in plants or vehicles (New, Cosmides, & Tooby 2007;New et al., 2010). ...
... 24 Research in psychology suggests that faces may be especially important for attracting attention because of the important social role other humans play in individuals' lives. 25,26 Neuroimaging findings 27 as well as behavioral data 28,29 further show that viewers' attention is directed more readily to faces than to other objects in a visual scene. Furthermore, the advertising literature has shown that faces in online banner advertisements for hair care products attract more attention than advertisements without faces. ...
Article
Objectives: Minimally regulated electronic cigarette (e-cigarette) advertising may be one factor driving the increasing prevalence of young adult e-cigarette use. Using eye-tracking, the current study examined which e-cigarette advertising features were the most appealing to young adults as a first step toward examining how e-cigarette advertising may be regulated. Methods: Using a within-subjects design, 30 young adults (Mage = 20.0 years) viewed e-cigarette ads in a laboratory. Ad features or areas of interest (AOIs) included: 1) brand logo, 2) product descriptor, and 3) people. During ad viewing, eye-tracking measured participants' dwell time and time to first fixation for each AOI as well as for each ad brand. Harm perceptions were measured pre- and post-viewing. Results: Participants spent the longest dwell time on people (M = 2701 ms), then product descriptors (M = 924 ms), then brand logos (M = 672 ms; ps < .001). They also fixated fastest on AOIs in that order. Participant sex significantly impacted dwell time on ad brand, and harm perceptions decreased after viewing the ads (ps < .05). Conclusions: This study provides initial evidence about which e-cigarette ad features may appeal most to young adults and may be useful when designing evidence-based policy.
... To understand in more detail how this recognition system works, we must consider the factors capable of impairing it. In adults as in children, face recognition has been shown to be greatly reduced in the following experimental situations: when the standard position of the facial features is no longer respected (Goren et al., 1975; Johnson et al., 1991; Purcell & Stewart, 1986, 1988), when the face is presented upside down (Gliga et al., 2009; Ro et al., 2001), or when contrast and luminance are inverted (Valenza et al., 1996). Humans distinguish two faces more easily when they are presented upright rather than inverted (Valentine, 1988, for a review). ...
Thesis
My thesis addresses a fundamental dimension of the structure of social groups: hierarchy. In humans, social hierarchies profoundly govern interactions. To navigate their environment successfully, individuals must be able to precisely identify the hierarchical positions of the other members of their group. This thesis work aims to characterize certain neural, behavioral, and physiological mechanisms involved in the analysis of a hierarchical cue. To specify the nature of hierarchy processing, I explored its influence on different stages of face perception. I first examined the time course of the neural processing of faces in a hierarchical context. Two electroencephalography studies allowed me to identify the neural potentials and oscillatory components evoked by the perception of faces associated either with a hierarchical rank established through competition or with a social status induced by profession. An eye-tracking study then aimed to capture the influence of hierarchy on fine mechanisms of visual attention control. I studied both the visual exploration of hierarchical rankings that included the participant and that of faces associated with different hierarchical ranks. Finally, I attempted to determine whether a hierarchical signal or a situation of hierarchical asymmetry conveys a non-neutral emotional and motivational valence capable of inducing variations in certain physiological parameters, such as heart rate or the electrodermal response.
Article
Full-text available
Hierarchy is a key organizational feature of social groups. In order to successfully navigate their social environment, humans must precisely read the hierarchical position of other during social interaction. This present thesis intends to characterize the neural correlates as well as the early physiological and behavioral mechanisms involved in the processing of social rank. The influence of hierarchy was mainly investigated in the context of face perception. To begin, my focus was on the time course of neuronal processing of faces embedded in a hierarchical context. Using eletroencephalography in two studies, it has been possible to identify evoked neuronal potentials and oscillatory components in response to faces varying in hierarchical rank, established through competition or social status induced by profession. The next study used eye-tracking methodology to explore the influence of hierarchy on the subtle mechanisms of visual attention control. I aimed at characterizing the visual scanning pattern of hierarchical rankings (during a competition) and of faces associated with different hierarchical ranks. Finally, I tried to determine if a hierarchical signal or a social asymmetrical situation conveyed an emotional/motivational valence. During face perception and a minimal social interaction, I examined if this particular dimension of hierarchy generated variations of physiological activity, such as heart rate and skin conductance response.
... The efficiency of face-stimulus processing, given how quickly faces capture and engage our attention (e.g., Palanica and Itier 2012; Ro et al. 2001), is likely related to the fact that an individual's social performance depends heavily on how people attend to faces. This is even more relevant when the face expresses a specific emotion: threatening faces, for instance, are detected faster than friendly faces among both neutral and emotional distractors (Ohman et al. 2001). ...
Article
Full-text available
Assuming the importance of a social dimension in the appraisal of emotion, here we investigate how social presence impacts emotional interference effects. This modulation is hypothesized because social presence is expected both to facilitate emotion recognition and to increase executive control functions. In one experiment, participants performed two different emotional Stroop tasks that used either emotional words or emotional facial expressions as targets versus distractors, either in a co-action or in an isolated setting. Results show that social presence reduces interference effects. However, the evidence for increased control in the presence of others is less clear when the distractor is an emotional face (versus an emotional word), particularly for angry expressions. Attentional capture by faces and by expressions of anger seems not to be overcome by the increase in control promoted by the presence of others, suggesting the higher relevance of these stimuli in social settings.
... Attentional priorities among objects in natural scenes are not absolute but relative: Visual orienting may be well understood as biased competition among objects (Desimone, 1998). The facial advantage may be absent without other competing objects (Ro, Russell, & Lavie, 2001). In the absence of faces, bodies, salient objects, and text strings in scenes may draw attention as effectively as faces (Bindemann et al., 2010;Cerf, Frady, & Koch, 2009). ...
Article
In this study I examined the role of the hands in scene perception. In Experiment 1, eye movements during free observation of natural scenes were analyzed. Fixations to faces and hands were compared under several conditions, including scenes with and without faces, with and without hands, and without a person. The hands were either resting (e.g., lying on the knees) or interacting with objects (e.g., holding a bottle). Faces held an absolute attentional advantage, regardless of hand presence. Importantly, fixations to interacting hands were faster and more frequent than those to resting hands, suggesting attentional priority to interacting hands. The interacting-hand advantage could not be attributed to perceptual saliency or to the hand-owner (i.e., the depicted person) gaze being directed at the interacting hand. Experiment 2 confirmed the interacting-hand advantage in a visual search paradigm with more controlled stimuli. The present results indicate that the key to understanding the role of attention in person perception is the competitive interaction among objects such as faces, hands, and objects interacting with the person.
... Our theorizing builds on research about the effects of motivation on selective attention outside the domain of ideological beliefs (37,38). Hungry individuals, relative to those low in hunger, show a greater attentional bias for food-related stimuli (39), and addicts give preferential attention to the object of addiction relative to control stimuli (40). ...
Article
Full-text available
Significance: Inequality between groups is all around us—but who tends to notice, and when? Whereas some individuals assert rampant inequality and demand corrective interventions, others exposed to the same contexts retort that their peers see certain inequalities where none exist and selectively overlook inconvenient others. Across five studies (total N = 8,779), we consider how individuals’ ideological beliefs shape their proclivity to naturalistically attend to—and accurately detect—inequality, depending on which groups bear inequality’s brunt. Our results suggest that social egalitarians (versus anti-egalitarians) are more naturally vigilant for and accurate at detecting inequality when it affects societally disadvantaged groups (e.g., the poor, women, racial minorities) but not when it (equivalently) affects societally advantaged groups (e.g., the rich, men, Whites).
... The faces of the red labels are sad (manga) or angry (comic) and thick in shape. Faces or facial expressions were added to make them more visible (although basic research implies some benefits of faces in capturing attention [39][40][41], studies have not found that, for example, smileys are better than colored labels [34]), especially for children, on which they will be tested in the future. ...
Article
To promote healthy food choices and thus reduce obesity, front-of-pack (FOP) labels have been introduced. Though FOP labels help identify healthy foods, their impact on actual food choices is rather small. A newly developed so-called swipe task was used to investigate whether the type of label used (summary vs. nutrient-specific) had differential effects on different operationalizations of the "healthier choice" measure (e.g., calories and sugar). After learning about the product offerings of a small online store, observers (N = 354) could, by means of a swipe gesture, purchase the products they needed for a weekend with six people. Observers were randomly assigned to one of five conditions: two summary label conditions (Nutri-Score and HFL), two nutrient (sugar)-specific label conditions (manga and comic), or a control condition without a label. Unexpectedly, more products (+7.3 products), albeit mostly healthy ones, and thus more calories (+1732 kcal) were purchased in the label conditions than in the control condition. Furthermore, the tested labels had different effects with respect to the different operationalizations (e.g., manga reduced sugar purchases). We argue that the additional green-labeled healthy products purchased (in the label conditions) "compensate" for the purchase of red-labeled unhealthy products (see averaging bias and licensing effect).
... A second explanation posits that the intuitive accessibility of trait impressions can account for their persistent effects (Jaeger et al., 2019a). Faces attract attention (Ro, Russell, & Lavie, 2001;Theeuwes & Van der Stigchel, 2006) and are processed quickly and efficiently (Stewart et al., 2012;Willis & Todorov, 2006). This processing advantage leads to an intuitive accessibility of trait impressions. ...
Preprint
Full-text available
Trait impressions from faces influence many consequential decisions even in situations in which they have poor diagnostic value and in which decisions should not be based on a person’s appearance. Here, we test (a) whether people rely on facial appearance when making legal sentencing decisions and (b) whether two types of interventions—educating decision-makers and changing the accessibility of facial information—reduce the influence of facial stereotypes. We first introduce a novel legal decision-making paradigm with which we measure reliance on facial appearance. Results of a pretest (n = 320) show that defendants with an untrustworthy (vs. trustworthy) facial appearance are found guilty more often. We then test the effectiveness of different interventions in reducing the influence of facial stereotypes. Educating participants about the biasing effects of facial stereotypes reduces the explicit belief that personality is reflected in facial features, but does not reduce the influence of facial appearance on verdicts (Study 1, n = 979). In Study 2 (n = 975), we present information sequentially to disrupt the intuitive accessibility of trait impressions. Participants indicate an initial verdict based on case-relevant information and a final verdict based on all information (including facial photographs). The wide majority of initial sentences were not revised and were therefore unbiased. However, most revised sentences were in line with facial stereotypes (e.g., a guilty verdict for an untrustworthy-looking defendant). On average, this actually increased facial bias in verdicts. Together, our findings highlight the persistent influence of facial appearance on legal sentencing decisions.
... The process seems to be highly automatic and effortless, completing in a fraction of a second. There is evidence suggesting that the detection of faces may occur pre-attentively (Devue et al., 2009; Landau and Bentin, 2008; Ro et al., 2001), and that changeable aspects of faces, such as expression and eye gaze, may be processed even subconsciously. Critical for human social interactions, facial identity is an invariant representation near the top of the visual information processing hierarchy. ...
Article
Full-text available
Face identity is represented at a high level of the visual hierarchy. Whether the human brain can process facial identity information in the absence of visual awareness remains unclear. In this study, we investigated potential face identity representation through face-identity adaptation, with the adapting faces interocularly suppressed by Continuous Flash Suppression (CFS) noise, a modified binocular rivalry paradigm. The strength of interocular suppression was manipulated by varying the contrast of the CFS noise. While observers reported the face images as subjectively unperceived and the face identity as objectively unrecognizable, a significant face identity aftereffect was observed under low- but not high-contrast CFS noise. In addition, the identity of face images under shallow interocular suppression could be decoded from multi-voxel patterns in the right fusiform face area (FFA) obtained with high-resolution 7T fMRI. Thus, the combined evidence from visual adaptation and 7T fMRI suggests that face identity can be represented in the human brain without explicit perceptual recognition. The processing of interocularly suppressed faces may occur at different levels depending on how deeply the information is suppressed.
... Certain faces preferentially grab our attention (e.g., Öhman et al., 2001; Ro et al., 2001; Vuilleumier, 2000), and infant faces, in particular, elicit preferential allocation of attention (e.g., Cárdenas et al., 2013; Thompson-Booth et al., 2014a; Venturoso et al., 2019). This preferential allocation of attention to infant-related stimuli is thought to serve an adaptive function: given their total reliance on adult care for survival, it is adaptive that humans are especially attuned to infants (Lorenz, 1943). ...
Thesis
Full-text available
Infant facial cues affect a variety of caretaking-related responses in adults. These effects have primarily been explored as they relate to parental care; however, infants receive care from others who are not their parents, and it would be important for any caregiver, regardless of parental status, to respond to infant cues effectively. Because siblings often fulfill a caregiver role in the home, this study investigated whether having siblings, younger siblings in particular, influences the way in which adults respond to infant cues. Contrary to my predictions, the findings of this study indicate that having siblings influences neither how rewarding infant cuteness is nor how sensitive participants are to infant cuteness. Additional analyses exploring the potential impact of experience with younger siblings also failed to show that responses to infant cues were sensitive to this type of alloparental care. Future research should consider investigating whether the age difference between siblings affects responses to infant cues.
... Humans tend to quickly direct their attention and gaze to the faces in a scene (Bindemann, Burton, Hooge, Jenkins & de Haan, 2005; Cerf, Frady & Koch, 2009; Cerf, Harel, Einhauser & Koch, 2008; Costela & Woods, 2019; Coutrot & Guyader, 2014; Foulsham, Cheng, Tracy, Henrich & Kingstone, 2010; Jack & Schyns, 2015; Marat, Rahman, Pellerin, Guyader & Houzet, 2013; Ro, Russell & Lavie, 2001; Theeuwes & Van der Stigchel, 2006), and we prefer faces to other objects from a very young age (Umiltà, Simion & Valenza, 1996; Farzin, Rivera & Whitney, 2009; Frank, Vul & Johnson, 2009). Indeed, we are highly trained with faces, as we typically see and recognize many faces every day starting in infancy. ...
Article
Full-text available
Humans quickly detect and gaze at faces in the world, which reflects their importance in cognition and may lead to tuning of face recognition toward the central visual field. Although sometimes reported, foveal selectivity in face processing is debated: brain imaging studies have found evidence for a central field bias specific to faces, but behavioral studies have found little foveal selectivity in face recognition. These conflicting results are difficult to reconcile, but they could arise from stimulus-specific differences. Recent studies, for example, suggest that individual faces vary in the degree to which they require holistic processing. Holistic processing is the perception of faces as a whole rather than as a set of separate features. We hypothesized that the dissociation between behavioral and neuroimaging studies arises because of this stimulus-specific dependence on holistic processing. Specifically, the central bias found in neuroimaging studies may be specific to holistic processing. Here, we tested whether the eccentricity-dependence of face perception is determined by the degree to which faces require holistic processing. We first measured the holistic-ness of individual Mooney faces (two-tone shadow images readily perceived as faces). In a group of independent observers, we then used a gender discrimination task to measure recognition of these Mooney faces as a function of their eccentricity. Face gender was recognized across the visual field, even at substantial eccentricities, replicating prior work. Importantly, however, holistic face gender recognition was relatively tuned—slightly, but reliably, stronger in the central visual field. Our results may reconcile the debate on the eccentricity-dependence of face perception and reveal a spatial inhomogeneity specifically in the holistic representations of faces.
... Nonetheless, managers should be aware of the risk of using anthropomorphic cues in retail settings that may attract more attention than the product itself. Faces are very relevant stimuli that can capture more attention than objects (Ro et al., 2001). ...
Article
Full-text available
In retail environments, consumers are constantly exposed to in‐store marketing communication activities. However, relatively little is known about the attribution of human traits to this communication tool. The current research focuses on how anthropomorphizing retail cues such as dump bins influences consumer behavior and the moderating effect of the vice‐virtue character of the displayed products. Using eye‐tracking technology in an ecological shopping environment, we tracked shoppers' gazes through the store and analyzed their visual attention. Results show that attaching anthropomorphic forms to dump bins positively affects attitudes toward the displayed products. In addition, we demonstrate that displaying a vice product in an anthropomorphic dump bin increases both attitude toward the product and purchase intention, compared to the display of a virtue product. These findings suggest that anthropomorphism has an empathy‐helping underlying psychological mechanism that, when applied to retail communication activities, can contribute to justifying the purchase of vice products.
... Thus, it should be possible to observe binding effects in visual detection performance if the nonspatial information of visual stimuli is harder to ignore. Faces are visual stimuli that typically attract more attention (see also Theeuwes & Van der Stigchel, 2006) than simple color dots or shapes (see, e.g., Palermo & Rhodes, 2007; Ro et al., 2001), are stimuli that humans perceive very fast (e.g., Ghuman et al., 2014), and have been used to induce IOR in cue-target designs (e.g., Taylor & Therrien, 2005, 2008). Moreover, humans attend to threatening faces (Mogg & Bradley, 1999) and disengage more slowly from such fearful faces (e.g., Georgiou et al., 2005). ...
Article
Full-text available
Responding to a stimulus leads to the integration of response and stimulus features into an event file. Upon repetition of any of its features, the previous event file is retrieved, thereby affecting ongoing performance. Such integration-retrieval explanations exist for a number of sequential tasks (which measure these processes as ‘binding effects’) and are thought to underlie all actions. However, based on the attentional orienting literature, Schöpper, Hilchey, et al. (2020) showed that binding effects are absent when participants detect visual targets in a sequence: in visual detection performance, there is simply a benefit for target location changes (inhibition of return). In contrast, Mondor and Leboe (2008) had participants detect auditory targets in a sequence and found a benefit for frequency repetition—presumably reflecting a binding effect in auditory detection performance. In the current study, we conducted two experiments that differed only in the modality of the target: participants signaled the detection of a sound (N = 40) or of a visual target (N = 40). Whereas visual detection performance showed a pattern incongruent with binding assumptions, auditory detection performance revealed a non-spatial feature repetition benefit, suggesting that frequency was bound to the response. Cumulative reaction time distributions indicated that the absence of a binding effect in visual detection performance was not caused by overall faster responding. The current results show a clear limitation of binding accounts in action control: binding effects are not only limited by task demands, but can depend entirely on target modality.
... In humans, human faces are biologically and socially significant stimuli (Langton, Law, Burton, & Schweinberger, 2008;Ro, Russell, & Lavie, 2001). Although humans are experts in face processing (Gauthier, Skudlarski, Gore, & Anderson, 2000), face distractors can interrupt an ongoing VWM task and can be difficult to filter (Gambarota & Sessa, 2019;Stout, Shackman, & Larson, 2013). ...
Preprint
Previous studies have shown that task-irrelevant threatening faces (e.g., fearful faces) are difficult to filter from visual working memory (VWM). What is not known, however, is whether non-threatening negative faces (e.g., sad faces) are also difficult to filter and whether depressive symptoms affect the ability to filter different emotional faces. We tested whether task-irrelevant sad and fearful faces could be filtered by depressed and non-depressed participants (control group) performing a color-change detection task. The groups differed in their filtering ability, as indicated by their contralateral delay activity, a specific event-related potential (ERP) index for the number of objects stored in the VWM during the maintenance phase. The control group did not unnecessarily store sad face distractors, but they automatically stored fearful face distractors, suggesting that threatening faces are specifically difficult to filter from VWM in non-depressed individuals. By contrast, depressed participants showed no additional consumption of VWM resources for either distractor condition compared to non-distractor conditions, possibly suggesting that neither fearful nor sad face distractors were maintained in VWM. Our control group results confirm the previous findings of a threat-related filtering difficulty in average individuals, while also suggesting that non-threatening negative faces do not unnecessarily load the VWM. The novel finding of the ability to filter negative face distractors in participants with depressive symptoms may reflect a decreased overall responsiveness to emotional stimuli or a greater consumption of VWM resources in non-distractor trials. Future studies need to investigate the mechanism underlying distractor filtering in the depressed population.
... The processing of faces is supported by specialized distributed networks within the brain (e.g., the fusiform face area; Kanwisher, 2000), and faces are detected quickly and capture people's attention compared to other objects (Langton et al., 2008; M. B. Lewis & Edmonds, 2003; Ro et al., 2001; Theeuwes & Van der Stigchel, 2006). This face expertise has been attributed to the ability to process faces configurally (Maurer et al., 2002). ...
Article
This research aims to determine how disfigurement alters visual attention paid to faces and to examine whether such a potentially modified pattern of visual attention to faces with visible difference was associated, in turn, with the perceiver's stigmatizing affective reactions. A pilot study (N = 38) and a pre-registered experimental eye-tracking study (N = 89) were conducted. First, the visual explorations of faces with and without disfigurement were compared. The association of these visual explorations with affective reactions was investigated next. Findings suggest that disfigurement impacts visual attention toward faces; attention is not merely attracted to the disfigured area but is also diverted, particularly from the eye area. Disfigurement also elicits disgust-related, surprise-related, anxiety-related, and, to a lesser extent, hostility-related affective states. Exploratory interaction effects between attention to the eyes and to the disfigured part of the face revealed a hybrid effect on disgust-related affect and an increase in surprise-related affect when participants fixated more upon the disfigured area and less upon the eyes. Thus, the perceiver's attention is captured by disfigurement and also diverted from internal face features, which seems to play a role in the affective reactions elicited.
... Although drawings of objects and scenes can be useful in studying change detection, some researchers have argued that drawings create an artificial parsing of a scene. To overcome this challenge, other researchers have used photographs of real-world objects (Blackmore et al. 1995; Grimes 1996; Rensink et al. 1997; Zelinsky 2001; Ro et al. 2001) or dynamic displays such as movies (Gysen et al. 2002; Wallis and Bulthoff 2000). Finally, other researchers have gone one step further to achieve the highest level of realism by designing experiments using real-life interactions (Simons and Levin 1998; Frances Wang and Simons 1999). ...
Article
Due to the dynamic nature of construction sites, workers face constant changes, including changes that endanger their safety. Failing to notice significant changes to visual scenes, known as change blindness, can potentially put construction workers in harm's way. Hence, understanding the inability or failure to detect change is critical to improving worker safety. No study to date, however, has empirically examined change blindness in relation to construction safety. To address this critical knowledge gap, this study examined the effects of change type (safety-relevant or safety-irrelevant) and work experience on hazard-identification performance, with a focus on fall-related hazards. The experiment required participants (construction workers, students with work experience, and students with no work experience) to detect changes between two construction scenario images that alternated repeatedly and then identify any changes. The results demonstrated that, generally, safety-relevant changes were detected significantly faster than safety-irrelevant changes, with certain types of fall hazards (e.g., unprotected edge hazards) being detected faster than others (e.g., ladder hazards). The study also found that more experienced subjects (i.e., workers) achieved higher accuracy in detecting relevant changes, but their mean response time was significantly longer than that of students with and without experience. Collectively, these findings indicate that change blindness may influence workers' situation awareness on jobsites. Demonstrating workers' susceptibility to change blindness can help raise awareness during worker training about how workers allocate and maintain attention.
... Evaluating whether the people around us represent a threat to our own safety or, conversely, opportunities for friendly interactions is a fundamental daily challenge with major implications for survival in social environments. Since we cannot access others' intentions directly, we base our first impressions of others on readily available cues, among which facial appearance plays a key role (Ro et al., 2001; Theeuwes & Van der Stigchel, 2006; Zebrowitz, 1997). People process facial features quickly (Stewart et al., 2012; Willis & Todorov, 2006) and infer a wide range of information with differing degrees of confidence and accuracy. ...
Article
The present work investigates pupillary reactions induced by exposure to faces with different levels of trustworthiness. Participants’ (N = 69) pupillary changes were recorded while they viewed white male faces with a neutral expression varying on facial trustworthiness. Results suggest that reward processing and pupil mimicry are relevant mechanisms driving participants’ pupil reactions. However, when including both factors in one statistical model, pupil mimicry seems to be a stronger predictor than reward processing of participants’ pupil dilation. Results are discussed in light of pupillometry evidence.
... Faces are effective at retaining attention (Bindemann et al., 2005; Gilchrist & Proske, 2006; Theeuwes & Van der Stigchel, 2006), even when they are task-irrelevant (Langton et al., 2008). Faces are more likely to escape change blindness (Ro et al., 2001), inattentional blindness (Mack & Rock, 1998), and continuous flash suppression (CFS; Jiang et al., 2007; Stein et al., 2016) in typical participants, and they are better at overcoming extinction in neurological patients (Vuilleumier, 2000). These biases for faces show up early in development, as revealed by newborns who prefer to track face-like stimuli (Goren et al., 1975; Johnson et al., 1991) and infants who spend around 25% of their looking time on faces (Jayaraman et al., 2015; Sugden et al., 2014). ...
... This is remarkable considering the ubiquity of the human face as a research stimulus in cognitive, developmental, forensic, and social psychology, and in neuroscience and neuropsychology (Bate, 2012; Bruce & Young, 1998; Hole & Bourne, 2010; Bindemann & Megreya, 2017; Rhodes et al., 2011). In cognitive psychology, for example, faces are employed to study processes such as person identification (Bate & Murray, 2017; Bruce & Young, 1986; Johnston & Edmonds, 2009; Ramon & Gobbini, 2018; Young & Burton, 2017), the allocation of visual attention (Langton et al., 2008; Ro et al., 2001), perspective taking (Hermens & Walker, 2012; Langton et al., 2006), and the recognition of emotional states (Keane et al., 2002; Morris et al., 1998; Zhou & Jenkins, 2020). ...
Article
Experimental psychology research typically employs methods that greatly simplify the real-world conditions within which cognition occurs. This approach has been successful for isolating cognitive processes, but cannot adequately capture how perception operates in complex environments. In turn, real-world environments rarely afford the access and control required for rigorous scientific experimentation. In recent years, technology has advanced to provide a solution to these problems, through the development of affordable high-capability virtual reality (VR) equipment. The application of VR is now increasing rapidly in psychology, but the realism of its avatars, and the extent to which they visually represent real people, is captured poorly in current VR experiments. Here, we demonstrate a user-friendly method for creating photo-realistic avatars of real people and provide a series of studies to demonstrate their psychological characteristics. We show that avatar faces of familiar people are recognised with high accuracy (Study 1), replicate the familiarity advantage typically observed in real-world face matching (Study 2), and show that these avatars produce a similarity-space that corresponds closely with real photographs of the same faces (Study 3). These studies open the way to conducting psychological experiments on visual perception and social cognition with increased realism in VR.
... For instance, in naturalistic scenes, person information rapidly captures our attention (Fletcher-Watson, Findlay, Leekam, & Benson, 2008). Further, the phenomenon of 'change blindness,' whereby observers often fail to notice a change introduced into a visual scene or stimulus, is less frequent when the stimulus is a face, suggestive of an attentional prioritisation (Ro, Russell, & Lavie, 2001). ...
Thesis
This thesis provides novel neuroimaging insights into the brain activity related to the processing of highly salient infant faces. Specifically, I provide new information about the spatial and temporal aspects of brain activity for processing infant faces within four experimental investigations. Overall, the presented findings provide novel, important insights into: (1) our current understanding of how the brain processes salient infant faces, (2) human face perception more generally, and (3) potential implications for how we provide care to our young. In Chapter 1, I review the literature on human face processing and infant face processing. I draw together insights from prosopagnosia and single-cell studies in primates, moving on to discuss functional neuroimaging findings highlighting a dedicated spatial network of regions for face processing within the brain. The field has established good knowledge of 'what' and 'where,' but lacks a temporal dimension: 'when.' I then move on to discuss models of face perception, and how the dominant narrative involves a hierarchical, feedforward process, which is at odds with current knowledge about top-down interactions between brain regions. Lastly, I summarise our current understanding of human parental brain networks. In Chapter 2, I present two quantitative meta-analyses of aggregated fMRI data, using activation likelihood estimation (ALE) analysis. First, I explore nulliparous women viewing infant faces, and second, I explore mothers viewing their own infants' faces. I present findings relating to the spatial coordinates of these two intriguing contrasts, including the apparent left lateralisation of infant face processing in motherhood.
I reflect upon how the field of fMRI studies has thus far been limited in its ability to explain the temporal dimension of face processing ("when") and set a precedent for a greater exploration of infant face processing using temporally sensitive brain imaging methodology and analytic methods. In Chapter 3, I present the analysis of a dataset exploring how the human brain processes infant and adult faces, replicating previous findings of a privileged processing route when viewing infant faces to support sensitive and swift caregiving. I then advance the field by exploring how the human brain also processes juvenile and adult animal faces to test the hypothesis that the infant schema may operate in a cross-species fashion. I report evidence demonstrating that baby animals (kittens and puppies) also trigger an early orbitofrontal cortex response (120 ms) that guides the brain to provide sensitive caregiving: "cuteness ignition". In Chapter 4, I analyse the same dataset as in Chapter 3, this time using a classifier (discriminant analysis) to pose the question as to how the adult brain categorises different kinds of faces. This chapter provides proof of principle for the ability of classification analysis to discover the spatiotemporal features needed to separate and predict up to six classes of face stimuli. The importance of the beta band and the time window of 60-180 ms post-stimulus presentation for face categorisation are both emphasised. The results provide further evidence for the importance of "when" components in brain activity within the human brain, especially when it comes to distinguishing between highly salient categories such as "cute" baby and baby animal faces. This method also provides exciting new avenues for research into the human parental brain and temporally sensitive parent-infant interactions.
Chapter 5 addresses how we can use more nuanced experimental paradigms in fMRI, combined with sensitive network analysis, to draw inferences about how the brain learns about characterological features of infant faces (emotionality). While previous chapters explored the short ‘when’ of infant face processing, this chapter addresses the long ‘when’ involving learning. I report upon a network involving orbitofrontal cortex, amygdala and hippocampus, which is more active for infant faces with a happier temperament and expression of emotionality. This has important implications for social learning, and perhaps for attachment and empathy. Lastly, in Chapter 6 I conclude by drawing together all findings from the thesis to demonstrate how a comprehensive understanding of cognitive processes within the brain necessitates ‘what,’ ‘where,’ and crucially, ‘when’ information. I discuss how this thesis provides evidence of parallel processing pathways, and the likely presence of top-down predictions arising from this structure. I discuss the crucial role of the orbitofrontal cortex in salient face processing, and advance a new theoretical model for salient face processing that unites ‘cuteness ignition’ with current theoretical top-down models of object processing.
... The results of this study suggest that the notion that faces have a special status in the attentional system of humans [5, 10, 19, 59] would need to be qualified considering factors such as the age of the individual, current task priorities, previous experience, or an individual's personal interest in a particular aspect of the object or environment [2, 60, 61]. Under certain conditions the attentional system may prioritize faces over nonsocial stimuli; however, the capture of attention by faces may not be as automatic and robust as previously suggested. ...
Article
This study examined involuntary capture of attention, overt attention, and stimulus valence and arousal ratings, all factors that can contribute to potential attentional biases to face and train objects in children with and without autism spectrum disorder (ASD). In the visual domain, faces are particularly captivating, and are thought to have a 'special status' in the attentional system. Research suggests that similar attentional biases may exist for other objects of expertise (e.g. birds for bird experts), providing support for the role of exposure in attention prioritization. Autistic individuals often have circumscribed interests around certain classes of objects, such as trains, that are related to vehicles and mechanical systems. This research aimed to determine whether this propensity in autistic individuals leads to stronger attention capture by trains, and perhaps weaker attention capture by faces, than would be expected in non-autistic children. In Experiment 1, autistic children (6-14 years old) and age- and IQ-matched non-autistic children performed a visual search task where they manually indicated whether a target butterfly appeared amongst an array of face, train, and neutral distractors while their eye-movements were tracked. Autistic children were no less susceptible to attention capture by faces than non-autistic children. Overall, for both groups, trains captured attention more strongly than face stimuli, and trains had a larger effect on overt attention to the target stimuli, relative to face distractors. In Experiment 2, a new group of children (autistic and non-autistic) rated train stimuli as more interesting and exciting than the face stimuli, with no differences between groups.
These results suggest that: (1) other objects (trains) can capture attention in a similar manner as faces in both autistic and non-autistic children, and (2) attention capture is driven partly by voluntary attentional processes related to personal interest or affective responses to the stimuli.
... The human face allows us to identify others, infer emotional states, and participate in shared attention, highlighting the importance of visual attention to faces for successful social interactions (Bruce & Young, 1998; Haxby et al., 2000). Research has consistently demonstrated an attentional bias for faces compared to other stimuli in our environment, showing that we look significantly longer at faces and also rapidly detect them within cluttered scenes (Bindemann et al., 2005; Bindemann & Lewis, 2013; Johnson et al., 1991; Langton et al., 2008; Lewis & Edmonds, 2005; Ro et al., 2001; Theeuwes & Van der Stigchel, 2006). Such evidence is largely based on screen-based paradigms, which offer high degrees of experimental control but typically do not allow participants to interact with the viewed face. ...
Article
Cross-cultural psychologists have widely discussed “gaze avoidance” as a sociocultural norm to describe reduced mutual gaze in East Asians (EAs) compared to Western Caucasians (WCs). Supportive evidence is primarily based on self-reports and video recordings of face-to-face interactions, but more objective techniques that can investigate the micro-dynamics of gaze are scarce. The current study used dual head-mounted eye-tracking in EA and WC dyads to examine face looking and mutual gaze during live social interactions. Both cultural groups showed more face looking when listening than speaking, and during an introductory task compared to a storytelling game. Crucially, compared to WCs, EA dyads spent significantly more time engaging in mutual gaze, and individual instances of mutual gaze were longer in EAs for the storytelling game. Our findings challenge “gaze avoidance” as a generalizable cultural observation, and highlight the need to consider contextual factors that dynamically influence gaze both within and between cultures.
Book
The second edition of this popular textbook encapsulates the excitement of the fascinating and fast-moving field of social psychology. A comprehensive and lively guide, it covers general principles, classic studies and cutting-edge research. Innovative features such as 'student projects' and 'exploring further' exercises place the student experience at the heart of this book. This blend of approaches, from critical appraisal of important studies to real-world examples, will help students to develop a solid understanding of social psychology and the confidence to apply their knowledge in assignments and exams.
Article
Infants' face preferences have previously been assessed in displays containing 1 or 2 faces. Here we present 6-month-old infants with a complex visual array containing faces among multiple visual objects. Despite the competing objects, infants direct their first saccade toward faces more frequently than expected by chance (Experiment 1). The attention-grabbing effect of faces is not selective to upright faces (Experiment 2) but does require the presence of internal facial elements, as faces whose interior has been phase-scrambled did not attract infants' attention (Experiment 3). In contrast, when the number of fixations is considered, upright faces are scanned more extensively than both inverted and phase-scrambled faces. The difference in selectivity between the first-look measure and the fixation-count measure is discussed in light of a distinction between attention-grabbing and attention-holding mechanisms.
Thesis
This dissertation seeks to unite two major streams of cognitive research that have traditionally proceeded independently (i.e. selective attention, and face processing). It was already well-established that faces convey a great deal of biologically significant information that has direct implications for everyday social behaviour. Moreover, there is substantial evidence that face processing may be qualitatively different from other forms of visual processing, and may even be subserved by face-specific neural systems. If faces are indeed 'special' in these respects, it is possible that their relation to selective attention may also differ from that of other stimulus classes, for which several attentional principles are relatively well understood. To date, however, this possibility has been largely overlooked. The experiments in this thesis addressed the interaction between selective attention and face processing directly, by examining whether faces are particularly difficult stimuli to ignore, and by assessing the consequences of various attentional manipulations for both on-line processing of task-irrelevant faces, and for subsequent incidental memory of these faces. The main findings indicate that faces may be particularly strong competitors for attention, such that they typically capture more attention than competing nonface objects when spatial competition for attention arises. Moreover, they seem to draw on a highly face-specific capacity with its own (face-specific) capacity limits. Despite being special in these two senses, however, face processing may be subject to more general attentional constraints at some stage, since task-irrelevant faces are later recognised less well if attentional capacity was exhausted by a nonface task at exposure. These findings are discussed in relation to the ongoing debate over 'modularity' for face processing. The results may also have practical implications, for example in assessing the reliability of eyewitness testimony.
Preprint
Detection and recognition of social interactions unfolding in the surroundings is as vital as detection and recognition of faces, bodies, and animate entities in general. We have demonstrated that the visual system is particularly sensitive to a configuration with two bodies facing each other as if interacting. In four experiments using backward masking on healthy adults, we investigated the properties of this dyadic visual representation. We measured the inversion effect (IE), the cost on recognition, of seeing bodies upside-down as opposed to upright, as an index of visual sensitivity: the greater the visual sensitivity, the greater the IE. The IE was increased for facing (vs. nonfacing) dyads, whether the head/face direction was visible or not, which implies that visual sensitivity concerns two bodies, not just two faces/heads. Moreover, the difference in IE for facing vs. nonfacing dyads disappeared when one body was replaced by another object. This implies selective sensitivity to a body facing another body, as opposed to a body facing anything. Finally, the IE was reduced when reciprocity was eliminated (one body faced another but the latter faced away). Thus, the visual system is sensitive selectively to dyadic configurations that approximate a prototypical social exchange with two bodies spatially close and mutually accessible to one another. These findings reveal visual configural representations encompassing multiple objects, which could provide fast and automatic parsing of complex relationships beyond individual faces or bodies.
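The inversion effect (IE) described above is a difference score: recognition performance for upright minus inverted displays, with a larger IE taken as an index of greater visual sensitivity to the upright configuration. A minimal illustration, using hypothetical accuracy values rather than the study's data:

```python
# Hedged sketch: inversion effect (IE) as an upright-minus-inverted
# accuracy difference. All accuracy values below are hypothetical.

def inversion_effect(acc_upright, acc_inverted):
    """Larger IE = greater cost of inversion = greater visual
    sensitivity to the upright configuration."""
    return acc_upright - acc_inverted

ie_facing = inversion_effect(0.85, 0.60)     # facing dyads
ie_nonfacing = inversion_effect(0.80, 0.70)  # nonfacing dyads
print(ie_facing > ie_nonfacing)  # True: the direction the authors report
```

Comparing such IE scores across conditions (facing vs. nonfacing, body-plus-object, non-reciprocal) is what licenses the study's conclusions about configural sensitivity.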
Article
Although attention is thought to be spontaneously biased by social cues like faces and eyes, recent data have demonstrated that when extraneous content, context, and task factors are controlled, attentional biasing is abolished in manual responses while still occurring sparingly in oculomotor measures. Here, we investigated how social attentional biasing was affected by face novelty by measuring responses to frequently presented (i.e., those with lower novelty) and infrequently presented (i.e., those with higher novelty) face identities. Using a dot-probe task, participants viewed either the same face and house identity that was frequently presented on half of the trials or sixteen different face and house identities that were infrequently presented on the other half of the trials. A response target occurred with equal probability at the previous location of the eyes or mouth of the face or the top or bottom of the house. Experiment 1 measured manual responses to the target while participants maintained central fixation. Experiment 2 additionally measured participants' natural oculomotor behaviour when their eye movements were not restricted. Across both experiments, no evidence of social attentional biasing was found in manual data. However, in Experiment 2, there was a reliable oculomotor bias towards the eyes of infrequently presented upright faces. Together, these findings suggest that face novelty does not facilitate manual measures of social attention, but it appears to promote spontaneous oculomotor biasing towards the eyes of infrequently presented novel faces.
Article
As robots begin to receive citizenship, are treated as beloved pets, and given a place at Japanese family tables, it is becoming clear that these machines are taking on increasingly social roles. While human-robot interaction research relies heavily on self-report measures for assessing people’s perception of robots, a distinct lack of robust cognitive and behavioural measures to gauge the scope and limits of social motivation towards artificial agents exists. Here we adapted Conty and colleagues’ (2010) social version of the classic Stroop paradigm, in which we showed four kinds of distractor images above incongruent and neutral words: human faces, robot faces, object faces (for example, a cloud with facial features) and flowers (control). We predicted that social stimuli, like human faces, would be extremely salient and draw attention away from the to-be-processed words. A repeated-measures ANOVA indicated that the task worked (the Stroop effect was observed), and a distractor-dependent enhancement of Stroop interference emerged. Planned contrasts indicated that specifically human faces presented above incongruent words significantly slowed participants’ reaction times. To investigate this small effect further, we conducted a second experiment (N=51) with a larger stimulus set. While the main effect of the incongruent condition slowing down participants’ reaction time replicated, we did not observe an interaction effect of the social distractors (human faces) drawing more attention than the other distractor types. We question the suitability of this task as a robust measure for social motivation and discuss our findings in the light of recent conflicting results in the social attentional capture literature.
Thesis
From just a glimpse of another person, we make inferences about their current states and longstanding traits. These inferences are normally spontaneous and effortless, yet they are crucial in shaping our impressions and behaviours towards other people. What are the perceptual operations involved in the rapid extraction of socially relevant information? To answer this question, over the last decade the visual and cognitive neuroscience of social stimuli has received new inputs through emerging proposals of social vision approaches. Perhaps by function of these contributions, researchers have reached a certain degree of consensus over a standard model of face perception. This thesis aims to extend social vision approaches to the case of human body perception. In doing so, it establishes the building blocks for a perceptual model of the human body which integrates the extraction of socially relevant information from the appearance of the body. Using visual tasks, the data show that perceptual representations of the human body are sensitive to socially relevant information (e.g. sex, weight, emotional expression). Specifically, in the first empirical chapter I dissect the perceptual representations of body sex. Using a visual search paradigm, I demonstrate a differential and asymmetrical representation of sex from human body shape. In the second empirical chapter, using the Garner selective attention task, I show that the dimension of body sex is independent from the information of emotional body postures. Finally, in the third empirical chapter, I provide evidence that category selective visual brain regions, including the body selective region EBA, are directly involved in forming perceptual expectations towards incoming visual stimuli. Socially relevant information of the body might shape visual representations of the body by acting as a set of expectancies available to the observer during perceptual operations. 
In the general discussion I address how the findings of the empirical chapters inform us about the perceptual encoding of human body shape. Further, I propose how these results provide the initial steps for a unified social vision model of human body perception. Finally, I advance the hypothesis that rapid social categorisation during perception is explained by mechanisms generally affecting the perceptual analysis of objects under naturalistic conditions (e.g. expectations-expertise) operating within the social domain.
Article
Research on facial attractiveness and face recognition has produced contradictory results that we believe are rooted in methodological limitations. Three experiments evaluated the hypothesis that facial attractiveness and face recognition are positively and linearly related. We also expected that social status would moderate the attractiveness effect. Attractive faces were recognized with very high accuracy compared to less attractive faces. We specified two estimates of facial distinctiveness (generalized and idiosyncratic) and demonstrated that the attractiveness effect on face recognition was not due to distinctiveness. This solves the long-standing problem that because facial attractiveness and distinctiveness are naturally confounded, construct validity is compromised. There was no support for the prediction, based on meta-analysis, that females would outperform males in face recognition. The attractiveness effect was so strong that gender effects were precluded. Methodological prescriptions to enhance internal, construct, and statistical conclusion validity in face recognition paradigms are presented.
Article
Internet use by older adults is increasing and has the potential to improve their mental and social wellbeing; however, it is still low compared to other age groups for various online activities. The objective of this study was to develop and evaluate a computerized expert system, GoldNet, which functions as a human expert and guides the user, step-by-step and in real time, when performing online operations. A user study was conducted on 30 older adults (over 65 years old) that attend an adult daycare center. The participants were all novice users with no prior experience using the Internet. A stratified randomized between-participants experimental design was used to evaluate users' performance and satisfaction before and after receiving guidance, after practice, and after a two-week period without controlled practice. Participants were assigned to one of three study groups according to the guidance they received: GoldNet guides, video guidance, or guidance from a personal human tutor. During the experiment, participants' eye movements were monitored. The results indicate that by using GoldNet's automated guides, older adults can perform online tasks with high effectiveness, efficiency, and user satisfaction, performing comparably to users trained by a personal human tutor and significantly better than users relying on video guidance. GoldNet was also found beneficial as a refresher tool after a period without controlled training.
Article
Objectives: (1) To compare low-contrast detectability of a deep learning-based denoising algorithm (DLA) with ADMIRE and FBP, and (2) to compare image quality parameters of DLA with those of reconstruction methods from two different CT vendors (ADMIRE, IMR, and FBP). Materials and methods: Using abdominal CT images of 100 patients reconstructed via ADMIRE and FBP, we trained the DLA by feeding FBP images as input and ADMIRE images as the ground truth. To measure low-contrast detectability, randomized repeat scans of a Catphan® phantom were performed under various radiation exposure conditions. Twelve radiologists evaluated the presence/absence of a target on a five-point confidence scale. The multi-reader multi-case area under the receiver operating characteristic curve (AUC) was calculated, and non-inferiority tests were performed. Using the American College of Radiology CT accreditation phantom, the contrast-to-noise ratio, target transfer function, noise magnitude, and detectability index (d') of DLA, ADMIRE, IMR, and FBPs were computed. Results: The AUC of DLA in low-contrast detectability was non-inferior to that of ADMIRE (p < .001) and superior to that of FBP (p < .001). DLA improved image quality in terms of all physical measurements compared to FBPs from both CT vendors and showed profiles of physical measurements similar to those of ADMIRE. Conclusions: The low-contrast detectability of the proposed deep learning-based denoising algorithm was non-inferior to that of ADMIRE and superior to that of FBP. The DLA successfully improved image quality compared with FBP while showing physical profiles similar to those of ADMIRE. Key points: • Low-contrast detectability in the images denoised using the deep learning algorithm was non-inferior to that in the images reconstructed using standard algorithms. • The proposed deep learning algorithm showed profiles of physical measurements similar to those of the advanced iterative reconstruction algorithm (ADMIRE).
Article
Eye movement studies show that humans can make very fast saccades towards faces in natural scenes, but the visual mechanisms behind this process remain unclear. Here we investigate whether fast saccades towards faces rely on mechanisms that are sensitive to the orientation or contrast of the face image. We present participants with pairs of images, each containing a face and a car (one in the left visual field and one in the right, or the reverse), and ask them to saccade to faces or cars as targets in different blocks. We assign participants to one of three image conditions: normal images, orientation-inverted images, or contrast-negated images. We report three main results that hold regardless of image conditions. First, reliable saccades towards faces are fast – they can occur at 120–130 ms. Second, fast saccades towards faces are selective – they are more accurate and faster by about 60–70 ms than saccades towards cars. Third, saccades towards faces are reflexive – early saccades in the interval of 120–160 ms tend to go to faces, even when cars are the target. These findings suggest that the speed, selectivity, and reflexivity of saccades towards faces do not depend on the orientation or contrast of the face image. Our results accord with studies suggesting that fast saccades towards faces are mainly driven by low-level image properties, such as amplitude spectrum and spatial frequency.
Article
Full-text available
When looking at a scene, observers feel that they see its entire structure in great detail and can immediately notice any changes in it. However, when brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: identification of changes becomes extremely difficult, even when changes are large and made repeatedly. Identification is much faster when a verbal cue is provided, showing that poor visibility is not the cause of this difficulty. Identification is also faster for objects mentioned in brief verbal descriptions of the scene. These results support the idea that observers never form a complete, detailed representation of their surroundings. In addition, results also indicate that attention is required to perceive change, and that in the absence of localized motion signals it is guided on the basis of high-level interest.
Article
Full-text available
The current study investigated the influence of a low-level local feature (curvature) and a high-level emergent feature (facial expression) on rapid search. These features distinguished the target from the distractors and were presented either alone or together. Stimuli were triplets of up and down arcs organized to form meaningless patterns or schematic faces. In the feature search, the target had the only down arc in the display. In the conjunction search, the target was a unique combination of up and down arcs. When triplets depicted faces, the target was also the only smiling face among frowning faces. The face-level feature facilitated the conjunction search but, surprisingly, slowed the feature search. These results demonstrated that an object inferiority effect could occur even when the emergent feature was useful in the search. Rapid search processes appear to operate only on high-level representations even when low-level features would be more efficient. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Across saccades, blinks, blank screens, movie cuts, and other interruptions, observers fail to detect substantial changes to the visual details of objects and scenes. This inability to spot changes ("change blindness") is the focus of this special issue of Visual Cognition. This introductory paper briefly reviews recent studies of change blindness, noting the relation of these findings to earlier research and discussing the inferences we can draw from them. Most explanations of change blindness assume that we fail to detect changes because the changed display masks or overwrites the initial display. Here I draw a distinction between intentional and incidental change detection tasks and consider how alternatives to the "overwriting" explanation may provide better explanations for change blindness. Imagine you are watching a movie in which an actor is sitting in a cafeteria with a jacket slung over his shoulder. The camera then cuts to a close-up and his jacket is now over the back of his chair. You might think that everyone would notice this obvious editing mistake. Yet, recent research on visual memory has found that people are surprisingly poor at noticing large changes to objects, photographs, and motion pictures from one instant to the next (see Simons & Levin, 1997 for a review). Although researchers have long noted the existence of such "change blindness" (e.g. Bridgeman, Hendry, & Stark, 1975; French, 1953; Friedman, 1979; Hochberg, 1986; Kuleshov, 1987; McConkie & Zola, 1979; Pashler, 1988; Phillips, 1974), recent demonstrations by John Grimes and others have led to a renewed interest in the problem of change detection. The new theoretical ideas and paradigms resulting from this resurgence in the study of visual memory are the focus of this special issue.
Article
Full-text available
Our intuition that we richly represent the visual details of our environment is illusory. When viewing a scene, we seem to use detailed representations of object properties and interobject relations to achieve a sense of continuity across views. Yet, several recent studies show that human observers fail to detect changes to objects and object properties when localized retinal information signaling a change is masked or eliminated (e.g., by eye movements). However, these studies changed arbitrarily chosen objects which may have been outside the focus of attention. We draw on previous research showing the importance of spatiotemporal information for tracking objects by creating short motion pictures in which objects in both arbitrary locations and the very center of attention were changed. Adult observers failed to notice changes in both cases, even when the sole actor in a scene transformed into another person across an instantaneous change in camera angle (or “cut”).
Article
Full-text available
This paper seeks to bring together two previously separate research traditions: research on spatial orienting within the visual cueing paradigm and research into social cognition, addressing our tendency to attend in the direction that another person looks. Cueing methodologies from mainstream attention research were adapted to test the automaticity of orienting in the direction of seen gaze. Three studies manipulated the direction of gaze in a computerized face, which appeared centrally in a frontal view during a peripheral letter-discrimination task. Experiments 1 and 2 found faster discrimination of peripheral target letters on the side the computerized face gazed towards, even though the seen gaze did not predict target side, and despite participants being asked to ignore the face. This suggests reflexive covert and/or overt orienting in the direction of seen gaze, arising even when the observer has no motivation to orient in this way. Experiment 3 found faster letter discrimination on the side the computerized face gazed towards even when participants knew that target letters were four times as likely on the opposite side. This suggests that orienting can arise in the direction of seen gaze even when counter to intentions. The experiments illustrate that methods from mainstream attention research can be usefully applied to social cognition, and that studies of spatial attention may profit from considering its social function.
Article
Full-text available
Cells selectively responsive to the face have been found in several visual sub-areas of temporal cortex in the macaque brain. These include the lateral and ventral surfaces of inferior temporal cortex and the upper bank, lower bank and fundus of the superior temporal sulcus (STS). Cells in the different regions may contribute in different ways to the processing of the facial image. Within the upper bank of the STS different populations of cells are selective for different views of the face and head. These cells occur in functionally discrete patches (3-5 mm across) within the STS cortex. Studies of output connections from the STS also reveal a modular anatomical organization of repeating 3-5 mm patches connected to the parietal cortex, an area thought to be involved in spatial awareness and in the control of attention. The properties of some cells suggest a role in the discrimination of heads from other objects, and in the recognition of familiar individuals. The selectivity for view suggests that the neural operations underlying face or head recognition rely on parallel analyses of different characteristic views of the head, the outputs of these view-specific analyses being subsequently combined to support view-independent (object-centred) recognition. An alternative functional interpretation of the sensitivity to head view is that the cells enable an analysis of 'social attention', i.e. they signal where other individuals are directing their attention. A cell maximally responsive to the left profile thus provides a signal that the attention (of another individual) is directed to the observer's left. Such information is useful for analysing social interactions between other individuals.(ABSTRACT TRUNCATED AT 250 WORDS)
Article
Full-text available
Are faces recognized using more holistic representations than other types of stimuli? Taking holistic representation to mean representation without an internal part structure, we interpret the available evidence on this issue and then design new empirical tests. Based on previous research, we reasoned that if a portion of an object corresponds to an explicitly represented part in a hierarchical visual representation, then when that portion is presented in isolation it will be identified relatively more easily than if it did not have the status of an explicitly represented part. The hypothesis that face recognition is holistic therefore predicts that a part of a face will be disproportionately more easily recognized in the whole face than as an isolated part, relative to recognition of the parts and wholes of other kinds of stimuli. This prediction was borne out in three experiments: subjects were more accurate at identifying the parts of faces, presented in the whole object, than they were at identifying the same part presented in isolation, even though both parts and wholes were tested in a forced-choice format and the whole faces differed only by one part. In contrast, three other types of stimuli--scrambled faces, inverted faces, and houses--did not show this advantage for part identification in whole object recognition.
Article
Full-text available
Sensitivity to configural changes in face processing has been cited as evidence for face-exclusive mechanisms. Alternatively, general mechanisms could be fine-tuned by experience with homogeneous stimuli. We tested sensitivity to configural transformations for novices and experts with nonface stimuli ("Greebles"). Parts of transformed Greebles were identified via forced-choice recognition. Regardless of expertise level, the recognition of parts in the Studied configuration was better than in isolation, suggesting an object advantage. For experts, recognizing Greeble parts in a Transformed configuration was slower than in the Studied configuration, but only at upright. Thus, expertise with visually similar objects, not faces per se, may produce configural sensitivity.
Article
Full-text available
Two studies examined potential age-related differences in attentional capture. Subjects were instructed to move their eyes as quickly as possible to a color singleton target and to identify a small letter located inside it. On half the trials, a new stimulus (i.e., a sudden onset) appeared simultaneously with the presentation of the color singleton target. The onset was always a task-irrelevant distractor. Response times were lengthened, for both young and old adults, whenever an onset distractor appeared, despite the fact that subjects reported being unaware of the appearance of the abrupt onset. Eye scan strategies were also disrupted by the appearance of the onset distractors. On about 40% of the trials on which an onset appeared, subjects made an eye movement to the task-irrelevant onset before moving their eyes to the target. Fixations close to the onset were brief, suggesting parallel programming of a reflexive eye movement to the onset and goal-directed eye movement to the target. Results are discussed in terms of age-related sparing of the attentional and oculomotor processes that underlie attentional capture.
Article
Four experiments investigate the hypothesis that cues to the direction of another's social attention produce a reflexive orienting of an observer's visual attention. Participants were asked to make a simple detection response to a target letter which could appear at one of four locations on a visual display. Before the presentation of the target, one of these possible locations was cued by the orientation of a digitized head stimulus, which appeared at fixation in the centre of the display. Uninformative and to-be-ignored cueing stimuli produced faster target detection latencies at cued relative to uncued locations, but only when the cues appeared 100 msec before the onset of the target (Experiments 1 and 2). The effect was uninfluenced by the introduction of a to-be-attended and relatively informative cue (Experiment 3), but was disrupted by the inversion of the head cues (Experiment 4). It is argued that these findings are consistent with the operation of a reflexive, stimulus-driven or exogenous orienting mechanism which can be engaged by social attention signals.
Article
Compared memory for faces with memory for other classes of familiar and complex objects which, like faces, are also customarily seen only in 1 orientation (mono-oriented). Performance of 4 students was tested when the inspection and test series were presented in the same orientation, either both upright or both inverted, or when the 2 series were presented in opposite orientations. The results show that while all mono-oriented objects tend to be more difficult to remember when upside-down, faces are disproportionately affected. These findings suggest that the difficulty in looking at upside-down faces involves 2 factors: a general factor of familiarity with mono-oriented objects, and a special factor related only to faces.
Article
Observers make rapid eye movements to examine the world around them. Before an eye movement is made, attention is covertly shifted to the location of the object of interest. The eyes typically will land at the position at which attention is directed. Here we report that a goal-directed eye movement toward a uniquely colored object is disrupted by the appearance of a new but task-irrelevant object, unless subjects have a sufficient amount of time to focus their attention on the location of the target prior to the appearance of the new object. In many instances, the eyes started moving toward the new object before gaze started to shift to the color-singleton target. The eyes often landed for a very short period of time (25-150 ms) near the new object. The results suggest parallel programming of two saccades: one voluntary, goal-directed eye movement toward the color-singleton target and one stimulus-driven eye movement reflexively elicited by the appearance of the new object. Neuroanatomical structures responsible for parallel programming of saccades are discussed.
Article
Normal subjects were presented with a simple line drawing of a face looking left, right, or straight ahead. A target letter F or T then appeared to the left or the right of the face. All subjects participated in target detection, localization, and identification response conditions. Although subjects were told that the line drawing’s gaze direction (the cue) did not predict where the target would occur, response time in all three conditions was reliably faster when gaze was toward versus away from the target. This study provides evidence for covert, reflexive orienting to peripheral locations in response to uninformative gaze shifts presented at fixation. The implications for theories of social attention and visual orienting are discussed, and the brain mechanisms that may underlie this phenomenon are considered.
Article
Although at any instant we experience a rich, detailed visual world, we do not use such visual details to form a stable representation across views. Over the past five years, researchers have focused increasingly on 'change blindness' (the inability to detect changes to an object or scene) as a means to examine the nature of our representations. Experiments using a diverse range of methods and displays have produced strikingly similar results: unless a change to a visual scene produces a localizable change or transient at a specific position on the retina, generally, people will not detect it. We review theory and research motivating work on change blindness and discuss recent evidence that people are blind to changes occurring in photographs, in motion pictures and even in real-world interactions. These findings suggest that relatively little visual information is preserved from one view to the next, and question a fundamental assumption that has underlain perception research for centuries: namely, that we need to store a detailed visual representation in the mind/brain from one view to the next.
Article
Two patients developed a severe and long-lasting inability to recognize familiar faces (prosopagnosia) after a stroke, which was shown by CT scan to be confined to the right hemisphere. The area of softening involved the entire cortico-subcortical territory of distribution of the right posterior cerebral artery. These data suggest that in a few cases right occipito-temporal damage may be sufficient to produce prosopagnosia.
Article
Evidence from a series of visual-search experiments suggests that detecting an upright face amidst face-like distractors elicits a pattern of reaction times that is consistent with serial search. In four experiments the impact of orientation, number of stimuli in the display, and similarity of stimuli on search rates was examined. All displays were homogeneous. Trials were blocked by distractor type for three experiments. In the first experiment search rates for faces amidst identical faces rotated by 180 degrees were examined. No advantage was evidenced in searching for an upright face. The impact of the quality of the face representation was examined in the second experiment. Search rates are reported for a line-drawn and a digitized image of a face amidst identical faces rotated by 180 degrees. Search was faster for digitized than for line-drawn faces. The findings of the first experiment for orientation were replicated. In the third and fourth experiments the impact of disrupting the facial configuration in distractors was examined and performance was contrasted for blocked and mixed trials, respectively, with the same stimulus set. Reaction times increased with the number of distractors in the display in all but the nonface condition, which produced a shallow slope suggestive of parallel search. Search amidst other distractors appeared to involve the conjoining of a specific set of features with specific spatial relations. The hierarchy of relevant configural dimensions was inconsistent across these two experiments, suggesting that the symmetry, top-down order of features, orientation of the face, and predictability of the distractor type may have an interactive effect on search strategies.
Article
Subjects were asked to detect faces or facial expressions from patterns with a variable number of nonfaces or faces expressing different emotions. In most tests, reaction time was found to increase steeply with sample size, thus indicating serial-search characteristics for the patterns tested. There were, however, considerable differences in the slopes of the graphs (search time versus sample size), which could be attributed to visual (but not face) cues that are discriminated at similar speeds. Slopes did not change when patterns were presented upside down, although such a modification strongly affects the perception of faces and facial expressions.
Article
Using functional magnetic resonance imaging (fMRI), we found an area in the fusiform gyrus in 12 of the 15 subjects tested that was significantly more active when the subjects viewed faces than when they viewed assorted common objects. This face activation was used to define a specific region of interest individually for each subject, within which several new tests of face specificity were run. In each of five subjects tested, the predefined candidate "face area" also responded significantly more strongly to passive viewing of (1) intact than scrambled two-tone faces, (2) full front-view face photos than front-view photos of houses, and (in a different set of five subjects) (3) three-quarter-view face photos (with hair concealed) than photos of human hands; it also responded more strongly during (4) a consecutive matching task performed on three-quarter-view faces versus hands. Our technique of running multiple tests applied to the same region defined functionally within individual subjects provides a solution to two common problems in functional imaging: (1) the requirement to correct for multiple statistical comparisons and (2) the inevitable ambiguity in the interpretation of any study in which only two or three conditions are compared. Our data allow us to reject alternative accounts of the function of the fusiform face area (area "FF") that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.
Article
We examined whether faces can produce a 'pop-out' effect in visual search tasks. In the first experiment, subjects' eye movements and search latencies were measured while they viewed a display containing a target face amidst distractors. Targets were upright or inverted faces presented with seven others of the opposite polarity as an 'around-the-clock' display. Face images were either photographic or 'feature only', with the outline removed. Naive subjects were poor at locating an upright face from an array of inverted faces, but performance improved with practice. In the second experiment, we investigated systematically how training improved performance. Prior to testing, subjects were practised on locating either upright or inverted faces. All subjects benefited from training. Subjects practised on upright faces were faster and more accurate at locating upright target faces than inverted. Subjects practised on inverted faces showed no difference between upright and inverted targets. In the third experiment, faces with 'jumbled' features were used as distractors, and this resulted in the same pattern of findings. We conclude that there is no direct rapid 'pop-out' effect for faces. However, the findings demonstrate that, in peripheral vision, upright faces show a processing advantage over inverted faces.
Article
Figure caption fragment: The brain images at the left show in color the voxels that produced a significantly higher MR signal intensity (based on smoothed data) during the epochs containing faces than during those containing objects (1a), and vice versa (1b), for 1 of the 12 slices scanned. These significance images are overlaid on a T1-weighted anatomical image of the same slice. Most of the other 11 slices showed no voxels that reached significance at p < 10 [caption truncated].