Article

Reflexive Visual Orienting in Response to the Social Attention of Others

Authors: S. R. H. Langton and V. Bruce

Abstract

Four experiments investigate the hypothesis that cues to the direction of another's social attention produce a reflexive orienting of an observer's visual attention. Participants were asked to make a simple detection response to a target letter which could appear at one of four locations on a visual display. Before the presentation of the target, one of these possible locations was cued by the orientation of a digitized head stimulus, which appeared at fixation in the centre of the display. Uninformative and to-be-ignored cueing stimuli produced faster target detection latencies at cued relative to uncued locations, but only when the cues appeared 100 msec before the onset of the target (Experiments 1 and 2). The effect was uninfluenced by the introduction of a to-be-attended and relatively informative cue (Experiment 3), but was disrupted by the inversion of the head cues (Experiment 4). It is argued that these findings are consistent with the operation of a reflexive, stimulus-driven or exogenous orienting mechanism which can be engaged by social attention signals.
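To make the analysis concrete, the short Python sketch below computes the cueing effect described in the abstract, i.e., the mean detection latency at uncued locations minus that at cued locations, separately for each cue-target SOA. The trial tuples, field layout, and numbers are invented for illustration only and are not data from the study.

from collections import defaultdict
from statistics import mean

# Hypothetical trial records: (soa_ms, cued, rt_ms); the values are invented.
trials = [
    (100, True, 352), (100, False, 374), (100, True, 348), (100, False, 369),
    (1000, True, 361), (1000, False, 363), (1000, True, 358), (1000, False, 360),
]

# Group detection latencies by SOA and by whether the target location was cued.
rts = defaultdict(list)
for soa, cued, rt in trials:
    rts[(soa, cued)].append(rt)

# Cueing effect at each SOA: mean uncued RT minus mean cued RT.
for soa in sorted({key[0] for key in rts}):
    effect = mean(rts[(soa, False)]) - mean(rts[(soa, True)])
    print(f"SOA {soa} ms: cueing effect = {effect:.1f} ms")

With these invented numbers the effect is large at the 100 ms SOA and near zero at the long SOA, mirroring the pattern reported in the abstract.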

... These cues encompass a wealth of information, including the focus of attention, inner thoughts, intentions, and the goal of the action [1-8]. Observers tend to follow others' cue direction to redirect their attention, a phenomenon referred to as social attention [2,5,9-15]. Accurately perceiving cues and social attention plays a ...
... In classic social attention studies, researchers usually use the Posner cuing paradigm to investigate social attention [13,30-36]. Typically, a face with a left or right gaze is presented in the center of the screen, with a target subsequently appearing on the left or right under either a valid or an invalid cue condition. ...
... To explore the impact of autistic traits on social attention, we compared groups with high and low autistic traits in both single-cue (Experiment 1) and conflicting-cue (Experiment 2) scenarios. Our findings revealed that individuals responded more rapidly to the direction of a single social cue or the majority of multiple cues than when the target appeared in the opposite direction, which is consistent with previous research [10-13,41]. More importantly, no discernible differences in social attention were identified between individuals with high and low autistic traits, regardless of whether the experimental materials consisted of a schematic face, a real face, or faces of multiple people. ...
Article
Full-text available
Individuals often use others’ gaze and head directions to direct their attention. To investigate the influence of autistic traits on social attention, we conducted two experiments comparing groups with high and low autistic traits in single-cue (Experiment 1) and conflicting-cue (Experiment 2) scenarios. Our findings indicate that individuals responded more rapidly to the direction of a single social cue or the consensus of multiple cues. However, we did not observe significant differences in social attention between individuals with high and low autistic traits. Notably, as the stimulus onset asynchrony (SOA) increased, individuals with low autistic traits exhibited greater improvements in reaction speed compared to those with high autistic traits. This suggests that individuals with low autistic traits excel at leveraging temporal information to optimize their behavioral readiness over time, hinting at potential variations in cognitive flexibility related to autistic traits.
... In the lab, attention is frequently studied through the use of simple tasks such as the widely used cueing task (Posner, 1980). The cueing task has been employed for decades in the study of attention, is notably well-regarded, and is often used to answer questions regarding the allocation and orientation of attention (Driver et al., 1999;Friesen & Kingstone, 1998;Hayward & Ristic, 2013a;Langton & Bruce, 1999;Posner, 1980). The earliest versions of the cueing task employed a simple directional cue, namely a central arrow or peripheral abrupt onsets, which preceded an oncoming target after a variable time delay. ...
... This foundational paradigm reveals attention shifts to simple, nonsocial cues. However, information sources in the real world can be much more diverse than the simple cues in traditional cueing tasks (Driver et al., 1999;Friesen & Kingstone, 1998;Hayward et al., 2017;Langton & Bruce, 1999;Posner, 1980;Ristic & Kingstone, 2005). ...
... Of note, the typical profile of attention for arrow cues is both an early and sustained cueing effect with faster responses to valid targets for SOAs ranging from 100 ms to at least 1,000 ms. In the late 1990s, the cueing task was modified by four independent groups to investigate social attention, by using a central face with averted gaze as the cue (Driver et al., 1999;Friesen & Kingstone, 1998;Hietanen, 1999;Langton & Bruce, 1999). Paradigmatic results indicate that the gaze direction of the face can shift attention, even when the gaze cue is not predictive (i.e., looks at the target at chance level; Friesen & Kingstone, 1998), or even counterpredictive (i.e., looks at the target on between 8% and 20% of trials; Driver et al., 1999;Hayward & Ristic, 2013b) of the location of an upcoming target. ...
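For readers unfamiliar with the paradigm, the minimal Python sketch below generates trials for a non-predictive central-cueing task of the kind described in the excerpt above: cue direction and target location are drawn independently, so the cue is valid only at chance, and the SOA varies across trials. The locations, SOA values, and trial count are arbitrary choices for illustration, not parameters of any cited study.

import random

LOCATIONS = ("left", "right")
SOAS_MS = (100, 300, 1000)  # example stimulus onset asynchronies

def make_trial():
    cue = random.choice(LOCATIONS)     # direction indicated by the central face cue
    target = random.choice(LOCATIONS)  # drawn independently of the cue, so the cue is uninformative
    return {"cue": cue, "target": target,
            "soa_ms": random.choice(SOAS_MS),
            "valid": cue == target}

trials = [make_trial() for _ in range(240)]
# The proportion of valid trials is close to 0.5, i.e., the cue predicts the target at chance.
print(sum(t["valid"] for t in trials) / len(trials))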
Article
Full-text available
While it is widely accepted that the single gaze of another person elicits shifts of attention, there is limited work on the effects of multiple gazes on attention, despite real-world social cues often occurring in groups. Further, less is known regarding the role of unequal reliability of varying social and nonsocial information on attention. We addressed these gaps by employing a variant of the gaze cueing paradigm, simultaneously presenting participants with three faces. Block-wise, we manipulated whether one face ( Identity condition) or one location ( Location condition) contained a gaze cue entirely predictive of target location; all other cues were uninformative. Across trials, we manipulated the number of valid cues (number of faces gazing at target). We examined whether these two types of information ( Identity vs. Location ) were learned at a similar rate by statistically modelling cueing effects by trial count. Preregistered analyses returned no evidence for an interaction between condition, number of valid faces, and presence of the predictive element, indicating type of information did not affect participants’ ability to employ the predictive element to alter behaviour. Exploratory analyses demonstrated (i) response times (RT) decreased faster across trials for the Identity compared with Location condition, with greater decreases when the predictive element was present versus absent, (ii) RTs decreased across trials for the Location condition only when it was completed first, and (iii) social competence altered RTs across conditions and trial number. Our work demonstrates a nuanced relationship between cue utility, condition type, and social competence on group cueing.
... Importantly, participants are told that the direction of the gaze is uninformative of the location of the upcoming target. Despite this, a gaze cuing effect is consistently observed whereby participants respond faster (and more accurately) on trials when the face gazes towards the target compared to trials when the face gazes away from the target (Driver et al., 1999;Friesen & Kingstone, 1998;Langton & Bruce, 1999). This gaze cuing effect has been interpreted as a reflexive and compulsory process that occurs rapidly in response to the presentation of a face (Friesen & Kingstone, 1998;Galfano et al., 2012). ...
... Although the direction of gaze was uninformative about the location of the upcoming target, participants appeared to use this information as RTs on congruent trials (face gazed towards) were faster than RTs on incongruent trials (face gazed away). This gaze cuing effect has been well established in the literature (Driver et al., 1999;Friesen & Kingstone, 1998;Langton & Bruce, 1999;McKay et al., 2021) and we were interested in whether the perceived attractiveness of the face cue would modulate this effect. Specifically, given the privileges afforded to those perceived as more attractive and the potential biological informativeness of attractiveness (Lindell & Lindell, 2014;Roth et al., 2022), we hypothesized that we would find a larger gaze cuing effect for faces perceived to be more attractive compared to faces perceived to be less attractive, reflecting the fact that participants would prioritize information from faces perceived to be more attractive. ...
... A potential methodological reason for the lack of an effect observed in Experiments 1 and 2 is that the time between the face cue and the target was relatively long (500 ms). There is evidence that the gaze cuing effect decays relatively quickly (Friesen & Kingstone, 1998;McKay et al., 2021), even as quickly as 500 ms (Langton & Bruce, 1999; but see Driver et al., 1999). In addition, experiments that find an impact of facial features on the gaze cuing effect observed effects when the face cue was presented for 200 ms prior to the target but not when the face was presented for over 400 ms prior to the target (Jones et al., 2010). ...
... On the other hand, head orientation, which can act as a social stimulus, can shift an observer's visual attention towards surrounding objects (Langton & Bruce, 1999;Langton et al., 2000). The attentional shift induced by the head derives from the direction in which the other person is looking, much like a gaze cue. ...
... Thus, in this respect, head orientation may play a different role from an arrow or a pointing finger, which indicate direction with an arrowhead or fingertip. Indeed, head orientation is relied upon when gaze direction is obscured by shadows or sunglasses, suggesting that the social roles of gaze and head cues are conceptually close (Emery, 2000;Langton & Bruce, 1999;Langton et al., 2000). Supporting this view, gaze and head cues activate similar brain regions, including the superior temporal sulcus and fusiform gyrus (Emery, 2000). ...
... This pattern of results extends Bonventre and Marotta (2023) and Dalmaso et al. (2023), who reported the SSE for pointing gestures, adding to the evidence that the reversed congruency effect of gaze may not generalize to other social stimuli. Nevertheless, given that head orientation can function as a social cue (Emery, 2000;Langton & Bruce, 1999;Langton et al., 2000), it is surprising that the head, a social stimulus like gaze, exhibited the typical SSE in the present study. However, this result can be explained by the directional salience of the head stimuli (Burton et al., 2009;Hermens et al., 2017;Lu & van Zoest, 2023). ...
Article
Full-text available
In a spatial Stroop task, eye-gaze targets produce a reversed congruency effect (RCE) with faster responses when gaze direction and location are incongruent than congruent. On the other hand, non-social directional targets (e.g., arrows) elicit a spatial Stroop effect (SSE). The present study examined whether other social stimuli, such as head orientation, trigger the RCE. Participants judged the target direction of the head or the gaze while ignoring its location. While the gaze target replicated the RCE, the head target produced the SSE. Moreover, the head target facilitated the overall responses relative to the gaze target. These results suggest that the head, a salient directional feature, overrides the social significance. The RCE may be specific to gaze stimuli, not to social stimuli in general. The head and gaze information differentially affect our attentional mechanisms and enable us to bring about smooth social interactions.
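To make the two effects concrete, the brief Python sketch below computes the congruency effect (incongruent minus congruent response time) separately for head and gaze targets; a positive value corresponds to the standard spatial Stroop effect (SSE) and a negative value to the reversed congruency effect (RCE). The data tuples are invented solely to illustrate the computation and do not come from the study.

from statistics import mean

# Hypothetical records: (target_type, congruent, rt_ms); the values are invented.
data = [
    ("head", True, 420), ("head", False, 455), ("head", True, 428), ("head", False, 449),
    ("gaze", True, 498), ("gaze", False, 471), ("gaze", True, 492), ("gaze", False, 467),
]

for target in ("head", "gaze"):
    congruent = mean(rt for t, c, rt in data if t == target and c)
    incongruent = mean(rt for t, c, rt in data if t == target and not c)
    # Positive difference = spatial Stroop effect (SSE); negative = reversed congruency effect (RCE).
    print(f"{target}: incongruent - congruent = {incongruent - congruent:+.1f} ms")

With these example values the head target yields a positive (SSE-like) difference and the gaze target a negative (RCE-like) one, and overall gaze RTs are slower than head RTs, matching the pattern summarized above.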
... It is clear from these findings and from experiments reported elsewhere (e.g., Driver et al., 1999;Friesen & Kingstone, 1999;Langton & Bruce, 1999;Langton et al., 1996) that various directional signals are processed automatically by observers. Why should this be so? ...
... Gaze following may also facilitate vocabulary acquisition by toddlers, as the referent of a new word can be specified by the direction in which the speaker is looking (Baldwin, 1991) or perhaps by pointing. In line with this, the results of several studies have shown that nonhuman primates (e.g., Emery, Lorincz, Perrett, Oram, & Baker, 1997), infants (e.g., Butterworth & Jarrett, 1991; Hood et al., 1998), and adults (Driver et al., 1999;Langton & Bruce, 1999) spontaneously redirect their gaze, their visual attention, or both in accord with another's gaze or head orientation. ...
... Another hypothesis currently under investigation is that the cross-modal interference effects noted in Experiment 2 here and by Langton et al. (1996) are mediated by the effect that certain social signals can exert on an observer's visuospatial attention. A number of research groups, including our own, have recently established that nonpredictive head-eye gaze cues (Langton & Bruce, 1999) and gaze cues from images of real faces (Driver et al., 1999) and schematic faces (Friesen & Kingstone, 1999) can trigger a reflexive, exogenous visual orienting response on the part of an observer (see Spence & Driver, 1994, for a review of visual orienting). According to one particular theory of spatial attention, the "premotor theory" (e.g., Rizzolatti, Riggio, & Sheliga, 1994), to shift attention to a particular location entails the programming of an eye movement to that location, regardless of whether the eye movement is ever actually executed. ...
Article
Full-text available
Four experiments explored the processing of pointing gestures comprising hand and combined head and gaze cues to direction. The cross-modal interference effect exerted by pointing hand gestures on the processing of spoken directional words, first noted by S. R. H. Langton, C. O'Malley, and V. Bruce (1996), was found to be moderated by the orientation of the gesturer's head–gaze (Experiment 1). Hand and head cues also produced bidirectional interference effects in a within-modalities version of the task (Experiment 2). These findings suggest that both head–gaze and hand cues to direction are processed automatically and in parallel up to a stage in processing where a directional decision is computed. In support of this model, head–gaze cues produced no influence on nondirectional decisions to social emblematic gestures in Experiment 3 but exerted significant interference effects on directional responses to arrows in Experiment 4. It is suggested that the automatic analysis of head, gaze, and pointing gestures occurs because these directional signals are processed as cues to the direction of another individual's social attention.
... Results have demonstrated that social cues have a considerable impact on reaction times in visual detection or discrimination tasks, showing that valid cues lead to faster responses than invalid cues (e.g., Driver et al., 1999;Friesen et al., 2005;Gregory et al., 2015). However, most studies have explored the role of different social cues in isolation, using disjoined stimuli (e.g., Friesen et al., 2005;Friesen & Kingstone, 1998;Hermens et al., 2017;Langton & Bruce, 1999;Sato et al., 2007) or made cues explicitly task relevant for response selection (e.g., ...). The aim of the current study was to integrate multiple social cues and evaluate their combined influences on biases of spatial attention. ...
... In young children, the influence of pointing cues on overt eye-movement selection has been shown to be consistently stronger than that of gaze cues (Gregory et al., 2016). Studies using head direction as a central cue have revealed comparable benefits for spatial attention when this cue correctly indicates the upcoming location of a target (Cooney et al., 2017;Langton, 2000;Langton & Bruce, 1999). Langton and Bruce (1999) showed participants a face in the centre of the screen either looking upwards, downwards, to the left, or to the right. ...
... Studies using head direction as central cue have revealed comparable benefits for spatial attention when this cue correctly indicates the upcoming location of a target (Cooney et al., 2017;Langton, 2000;Langton & Bruce, 1999). Langton and Bruce (1999) showed participants a face in the centre of the screen either looking upwards, downwards, to the left, or to the right. Participants were asked to press the space bar on a keyboard as soon as they detected a target letter which could appear at one of four locations. ...
Article
Full-text available
Social cues bias covert spatial attention. In most previous work the impact of different social cues, such as the gaze, head, and pointing cue, has been investigated using separated cues or making one cue explicitly task relevant in response-interference tasks. In the present study we created a novel cartoon figure in which unpredictive gaze and head and pointing cues could be combined to study their impact on spatial attention. In Experiment 1, gaze and pointing cues were either presented alone or together. When both cues were present, they were always directed to the same location. In Experiment 2, gaze and pointing cues were either directed to the same location (aligned) or directed to different locations (conflicted). Experiment 3 was like Experiment 2, except that the pointing cue was tested alongside a head-direction cue. The results of Experiment 1 showed that the effect of the gaze cue was reliably smaller than the pointing cue, and an aligned gaze cue did not have an additive benefit for performance. In Experiments 2 and 3, performance was determined by the pointing cue, regardless of where the eyes were looking or where the head was directed. The present results demonstrated a strong dominance of the pointing cue over the other cues. The child-friendly stimuli present a versatile way to study the impact of the combination of social cues, which may further benefit developmental research in social attention, and research in populations whose members might have atypical social attention.
... On a less reduced abstraction layer are photographs of faces. Again, gaze cueing is effective with such complex facial representations (Hietanen, 1999;Langton and Bruce, 1999). ...
... This is remarkable considering the pupil size and the conditions when the pupil is visible (compared to head orientation). Other studies, however, reported robust and equivalent effects of gaze cueing with head orientation (e.g., Langton and Bruce, 1999;Xu and Tanaka, 2014). Additionally, we were interested in the temporal dynamics of attention during gaze cueing. ...
... This is assumed to indicate an inhibitory process. For classical Posner cueing tasks, a reliable IOR is found from 200 ms to 1000 ms (for a meta-analysis, see Samuel and Kat, 2003), but it has not been found for gaze cueing tasks (Friesen and Kingstone, 1998;Frischen et al., 2007b;Frischen and Tipper, 2004;Langton and Bruce, 1999). Therefore, this absence is discussed as an exclusive property of facial stimuli, which might highlight the relevance of such gaze cues in human cognition (Frischen et al., 2007b;Frischen and Tipper, 2004). ...
Thesis
Full-text available
Gazes are of central relevance for people. They are crucial for navigating the world and communicating with others. Nevertheless, research in recent years shows that many findings from experimental research on gaze behavior cannot be transferred from the laboratory to everyday behavior. For example, the frequency with which conspecifics are looked at is considerably higher in experimental contexts than what can be observed in daily behavior. In short: findings from laboratories cannot be generalized into general statements. This thesis is dedicated to this matter. The dissertation describes and documents the current state of research on social attention through a literature review, including a meta-analysis on the /gaze cueing/ paradigm and an empirical study on the robustness of gaze following behavior. In addition, virtual reality was used in one of the first studies in this research field. Virtual reality has the potential to significantly improve the transferability of experimental laboratory studies to everyday behavior. This is because the technology enables a high degree of experimental control in naturalistic research designs. As such, it has the potential to transform empirical research in the same way that the introduction of computers to psychological research did some 50 years ago. The general literature review on social attention is extended to the classic /gaze cueing/ paradigm through a systematic review of publications and a meta-analytic evaluation (Study 1). The cumulative evidence supported the findings of primary studies: Covert spatial attention is directed by faces. However, the experimental factors included do not explain the surprisingly large variance in the published results. Thus, there seem to be further, not well-understood variables influencing these social processes. Moreover, classic /gaze cueing/ studies have limited ecological validity. This is discussed as a central reason for the lack of generalisability. Ecological validity describes the correspondence between experimental factors and realistic situations. A stimulus or an experimental design can have high and low ecological validity on different dimensions and have different influences on behavior. Empirical research on gaze following behavior showed that the /gaze cueing/ effect also occurs with contextually embedded stimuli (Study 2). The contextual integration of the directional cue contrasted classical /gaze cueing/ studies, which usually show heads in isolation. The research results can thus be transferred /within/ laboratory studies to higher ecologically valid research paradigms. However, research shows that the lack of ecological validity in experimental designs significantly limits the transferability of experimental findings to complex situations /outside/ the laboratory. This seems to be particularly the case when social interactions and norms are investigated. However, ecological validity is also often limited in these studies for other factors, such as contextual embedding /of participants/, free exploration behavior (and, thus, attentional control), or multimodality. In a first study, such high ecological validity was achieved for these factors with virtual reality, which could not be achieved in the laboratory so far (Study 3). Notably, the observed fixation patterns showed differences even under /most similar/ conditions in the laboratory and natural environments. 
Interestingly, these were similar to findings also derived from comparisons of eye movement in the laboratory and field investigations. These findings, which previously came from hardly comparable groups, were thus confirmed by the present Study 3 (which did not have this limitation). Overall, /virtual reality/ is a new technical approach to contemporary social attention research that pushes the boundaries of previous experimental research. The traditional trade-off between ecological validity and experimental control thus becomes obsolete, and laboratory studies can closely inherit an excellent approximation of reality. Finally, the present work describes and discusses the possibilities of this technology and its practical implementation. Within this context, the extent to which this development can still guarantee a constructive classification of different laboratory tests in the future is examined.
... Of all the non-verbal characteristics, eye gaze direction is the most important factor with a significant influence on human attention. Humans can infer the focus of attention, interest, and intention of others through eye gaze direction (Calder et al., 2002;Capozzi & Ristic, 2018;Colombatto, Chen, & Scholl, 2020;Driver et al., 1999;Friesen & Kingstone, 1998;Frischen, Bayliss, & Tipper, 2007;Langton & Bruce, 1999;Langton, Watt, & Bruce, 2000;McKay et al., 2021;Teufel, Fletcher, & Davis, 2010;Vecera & Marron, 1996). Thus, humans tend to pay attention to others' gaze to maintain their interaction. ...
... Many studies have proven that others' gaze directions can trigger observers' corresponding attentional orienting (Bayliss, Bartlett, Naughtin, & Kritikos, 2011;Bayliss, Di Pellegrino, & Tipper, 2004;Driver et al., 1999;Friesen & Kingstone, 1998;Langton & Bruce, 1999;Liu, Yuan, Liu, Wang, & Jiang, 2021;Quadflieg, Mason, & Neil Macrae, 2004;Sato, Kochiyama, Uono, & Toichi, 2016). Previous studies have adopted a modified Posner cue-target paradigm (i.e., the gaze cue paradigm) to investigate gaze-induced attentional orienting. ...
Article
Humans tend to focus on others’ gaze. Previous studies have shown that the gaze direction of others can induce corresponding attentional orienting. However, gaze cues have typically been presented alone in these studies. It is unclear how gaze cues induce observers’ attention in complicated contexts with additional perceptual information. Therefore, the present study investigated gaze-induced attentional orienting at different levels of perceptual load. Results indicated that the attentional effect of the dynamic gaze cue (i.e., GCE: gaze cue effect) emerged under low perceptual load and disappeared under high perceptual load. The absence of the GCE could not be attributed to perceptual capacity exhaustion. Moreover, the influence of perceptual load on gaze-induced attentional orienting was modulated by individuals’ expectation. Specifically, the GCE occurred under high perceptual load when the gaze cue was predictive (with individuals’ expectation). These findings provide new evidence on the mode of gaze-induced attentional orienting under different perceptual load conditions.
... The use of gaze cueing paradigms has shown that humans process the gaze of others reflexively [13,15,18,30]. This finding is supported by neuroscientific evidence of distinct neural processing of gaze as a social signal [13,19,45,48]. ...
... For example, when head orientation is averted but the eyes are aligned with it, no GCEs would be present, as this might be interpreted as information not related to the observer [21,33]. However, the evidence about this is mixed [30,44], and NAO was capable of orienting attention through head movement alone in our experiment. It is possible that when the head orientation and eye direction are aligned, this is not perceived as a natural purposeful orientation in humans, but that in the case of a robot without eye movement or other less natural stimuli it can still be perceived as a signalling behavior, which would highlight the flexibility of social cognition. ...
Conference Paper
Full-text available
Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from its gaze, NAO was presented either frontally or from behind across blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposed to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that the robotic gaze is perceived as a social signal, similar to human gaze.
... Studies using a gaze cueing paradigm have also supported the part-based processing of gaze information. Tipples's (2005) comprehensive study demonstrated that the gaze cueing effect was unaffected by face inversion (but also see Kingstone et al., 2000;Langton & Bruce, 1999 for vertical cues). These mixed results preclude any unequivocal conclusion regarding the effects of face inversion on gaze processing. ...
... Previous studies explained the reversed congruency effect in terms of social facilitation by direct gaze (Cañadas & Lupiáñez, 2012) and joint attention (Edwards et al., 2020). Given that face inversion can reduce or eliminate these facilitations (Böckler et al., 2015;Kingstone et al., 2000;Langton & Bruce, 1999;Senju et al., 2005, 2008; but see also Tipples, 2005), the reversal observed for inverted faces may not be consistent with these explanations. In a different line, the reversed congruency effect can be explained by perspective-taking, which is the ability to recognize another person's point of view (Hemmerich, 2018). ...
Article
In a spatial Stroop task, the eye-gaze target produces the reversed congruency effect: responses become shorter when the gaze direction and its location are incongruent than when they are congruent. The present study examined the face inversion effect on the gaze spatial Stroop task to clarify whether the holistic face processing or part-based processing of the eyes is responsible for the reversed congruency effect. In Experiment 1, participants judged the gaze direction of the upright or inverted face with a neutral expression presented either in the left or right visual field. In Experiment 2, we examined whether face inversion interacted with facial expressions (i.e., angry, happy, neutral, and sad). Face inversion disrupted holistic face processing, slowing down the overall performance relative to the performance with upright faces. However, face inversion did not affect the reversed congruency effect. These results further support the parts-based processing account and suggest that while faces are processed holistically, the reversed congruency effect, relying on the extracted local features (i.e., eyes), may be processed in a part-based manner.
... A tendency to orient our attention to parts of the environment attended by others may help us discover items of interest or value, and identify potential threats [6]. Interestingly, however, several findings suggest that these cueing effects are sensitive to the orientation of the cueing stimulus; when faces and bodies are shown upside-down, the cueing effects induced are reduced or abolished [4,5,7-9]. ...
... To begin our investigation, we conducted three experiments to confirm previous reports that the attention cueing effects produced by faces and bodies are greatly reduced by orientation inversion [4,5]. One of our experiments employed face stimuli. ...
Article
Full-text available
It is well-established that faces and bodies cue observers’ visuospatial attention; for example, target items are found faster when their location is cued by the directionality of a task-irrelevant face or body. Previous results suggest that these cueing effects are greatly reduced when the orientation of the task-irrelevant stimulus is inverted. It remains unclear, however, whether sensitivity to orientation is a unique hallmark of “social” attention cueing or a more general phenomenon. In the present study, we sought to determine whether the cueing effects produced by common objects (power drills, desk lamps, desk fans, cameras, bicycles, and cars) are also attenuated by inversion. When cueing stimuli were shown upright, all six object classes produced highly significant cueing effects. When shown upside-down, however, the results were mixed. Some of the cueing effects (e.g., those induced by bicycles and cameras) behaved like faces and bodies: they were greatly reduced by orientation inversion. However, other cueing effects (e.g., those induced by cars and power drills) were insensitive to orientation: upright and inverted exemplars produced significant cueing effects of comparable strength. We speculate that (i) cueing effects depend on the rapid identification of stimulus directionality, and (ii) some cueing effects are sensitive to orientation because upright exemplars of those categories afford faster processing of directionality than inverted exemplars. Contrary to the view that attenuation-by-inversion is a unique hallmark of social attention, our findings indicate that some non-social cueing effects also exhibit sensitivity to orientation.
... This raises the question whether gaze cueing also shows differences between the vertical and the horizontal direction. Differences in cueing effects between these axes have been found in some conditions [25,26], although most of the cueing literature finds no evidence for a horizontal/vertical asymmetry in standard cuing paradigms (e.g., [10,27]). Such differences for gaze cues may be coincidental or caused by the design of the specific experiments. ...
Article
Full-text available
Gaze is an important and potent social cue to direct others’ attention towards specific locations. However, in many situations, directional symbols, like arrows, fulfill a similar purpose. Motivated by the overarching question of how artificial systems can effectively communicate directional information, we conducted two cueing experiments. In both experiments, participants were asked to identify peripheral targets appearing on the screen and respond to them as quickly as possible by a button press. Prior to the appearance of the target, a cue was presented in the center of the screen. In Experiment 1, cues were either faces or arrows that gazed or pointed in one direction, but were non-predictive of the target location. Consistent with earlier studies, we found a reaction time benefit for the side the arrow or the gaze was directed to. Extending beyond earlier research, we found that this effect was indistinguishable between the vertical and the horizontal axis and between faces and arrows. In Experiment 2, we used 100% “counter-predictive” cues; that is, the target always occurred on the side opposite to the direction of gaze or arrow. With cues without inherent directional meaning (color), we controlled for general learning effects. Despite the close quantitative match between non-predictive gaze and non-predictive arrow cues observed in Experiment 1, the reaction-time benefit for counter-predictive arrows over neutral cues is more robust than the corresponding benefit for counter-predictive gaze. This suggests that, if matched for efficacy towards their inherent direction, gaze cues are harder to override or reinterpret than arrows. This difference can be of practical relevance, for example, when designing cues in the context of human-machine interaction.
... In the social and psychological sciences, such regularities are often termed effects. For example, the observation that humans tend to allocate their attention towards the object of another's gaze is often referred to as the gaze-cueing effect (Driver et al., 1999;Frischen et al., 2007;Friesen & Kingstone, 1998;Hietanen, 1999;Langton & Bruce, 1999). ...
Chapter
Full-text available
It is a hallmark of social and psychological science that its findings should be reproducible across relevant contexts, playing a key role in showcasing the reliability of detected effects. However, following what some have called a “watershed” moment, a “crisis of confidence” has befallen the discipline, with commentators zeroing in on the Reproducibility Project’s low replication estimate of 36-47% (Open Science Collaboration, 2015; see also Pashler & Wagenmakers, 2012; Wiggins & Christopherson, 2019). The theory crisis is another perhaps less well-known challenge, with researchers arguing that many of the theories within social and psychological science are generally of poor quality (Eronen & Bringmann, 2021; Fiedler, 2017; Oberauer & Lewandowsky, 2019). Due to this, theories are often not clearly accepted or refuted, tending to come and go with little of the cumulative progress often associated with scientific knowledge. This chapter will look at how the replication and theory crises facing social and psychological science could interact by examining the relationship between replication and theory development. Following an overview of both crises, the first half of the chapter will look at the way in which replications could support theory development, covering the identification of robust phenomena, the identification of effects’ boundary conditions, and the evaluation of theories’ predictions. The second half of the chapter will consider a number of more recent arguments suggesting that well-specified theory is required for replications to be informative (Irvine, 2021; Klein, 2014; Muthukrishna & Henrich, 2019).
... Researchers are increasingly interested in real-world social interactions and how the gaze of one person may impact another (e.g., Capozzi & Ristic, 2018;Dalmaso et al., 2020;Frischen et al., 2007;Richardson & Gobel, 2015;Stephenson et al., 2021). This collective body of work indicates that humans are exquisitely sensitive to where people are looking (Hessels, 2020), with for example one person's gaze affecting where another person attends (Langton & Bruce, 1999), what they understand (Wohltjen & Wheatley, 2021), and when they speak (Kendon, 1967). ...
Article
Full-text available
Shaking hands is a fundamental form of social interaction. The current study used high-definition cameras during a university graduation ceremony to examine the temporal sequencing of eye contact and shaking hands. Analyses revealed that mutual gaze always preceded shaking hands. A follow up investigation manipulated gaze when shaking hands, and found that participants take significantly longer to accept a handshake when an outstretched hand precedes eye contact. These findings demonstrate that the timing between a person's gaze and their offer to shake hands is critical to how their action is interpreted.
... interactions. When the gaze is used as a cue, responses are facilitated if stimuli are presented at the gazed-at location (Friesen and Kingstone 1998;Driver et al. 1999;Langton and Bruce 1999;Friesen et al. 2004;Hietanen et al. 2006). Similar behavioral results have been observed with nonsocial stimuli that are often presented in our environment, such as arrows. ...
Article
Full-text available
Social and nonsocial directional stimuli (such as gaze and arrows, respectively) share their ability to trigger attentional processes, although the issue of whether social stimuli generate other additional (and unique) attentional effects is still under debate. In this study, we used the spatial interference paradigm to explore, using functional magnetic resonance imaging, shared and dissociable brain activations produced by gaze and arrows. Results showed a common set of regions (right parieto-temporo-occipital) similarly involved in conflict resolution for gaze and arrows stimuli, which showed stronger co-activation for incongruent than congruent trials. The frontal eye field showed stronger functional connectivity with occipital regions for congruent as compared with incongruent trials, and this effect was enhanced for gaze as compared with arrow stimuli in the right hemisphere. Moreover, spatial interference produced by incongruent (as compared with congruent) arrows was associated with increased functional coupling between the right frontal eye field and a set of regions in the left hemisphere. This result was not observed for incongruent (as compared with congruent) gaze stimuli. The right frontal eye field also showed greater coupling with left temporo-occipital regions for those conditions in which larger conflict was observed (arrow incongruent vs. gaze incongruent trials, and gaze congruent vs. arrow congruent trials). These findings support the view that social and nonsocial stimuli share some attentional mechanisms, while at the same time highlighting other differential effects. Highlights Attentional orienting triggered by social (gaze) and nonsocial (arrow) cues is comparable. When social and nonsocial stimuli are used as targets, qualitatively different behavioral effects are observed. This study explores the neural bases of shared and dissociable neural mechanisms for social and nonsocial stimuli. Shared mechanisms were found in the functional coupling between right parieto-temporo-occipital regions. Dissociable mechanisms were found in the functional coupling between right frontal eye field and ipsilateral and contralateral occipito-temporal regions.
... Further evidence consistent with automaticity comes from studies with four possible target locations, in which gaze direction was congruent with the target location in only 25% of the total trials (e.g., Cole et al., 2015;Langton & Bruce, 1999). In these studies, robust gaze-cueing effects emerged, suggesting that gaze biased spatial attention even when paying attention to gaze was known to be not only task-irrelevant but also potentially disruptive for the task at hand (i.e., being predictive in only 25% of total trials). ...
Article
Full-text available
In four experiments, we tested the boundary conditions of gaze cueing with reference to the resistance to suppression criterion of automaticity. Participants were asked to respond to peripheral targets preceded by a central gaze stimulus. In one condition, gaze direction was random and uninformative with respect to target location (intermixed condition), as in the typical paradigm. In another condition, gaze direction was uninformative and, crucially, it was also kept constant throughout the sequence of trials (blocked condition). In so doing, we aimed at maximally reducing the informative value of the gaze stimulus since gaze would not only be task-irrelevant, but it would also provide no sudden and unpredictable information. Across the four experiments, the results showed a strong gaze-cueing effect. More specifically, a comparable gaze cueing emerged in the blocked condition and in the intermixed condition. These findings are consistent with the idea that gaze cueing is resistant to suppression and are discussed in relation to current views of the automaticity of gaze cueing.
... Recently, we found the reversed congruency effect of gaze was unaffected by face inversion in the gaze spatial Stroop task, suggesting that the reversed congruency effect is processed independently of the face-specific processing (i.e., holistic processing of the face) and, thus, depends on part-based processing of the eyes (Tanaka et al., 2022). This result also calls into question the social accounts of the reversed congruency effect because gaze facilitation, which has been explained by the social role of eye gaze, such as eye contact and joint attention, was reduced or eliminated by the inverted faces (Kingstone et al., 2000;Langton & Bruce, 1999;Senju et al., 2005, 2008). Differently from the previous accounts (Cañadas & Lupiáñez, 2012;Edwards et al., 2020;Hemmerich et al., 2022;Ishikawa et al., 2021;Marotta et al., 2018), the dual-stage model can account for the finding with inverted faces. ...
Article
In the spatial Stroop task, an arrow target produces a spatial Stroop effect, whereas a gaze target elicits a reversed congruency effect. The reversed congruency effect has been explained by the unique attentional mechanisms of eye gaze. However, recent studies have shown that not only gaze but arrow targets produced a reversed congruency effect when embedded in a complex background. The present study investigated whether non-gaze targets produce a reversed congruency effect. In Experiments 1 and 2, we used the tongue, which is not commonly used to indicate spatial directions in daily life, as a target in the spatial Stroop task, in addition to the conventional gaze and arrows. In Experiment 3, we used arrow stimuli embedded in a complex background as a target. Participants judged the left/right direction of the target presented in the left or right visual field. While arrow and gaze targets replicated previous findings (spatial Stroop and reversed congruency effect, respectively), the tongue target produced a reversed congruency effect (Experiments 1 and 2). The spatial Stroop effect of arrow targets disappeared when they were in a complex background (Experiment 3). These results are inconsistent with previous accounts emphasizing the unique status of eye gaze. We propose that temporal decay of the location code and response inhibition are responsible for the reversal of spatial interference.
... According to Mayer's (2014) social agency theory, making eye contact with learners can elicit a social response in learners and motivate them to be more engaged in the learning task and efficiently search for the most relevant information, which contributes to deeper cognitive processing and better learning outcomes. Furthermore, embodied cognition theory (Langton & Bruce, 1999;Stull et al., 2018a) suggests that learners interpret cues from the model's body to be guided to key elements of the instructional material (Barsalou, 2008;Langton, Watt, & Bruce, 2000). Therefore, the model's gaze can convey an embodied instructional message that directs learners' attention to where and when to look (i.e., gaze guidance) and create a sense of social partnership (i.e., direct gaze) (Chauhan, di Oleggio Castello, Soltani, & Gobbini, 2017;Fiorella et al., 2019;Stull et al., 2018b;Van Gog, 2014). ...
Article
Even though coach behavior is known to affect learning, it is unclear which specific nonverbal behavior might optimize the teaching of tactical content in basketball. Using eye-tracking technology, a recall construction paradigm, and subjective ratings of mental effort, the present study investigated the question of whether the coach’s eye gaze would affect players’ visual attention and recall performance. Expert (N = 72) and novice (N = 72) players watched one of three types of video lecture in which the coach either (i) gazed at the camera while talking, (ii) shifted his gaze between the camera and the whiteboard (guided gaze condition), or (iii) gazed at the whiteboard (fixed gaze condition). The results showed that the coach’s guided gaze not only made the novices focus their visual attention more on the corresponding elements of the game system, but also increased their recall performance and decreased their mental effort. However, the performance of the expert players remained the same regardless of the experimental condition, indicating an expertise reversal effect. The findings suggest that the effectiveness of the coach’s gaze guidance is strongly dependent on expertise levels.
... They showed that even when participants are instructed that the cues are "not predictive" of target location (i.e., the cues were "uninformative", like in exogenous cuing), they still followed gaze cues in detection, localization, and identification tasks, as demonstrated by faster responding when the target appeared at the gazed-at location than the opposite location (a classic "cuing effect"). Using photographs of faces looking in different directions, Langton and Bruce (1999) found that such gaze cues, when truly uninformative, produced a cuing effect that appeared rapidly (at 100 ms) but also decayed quickly. Driver et al. (1999), like Friesen and Kingstone, found that uninformative gaze cues produced a cuing effect. ...
Article
Full-text available
People direct their attention toward the direction of another person's gaze. This phenomenon, called gaze cueing, shares several properties with both purely endogenous (i.e., deliberate) and purely exogenous (i.e., reflexive) control of spatial attention. For example, as with strictly endogenous orienting, gaze cues appear at visual fixation. Yet, as with strictly exogenous orienting, gaze cues produce shifts of attention rapidly after their onset. Previous experiments have shown that, when attention is controlled endogenously rather than exogenously, the attentional effects on subsequent target processing are completely different. Briand and Klein (1987; see also Briand, 1998) demonstrated that endogenous orienting is additive with the possibility of illusory conjunctions, whereas exogenous orienting is interactive. Klein (1994) further demonstrated that endogenous orienting interacts with nonspatial expectancies, whereas exogenous orienting is additive with them. In the present project, we applied this double-dissociation strategy to attention controlled by gaze cues. In Experiment 1, the effects of gaze cues (on accuracy) were additive with the possibility of illusory conjunctions (as with endogenous control). In Experiment 2, gaze cues were additive with the effect of nonspatial expectancies (as with exogenous orienting). Consequently, in the nature of its effects, gaze cueing acts as a hybrid of endogenous and exogenous orienting.
... From an experimental perspective, social attention has been widely studied through the adoption of spatial cueing tasks in which, typically, a task-irrelevant social stimulus (e.g., a face oriented left or right), presented at the centre of the screen, preceded the appearance of a peripheral target which required a behavioural response (e.g., a key press). In general, a behavioural benefit (e.g., smaller latencies and a greater accuracy) was observed on trials in which the target appeared in the same spatial position indicated by the social cue (i.e., a spatially-congruent trial) than in a different position (i.e., a spatially-incongruent trial; see, e.g., Friesen & Kingstone, 1998;Langton & Bruce, 1999;Cooney, Brady & Ryan, 2017), reflecting a spatial cueing effect. ...
Article
Full-text available
Faces oriented rightwards are sometimes perceived as more dominant than faces oriented leftwards. In this study, we explored whether faces oriented rightwards can also elicit increased attentional orienting. Participants completed a discrimination task in which they were asked to discriminate, by means of a keypress, a peripheral target. At the same time, a task-irrelevant face oriented leftwards or rightwards appeared at the centre of the screen. The results showed that, while for faces oriented rightwards targets appearing on the right were responded to faster as compared to targets appearing on the left, for faces oriented leftwards no differences emerged between left and right targets. Furthermore, we also found a negative correlation between the magnitude of the orienting response elicited by the faces oriented leftwards and the level of conservatism of the participants. Overall, these findings provide evidence for the existence of a spatial bias reflected in social orienting.
... Primates are sensitive to yaw (Emery et al. 1997, Itakura & Anderson 1996), that is, orientation to the left or right of the observer. Yaw provides information about others' direction of attention (Langton & Bruce 2000) and intentions, and can trigger automatic shifts of the viewer's attention (Langton & Bruce 1999). Pitch (or head tilt) is informative about emotional states. ...
Article
Full-text available
Primates have evolved diverse cognitive capabilities to navigate their complex social world. To understand how the brain implements critical social cognitive abilities, we describe functional specialization in the domains of face processing, social interaction understanding, and mental state attribution. Systems for face processing are specialized from the level of single cells to populations of neurons within brain regions to hierarchically organized networks that extract and represent abstract social information. Such functional specialization is not confined to the sensorimotor periphery but appears to be a pervasive theme of primate brain organization all the way to the apex regions of cortical hierarchies. Circuits processing social information are juxtaposed with parallel systems involved in processing nonsocial information, suggesting common computations applied to different domains. The emerging picture of the neural basis of social cognition is a set of distinct but interacting subnetworks involved in component processes such as face perception and social reasoning, traversing large parts of the primate brain.
... One typical social interaction difficulty observed in TD individuals who are high in autistic traits is following the gaze (i.e., gaze cues) of social partners. Following the gaze cues of social partners is an essential skill for successful social communication and is vital for understanding the intentions, desires, actions and beliefs of social peers, as gaze cues are informative social signals (Langton & Bruce, 1999;Loucks & Sommerville, 2013;Senju et al., 2008). Effective gaze following has been found to promote joint attention and improve social interaction (Adamson et al., 2009;Mundy & Newell, 2007). ...
Article
Objectives: People with autism spectrum disorders (ASD) usually exhibit typical behaviours and thoughts that are called autistic traits. Autistic traits are widely and continuously distributed among typically developed (TD) and ASD populations. Previous studies have found that people with ASD have difficulty in following the eye gaze of social peers. However, it remains unknown whether TD adults with high or low autistic traits also differ in spontaneous gaze following and initiation in face-to-face social interactions. To fill this gap, this study used a novel and naturalistic gaze-cueing paradigm to examine this research question. Design: A 4 (group: high-high, high-low, low-high or low-low autistic traits) × 3 (congruency: congruent, neutral, or incongruent) mixed-measures design was used. Methods: Typically developed adults who were high or low in autistic traits completed a visual search task while a confederate who was high or low in autistic traits sat facing them. Critically, the match of autistic traits within a participant-confederate pair was manipulated. The confederate gazed at (congruent) or away from (incongruent) the location of the target prior to the appearance of the target. Participants were not explicitly instructed to follow the confederate's gaze. Results: Autistic traits were associated with spontaneous gaze following and initiation in face-to-face social interactions. Specifically, only when both the participant and confederate were low in autistic traits did the incongruent gaze cues of confederates interfere with the participants' responses. Conclusions: Autistic traits impeded gaze following and initiation by TD adults. This study has theoretical and practical implications regarding autistic trait-induced social deficits and indicates a new approach for social skill interventions.
... This is because to elicit endogenous shifts of attention with purely symbolic cues, such as when a central stimulus characteristic is arbitrarily associated with a spatial location (e.g., a yellow circle indicates left and a blue circle indicates right), the cue needs to be predictive of target location and the SOA needs to be longer (> 300 ms) to engender cueing (e.g., Funes et al., 2007;Dodd & Wilson, 2009). Considering the social and biological relevance of faces, it had been originally proposed that orienting to eye gaze represents a unique attentional process that is qualitatively distinct from orienting based on other symbolic, central cues (e.g., Langton & Bruce, 1999;Driver et al., 1999;Friesen & Kingstone, 1998). However, this proposal is challenged by evidence of similar cueing effects observed with central, non-social cues such as arrow-cues (Hommel et al., 2001;Ristic et al., 2002;Tipples, 2002). ...
Article
Full-text available
Orienting attention by social gaze cues shares some characteristics with orienting attention by non-social arrow cues, but it is unclear whether they rely on similar neural mechanisms. The present ALE-meta-analysis assessed the pattern of brain activation reported in 40 single experiments (18 with arrows, 22 with gaze), with a total number of 806 participants. Our findings show that the network for orienting attention by social gaze and by non-social arrow cues is in part functionally segregated. Orienting by both types of cues relies on the activity of brain regions involved in endogenous attention (the superior frontal gyrus). Importantly, only orienting by gaze cues was also associated with the activity of brain regions involved in exogenous attention (medial frontal gyrus), processing gaze, and mental state attribution (superior temporal sulcus, temporoparietal junction).
... A broad and increasing body of literature has shown that people tend to orient their attention towards the same location indicated by a variety of spatial signals provided by their conspecifics, such as eye-gaze direction (Dalmaso et al., 2020c; Frischen et al., 2007; McKay et al., 2021), pointing gestures (Ariga & Watanabe, 2009; Langton & Bruce, 2000), and head and body turns (Azarian et al., 2017; Langton & Bruce, 1999). The ability to pay attention to the same location as another individual, which is often referred to as 'social attention' (Kingstone, 2009), is essential in daily-life social interactions, as it allows people to establish fluent relationships and interactions with both others and the physical environment in which they act (Capozzi & Ristic, 2018; Emery, 2000). ...
Article
Full-text available
Humans tend to orient their attentional resources towards the same location indicated by spatial signals coming from the others, such as pointing fingers, head turns, or eye-gaze. Here, two experiments investigated whether an attentional orienting response can be elicited even by foot cues. Participants were asked to localize a peripheral target while a task-irrelevant picture of a naked human foot, oriented leftward or rightward, was presented on the centre of the screen. The foot appeared in a neutral posture (i.e., standing upright) or an action-oriented posture (i.e., walking/running). In Experiment 1, neutral and action-oriented feet were presented in two distinct blocks, while in Experiment 2 they were presented intermixed. The results showed that the action-oriented foot, but not the neutral one, elicited an orienting response, though this only emerged in Experiment 2. This work suggests that attentional shifts can be induced by action-oriented foot cues, as long as these stimuli are made contextually salient.
... The shift of attention when observing averted eye gaze is believed to occur automatically (Driver et al., 1999; Friesen & Kingstone, 1998; Langton & Bruce, 1999; but see Tipples, 2002), presumably because a gaze shift may signal an important event (Frischen et al., 2007). Within an evolutionary account, this nonverbal way of communication may have evolved because it was beneficial for survival (Emery, 2000). ...
Article
Full-text available
Research on emotional modulation of attention in gaze cueing has resulted in contradictory findings. Some studies found larger gaze cueing effects (GCEs) in response to a fearful gaze cue, whereas others did not. A recent study explained this discrepancy within a cognitive resource account, in which perceptual demands of the task promote a bias toward either a local (discrimination task) or global (localization task) processing strategy. During local processing, the integration of emotional expression with gaze direction is assumed to be impaired, whereas during global processing integration is assumed to be facilitated. In the current study, we investigated the cognitive resource account in three experiments. In Experiment 1, we manipulated task demands by adopting a detection or a localization task whilst both should allow global processing. In Experiments 2 and 3, we induced either a local or global perceptual processing strategy by presenting local or global targets (Experiment 2) or by priming local or global perception prior to the gaze cueing task (Experiment 3). Results showed faster orienting in response to a fearful face cue independent of task demands in Experiment 1. Inducing local and global processing strategies in Experiments 2 and 3 did not affect emotional modulation of the GCE. In contrast, Bayesian analyses provided evidence of absence of such an effect, demonstrating that local or global processing strategies cannot explain the mixed findings obtained in emotional modulation of gaze cueing.
... Given the ecological importance of judging gaze directions (whether for predator avoidance or social communication [3]), it seems plausible that a specialized neural mechanism evolved to optimize processing in order to facilitate rapid behavioral responses. Support for this idea is provided by behavioral studies showing that responses to gaze stimuli in both humans and macaques appear to be reflexive [4][5][6][7][8], i.e., they are fast and cannot be suppressed. Perrett and colleagues were the first to describe neurons in the superior temporal sulcus (STS) that are tuned to different gaze, head and body orientations [9], although without establishing any behavioral relevance. ...
Preprint
Full-text available
Apart from language, our gaze is arguably the most important means of communication. Where we look lets others know what we are interested in and allows them to join our focus of attention. In several studies our group investigated the neuronal basis of gaze following behavior in humans and macaques and described the Gaze following patch (GFP) in the posterior temporal cortex as being of central importance for this function. To our knowledge, this makes it the most promising candidate for Simon Baron-Cohen's Eye-Direction-Detector (EDD), an integral part of his influential Mindreading System. With the latter, Baron-Cohen proposed a network of domain-specific neurocognitive modules that are necessary to establish a Theory of Mind: the attribution of mental states to others. The tenet of domain specificity requires that the EDD processes only and exclusively eye-like stimuli with their typical contrast and movement properties. In the present fMRI study, we aim to critically test if the GFP fulfills this criterion. We will test if it is equivalent to or different from the visual motion processing areas located in the same part of the brain. Since our experiments capture the full behavioral relevance of gaze-following behavior and are specifically designed to reveal an EDD, our results will provide strong support for, or rejection of, a central property of Baron-Cohen's Mindreading System: domain specificity.
... Previous studies have investigated the patterns of attentional orienting triggered by non-predictive gaze and arrow cues in the classical or modified Posner cue-target paradigm (Driver et al., 1999; Friesen & Kingstone, 1998; Jonides, 1981; Langton & Bruce, 1999; Posner, 1980). Friesen and Kingstone (1998) adopted the gaze direction of the schematic face as a cue. ...
Article
Full-text available
Others’ gaze direction and traffic arrow signal lights play significant roles in guiding observers’ attention in daily life. Previous studies have shown that gaze and arrow cues can direct attention to the cued location. However, it is ambiguous where gaze and arrow cues guide attention: the cued location or a broader cued region. Therefore, the present study adopted a primary cue-target task and manipulated possible target locations to explore this issue. The results revealed that due to the different physical characteristics of non-predictive gaze and arrow cues, physically unfocused-pointing gaze cues guided attention to a broader cued region, whereas focused-pointing arrow cues guided attention to the exact cued location. Furthermore, gaze cues could also direct attention to the exact cued location when observers’ attention was focused in a top-down manner (with highly predictive probability). These findings suggest that where gaze and arrow cues direct attention depends on whether observers’ attention is focused by the cues, either in a bottom-up or top-down manner. Accordingly, a preliminary framework called the “Focused-Diffused Attentional Orienting Model” is proposed to explain how gaze and arrow cues direct humans’ attention. The present study enhances our understanding of human attentional orienting systems from a behavioral perspective.
... Both the head and the eyes can indicate an individual's locus of social attention (Langton et al., 2000; Nummenmaa and Calder, 2009), and while prior research has focused on gaze to identify which talker somebody is attending to, in this paper, we focus on head orientation. Both the head and the eyes have been found to contribute to social attention, reflexively triggering shifts of an observer's attention (Langton and Bruce, 1999; Pejsa et al., 2015), but there are a number of practical reasons to prefer use of head orientation to identify an interlocutor's direction of attention. While the head and the eyes typically orient in the same direction during conversation (~70% of the time, Stiefelhagen and Zhu, 2002), the eyes also have other social functions, such as signalling or influencing dominance, competence, intimacy and liking (Kleinke, 1986; Kuzmanovic et al., 2009; Hamilton, 2016). ...
Article
Full-text available
In conversation, people are able to listen to an utterance and respond within only a few hundred milliseconds. It takes substantially longer to prepare even a simple utterance, suggesting that interlocutors may make use of predictions about when the talker is about to end. But it is not only the upcoming talker that needs to anticipate the prior talker ending—listeners that are simply following the conversation could also benefit from predicting the turn end in order to shift attention appropriately with the turn switch. In this paper, we examined whether people predict upcoming turn ends when watching conversational turns switch between others by analysing natural conversations. These conversations were between triads of older adults in different levels and types of noise. The analysis focused on the observer during turn switches between the other two parties using head orientation (i.e. saccades from one talker to the next) to identify when their focus moved from one talker to the next. For non-overlapping utterances, observers started to turn to the upcoming talker before the prior talker had finished speaking in 17% of turn switches (going up to 26% when accounting for motor-planning time). For overlapping utterances, observers started to turn towards the interrupter before they interrupted in 18% of turn switches (going up to 33% when accounting for motor-planning time). The timing of head turns was more precise at lower than higher noise levels, and was not affected by noise type. These findings demonstrate that listeners in natural group conversation situations often exhibit head movements that anticipate the end of one conversational turn and the beginning of another. Furthermore, this work demonstrates the value of analysing head movement as a cue to social attention, which could be relevant for advancing communication technology such as hearing devices.
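The key quantity in the analysis above is the proportion of turn switches in which the observer's head turn begins before the prior talker stops speaking, optionally crediting an allowance for motor planning. A minimal sketch of that computation is shown below; the event times, field names, and the 200 ms planning allowance are assumptions, not values from the study.

    # Hypothetical turn-switch records: when the prior talker stopped speaking and
    # when the observer's head began turning towards the upcoming talker (seconds).
    turn_switches = [
        {"turn_end": 12.40, "head_turn_onset": 12.15},
        {"turn_end": 25.10, "head_turn_onset": 25.25},
        {"turn_end": 41.75, "head_turn_onset": 41.70},
    ]

    MOTOR_PLANNING_S = 0.2   # assumed planning allowance (seconds)

    def proportion_anticipatory(switches, planning=0.0):
        """Fraction of switches where the head-turn decision (onset minus planning time)
        precedes the end of the prior talker's utterance."""
        hits = sum(1 for s in switches if s["head_turn_onset"] - planning < s["turn_end"])
        return hits / len(switches)

    print("anticipatory (raw):", proportion_anticipatory(turn_switches))
    print("anticipatory (with planning allowance):",
          proportion_anticipatory(turn_switches, MOTOR_PLANNING_S))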
... To infer the intentions of others, evolution has equipped humans with dedicated brain mechanisms allowing the fast and automatic processing of social cues (Emery, 2000; Perrett et al., 1985; Perrett & Emery, 1994). The disadvantage of this automatic processing is that social cues can hardly be suppressed (Langton, 2000; Langton & Bruce, 1999; Otsuka, Mareschal, Calder, & Clifford, 2014) and provoke a conflict of information in the head-fake situation. In fact, it has been shown that the opponent's head orientation (but not the gaze direction) causes the head-fake effect (Weigelt, Güldenpenning, & Steggemann-Weinrich, 2020). ...
Article
Typically, head fakes in basketball are produced to, and actually do, impair performance on the observer's side. However, potential costs on the producer's side of a fake action have rarely been investigated. It is thus not yet clear whether the benefit of performing a head fake (i.e., slowed reactions in the observer) is overestimated because of concurrently arising fake production costs (i.e., slowed performance in the producer of a head fake). Therefore, we studied potential head-fake production costs in two experiments. Novice participants were asked to generate passes to the left or right side, either with or without head fakes. In Experiment 1, these actions were determined by an auditory stimulus (i.e., a 440 Hz or 1200 Hz sine or sawtooth wave). After an interstimulus interval (ISI) of either 0 ms, 800 ms, or 1500 ms, which served to prepare the action, the cued action had to be executed. In Experiment 2, passing to the left or right, either with or without a head fake, was determined by a visual stimulus (i.e., a player with a red or blue jersey defending either the right or left side). After an ISI of either 0 ms, 400 ms, 800 ms, or 1200 ms, the cued action had to be executed. In both experiments, we observed higher reaction times (RTs) for passes with head fakes than for passes without head fakes at no or intermediate preparation intervals (ISIs of 0 ms to 800 ms), but no difference at long preparation intervals (ISIs of 1200 ms and 1500 ms). Both experiments show that generating fake actions produces performance costs; however, these costs can be overcome by a longer preparation phase before movement execution.
... Ro et al. [6] describe a relationship between the human face and attention, which arises when an individual attends to a particular phenomenon or event of interest. Langton and Bruce [7] state that facial stimuli such as the position of the head and eyes indicate attention to a particular scene; in particular, if eye tracking is performed, the attention of that individual can be captured [8]. ...
Article
Full-text available
The level of attention of students who receive classes through videoconferencing platforms is troubling. Unlike in a face-to-face class, where the instructor can perceive the participants' behavior throughout an entire lecture, in an online class it is difficult to determine whether they are attentive to the instructions given. An innovative method that helps solve this problem is the use of computer vision algorithms, with methods such as face detection, facial landmarks, face recognition, and head pose detection based on deep learning networks. In this paper, a neural network was trained using facial landmarks to estimate head position and thus attention levels. The model was applied in five classes using different videoconferencing platforms in the Intelligent Systems subject of a private university in Ecuador. Some limitations, such as lighting and video quality, affected the level of accuracy. The number of registered participants was 12, of which between 30% and 80% attended. The maximum level of attention detected was 91.9%, while the minimum level was 86.6%. This case study shows that the head position detection function employed by many videoconferencing platforms is a useful parameter that aids the instructor in this type of context.
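The head-pose idea used in this study can be illustrated with a standard perspective-n-point computation: a few detected 2-D facial landmarks are matched to a generic 3-D face model, the head rotation is recovered, and a crude yaw/pitch threshold stands in for an "attentive" judgement. Everything numeric below (the model points, the invented landmark coordinates, the camera intrinsics, and the thresholds) is an assumption for illustration; the paper itself trains a neural network on landmarks rather than using solvePnP.

    # Sketch: recover head pose from 2-D facial landmarks via solvePnP, then apply
    # an assumed yaw/pitch threshold as a stand-in for an attention estimate.
    import numpy as np
    import cv2

    # Generic 3-D face model points (mm): nose tip, chin, eye corners, mouth corners.
    model_points = np.array([
        [0.0, 0.0, 0.0], [0.0, -63.6, -12.5],
        [-43.3, 32.7, -26.0], [43.3, 32.7, -26.0],
        [-28.9, -28.9, -24.1], [28.9, -28.9, -24.1],
    ], dtype=np.float64)

    # Hypothetical detected landmarks (pixels) in a 640x480 frame.
    image_points = np.array([
        [320, 240], [318, 330], [260, 200], [380, 200], [285, 290], [355, 290],
    ], dtype=np.float64)

    w, h = 640, 480
    camera_matrix = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))   # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)
    rot_mat, _ = cv2.Rodrigues(rvec)
    euler = cv2.decomposeProjectionMatrix(camera_matrix @ np.hstack([rot_mat, tvec]))[-1]
    pitch, yaw, roll = euler.flatten()   # degrees

    attentive = abs(yaw) < 25 and abs(pitch) < 20   # crude, assumed thresholds
    print(f"yaw={yaw:.1f} deg, pitch={pitch:.1f} deg, attentive={attentive}")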
... Additionally, a schematic face cue with eye gaze instead of arrows also caused an illusory line motion (Bavelier et al. 2002). In humans, gaze cueing causes a reflexive shift of attention, as with a peripheral cue, rather than a voluntary shift, as with an arrow cue (Driver et al. 1999; Langton and Bruce 1999). Taken together, these results suggest that illusory line motion is perceived from the side of the reflexive attentional shift, whether it is caused by the peripheral cue or the centrally presented gaze cue [although some researchers argue that attentional shifts do not cause illusory line motion (Downing and Treisman 1997)]. ...
Article
When a row of objects surrounded by a frame suddenly shifts a certain distance so that part of the row is occluded by the frame, humans perceive ambiguous apparent motion either to the left or the right. However, when the objects have “directionality,” humans perceive them as moving forward in the direction in which they are pointing, which is termed forward-facing motion bias. In the present study, five experiments were conducted to address whether, and if so how, physical properties or prior knowledge about the objects affected the perception of their apparent motion in two juvenile chimpanzees (Pan troglodytes). In experiment 1, the chimpanzees did not show a clear forward-facing bias in judging the direction of motion when directed triangles were presented, whereas the human participants did. In contrast, when pictures of the lateral view of chimpanzees with quadrupedal postures were shown, there was a clear bias for going “forward” with regards to the side with the head (experiment 2). We presented pictures of dogs looking back to explore what features caused the forward-facing motion bias (experiment 3). Chimpanzees did not show any bias for these stimuli, suggesting that the direction of the head and body interactively affected the perceptual bias. Experiment 4 tested the role of the head and found that only the lateral view of the heads of chimpanzees or humans caused the bias (experiment 4). Additional tests also showed that the chimpanzees could not solve the task based only on the direction of the stimuli without motion (experiment 5). These results indicate that the perception of motion in the chimpanzees was affected by the biological features of the stimuli, suggesting their prior knowledge of the “body” from a biological (morphological and kinetic) perspective.
... Recently, we found the reversed spatial Stroop effect of gaze was unaffected by the face inversion in the gaze spatial Stroop task, suggesting that the reversed spatial Stroop effect is processed independently of the face-specific processing (i.e., holistic processing of the face) and, thus, depends on part-based processing of the eyes (Tanaka et al., 2022). This result calls into question the social accounts of the reversed spatial Stroop effect because gaze facilitation, which has been explained by the social role of eye gaze, such as eye contact and joint attention, was reduced or eliminated by the inverted faces (Kingstone et al., 2000; Langton & Bruce, 1999; Senju et al., 2005, 2008). Differently from the previous accounts (Cañadas & Lupiáñez, 2012; Edwards et al., 2020; Hemmerich et al., 2022; Ishikawa et al., 2021; Marotta et al., 2018), the dual-stage model can account for the finding of inverted faces. ...
Preprint
Full-text available
In the spatial Stroop task, an arrow target produces a spatial Stroop effect, whereas a gaze target elicits a reversed spatial Stroop effect. The reversed spatial Stroop effect has been explained by the unique attentional mechanisms of eye gaze. However, recent studies have shown that not only gaze but arrow targets produced a reversed spatial Stroop effect when embedded in a complex background. The present study investigated whether non-social targets produce a reversed spatial Stroop effect. We used the tongue, which generally does not convey social information, as a target in the spatial Stroop task, in addition to the conventional gaze and arrows. Participants judged the left/right direction of the target presented in the left or right visual field. While arrow and gaze targets replicated previous findings (spatial Stroop and reversed spatial Stroop effect, respectively), the tongue target produced a reversed spatial Stroop effect. These results are inconsistent with previous accounts emphasizing the unique status of eye gaze. We propose that temporal decay of the location code and response inhibition are responsible for the reversal of spatial interference.
... Many studies have demonstrated that perceiving someone else's gaze induces attention to orient toward the direction of the gaze, even when the direction indicated by the gaze is task irrelevant (Friesen and Kingstone, 1998; Langton and Bruce, 1999) or detrimental to the behavioral goal (Driver et al., 1999; Downing et al., 2004; Friesen et al., 2004). Given that our ability to interpret the gaze of others is vital for communicating with those around us, it seems plausible that the attentional mechanisms that underlie the processing of gaze may differ from those that underlie the processing of ordinary stimuli such as arrows (see Langton et al., 2000; Friesen et al., 2007, for reviews). ...
Article
Full-text available
Among the studies on the perception of gaze vs. non-gaze stimuli, some have shown that the two types of stimuli trigger different patterns of attentional effects, while others have reported no such differences. In three experiments, we investigated the role of stimulus perceivability in spatial interference effects when the targets were gaze vs. non-gaze stimuli. We used a spatial Stroop task that required participants to make a speeded response to the direction indicated by the targets located on the left or right side of fixation. In different experiments, the targets consisted of eyes, symbols, and/or arrows. The results showed that the magnitude of the spatial congruency effect differed between the types of targets when stimulus perceivability was not controlled. However, when the perceivability of the task relevant parts was comparable between the different types of targets, similar congruency effects were found regardless of target type. These results underscore the importance of controlling for stimulus perceivability, which is closely linked to the attentional zoom required to perform a task, when making inferences about the attentional mechanisms in the processing of gaze vs. non-gaze stimuli.
... It has been found that the nonpredictive gaze direction cue could trigger faster reaction times (RT) to targets in the congruent condition than in the incongruent condition. This effect occurs very fast and even when the gaze direction is counterpredictive of the target location, thus, disclosing its reflexive nature (Driver et al., 1999; Friesen & Kingstone, 1998; Friesen et al., 2004; Frischen et al., 2007; Langton & Bruce, 1999). ...
Article
Full-text available
Social directional cues (e.g., gaze direction; walking direction) can trigger reflexive attentional orienting, a phenomenon known as social attention. Here, we examined whether this reflexive social attention could be modulated by the emotional content embedded in social cues. By introducing emotional (happy and sad) biological motion (BM) stimuli to the modified central cuing paradigm, we found that the happy but not the sad emotional gait could significantly boost attentional orienting effect relative to the neutral gait. Critically, this "happiness advantage" effect could be extended to social attention induced by gaze. Furthermore, the observed differential emotional modulations could not be simply explained by low-level physical differences between the emotional stimuli, as inverted social cues (i.e., BM and face) failed to produce such modulation effects. Overall, these findings highlight the role of emotional information in modulating the processing of social signals, and further suggest the existence of a general emotional modulation on social attention triggered by different types of social signals. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
... Participants are asked to respond as quickly as possible to the identity of the target letter by pressing a button on the keyboard. People typically respond faster to targets appearing in the same location gazed at by the cue face (i.e., gaze-congruent trials) than to targets appearing in a location opposite to that gazed at by the cue face (i.e., gaze-incongruent trials), even though gaze direction is irrelevant to the task, thus indicating an automatic nature of social attention (Friesen and Kingstone, 1998; Driver et al., 1999; Langton and Bruce, 1999; Dalmaso et al., 2020). The Simon effect (Simon, 1969, 1990) is characterized by a faster and more accurate performance when stimulus position and response position spatially correspond (i.e., corresponding condition) compared to when they do not correspond (i.e., non-corresponding condition), even though stimulus position is irrelevant to the task (e.g., Pellicano et al., 2009, 2019; Baroni et al., 2012; Lugli et al., 2013, 2016, 2017; Scerrati et al., 2017; D'Ascenzo et al., 2018, 2020). ...
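Since the study described below crosses gaze congruency with Simon correspondence, both effects reduce to difference scores over the same trial set, and the interaction is simply the gaze-cueing effect computed within each Simon condition. The sketch below shows that computation on invented trial records; the field names and RT values are assumptions.

    # Hypothetical trials crossing gaze congruency with Simon correspondence.
    from statistics import mean

    trials = [
        {"gaze": "congruent", "simon": "corresponding", "rt": 432},
        {"gaze": "incongruent", "simon": "corresponding", "rt": 455},
        {"gaze": "congruent", "simon": "noncorresponding", "rt": 470},
        {"gaze": "incongruent", "simon": "noncorresponding", "rt": 474},
        {"gaze": "congruent", "simon": "corresponding", "rt": 428},
        {"gaze": "incongruent", "simon": "corresponding", "rt": 449},
        {"gaze": "congruent", "simon": "noncorresponding", "rt": 468},
        {"gaze": "incongruent", "simon": "noncorresponding", "rt": 471},
    ]

    def mean_rt(gaze, simon):
        return mean(t["rt"] for t in trials if t["gaze"] == gaze and t["simon"] == simon)

    for simon in ("corresponding", "noncorresponding"):
        gce = mean_rt("incongruent", simon) - mean_rt("congruent", simon)
        print(f"Gaze-cueing effect in Simon-{simon} trials: {gce:.1f} ms")

    simon_effect = (mean(t["rt"] for t in trials if t["simon"] == "noncorresponding")
                    - mean(t["rt"] for t in trials if t["simon"] == "corresponding"))
    print(f"Overall Simon effect: {simon_effect:.1f} ms")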
Article
Full-text available
Recent studies suggest that covering the face inhibits the recognition of identity and emotional expressions. However, it might also make the eyes more salient, since they are a reliable index to orient our social and spatial attention. This study investigates (1) whether the pervasive interaction with people with face masks fostered by the COVID-19 pandemic modulates the processing of spatial information essential to shift attention according to other’s eye-gaze direction (i.e., gaze-cueing effect: GCE), and (2) whether this potential modulation interacts with motor responses (i.e., Simon effect). Participants were presented with face cues orienting their gaze to a congruent or incongruent target letter location (gaze-cueing paradigm) while wearing a surgical mask (Mask), a patch (Control), or nothing (No-Mask). The task required to discriminate the identity of the lateralized target letters by pressing one of two lateralized response keys, in a corresponding or a non-corresponding position with respect to the target. Results showed that GCE was not modulated by the presence of the Mask, but it occurred in the No-Mask condition, confirming previous studies. Crucially, the GCE interacted with Simon effect in the Mask and Control conditions, though in different ways. While in the Mask condition the GCE emerged only when target and response positions corresponded (i.e., Simon-corresponding trials), in the Control condition it emerged only when they did not correspond (i.e., Simon-non-corresponding trials). These results indicate that people with face masks induce us to jointly orient our visual attention in the direction of the seen gaze (GCE) in those conditions resembling (or associated with) a general approaching behavior (Simon-corresponding trials). This is likely promoted by the fact that we tend to perceive wearing the mask as a personal safety measure and, thus, someone wearing the face mask is perceived as a trustworthy person. In contrast, people with a patch on their face can be perceived as more threatening, therefore inducing a GCE in those conditions associated with a general avoidance behavior (Simon-non-corresponding trials).
... Neurofunctional data (see Ulloa and George, this journal issue) suggest that pointing gestures and eye gaze elicit similar attentional responses (Langton & Bruce, 2000), and share a basis of information instantiated by common neuronal structures, mainly the superior temporal sulcus (Sato et al., 2009). This is significant in relation to the distinction between peripersonal and extrapersonal space (the spatial dimension targeted by pointing), as we know that the perception of peripersonal space is modulated by the vocabulary of motor actions mapped in the premotor areas (Rizzolatti et al., 1988), while the extra-personal space is mainly mapped by intraparietal areas related to gaze direction and attention fixation: that is why the manipulation of extrapersonal space "is mainly subserved by oculomotor circuits, in which spatial information arises from neurons whose receptive fields are coded in retinal coordinates" (Neppi-Mòdona et al., 2007). ...
Article
Full-text available
We call those gestures "instrumental" that can enhance certain thinking processes of an agent by offering him representational models of his actions in a virtual space of imaginary performative possibilities. We argue that pointing is an instrumental gesture in that it represents geometrical information on one's own gaze direction (i.e., a spatial model for attentional/ocular fixation/orientation), and provides a ritualized template for initiating gaze coordination and joint attention. We counter two possible objections, asserting respectively that the representational content of pointing is not constitutive, but derived from language, and that pointing directly solicits gaze coordination, without representing it. We consider two studies suggesting that attention and spatial perception are actively modified by one's own pointing activity: the first study shows that pointing gestures help children link sets of objects to their corresponding number words; the second, that adults are faster and more accurate in counting when they point.
... Looking at others in social encounters is probably the most basic nonverbal signal of human communication, but above all our number one channel for gathering information about our social environment (Risko, Richardson, & Kingstone, 2016). This function is so fundamentally important that we reflexively orient towards others' faces when they appear in our visual field (Kingstone, Kachkovski, Vasilyev, Kuk, & Welsh, 2019; Langton & Bruce, 1999; Risko et al., 2016). ...
Article
Full-text available
More than half of the world's population is currently living in cities, with more and more people moving to densely populated areas. The experience of growing up and living in crowded environments might influence the way we explore our social environment, mainly how we attend to others. Yet, we know little about how urbanicity affects this vital function of our social life. In two studies, we use mobile eye-tracking to measure participants' social attention while walking through a shopping mall. Results show that the social density of participants' native place impacts how frequently they look at passing strangers. People who experienced more city living from birth to early adolescence attend more to strangers' faces than their rural counterparts. Our findings demonstrate that the early experience of urban upbringing configures social attention in adulthood. The urbanicity-related bias towards social gazing might reflect a more efficient processing of social information in urban natives.
... Experiments using various cueing paradigms have shown faster reaction times when the gaze of the face is directed towards the target, compared to gaze directed in the opposite direction or directed straight ahead 53,54. Evidence 55,56 suggests that this orientation of attention is an automatic reflex. Congiu et al. 57 compared spontaneous gaze following in ASD children (mean age 5.8 years) to that of typically developing children (mean age 5.7 years). ...
Article
Full-text available
Autism spectrum disorder (ASD) is a broad diagnostic category describing a group of neurodevelopmental disorders which includes the autistic disorder. Failure to develop normal social relationships is a hallmark of autism. An inability to understand and cope with the social environment can occur regardless of IQ. One of the hypotheses about the appearance of ASD symptoms is associated with the theory of mind (TOM). ASD patients do not have the ability to attribute the full range of mental states (goal states and epistemic states) to themselves and to others. Eye-tracking allows for observation of early signs of TOM in ASD individuals, even before they are 1 year old, without the need for developed motor and language skills. This provides a window for looking at the very basics of mindreading: detecting intentionality and eyes in our environment. Studies show that ASD children fail to recognize biological motion, while being highly sensitive to physical contingency within the random movement. Their perception of faces seems disorganized and undirected, while object recognition is intact. Evidence suggests that this orientation of attention following gaze cues is diminished in ASD patients. Available data also show deficits in emotion recognition that cannot be accounted for by impairments in face processing or visual modality alone. Such observations provide an insight into disturbances of information processing and offer an explanation for poor social functioning of ASD patients. When combined with other methods, eye-tracking has the potential to reveal differences in processing information on a neural circuitry level. Thus, it may help in understanding the complexity of TOM mechanisms and their role in social functioning.
... The direct gaze of others means that attention is directed to the observer, while averted gaze implies that the attention of the other is directed to the environment; consequently, averted gaze may also cause the observer to make reflexive shifts of attention toward the environment [8]. The behavioral index of such a joint shift of attention is called the gaze-cueing effect [9] in which human observers have faster saccadic or manual reaction times (RT) to objects appearing at the gaze-congruent locations compared with objects presented in gaze-incongruent locations [9][10][11]. ...
Article
Full-text available
The ability to adaptively follow conspecific eye movements is crucial for establishing shared attention and survival. Indeed, in humans, interacting with the gaze direction of others causes reflexive orienting of attention and faster object detection at the signaled spatial location. The behavioral evidence of this phenomenon is called gaze-cueing. Although this effect can be conceived as automatic and reflexive, gaze-cueing is often susceptible to context. In fact, gaze-cueing was shown to interact with other factors that characterize the facial stimulus, such as the kind of cue that induces attention orienting (i.e., gaze or non-symbolic cues) or the emotional expression conveyed by the gaze cues. Here, we address neuroimaging evidence, investigating the neural bases of gaze-cueing and the perception of gaze direction and how contextual factors interact with the gaze shift of attention. Evidence from neuroimaging, as well as the fields of non-invasive brain stimulation and neurologic patients, highlights the involvement of the amygdala and the superior temporal lobe (especially the superior temporal sulcus (STS)) in gaze perception. However, in this review, we also emphasize the discrepancies among the attempts to characterize the distinct functional roles of the regions in the processing of gaze. Finally, we conclude by presenting the notion of invariant representation and underline its value as a conceptual framework for the future characterization of the perceptual processing of gaze within the STS.
Chapter
The human body conveys social information through a myriad of cues, one of the most important being emotion. Perception of emotion in faces has been extensively researched, with a particular focus on the impact of eye gaze on emotion recognition. Within face perception work, the social functional approach was developed to explain how combined processing of social cues (e.g., emotion, eye gaze, race, gender/sex) is highly adaptive and necessary to facilitate social interaction. Recently, computer models and machine learning have been incorporated into person perception research to gain a clearer understanding of how physiognomic cues interact with emotion information to inform emotion perception. In this chapter, we review past work examining emotion perception at the intersection of race and gender/sex and highlight important findings from human perceivers and computer models. We also address important ethical considerations involved in using new technology to conduct social perception research and emphasize the critical role of Intersectionality.
Article
Full-text available
Attending to other people's gaze is evolutionary important to make inferences about intentions and actions. Gaze influences covert attention and triggers eye movements. However, we know little about how the brain controls the fine-grain dynamics of eye movements during gaze following. Observers followed people's gaze shifts in videos during search and we related the observer eye movement dynamics to the time course of gazer head movements extracted by a deep neural network. We show that the observers' brains use information in the visual periphery to execute predictive saccades that anticipate the information in the gazer's head direction by 190-350 ms. The brain simultaneously monitors moment-to-moment changes in the gazer's head velocity to dynamically alter eye movements and re-fixate the gazer (reverse saccades) when the head accelerates before the initiation of the first forward gaze-following saccade. Using saccade-contingent manipulations of the videos, we experimentally show that the reverse saccades are planned concurrently with the first forward gaze-following saccade and have a functional role in reducing subsequent errors fixating on the gaze goal. Together, our findings characterize the inferential and functional nature of social attention's fine-grain eye movement dynamics.
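The claim that observers' saccades anticipate the information in the gazer's head direction by 190-350 ms is, at its core, a statement about the lag that best aligns two time series. As a loose illustration only (not the authors' analysis pipeline), the sketch below estimates such a lag by cross-correlating an observer gaze-direction trace with a gazer head-direction trace; the signals are synthetic and the sampling rate is an assumption.

    # Illustrative lag estimate between a gazer head-direction signal and an
    # observer gaze-direction signal, using cross-correlation. Synthetic data only.
    import numpy as np

    fs = 100                       # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)   # 10 s of samples
    rng = np.random.default_rng(0)

    head_direction = np.sin(2 * np.pi * 0.3 * t)      # gazer head angle (arbitrary units)
    true_lead_s = 0.25                                 # observer leads by 250 ms in this toy example
    lead_samples = int(true_lead_s * fs)
    observer_gaze = np.roll(head_direction, -lead_samples) + 0.1 * rng.standard_normal(t.size)

    # Cross-correlate the zero-mean signals and find the lag with the highest correlation.
    a = head_direction - head_direction.mean()
    b = observer_gaze - observer_gaze.mean()
    lags = np.arange(-t.size + 1, t.size)
    xcorr = np.correlate(b, a, mode="full")
    best_lag_s = lags[np.argmax(xcorr)] / fs

    # With np.correlate(b, a), a peak at a negative lag means the observer's gaze leads.
    print(f"estimated observer lead: {-best_lag_s * 1000:.0f} ms")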
Article
In the present study, we examined the effects of the other's triadic attention to objects on visual search performances in chimpanzees. We found the search-asymmetry-like effect of the other's attentional state; the chimpanzees searched a target object not attended by the other individual more efficiently than that attended (Experiment 1). Additional experiments explored the possibility that the other individual "holding an object but not looking at it" led to expectancy violation (Experiment 2) or the role of nonsocial cues such as the proximity relation between the head and the object (Experiment 3). Still, these accounts alone did not explain this effect. It was also shown that the other's attentional state affected the chimpanzees' performances more readily as the interference effect than the facilitation effect (Experiment 4). Furthermore, the same effect was observed in the visual search for the gaze (head direction) of others (Experiment 5). We obtained the same results using photographs of chimpanzees (Experiment 6). Contrary to the chimpanzees, humans detected the object to which attention was directed more efficiently than vice versa (Experiment 7). The present results may reflect species differences between chimpanzees and humans in processing triadic social attention.
Article
Background: Robots are being designed to alleviate the burden of social isolation and loneliness, particularly among older adults for whom these issues are more widespread. While good intentions underpin these developments, the reality is that many of these robots are abandoned within a short period of time. To encourage the longer-term use and utility of such robots, researchers are exploring ways to increase robot likeability and facilitate attachment. Results from experimental psychology suggest that interpersonal synchrony (the overlap of movement/sensation between two agents) increases the extent to which people like one another. Methods: To investigate the possibility that synchrony could facilitate people’s liking towards a robot, we undertook a between-subjects experiment in which participants interacted with a robot programmed to illuminate at the same rate as, or 20% slower than, their heart rate. To quantify the impact of cardio-visual synchrony on prosocial attitudes and behaviors toward this robot, participants completed self-report questionnaires, a gaze-cueing task, and were asked to strike the robot with a mallet. Results: Contrary to pre-registered hypotheses, results revealed no differences in self-reported liking of the robot, gaze cueing effects, or the extent to which participants hesitated to hit the robot between the synchronous and asynchronous groups. Conclusions: The quantitative data described above, as well as qualitative data collected in semi-structured interviews, provided rich insights into people’s behaviours and thoughts when socially engaging with a humanoid social robot, and called into question the use of the broad “Likeability” measurement, and the appropriateness of the ‘hesitance to hit’ paradigm as a measure of attachment to a robotic system.
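The synchrony manipulation described here (a robot light pulsing at the participant's heart rate, or at a rate 20% slower) amounts to mapping a stream of inter-beat intervals onto pulse onset times. The sketch below shows that mapping under assumed values; it is not the authors' control code, and the interval data are invented.

    # Map measured inter-beat intervals (seconds) onto robot pulse times for a
    # synchronous condition and an asynchronous (20% slower) condition.
    inter_beat_intervals = [0.82, 0.80, 0.85, 0.79, 0.83]   # hypothetical values

    def pulse_times(ibis, rate_scale=1.0):
        """Cumulative pulse onsets; rate_scale < 1 slows the pulse rate by lengthening intervals."""
        times, t = [], 0.0
        for ibi in ibis:
            t += ibi / rate_scale
            times.append(round(t, 3))
        return times

    print("synchronous:  ", pulse_times(inter_beat_intervals, rate_scale=1.0))
    print("asynchronous: ", pulse_times(inter_beat_intervals, rate_scale=0.8))   # 20% slower rate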
Article
Recent work suggests that age-related hearing loss (HL) is a possible risk factor for cognitive decline in older adults. Resulting poor speech recognition negatively impacts cognitive, social and emotional functioning and may relate to dementia. However, little is known about the consequences of hearing loss on other non-linguistic domains of cognition. The aim of this study was to investigate the role of HL on covert orienting of attention, selective attention and executive control. We compared older adults with and without mild to moderate hearing loss (26-60 dB) performing (1) a spatial cueing task with uninformative central cues (social vs. nonsocial cues), (2) a flanker task and (3) a neuropsychological assessment of attention. The results showed that overall response times and flanker interference effects were comparable across groups. However, in spatial cueing of attention using social and nonsocial cues, hearing impaired individuals were characterized by reduced validity effects, though no additional group differences were found between social and nonsocial cues. Hearing impaired individuals also demonstrated diminished performance on the Montreal Cognitive Assessment (MoCA) and on tasks requiring divided attention and flexibility. This work indicates that while response speed and response inhibition appear to be preserved following mild-to-moderate acquired hearing loss, orienting of attention, divided attention and the ability to flexibly allocate attentional resources are more deteriorated in older adults with HL. This work suggests that hearing loss might exacerbate the detrimental influences of aging on visual attention.
Article
It has been proposed that humans automatically compute the visual perspective of others. Evidence for this view comes from the Dot Perspective Task. In this task, participants view a room in which a human actor is depicted, looking either leftwards or rightwards. Dots can appear on either the left wall of the room, the right wall, or both. At the start of each trial, participants are shown a number. Their speeded task is to decide whether the number of dots visible matches the number shown. On consistent trials the participant and the actor can see the same number of dots. On inconsistent trials, the participant and the actor can see a different number of dots. Participants respond faster on consistent trials than on inconsistent trials. This self-consistency effect is cited as evidence that participants compute the visual perspective of others automatically, even when it impedes their task performance. According to a rival interpretation, however, this effect is a product of attention cueing: slower responding on inconsistent trials simply reflects the fact that participants' attention is directed away from some or all of the to-be-counted dots. The present study sought to test these rival accounts. We find that desk fans, a class of inanimate object known to cue attention, also produce the self-consistency effect. Moreover, people who are more susceptible to the effect induced by fans tend to be more susceptible to the effect induced by human actors. These findings suggest that the self-consistency effect is a product of attention cueing.
Article
Studies have shown that children use the direction of an adult's gaze to identify the referent of a new word (Baldwin, 1991). This article explores the possibility that gaze orients attention towards an object because of the directional nature of gaze. In the first study, we show that 24-month-old children can map a new word onto a new object if the object appears at a location identified by a non-referential cue (i.e., flashing lights). However, the results of the following study indicate that gaze direction does not function like non-referential cues. Indeed, gaze towards a specific object allows children to associate that object with a new word, but gaze does not permit this association if it is the object's location that is targeted by the gaze. These results suggest that children treat gaze direction as a marker of the speaker's intentionality.
Article
Full-text available
Five experiments are reported that investigate the distribution of selective attention to verbal and nonverbal components of an utterance when conflicting information exists in these channels. A Stroop-type interference paradigm is adopted in which attributes from the verbal and nonverbal dimensions are placed into conflict. Static directional (deictic) gestures and corresponding spoken and written words show symmetrical interference (Experiments 1, 2, and 3), as do directional arrows and spoken words (Experiment 4). This symmetry is maintained when the task is switched from a manual keypress to a verbal naming response (Experiment 5), suggesting the mutual influence of the 2 dimensions is independent of spatial stimulus–response compatibility. It is concluded that the results are consistent with a model of interference in which information from pointing gestures and speech is integrated prior to the response selection stage of processing.
Article
Full-text available
We measured manual reaction time in normal human subjects to confirm that an eccentric visual signal has a biphasic effect on covert attention and eye movements. First, it summons attention and biases a saccade toward the signal; a subsequent inhibition of return then slows responses to signals at that location. A temporal hemifield dominance for inhibition of return was shown; this finding converges with observations in neurologic patients to suggest that it is mediated by midbrain pathways. Endogenous orienting of attention, from a central arrow cue, did not activate inhibition of return, whereas endogenous saccade preparation did so as effectively as an exogenous signal, even when no saccade was made. Inhibition of return is activated by midbrain oculomotor pathways and may function as a location “tagging” mechanism to optimize efficiency of visual search.
Article
Full-text available
To study the mechanisms underlying covert orienting of attention in visual space, subjects were given advance cues indicating the probable locations of targets that they had to discriminate and localize. Direct peripheral cues (brightening of one of four boxes in peripheral vision) and symbolic central cues (an arrow at the fixation point indicating a probable peripheral box) were compared. Peripheral and central cues are believed to activate different reflexive and voluntary modes of orienting (Jonides, 1981; Posner, 1980). Experiment 1 showed that the time courses of facilitation and inhibition from peripheral and central cues were characteristic and different. Experiment 2 showed that voluntary orienting in response to symbolic central cues is interrupted by reflexive orienting to random peripheral flashes. Experiment 3 showed that irrelevant peripheral flashes also compete with relevant peripheral cues. The amount of interference varied systematically with the interval between the onset of the relevant cue and of the distracting flash (cue-flash onset asynchrony) and with the cuing condition. Taken together, these effects support a model for spatial attention with distinct but interacting reflexive and voluntary orienting mechanisms.
Article
Full-text available
In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
Article
Full-text available
A goal of neuropsychology is to connect cognitive functions with underlying neural systems. Posner (1984; in press) has proposed a framework for doing so in which elementary mental operations in cognitive models are expressed in terms of component facilitations and inhibitions in the performance of normal persons. Studies of brain-injured patients are used to link these components to underlying neural systems. In the area of spatial attention one such component is the tendency to inhibit orienting towards visual locations which have been previously attended (inhibition of return). Here we report studies in patients and normals which demonstrate the relationship of this component to neural systems which generate saccades. The first experiment showed that midbrain lesions impairing saccade generation produced a concurrent loss of the inhibition of return, whereas cortical lesions shown to impair facilitatory components did not. The second and third experiments show that the inhibition of return is associated with a bias in eye movements against returning to a previously inhibited location and indicate that inhibition of return occurs even when the eyes are moved to an unchanging visual target. The deficits found in patients and the conditions under which the inhibition is found in normals suggest that inhibition of return may function to favour foveation of information at new locations.
Article
Full-text available
Faces, as a class of objects, have been studied extensively in order to understand how the human visual system recognizes and represents objects. In this paper we studied the ontogeny of the ability to perceive gaze direction. We bring together both developmental research and neurophysiological and neuropsychological research in order to address this issue. In two experiments we explored the developmental time course of the ability to discriminate between direct and averted gaze, a task thought to involve cortical information processing of faces. We found that (a) infants as young as four months could discriminate between direct and averted gaze, (b) this ability was not due to the development of low-level visual processes, and (c) younger infants did not show reliable evidence of gaze discrimination. In an additional experiment we tested adults to study the effect of face context on the ability to discriminate gaze direction. Adult subjects were more sensitive in this discrimination when the eyes were in the context of an upright face than when the eyes were in either an inverted face or in a scrambled face. Taken together, these results suggest that the mechanisms underlying gaze detection may be mediated by cortical circuits also involved in other aspects of face recognition.
Article
Full-text available
Reports 5 experiments conducted with 52 paid Ss in which detection of a visual signal required information to reach a system capable of eliciting arbitrary responses required by the experimenter. Detection latencies were reduced when Ss received a cue indicating where the signal would occur. This shift in efficiency appears to be due to an alignment of the central attentional system with the pathways to be activated by the visual input. It is also possible to describe these results as being due to a reduced criterion at the expected target position. However, this ignores important constraints about the way in which expectancy improves performance. A framework involving a limited-capacity attentional mechanism seems to capture these constraints better than the more general language of criterion setting. Using this framework, it was found that attention shifts were not closely related to the saccadic eye movement system. For luminance detection, the retina appears to be equipotential with respect to attention shifts, since costs to unexpected stimuli are similar whether foveal or peripheral. (26 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Compared memory for faces with memory for other classes of familiar and complex objects which, like faces, are also customarily seen only in 1 orientation (mono-oriented). Performance of 4 students was tested when the inspection and test series were presented in the same orientation, either both upright or both inverted, or when the 2 series were presented in opposite orientations. The results show that while all mono-oriented objects tend to be more difficult to remember when upside-down, faces are disproportionately affected. These findings suggest that the difficulty in looking at upside-down faces involves 2 factors: a general factor of familiarity with mono-oriented objects, and a special factor related only to faces. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
When a stimulus appears in a previously cued location several hundred milliseconds after the cue, the time required to detect that stimulus is greater than when it appears in an uncued location. This increase in detection time is known as inhibition of return (IOR). It has been suggested that IOR reflects the action of a general attentional mechanism that prevents attention from returning to previously explored loci. At the same time, the robustness of IOR has been recently disputed, given several failures to obtain the effect in tasks requiring discrimination rather than detection. In a series of eight experiments, we evaluated the differences between detection and discrimination tasks with regard to IOR. We found that IOR was consistently obtained with both tasks, although the temporal parameters required to observe IOR were different in detection and discrimination tasks. In our detection task, the effect appeared after a 400-msec delay between cue and target, and was still present after 1,300 msec. In our discrimination task, the effect appeared later and disappeared sooner. The implications of these data for theoretical accounts of IOR are discussed.
Article
Full-text available
This paper seeks to bring together two previously separate research traditions: research on spatial orienting within the visual cueing paradigm and research into social cognition, addressing our tendency to attend in the direction that another person looks. Cueing methodologies from mainstream attention research were adapted to test the automaticity of orienting in the direction of seen gaze. Three studies manipulated the direction of gaze in a computerized face, which appeared centrally in a frontal view during a peripheral letter-discrimination task. Experiments 1 and 2 found faster discrimination of peripheral target letters on the side the computerized face gazed towards, even though the seen gaze did not predict target side, and despite participants being asked to ignore the face. This suggests reflexive covert and/or overt orienting in the direction of seen gaze, arising even when the observer has no motivation to orient in this way. Experiment 3 found faster letter discrimination on the side the computerized face gazed towards even when participants knew that target letters were four times as likely on the opposite side. This suggests that orienting can arise in the direction of seen gaze even when counter to intentions. The experiments illustrate that methods from mainstream attention research can be usefully applied to social cognition, and that studies of spatial attention may profit from considering its social function.
Article
Full-text available
Cortical neurons that are selectively sensitive to faces, parts of faces and particular facial expressions are concentrated in the banks and floor of the superior temporal sulcus in macaque monkeys. Their existence has prompted suggestions that it is damage to such a region in the human brain that leads to prosopagnosia: the inability to recognize faces or to discriminate between faces. This was tested by removing the face-cell area in a group of monkeys. The animals learned to discriminate between pictures of faces or inanimate objects, to select the odd face from a group, to inspect a face then select the matching face from a pair of faces after a variable delay, to discriminate between novel and familiar faces, and to identify specific faces. Removing the face-cell area produced little or no impairment, and such impairment as did occur was not specific to faces. In contrast, several prosopagnosic patients were impaired at several of these tasks. The animals were less able than before to discern the angle of regard in pictures of faces, suggesting that this area of the brain may be concerned with the perception of facial expression and bearing, which are important social signals in primates.
Article
Full-text available
Cells selectively responsive to the face have been found in several visual sub-areas of temporal cortex in the macaque brain. These include the lateral and ventral surfaces of inferior temporal cortex and the upper bank, lower bank and fundus of the superior temporal sulcus (STS). Cells in the different regions may contribute in different ways to the processing of the facial image. Within the upper bank of the STS different populations of cells are selective for different views of the face and head. These cells occur in functionally discrete patches (3-5 mm across) within the STS cortex. Studies of output connections from the STS also reveal a modular anatomical organization of repeating 3-5 mm patches connected to the parietal cortex, an area thought to be involved in spatial awareness and in the control of attention. The properties of some cells suggest a role in the discrimination of heads from other objects, and in the recognition of familiar individuals. The selectivity for view suggests that the neural operations underlying face or head recognition rely on parallel analyses of different characteristic views of the head, the outputs of these view-specific analyses being subsequently combined to support view-independent (object-centred) recognition. An alternative functional interpretation of the sensitivity to head view is that the cells enable an analysis of 'social attention', i.e. they signal where other individuals are directing their attention. A cell maximally responsive to the left profile thus provides a signal that the attention (of another individual) is directed to the observer's left. Such information is useful for analysing social interactions between other individuals.
Article
Full-text available
The concept of attention as central to human performance extends back to the start of experimental psychology, yet even a few years ago, it would not have been possible to outline in even a preliminary form a functional anatomy of the human attentional system. New developments in neuroscience have opened the study of higher cognition to physiological analysis, and have revealed a system of anatomical areas that appear to be basic to the selection of information for focal (conscious) processing. The importance of attention is its unique role in connecting the mental level of description of processes used in cognitive science with the anatomical level common in neuroscience. Sperry describes the central role that mental concepts play in understanding brain function. As is the case for sensory and motor systems of the brain, our knowledge of the anatomy of attention is incomplete. Nevertheless, we can now begin to identify some principles of organization that allow attention to function as a unified system for the control of mental processing. Although many of our points are still speculative and controversial, we believe they constitute a basis for more detailed studies of attention from a cognitive-neuroscience viewpoint. Perhaps even more important for furthering future studies, multiple methods of mental chronometry, brain lesions, electrophysiology, and several types of neuro-imaging have converged on common findings.
Article
Full-text available
To study the mechanisms underlying covert orienting of attention in visual space, subjects were given advance cues indicating the probable locations of targets that they had to discriminate and localize. Direct peripheral cues (brightening of one of four boxes in peripheral vision) and symbolic central cues (an arrow at the fixation point indicating a probable peripheral box) were compared. Peripheral and central cues are believed to activate different reflexive and voluntary modes of orienting (Jonides, 1981; Posner, 1980). Experiment 1 showed that the time courses of facilitation and inhibition from peripheral and central cues were characteristic and different. Experiment 2 showed that voluntary orienting in response to symbolic central cues is interrupted by reflexive orienting to random peripheral flashes. Experiment 3 showed that irrelevant peripheral flashes also compete with relevant peripheral cues. The amount of interference varied systematically with the interval between the onset of the relevant cue and of the distracting flash (cue-flash onset asynchrony) and with the cuing condition. Taken together, these effects support a model for spatial attention with distinct but interacting reflexive and voluntary orienting mechanisms.
Article
Full-text available
Four experiments are reported that investigate an inhibitory effect associated with externally controlled orienting and first identified by Posner and Cohen (1980, 1984). The effect takes the form of an inability to respond quickly to a stimulus appearing in the same location in the visual periphery as a previous one that produced covert orienting. Several characteristics of the effect are revealed that eliminate possible explanations in terms of response inhibition, masking, and sensory habituation. The inhibitory component of orienting occurs whether or not the first stimulus requires a response (Experiment 1), lasts at least a second (Experiments 1, 2, and 3), affects not only the originally stimulated location but also nearby locations (Experiment 2), is determined by environmental coordinates (Experiment 3), and occurs both in the periphery and at the fovea (Experiment 4). It is concluded that inhibition may act together with an early facilitatory component (Posner & Cohen, 1984) in directing the attention and eye movement systems in order to maintain efficient spatial sampling.
Article
Full-text available
Recognition memory for faces is hampered much more by inverted presentation than is memory for any other material so far examined. The present study demonstrates that faces are not unique with regard to this vulnerability to inversion. The experiments also attempt to isolate the source of the inversion effect. In one experiment, use of stimuli (landscapes) in which spatial relations among elements are potentially important distinguishing features is shown not to guarantee a large inversion effect. Two additional experiments show that for dog experts sufficiently knowledgeable to individuate dogs of the same breed, memory for photographs of dogs of that breed is as disrupted by inversion as is face recognition. A final experiment indicates that the effect of orientation on memory for faces does not depend on inability to identify single features of these stimuli upside down. These experiments are consistent with the view that experts represent items in memory in terms of distinguishing features of a different kind than do novices. Speculations as to the type of feature used and neuropsychological and developmental implications of this accomplishment are offered.
Article
Full-text available
When used with positively skewed reaction time distributions, sample medians tend to over-estimate population medians. The extent of overestimation is related directly to the amount of skew in the reaction time distributions and inversely to the size of the sample over which the median is computed. Simulations indicate that overestimation could approach 50 ms with small samples and highly skewed distributions. An important practical consequence of the bias in median reaction time is that sample medians must not be used to compare reaction times across experimental conditions when there are unequal numbers of trials in the conditions. If medians are used with unequal sample sizes, then the bias may produce an artifactual difference in conditions or conceal a true difference. Some recent studies of cuing and stimulus probability effects provide examples of this potential artifact.
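The bias described here is easy to demonstrate by simulation. The sketch below assumes an ex-Gaussian reaction-time distribution and per-condition trial counts chosen purely for illustration; none of these parameters come from the paper.

```python
import random
import statistics

random.seed(1)

def ex_gaussian(mu=400.0, sigma=40.0, tau=150.0):
    """Draw one positively skewed RT (ms): Gaussian plus exponential tail."""
    return random.gauss(mu, sigma) + random.expovariate(1.0 / tau)

# Estimate the population median from a very large sample.
pop_median = statistics.median(ex_gaussian() for _ in range(200_000))

def mean_sample_median(n_trials, n_sims=5_000):
    """Average sample median over many simulated sessions of n_trials each."""
    return statistics.mean(
        statistics.median(ex_gaussian() for _ in range(n_trials))
        for _ in range(n_sims)
    )

for n in (10, 20, 80):
    bias = mean_sample_median(n) - pop_median
    print(f"n = {n:3d} trials per condition: median bias ~ {bias:5.1f} ms")

# The overestimation shrinks as the number of trials grows, so comparing
# median RTs across conditions with unequal trial counts can create, or
# conceal, an apparent difference between conditions.
```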
Article
Full-text available
Research on gaze and eye contact was organized within the framework of Patterson's (1982) sequential functional model of nonverbal exchange. Studies were reviewed showing how gaze functions to (a) provide information, (b) regulate interaction, (c) express intimacy, (d) exercise social control, and (e) facilitate service and task goals. Research was also summarized that describes personal, experiential, relational, and situational antecedents of gaze and reactions to gaze. Directions were given for a functional analysis of the relation between gaze and physiological responses. Attribution theories were integrated into the sequential model for making predictions about people's perceptions of their own gazing behavior and the gazing behavior of others. Data on people's accuracy in reporting their own and others' gaze were presented and integrated with related findings in attribution research. The sequential model was used to analyze research studies measuring the interaction between gaze and personal and contextual variables. Methodological and measurement issues were discussed and directions were outlined for future research.
Article
Full-text available
Studies of visual memory have used a technique where a brief presentation of a display is followed some msec. later by a bar or line indicator designating the position that had been occupied by an element of the display. While the major interest has been in the decreasing accuracy in reporting the element as the indicator is increasingly delayed, an equally important problem is the ability of these indicators to selectively determine attention in a matter of msec. The present experiments using undergraduates compared the effectiveness of 4 different kinds of indicators and obtained estimates of the time required to process them or apprehend their meaning.
Article
Full-text available
The effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them. We hypothesized that an abrupt onset in a visual display would capture visual attention, giving this item a processing advantage over items lacking an abrupt leading edge. This prediction was confirmed in Experiment 1. We designed a second experiment to ensure that this finding was due to attentional factors rather than to sensory or perceptual ones. Experiment 3 replicated Experiment 1 and demonstrated that the procedure used to avoid abrupt onset--camouflage removal--did not require a gradual waveform. Implications of these findings for theories of attention are discussed.
Article
Full-text available
Bartlett viewed thinking as a high level skill exhibiting ballistic properties that he called its “point of no return”. This paper explores one aspect of cognition through the use of a simple model task in which human subjects are asked to commit attention to a position in visual space other than fixation. This instruction is executed by orienting a covert (attentional) mechanism that seems sufficiently time locked to external events that its trajectory can be traced across the visual field in terms of momentary changes in the efficiency of detecting stimuli. A comparison of results obtained with alert monkeys, brain injured and normal human subjects shows the relationship of this covert system to saccadic eye movements and to various brain systems controlling perception and motion. In accordance with Bartlett's insight, the possibility is explored that similar principles apply to orienting of attention toward sensory input and orienting to the semantic structures used in thinking.
Article
The occipitotemporal cortical areas of the macaque monkey are known to be important for normal object recognition processes, but comparatively little effort has gone into investigations of the role of these areas in selective attention to objects. In this paper we review the behavioural and electrophysiological evidence, which suggests that the occipitotemporal areas are also important for selective attention to recognisable objects. Areas V4 and IT are seen to be involved in aspects of selective attention driven by the spatial location of the attended object, features of objects, the relevance of a stimulus to a particular task, and the amount of sustained attention required to perform a task. The superior temporal polysensory area (STPa) is an area usually thought of as a component of the temporal processing stream. However, the evidence reviewed here shows that one role of area STPa is to decode the direction of others' attention, a function which requires that the region accesses information from both of the major corticocortical processing streams.
Article
Physiological recordings along the length of the upper bank of the superior temporal sulcus (STS) revealed cells each of which was selectively responsive to a particular view of the head and body. Such cells were grouped in large patches 3-4 mm across. The patches were separated by regions of cortex containing cells responsive to other stimuli. The distribution of cells projecting from temporal cortex to the posterior regions of the inferior parietal lobe was studied with retrogradely transported fluorescent dyes. A strong temporoparietal projection was found originating from the upper bank of the STS. Cells projecting to the parietal cortex occurred in large patches or bands. The size and periodicity of modules defined through anatomical connections matched the functional subdivisions of the STS cortex involved in face processing evident in physiological recordings. It is speculated that the temporoparietal projections could provide a route through which temporal lobe analysis of facial signals about the direction of others' attention can be passed to parietal systems concerned with spatial awareness.
Article
A series of experiments is reported which show that three successive mechanisms are involved in the first 18 months of life in ‘looking where someone else is looking’. The earliest ‘ecological’ mechanism enables the infant to detect the direction of the adult's visual gaze within the baby's visual field but the mother's signal alone does not allow the precise localization of the target. Joint attention to the same physical object also depends on the intrinsic, attention-capturing properties of the object in the environment. By about 12 months, we have evidence for presence of a new ‘geometric’ mechanism. The infant extrapolates from the orientation of the mother's head and eyes, the intersection of the mother's line of sight within a relatively precise zone of the infant's own visual space. A third ‘representational’ mechanism emerges between 12 and 18 months, with an extension of joint reference to places outside the infant's visual field. None of these mechanisms require the infant to have a theory that others have minds; rather the perceptual systems of different observers ‘meet’ in encountering the same objects and events in the world. Such a ‘realist’ basis for interpersonal knowledge may offer an alternative starting point for development of intrapersonal knowledge, rather than the view that mental events can only be known by construction of a theory.
Article
Reviews the neurobiology of the neural mechanisms for orienting attention from a neurological perspective. The content of the chapter draws on knowledge from cognitive science, neural science, and clinical neurology, and focuses on spatial orienting as a model system for understanding mind–brain relationships and for appreciating the biological basis of a basic mental process in health and disease. The chapter reviews some of what is known about the psychobiology of the clinical disorders neglect and hemianopia, and of some other, rarer syndromes. The goals of the analysis are twofold: to understand how attentional processes operate in regulating normal perception and action, and to identify the neural basis of visual attention.
Article
Normal subjects were presented with a simple line drawing of a face looking left, right, or straight ahead. A target letter F or T then appeared to the left or the right of the face. All subjects participated in target detection, localization, and identification response conditions. Although subjects were told that the line drawing’s gaze direction (the cue) did not predict where the target would occur, response time in all three conditions was reliably faster when gaze was toward versus away from the target. This study provides evidence for covert, reflexive orienting to peripheral locations in response to uninformative gaze shifts presented at fixation. The implications for theories of social attention and visual orienting are discussed, and the brain mechanisms that may underlie this phenomenon are considered.
Article
Two experiments examined whether infants shift their visual attention in the direction toward which an adult's eyes turn. A computerized modification of previous joint-attention paradigms revealed that infants as young as 3 months attend in the same direction as the eyes of a digitized adult face. This attention shift was indicated by the latency and direction of their orienting to peripheral probes presented after the face was extinguished. A second experiment found a similar influence of direction of perceived gaze, but also that less peripheral orienting occurred if the central face remained visible during presentation of the probe. This may explain why attention shifts triggered by gaze perception have been difficult to observe in infants using previous naturalistic procedures. Our new method reveals both that direction of perceived gaze can be discriminated by young infants and that this perception triggers corresponding shifts of their own attention.
Article
The production of pointing, the understanding of pointing and the comprehension of another's line of regard were investigated in 36 male and female infants 9, 12, and 14 months old. Production of pointing was present in eight out of twelve 12-month-olds and in eleven out of twelve 14-month-olds; only a few of the youngest Ss pointed. For the youngest infants comprehension of pointing was a function of the distance between the person pointing and the object pointed to. All 12- and 14-month-old children comprehended the pointing to a nearby object and most of them also understood the pointing to a distant object. Ten out of twelve 12-month-olds and eleven out of twelve 14-month-olds were able to tell where another person was looking if both the cues of movement and orientation of the head and the eyes were present; their performance was less perfect with only the cue of orientation present or with only the eyes moving. Never more than three out of the twelve youngest Ss succeeded on any of these percept-diagnosis tasks.
Article
Recordings were made of the eye fixations of three subjects in two tasks involving black-and-white photographs of faces. In the first task, subjects matched a test face with a previously viewed target face; in the second task, subjects compared two simultaneously presented faces. The eye movements were recorded with a corneal reflection technique. Each subject showed an individual fixation strategy for the tasks; in particular each subject had one or more preferred facial features which were viewed foveally in both tasks. The subjects also showed some tendency to use a regular sequential pattern of eye movements. However, the sequences used differed from one task to the other. Although some aspects of the results support the scanpath hypothesis of Noton, it is suggested that an alternative interpretation is possible.
Article
Little is known about how visual attention of the mother-infant pair is directed jointly to objects and events in the visual surround during the first year of the child's life. To what extent does the child follow the mother's lead and the mother the child's, and what are the processes involved? The ability of the infant to respond successfully to such signals allows the mother to isolate and highlight a much wider range of environmental features than if the infant ignores her attention-directing efforts. We report a preliminary investigation of the extent of the infant's ability to follow changes in adult gaze direction during the first year of life.
Article
The social impairments of autism, which are especially salient in autism of the Asperger type, have been attributed to a failure of affective processing, and more recently to a failure to develop a "theory of mind". Recent research evidence bearing on these theories is reviewed and a new hypothesis is put forward, based on research in progress, which posits a developmentally earlier abnormality of the "social gaze response": the inherent tendency of the normal infant to focus gaze and attention on social cues and, later, on objects in the environment as indicated by the gesture or gaze of others. Weakness or absence of the social gaze response is enough, it is argued, to account for many of the typical symptoms of autism, including the failure to acquire a theory of mind.
Article
There are suggestions in the literature that spatial precuing of attention with peripheral and central cues may be mediated by different mechanisms. To investigate this issue, data from two previous papers were reanalysed to investigate the complete time course of precuing target location with either: (1) a peripheral cue that may draw attention reflexively, or (2) a central, symbolic cue that may require attention to be directed voluntarily. This analysis led to predictions that were tested in another experiment. The main result of this experiment was that a peripheral cue produced its largest effects on discrimination performance within 100 msec, whereas a central cue required approximately 300 msec to achieve maximum effects. In conjunction with previous findings, the present evidence for time differences between the two cuing conditions suggests that more than one process is involved in the spatial precuing of attention.
Article
Accuracy at perceiving frontal eye gaze was studied in monkeys and human subjects using a forced-choice detection task on paired photographs of a single human face. Monkeys learned the task readily, but after bilateral removal of the banks and floor of the superior temporal sulcus (STS) they failed to perform the task efficiently. This result is consistent with the conclusion, based on recordings from single cells in awake, behaving monkeys [Perret et al., Physiological Aspects of Clinical Neuro-ophthalmology, Chapman & Hall, London, 1988], that this region of the temporal lobe is important for coding information about eye-gaze of a confronting animal. Human subjects were given identical stimuli in a task where they were asked to detect "the face that is looking straight at you". Human performance is sensitive to the degree of angular deviation from the frontal gaze position, being poorest at small angular deviations from 0 degrees. This was also true of monkeys viewing these stimuli, pre- and post-operatively. Compared with normal controls, two human prosopagnosics were impaired at this task. However, the extent of impairment was different in the two patients. These findings are related to earlier reports (including those for patients with right-hemisphere damage without prosopagnosia), to normal performance with upright and inverted face photographs, and to notions of independent subsystems in face processing.
Article
It is often claimed that advance knowledge of target location enhances target recognition only when there are competing stimuli simultaneously present in the visual field (e.g., Grindley and Townsend 1968). This study reinvestigated this question applying a ‘cost-benefit’ analysis to a discrimination plus localisation task adapted from Grindley and Townsend. Spatial cueing produced benefits in accuracy for cued and costs for uncued locations, both with single and with multiple displays. However, costs plus benefits were more marked for multiple displays. Peripheral cues produced strong facilitation for cued and strong inhibition for uncued locations at short cue-target SOAs. Central cues showed a comparatively delayed build-up of facilitation and weaker inhibition. Costs plus benefits decreased at long SOAs, mainly because of a decrease in inhibition for uncued locations. This decrease was more marked for single than for multiple displays. This pattern suggests that there is an early ‘automatic’ (peripheral cues only) and a later ‘controlled’ (both central and peripheral cues) mechanism of spatial orienting, which differ in their interruptibility by competing stimuli, e.g. targets at uncued locations. These are more likely to be detected in single than in multiple displays because a single luminance increment can attract attention through the same mechanism as a peripheral cue.
Article
Recognition of faces has been shown to be more impaired by inversion than recognition of other objects normally only seen upright (Yin 1969). Experiment 1 explores the possibility that this result is explicable in terms of the familiarity of the recognition tasks rather than a ‘face-specific’ factor. However, a less familiar task (recognizing other race faces) was more disrupted by inversion than recognizing own race faces. In experiment 2, Yin's (1969) finding was replicated using a different view of items at test, a task which is more representative of everyday face recognition. Yin (1970) suggested that the disproportionate effect of inversion may be due to difficulty in perceiving facial expression in an inverted face. However, in experiment 3, subjects encouraged to make personality judgements on initial viewing of the faces were no more impaired by inversion at test than subjects encouraged to name distinctive physical features. These results imply that the disproportionate effect of inversion upon face recognition cannot be explained in terms of the extra familiarity of the task or the use of identical photographs at test. Furthermore, it appears that the role of facial expression is not sufficient to account for this effect. However, the results of experiment 3 are also discussed in terms of the effectiveness of specific encoding instructions to enhance expression analysis selectively.
Article
An investigation of what can be learned about representational processes in face recognition from the independent and combined effects of inverting and negating facial images is reported. In experiment 1, independent effects of inversion and negation were observed in a task of identifying famous faces. In experiments 2 through 4 the question of whether effects of negation were still obtained when effects due to the reversal of pigmentation in negative images were eliminated was examined. By the use of images of the 3-D surfaces of faces measured by laser, and displays as smooth surfaces devoid of pigmentation, only effects of inversion were obtained reliably, suggesting that the effects observed in experiment 1 arose largely through the inversion of pigmentation values in normal images of faces. The results of experiment 5 suggested that the difference was not due to the different task demands of experiments 2-4 compared with those of experiment 1. When normally pigmented face images were used in a task making similar demands to that of experiment 4, independent effects of inversion and negation were again observed. When a task of sex classification was used in experiments 6 and 7, clear effects of negation as well as inversion were observed on latencies, though not accuracies, of responding. The results are interpreted in terms of the information content of pigmentation relative to shape from shading in different face-classification tasks. The results also reinforce other recent evidence demonstrating the importance of image intensity as well as spatial layout of face 'features'.
Article
Three experiments tested whether spatial attention can be influenced by a predictive relation between incidental information and the location of target events. Subjects performed a simple dot detection task; 600 msec prior to each target a word was presented briefly 5 degrees to the left or right of fixation. There was a predictive relationship between the semantic category (living or non-living) of the words and target location. However, subjects were instructed to ignore the words, and a post-experiment questionnaire confirmed that they remained unaware of the word-target relationship. In all three experiments, given some practice on the task, response times were faster when the target appeared at likely (p = 0.8), compared to unlikely (p = 0.2), locations in relation to lateral word category. Experiments 2 and 3 confirmed that this effect was driven by semantic encoding of the irrelevant words, and not by repetition of individual stimuli. Theoretical implications of this finding are discussed.
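A minimal sketch of how an incidental 80/20 contingency of this kind can be built into a trial list is given below. The word lists, and the assumption that the 'living' category predicts a left-sided target, are placeholders; the actual stimuli and the exact form of the contingency (for example, whether it was defined relative to the side on which the word appeared) may have differed in the original experiments.

```python
import random

random.seed(0)

# Placeholder stimulus words; the experiments used their own word sets.
LIVING = ["tiger", "robin", "salmon", "oak"]
NONLIVING = ["hammer", "bottle", "anchor", "chair"]

def make_trial(p_valid=0.8):
    """One trial: an irrelevant lateral word plus a target location.

    Assumed mapping (illustrative only): living words predict a
    left-sided target with probability p_valid, non-living words
    predict a right-sided target with probability p_valid.
    """
    category = random.choice(["living", "nonliving"])
    word = random.choice(LIVING if category == "living" else NONLIVING)
    likely_side = "left" if category == "living" else "right"
    other_side = "right" if likely_side == "left" else "left"
    target_side = likely_side if random.random() < p_valid else other_side
    word_side = random.choice(["left", "right"])  # side on which the word is flashed
    return {"word": word, "word_side": word_side,
            "target_side": target_side, "likely": target_side == likely_side}

trials = [make_trial() for _ in range(200)]
# The question of interest is whether detection RTs are faster on
# 'likely' trials than on 'unlikely' trials, even though participants
# are told to ignore the words and remain unaware of the contingency.
```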