
Wolfgang Einhäuser
- PhD
- Professor (Full) at Chemnitz University of Technology
About
- 176 Publications
- 44,869 Reads
- 6,763 Citations
Current institution: Chemnitz University of Technology
Additional affiliations: March 2008 - February 2015
Publications (176)
When viewing natural scenes, participants tend to direct their gaze towards the image center, the so-called “central bias.” Unless the head is fixed, gaze shifts to peripheral targets are accomplished by a combination of eye and head movements, with substantial individual differences in the propensity to use the head. We address the relation of cen...
In virtual reality (VR), we assessed how untrained participants searched for fire sources with the digital twin of a novel augmented reality (AR) device: a firefighter’s helmet equipped with a heat sensor and an integrated display indicating the heat distribution in its field of view. This was compared to the digital twin of a current state-of-the-...
Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (...
Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modaliti...
It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye-tracking using an abstract alert-signal paradigm. Participants performed a visual discrim...
Gaze is an important and potent social cue to direct others’ attention towards specific locations. However, in many situations, directional symbols, like arrows, fulfill a similar purpose. Motivated by the overarching question how artificial systems can effectively communicate directional information, we conducted two cueing experiments. In both ex...
When humans walk, it is important for them to have some measure of the distance they have travelled. Typically, many cues from different modalities are available, as humans perceive both the environment around them (for example, through vision and haptics) and their own walking. Here, we investigate the contributions of visual cues and non-visual s...
Gaze is a powerful cue for directing attention. We investigate the interpretation of an abstract figure as gaze modulates its efficacy as an attentional cue. In each trial, two vertical lines on a central disk moved to one side (left or right). Independent of this "feature-cued" side, a target (black disk) subsequently appeared on one side. After 3...
Humans can quickly adapt their behavior to changes in the environment. Classical reversal learning tasks mainly measure how well participants can disengage from a previously successful behavior but not how alternative responses are explored. Here, we propose a novel 5-choice reversal learning task with alternating position-reward contingencies to s...
Objects influence attention allocation; when a location within an object is cued, participants react faster to targets appearing in a different location within this object than on a different object. Despite consistent demonstrations of this object-based effect, there is no agreement regarding its underlying mechanisms. To test the most common hypo...
Visual search is widely studied, with paradigms ranging from simple stimuli and tasks to complex scenes and real-world settings. Here, we investigated a real-world scenario, the search for fire sources with a novel augmented reality (AR) device: a firefighter’s helmet equipped with a heat sensor and an integrated display indicating the heat distrib...
In virtual reality (VR), we assessed how untrained participants searched for fire sources with the digital twin of a novel augmented-reality (AR) device: a firefighter’s helmet equipped with a heat sensor and an integrated display indicating the heat distribution in its field of view. This was compared to the digital twin of a current state-of-the-...
Gaze is a powerful cue for directing attention. We investigate whether the interpretation of an abstract figure as gaze modulates its efficacy as attentional cue. In each trial, two vertical lines on a central disk moved to one side (left, right). Independent of this “feature-cued” side, a target (black disk) subsequently appeared on one side. Afte...
Gaze is a powerful cue for directing attention. We investigate whether the interpretation of an abstract figure as gaze modulates its efficacy as attentional cue. In each trial, two vertical lines on a central disk moved to one side (left, right). Independent of this “feature-cued” side, a target (black disk) subsequently appeared on one side. Afte...
Walking is a complex task. To prevent falls and injuries, gait needs to constantly adjust to the environment. This requires information from various sensory systems; in turn, moving through the environment continuously changes available sensory information. Visual information is available from a distance, and therefore most critical when negotiatin...
Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with “_” denoting a silent gap, and “A” and “B” sine tones differing in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high wit...
Screen-based communication increasingly replaces face-to-face interactions. Gaze information is important for nonverbal communication. Therefore, we investigate how humans (“receivers”) estimate gaze direction of others (“senders”) in videoconferencing-like settings, a major example of screen-based communication. In two online experiments, receiver...
Walking is a complex task. To prevent falls and injuries, gait needs to constantly adjust to the environment. This requires information from various sensory systems; in turn, moving through the environment continuously changes available sensory information. Visual information is available from a distance, and therefore most critical when negotiatin...
Walking is a complex task. To prevent falls and injuries, gait needs to constantly adjust to the environment. This requires information from various sensory systems; in turn, moving through the environment continuously changes available sensory information. Visual information is available from a distance, and therefore most critical when negotiatin...
Sensory consequences of one's own action are often perceived as less intense, and lead to reduced neural responses, compared to externally generated stimuli. Presumably, such sensory attenuation is due to predictive mechanisms based on the motor command (efference copy). However, sensory attenuation has also been observed outside the context of vol...
Objects influence attention allocation; when a location within an object is cued, participants react faster to targets appearing in a different location within this object than on a different object. Despite consistent demonstrations of this object-based effect, there is no agreement regarding its underlying mechanisms. To test the most common hypo...
Objects influence attention allocation; when a location within an object is cued, participants react faster to targets appearing in a different location within this object than on a different object. Despite consistent demonstrations of this object-based effect, there is no agreement regarding its underlying mechanisms. To test the most common hypo...
The course of pupillary constriction and dilation provides an easy-to-access, inexpensive, and noninvasive readout of brain activity. We propose a new taxonomy of factors affecting the pupil and link these to associated neural underpinnings in an ascending hierarchy. In addition to two well-established low-level factors (light level and focal dista...
Attention is largely guided by objects; when attending a location on an object, observers react faster to targets in other locations within the object than outside the object. While crucial to our understanding of attention, there is no consensus on the mechanisms underlying such object-based attention. We describe a continuous measure of object-ba...
Attention is largely guided by objects; when attending to a location on an object, observers react faster to targets appearing in other locations within the object than outside the object. While crucial to our understanding of attention, the mechanisms underlying object-based attention remain a topic of debate. Moreover, the behavioral effects of o...
Most humans can walk effortlessly across uniform terrain even when they do not pay much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. Recent advances in mobile eye-tracking technology have made it possible to study, in natural environments, how terrain affects gaze and th...
Pedestrians' decision-making when crossing a road is investigated in an online video study with a two-alternative choice reaction task. Results are interpreted with a drift-diffusion model.
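As an illustration of how a drift-diffusion model maps such a two-alternative crossing decision onto choices and reaction times, here is a minimal simulation sketch in Python. It is not the authors' fitted model; the drift, threshold, and noise values are arbitrary placeholders.

```python
import numpy as np

def simulate_ddm_trial(drift=0.8, threshold=1.0, noise=1.0, dt=0.001, t_max=5.0, rng=None):
    """Accumulate noisy evidence until it reaches +threshold ('cross') or
    -threshold ('wait'), or time runs out; return the choice and decision time."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("cross" if x >= threshold else "wait"), t

# Example: choice proportions and mean decision time over 1,000 simulated trials
rng = np.random.default_rng(0)
trials = [simulate_ddm_trial(rng=rng) for _ in range(1000)]
p_cross = sum(choice == "cross" for choice, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"P(cross) = {p_cross:.2f}, mean decision time = {mean_rt:.2f} s")
```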
In multistability, a constant stimulus induces alternating perceptual interpretations. For many forms of visual multistability, the transition from one interpretation to another (“perceptual switch”) is accompanied by a dilation of the pupil. Here we ask whether the same holds for auditory multistability, specifically auditory streaming. Two tones...
The pupil reliably dilates with increases in cognitive load, e.g., when performing mental multiplication. This mechanism has previously been used for communication with locked-in syndrome patients [37], providing a proof-of-principle without focusing on communication speed. We investigated a similar paradigm in a larger sample of healthy participan...
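To make the communication principle concrete, the sketch below decodes a yes/no answer from a pupil trace by comparing dilation in the two response windows, assuming the participant performs mental arithmetic only during the window assigned to the intended answer. This is a simplified illustration, not the published decoding procedure; the function, windows, and sampling assumptions are hypothetical.

```python
import numpy as np

def decode_answer(pupil, fs, yes_window, no_window, baseline_window):
    """Decode a yes/no answer from a preprocessed pupil-size trace (samples at
    rate fs in Hz). Window arguments are (start_s, end_s) tuples; the answer
    window with the larger baseline-corrected dilation wins."""
    def mean_in(window):
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return np.mean(pupil[i0:i1])
    baseline = mean_in(baseline_window)
    yes_dilation = mean_in(yes_window) - baseline
    no_dilation = mean_in(no_window) - baseline
    return "yes" if yes_dilation > no_dilation else "no"
```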
How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control that combines eye, head, and body movements. Previous real-world research has shown environm...
In multistability, perceptual interpretations (“percepts”) of ambiguous stimuli alternate over time. There is considerable debate as to whether similar regularities govern the first percept after stimulus onset and percepts during prolonged presentation. We address this question in a visual pattern-component rivalry paradigm by presenting two overl...
A major objective of perception is the reduction of uncertainty about the outside world. Eye-movement research has demonstrated that attention and oculomotor control can subserve the function of decreasing uncertainty in vision. Here, we ask whether a similar effect exists for awareness in binocular rivalry, when two distinct stimuli presented to t...
Sensory consequences of one's own action are often perceived as less intense, and lead to reduced neural responses, compared to externally generated stimuli. Presumably, such sensory attenuation is due to predictive mechanisms based on the motor command (efference copy). However, sensory attenuation has also been observed outside the context of vol...
Whether fixation selection in real-world scenes is guided by image salience or by objects has been a matter of scientific debate. To contrast the two views, we compared effects of location-based and object-based visual salience in young and older (65+ years) adults. Generalized linear mixed models were used to assess the unique contribution of sal...
Most humans can walk effortlessly across uniform terrain even without paying much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. In a controlled yet naturalistic environment, we simulated terrain difficulty through slip-like perturbations that were either unpredictable (ex...
Most humans can walk effortlessly across uniform terrain even without paying much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. In a controlled yet naturalistic environment, we simulated terrain difficulty through slip-like perturbations that were either unpredictable (ex...
Fixation durations provide insights into processing demands. We investigated factors controlling fixation durations during scene viewing in two experiments. In Experiment 1, we tested the degree to which fixation durations adapt to global scene processing difficulty by manipulating the contrast (from original contrast to isoluminant) and saturation...
Episodic memory, the ability to remember past events in time and place, develops during childhood. Much knowledge about the underlying neuronal mechanisms has been gained from methods not suitable for children. We applied pupillometry to study memory encoding and recognition mechanisms. Children aged 8 and 9 years (n = 24) and adults (n = 24) studi...
The brain has been theorized to employ inferential processes to overcome the problem of uncertainty. Inference is thought to underlie neural processes, including in disparate domains such as value-based decision-making and perception. Value-based decision-making commonly involves deliberation, a time-consuming process that requires conscious consid...
We compared perceptual multistability across modalities, using a visual plaid pattern (composed of two transparently overlaid drifting gratings) and auditory streaming (elicited by a repeating “ABA_” tone sequence). Both stimuli can be perceived as integrated (one plaid pattern, one stream comprising “A” and “B” tones) or segregated (two individual...
The process of inference is theorized to underlie neural processes, including value-based decision-making and perception. Value-based decision-making commonly involves deliberation, a time-consuming process that requires conscious consideration of decision variables. Perception, by contrast, is thought to be automatic and effortless. To determine i...
Can cognition penetrate action-to-perception transfer? Participants observed a structure-from-motion cylinder of ambiguous rotation direction. Beforehand, they experienced one of two mechanical models: An unambiguous cylinder was connected to a rod by either a belt (cylinder and rod rotating in the same direction) or by gears (both rotating in oppo...
Diverse paradigms, including ambiguous stimuli and mental imagery, have suggested a shared representation between motor and perceptual domains. We examined the effects of manual action on ambiguous perception in a continuous flash suppression (CFS) experiment. Specifically, we asked participants to try to perceive a suppressed grating while rotatin...
When a visual stimulus oscillates in luminance, pupil size follows this oscillation. Recently, it has been demonstrated that such induced pupil oscillations can be used to tag which stimulus is covertly attended. Here we ask whether this "pupil frequency tagging" approach can be extended to visual awareness, specifically to inferring perceptual dom...
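A minimal sketch of the pupil frequency-tagging readout described here: if two stimuli flicker in luminance at distinct frequencies, the relative spectral power of the pupil trace at those tagging frequencies indicates which stimulus currently drives the pupil, and hence which is likely dominant. Function and variable names are illustrative; this is not the published analysis pipeline.

```python
import numpy as np

def dominant_tag(pupil_trace, fs, f1, f2):
    """Return 1 or 2 depending on which tagging frequency (f1 or f2, in Hz)
    carries more spectral power in the pupil-size trace sampled at fs Hz."""
    trace = np.asarray(pupil_trace) - np.mean(pupil_trace)  # remove DC offset
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    power = np.abs(np.fft.rfft(trace)) ** 2
    p1 = power[np.argmin(np.abs(freqs - f1))]
    p2 = power[np.argmin(np.abs(freqs - f2))]
    return 1 if p1 > p2 else 2
```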
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly tr...
Of all peripheral measures of (neuro-)physiological activity, pupil size is probably the easiest to access. Far beyond its well-known reaction to light incident on the eye, pupil size is a rich marker of many cognitive processes. Since the turn of the millennium, the increasing availability of video-based eyetracking devices has led to a renaissanc...
In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here w...
In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including everyday objects that differed in...
When distinct stimuli are presented to the two eyes, their mental representations alternate in awareness. Here, such "binocular rivalry" was used to investigate whether audio-visual associations bias visual perception. To induce two arbitrary associations, each between a tone and a grating of a specific color and motion direction, observers were re...
Visual illusions explore the limits of sensory processing and provide an ideal testbed to study perception. Size illusions – stimuli whose size is consistently misperceived – do not only result from sensory cues, but can also be induced by cognitive factors, such as social status. Here we investigate whether the ecological relevance of biological...
In binocular rivalry, paradigms have been proposed for unobtrusive moment-by-moment readout of observers' perceptual experience ("no-report paradigms"). Here, we take a first step to extend this concept to auditory multistability. Observers continuously reported which of two concurrent tone sequences they perceived in the foreground: high-pitch (10...
During natural scene viewing, humans typically attend and fixate selected locations for about 200–400 ms. Two variables characterize such “overt” attention: the probability of a location being fixated, and the fixation’s duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to...
The interaction between action and visual perception is bi-directional: not only does vision guide most of our actions, but a growing number of studies suggest that our actions can in turn directly affect perceptual representations. The degree of action-perception congruency required to evoke such direct effects of action on perception is, however,...
Size perception is distorted in several illusions, including some that rely on complex social attributes: for example, people of higher subjective importance are associated with larger size. Biological motion receives preferential visual processing over non-biological motion with similar low-level properties, a difference presumably related to a st...
Visual search can be accelerated when properties of the target are known. Such knowledge allows the searcher to direct attention to items sharing these properties. Recent work indicates that information about properties of non-targets (i.e., negative cues) can also guide search. In the present study, we examine whether negative cues lead to differe...
Background: People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color percepti...
Recorded gaze-aligned video during search for targets of any color. The video frames have been analyzed by an object-detection algorithm. All detected objects have been marked in the video. The circle represents the region of the frames corresponding to the fovea. Note: The participant is fixating candies repeatedly, as can be seen when following t...
Participant during search in lawn.
Combined recording of head-fixed scene shot and gaze-directed camera. The black circle in the head-fixed shot shows the current direction of gaze. Note: Candies in the head-fixed shot are too small to be evaluated, while the gaze-camera delivers magnified shots that can be evaluated by an object-detection algorithm.
Autism spectrum disorder (ASD) is characterized by substantial social deficits. The notion that dysfunctions in neural circuits involved in sharing another's affect explain these deficits is appealing, but has received only modest experimental support. Here we evaluated a complex paradigm on the vicarious social pain of embarrassment to probe socia...
A gaze-driven methodology for discomfort glare was developed and applied for glare evaluation. A series of user assessments were performed in an office-like test laboratory under various lighting conditions. The participants’ gaze responses were recorded by means of mobile eye tracking while monitoring photometric quantities relevant to visual comf...
While being in the center of attention and exposed to others' evaluations, humans are prone to experience embarrassment. To characterize the neural underpinnings of such aversive moments, we induced genuine experiences of embarrassment during person-group interactions in a functional neuroimaging study. Using a mock-up scenario with three confederat...
Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation pr...
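As a simplified illustration of the patch-based analysis idea, the sketch below fits a logistic model predicting whether a scene patch was fixated from its mean feature values. The published analysis used GLMMs with random effects (e.g., for scenes and observers); this example fits a plain fixed-effects GLM in Python instead, and the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per scene patch, with a binary outcome
# 'fixated' (0/1) and continuous feature predictors per patch.
df = pd.read_csv("patches.csv")  # columns assumed: fixated, luminance, contrast, edge_density

# Logistic GLM relating fixation probability to continuous feature values
model = smf.glm(
    "fixated ~ luminance + contrast + edge_density",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(model.summary())
```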
Figure S1. Average feature maps for grid cells across images.
Figure S2. Matrix of pairwise scatter plots given the image features considered in the study. Panels above the diagonal plot the actual data points, which are mean feature values for a given scene and analysis grid cell (135 scenes × 48 grid cells = 6,480 data points in each panel). Hexa...
The effects of aging on eye movements are well studied in the laboratory. Increased saccade latencies or decreased smooth-pursuit gain are well established findings. The question remains whether these findings are influenced by the rather untypical environment of a laboratory; that is, whether or not they transfer to the real world. We measured 34...
Emotional instability, difficulties in social adjustment, and disinhibited behavior are the most common symptoms of the psychiatric comorbidities in juvenile myoclonic epilepsy (JME). This psychopathology has been associated with dysfunctions of mesial-frontal brain circuits. The present work is a first direct test of this link and adapted a paradi...
Competition is ubiquitous in perception. For example, items in the visual field compete for processing resources, and attention controls their priority (biased competition). The inevitable ambiguity in the interpretation of sensory signals yields another form of competition: distinct perceptual interpretations compete for access to awareness. Rival...
Table S1. Network parameters
Figure S1. Levelt's second proposition.
Our perception does not provide us with an exact imprint of the outside world, but is continuously adapted to our internal expectations, task sets, and behavioral goals. Although effects of reward, or value in general, on perception therefore seem likely, how valuation modulates perception and how such modulation relates to attention is largely unkno...
Alterations of eye movements in schizophrenia patients have been widely described for laboratory settings. For example, gain during smooth tracking is reduced, and fixation patterns differ between patients and healthy controls. The question remains, whether such results are related to the specifics of the experimental environment, or whether they t...
Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticised for using inadequa...
The exact function of color vision for natural-scene perception has remained puzzling. In rapid serial visual presentation (RSVP) tasks, categorically defined targets (e.g., animals) are typically detected slightly better for color than for grayscale stimuli. Here we test the effect of color on animal detection, recognition, and the attentional bli...
Gaze is widely considered a good proxy for spatial attention. We address whether such "overt attention" is related to other attention measures in natural scenes, and to what extent laboratory results on eye movements transfer to real-world gaze orienting. We find that the probability of a target to be detected in a rapid-serial-visual-presentation...
When two dissimilar stimuli are presented to the eyes, perception alternates between multiple interpretations, a phenomenon dubbed binocular rivalry. Numerous recent imaging studies have attempted to unveil neural substrates underlying multistable perception. However, these studies had a conceptual constraint: access to observers' perceptual state...
For decades, the cognitive and neural sciences have benefitted greatly from a separation of mind and brain into distinct functional domains. The tremendous success of this approach notwithstanding, it is self-evident that such a view is incomplete. Goal-directed behaviour of an organism requires the joint functioning of perception, memory and senso...
For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. Both paradigms operate on different time scales, and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences...
Pupil dilation is implicated as a marker of decision-making as well as of cognitive and emotional processes. Here we tested whether individuals can exploit another's pupil to their advantage. We first recorded the eyes of 3 "opponents", while they were playing a modified version of the "rock-paper-scissors" childhood game. The recorded videos serve...
Example movie of an opponent’s pupil.
Video depicting three games as the players viewed it in the informed-eye and naïve-eye conditions. If you want to try the experiment yourself, watch the movies and pick the best option to beat your opponent. The audio track consists of the words “rock”, “paper”, and “scissors”, presented in 4-s inter...
Example movie of an opponent’s reconstructed pupil.
Video depicting the same three games shown in Movie S1 for the reconstructed-pupil condition. (MOV)
For patients with severe motor disabilities, a robust means of communication is a crucial factor for their well-being [1]. We report here that pupil size measured by a bedside camera can be used to communicate with patients with locked-in syndrome. With the same protocol we demonstrate command-following for a patient in a minimally conscious state,...
Social robots are often applied in recreational contexts to improve the experience of using technical systems, but they are also increasingly used for therapeutic purposes. In this study, we compared how patients with Autism Spectrum Disorder (ASD) interact with a social robot and a human actor. We examined the gaze behavior of nine ASD patients an...
When different stimuli are presented to either eye, perception alternates between distinct interpretations, a phenomenon dubbed binocular rivalry. Numerous recent imaging studies have attempted to unveil neural substrates underlying binocular rivalry. However, most rivalry studies are bedeviled by a major methodological constraint: access to observ...