Heinrich H Bülthoff
Max Planck Institute for Biological Cybernetics · Department of Human Perception, Cognition and Action

Prof. Dr.

About

Publications: 1,039
Reads: 144,542
Citations: 29,407
Introduction
The department is concerned with the fundamental processes of human perception. The primary focus is how the information from different sense organs is integrated to create a consistent representation of the “world in the head”. My research focuses on the integration of information from the visual, haptic, and balance senses, and on the development of efficient algorithms for assistance systems that help an aging society cope with age-related declines in perceptual and cognitive capabilities.
Additional affiliations
January 2019 - present
Max Planck Institute for Biological Cybernetics
Position
  • Managing Director
Description
  • Emeritus Director of the Department of Human Perception, Cognition and Action
October 2009 - February 2016
Korea University
Position
  • Professor (Associate)
Description
  • Cross-cultural Perception and Cognition
June 1996 - present
University of Tuebingen
Position
  • Honorary Professor
Education
October 1985 - November 1987
University of Tuebingen
Field of study
  • Biology
June 1975 - September 1980
University of Tuebingen
Field of study
  • Biological Cybernetics
October 1970 - May 1975
University of Tuebingen
Field of study
  • Biology

Publications

Article
The visual system uses several signals to deduce the three-dimensional structure of the environment, including binocular disparity, texture gradients, shading and motion parallax. Although each of these sources of information is independently insufficient to yield reliable three-dimensional structure from everyday scenes, the visual system combines...
Article
Full-text available
Since Wertheimer's classic paper [1], research in motion perception has been concerned with the study of visual illusions such as phi-motion. Various phenomena of this type are easy to elicit by successive changes of the light flux in spatially distinct photoreceptors, and easy to explain by the specific properties of the motion detectors, although th...
Article
Full-text available
Images of artificial and natural scenes typically contain many highlights generated by mirror-like reflection from glossy surfaces. Until recently, computational models of visual processes have tended to regard highlights as obscuring the structure of the underlying scene. The truth is that, on the contrary, highlights are rich in local geometric i...
Article
Full-text available
The precise measurement of the two-dimensional field of velocities from time-varying two-dimensional images is impossible in general. It is, however, possible to compute suitable 'optical flows' that are qualitatively similar to the velocity field in most cases. We describe a simple, parallel algorithm that computes an optical flow from sequences of rea...
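
The paper's own parallel algorithm is not reproduced here; as a purely illustrative sketch of the general idea, the following Lucas-Kanade-style snippet estimates a dense optical flow from two grayscale frames by solving a small least-squares problem over image gradients in each local window (all names and parameters are hypothetical).

```python
# Illustrative only: a generic gradient-based (Lucas-Kanade style) optical flow
# estimate between two grayscale frames. This is NOT the parallel algorithm
# described in the paper; it merely shows how an "optical flow" that
# approximates the true velocity field can be computed from image gradients.
import numpy as np

def lucas_kanade_flow(frame1, frame2, window=5):
    """Estimate per-pixel flow (u, v) by least squares over a local window."""
    frame1 = frame1.astype(np.float64)
    frame2 = frame2.astype(np.float64)
    # Spatial and temporal derivatives (simple finite differences).
    Ix = np.gradient(frame1, axis=1)
    Iy = np.gradient(frame1, axis=0)
    It = frame2 - frame1

    half = window // 2
    u = np.zeros_like(frame1)
    v = np.zeros_like(frame1)
    for y in range(half, frame1.shape[0] - half):
        for x in range(half, frame1.shape[1] - half):
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)      # local gradient matrix
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:       # skip ill-conditioned windows (aperture problem)
                u[y, x], v[y, x] = np.linalg.solve(ATA, -A.T @ it)
    return u, v
```
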
Article
Full-text available
The interaction between depth perception and object recognition has important implications for the nature of mental object representations and models of hierarchical organization of visual processing. It is often believed that the computation of depth influences subsequent high-level object recognition processes, and that depth processing is an ear...
Conference Paper
Full-text available
Disorientation, nausea, and vomiting of passengers that result from vehicle vibrations are characterized as kinetosis or motion sickness. It is mainly the low-frequency movements of these vehicles that contribute to the kinetosis of passengers. Frequencies that provoke kinetosis lie within the rigid-body flight characteristics of helicopters, which...
Article
Full-text available
In dynamic driving simulators, the experience of operating a vehicle is reproduced by combining visual stimuli generated by graphical rendering with inertial stimuli generated by platform motion. Due to inherent limitations of the platform workspace, inertial stimulation is subject to shortcomings in the form of missing cues, false cues, and/or sca...
Article
Full-text available
Previous literature suggests a relationship between individual characteristics of motion perception and the peak frequency of motion sickness sensitivity. Here, we used well-established paradigms to relate motion perception and motion sickness on an individual level. We recruited 23 participants to complete a two-part experiment. In the first part,...
Article
Full-text available
Illusory self-motion often provokes motion sickness, which is commonly explained in terms of an inter-sensory conflict that is not in accordance with previous experience. Here we address the influence of cognition in motion sickness and show that such a conflict is not provocative when the observer believes that the motion illusion is indeed actual...
Article
Full-text available
Percepts of verticality are thought to be constructed as a weighted average of multisensory inputs, but the observed weights differ considerably between studies. In the present study, we evaluate whether this can be explained by differences in how visual, somatosensory and proprioceptive cues contribute to representations of the Head In Space (HIS)...
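
For readers unfamiliar with the "weighted average of multisensory inputs" idea, here is a minimal sketch of reliability-weighted (maximum-likelihood) cue combination under an assumption of independent Gaussian cue noise; the cue labels and numbers are hypothetical and not taken from the study.

```python
# Illustrative sketch of reliability-weighted cue combination for a verticality
# estimate, assuming independent Gaussian noise on each cue. Cue names and
# numbers are hypothetical, not values from the study.
import numpy as np

def combine_cues(estimates, sigmas):
    """Weighted average with weights inversely proportional to cue variance."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = precisions / precisions.sum()
    combined = np.dot(weights, estimates)
    combined_sigma = np.sqrt(1.0 / precisions.sum())
    return combined, combined_sigma

# Example: visual, somatosensory, and vestibular estimates of tilt (degrees).
tilt, sd = combine_cues(estimates=[8.0, 2.0, 4.0], sigmas=[2.0, 6.0, 4.0])
print(f"combined tilt: {tilt:.1f} deg, sd: {sd:.1f} deg")
```
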
Article
Full-text available
The risk of motion sickness is considerably higher in autonomous vehicles than it is in human-operated vehicles. Their introduction will therefore require systems that mitigate motion sickness. We investigated whether this can be achieved by augmenting the vehicle interior with additional visualizations. Participants were immersed in motion simulat...
Conference Paper
Full-text available
The serial introduction of passive and active anti-vibration means leads primarily to the reduction of the vibration levels at blade passage frequencies Nb/rev. Consequently, other, previously unnoticed, sources of vibration are perceived by rotorcraft occupants. Therefore, a comprehensive vibration assessment metric is required to characterize th...
Article
Full-text available
The goal of new adaptive technologies is to allow humans to interact with technical devices, such as robots, in natural ways akin to human interaction. Essential for achieving this goal is an understanding of the factors that support natural interaction. Here, we examined whether human motor control is linked to the visual appearance of the inter...
Article
Full-text available
To determine own upright body orientation the brain creates a sense of verticality by a combination of multisensory inputs. To test whether this process is affected by aging, we placed younger and older adults on a motion platform and systematically tilted the orientation of their visual surroundings by using an augmented reality setup. In a series...
Article
Full-text available
Even when we are wearing gloves, we can easily detect whether a surface that we are touching is sticky or not. However, we know little about the similarities between brain activations elicited by this glove contact and by direct contact with our bare skin. In this functional magnetic resonance imaging (fMRI) study, we investigated which brain regio...
Article
Full-text available
Inertial motions may be defined in terms of acceleration and jerk, the time-derivative of acceleration. We investigated the relative contributions of these characteristics to the perceived intensity of motions. Participants were seated on a high-fidelity motion platform, and presented with 25 above-threshold 1 s forward (surge) motions that had acc...
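
Since jerk is simply the time-derivative of acceleration, the relationship can be illustrated with a few lines of numerical differentiation; the 1 s surge profile below is hypothetical and not one of the experimental stimuli.

```python
# Illustrative sketch: jerk as the time-derivative of acceleration, computed
# numerically for a hypothetical 1 s surge profile (values are made up).
import numpy as np

dt = 0.01                                   # sample period (s)
t = np.arange(0.0, 1.0, dt)                 # 1 s motion
accel = 1.5 * np.sin(np.pi * t)             # smooth forward acceleration (m/s^2)
jerk = np.gradient(accel, dt)               # numerical derivative (m/s^3)

print(f"peak acceleration: {accel.max():.2f} m/s^2")
print(f"peak jerk:         {np.abs(jerk).max():.2f} m/s^3")
```
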
Article
Haptic support systems have been widely used for supporting human operators when performing a manual control task. These systems are commonly designed to track known target trajectories. However, the trajectory to track is not known in many realistic cases. For instance, the pilot-intended trajectory is not known beforehand when considering a helic...
Article
Full-text available
Spatial orientation relies on a representation of the position and orientation of the body relative to the surrounding environment. When navigating in the environment, this representation must be constantly updated taking into account the direction, speed, and amplitude of body motion. Visual information plays an important role in this updating pro...
Article
Full-text available
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertai...
Article
Full-text available
Distinguishing animate from inanimate objects is fundamental for social perception in humans and animals. Visual motion cues indicative of self-propelled object motion are useful for animacy perception: they can be detected over a wide expanse of visual field, at distance and in low visibility conditions, can attract attention and provide clues abo...
Article
Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging, however, due to the need for expensive 3D scanners, privacy issues with photographs or video...
Preprint
Full-text available
Autonomous motion capture (mocap) systems for outdoor scenarios involving flying or mobile cameras rely on i) a robotic front-end to track and follow a human subject in real-time while he/she performs physical activities, and ii) an algorithmic back-end that estimates full body human pose and shape from the saved videos. In this paper we present a...
Article
Full-text available
Previous human fMRI studies have reported activation of somatosensory areas not only during actual touch, but also during touch observation. However, it has remained unclear how the brain encodes visually evoked tactile intensities. Using an associative learning method, we investigated neural representations of roughness intensities evoked by (a) t...
Article
Full-text available
The neural substrates of tactile roughness perception have been investigated by many neuroimaging studies, while relatively little effort has been devoted to the investigation of neural representations of visually perceived roughness. In this human fMRI study, we looked for neural activity patterns that could be attributed to five different roughne...
Article
Full-text available
Current neuroscientific models of bodily self-consciousness (BSC) argue that inaccurate integration of sensory signals leads to altered states of BSC. Indeed, using virtual reality technology, observers viewing a fake or virtual body while being exposed to tactile stimulation of the real body can experience illusory ownership over, and mislocalizat...
Article
Full-text available
Optimization-based motion cueing algorithms based on model predictive control have been recently implemented to reproduce the motion of a car within the limited workspace of a driving simulator. These algorithms require a reference of the future vehicle motion to compute a prediction of the system response. Assumptions regarding the future referenc...
Article
Full-text available
Full-field visual rotation around the vertical axis induces a sense of self-motion (vection), optokinetic nystagmus (OKN), and, eventually, also motion sickness (MS). If the lights are then suddenly switched off, optokinetic afternystagmus (OKAN) occurs. This is due to the discharge of the velocity storage mechanism (VSM), a central integrative net...
Article
Full-text available
Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some, but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and v...
Article
Full-text available
In environments where orientation is ambiguous, the visual system uses prior knowledge about lighting coming from above to recognize objects, determine which way is up, and reorient the body. Here we investigated the extent to which assumed light-from-above preferences are affected by body orientation and the orientation of the retina re...
Article
Full-text available
A growing number of studies investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is indecisive. Our study aimed at further enhancing the understanding of horizontal and vert...
Conference Paper
Full-text available
Today, simulators are achieving levels of complexity and cost that are comparable to those of the aircraft they should replace. For this reason, questions have been raised, in both the technical and training communities, on the required level of simulation fidelity for effective pilot training. Computer Based Trainers (CBTs) are not currently consid...
Article
Objects learned within single enclosed spaces (e.g., rooms) can be represented within a single reference frame. In contrast, the representation of navigable spaces (multiple interconnected enclosed spaces) is less well understood. In this study we examined different levels of integration within memory (local, regional, global) when learning object...
Article
In this paper, we propose and experimentally verify a distributed formation control algorithm for a group of multirotor unmanned aerial vehicles (UAVs). The algorithm brings the whole group of UAVs simultaneously to a prescribed submanifold that determines the formation shape in an asymptotically stable fashion in two- and three-dimensional environ...
Article
Full-text available
The object orientation effect describes shorter perceived distances to the front than to the back of oriented objects. The present work extends previous studies in showing that the object orientation effect occurs not only for egocentric distances between an observer and an object, but also for exocentric distances, that are between two oriented ob...
Article
Most studies on spatial memory refer to the horizontal plane, leaving an open question as to whether findings generalize to vertical spaces where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: the one bas...
Article
Full-text available
A hallmark of human social behavior is the effortless ability to relate one’s own actions to that of the interaction partner, e.g., when stretching out one’s arms to catch a tripping child. What are the behavioral properties of the neural substrates that support this indispensable human skill? Here we examined the processes underlying the ability t...
Chapter
In this paper we present preliminary, experimental results of an Adaptive Super-Twisting Sliding-Mode Controller with time-varying gains for redundant Cable-Driven Parallel Robots. The sliding-mode controller is paired with a feed-forward action based on dynamics inversion. An exact sliding-mode differentiator is implemented to retrieve the velocit...
Preprint
Full-text available
Recent work indicates that the central nervous system assesses the causality of visual and inertial information in the estimation of qualitative characteristics of self-motion and spatial orientation, and forms multisensory perceptions in accordance with the outcome of these assessments. Here, we extend the assessment of this Causal Inference (CI)...
Article
Full-text available
Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report eviden...
Article
In motion simulation, motion input scaling is often applied to deal with the limited motion envelopes of motion simulators. In this research, the time-varying effects of scaling the lateral specific force up or down during passive curve driving in a car driving simulation are investigated through a simulator experiment. It is concluded that lateral...
Conference Paper
Full-text available
In this paper we present the implementation of a model predictive controller (MPC) for real-time control of a motion simulator based on a serial robot with 8 degrees of freedom. The goal of the controller is to accurately reproduce six reference signals simultaneously (the accelerations and angular velocities in the body frame of reference) taken f...
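
The controller in the paper tracks six reference signals on an 8-degree-of-freedom robot; as a much-reduced illustration of the receding-horizon structure (predict over a horizon, optimize the input sequence, apply only the first input), here is an unconstrained linear MPC sketch for a 1-D double integrator. The plant, weights, and horizon are hypothetical and not taken from the paper.

```python
# Illustrative sketch of unconstrained linear MPC reference tracking with a
# 1-D double-integrator plant (position/velocity); not the controller or model
# from the paper, just the basic receding-horizon structure.
import numpy as np

dt, N = 0.05, 20                                  # sample time, horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])             # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])

# Build batch prediction matrices: X = F x0 + G U over the horizon.
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Q = np.kron(np.eye(N), np.diag([10.0, 1.0]))      # state tracking weights
R = 0.01 * np.eye(N)                              # input effort weight

def mpc_step(x0, x_ref):
    """Return the first optimal input for tracking x_ref over the horizon."""
    e = x_ref.reshape(-1) - F @ x0                # predicted tracking error with U = 0
    U = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ e)
    return U[0]

x = np.array([0.0, 0.0])
ref = np.tile([1.0, 0.0], N)                      # hold position 1, velocity 0
for _ in range(100):
    u = mpc_step(x, ref)
    x = A @ x + B.flatten() * u
print(f"final position: {x[0]:.3f}")
```
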
Conference Paper
Full-text available
Take-over requests (TORs) in highly automated vehicles are cues that prompt users to resume control. TORs, however, are often evaluated in non-moving driving simulators. This ignores the role of motion, an important source of information for users who have their eyes off the road while engaged in non-driving related tasks. We ran a user study in a m...
Conference Paper
Full-text available
Design recommendations for notifications are typically based on user performance and subjective feedback. In comparison, there has been surprisingly little research on how designed notifications might be processed by the brain for the information they convey. The current study uses EEG/ERP methods to evaluate auditory notifications that were design...
Article
Full-text available
The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, combining estimates of the upright provided by the visual system and the body's inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signal...