Marc O Ernst
Ulm University | UULM · Department of Psychology

Prof.

About

274
Publications
43,491
Reads
12,853
Citations
Citations since 2016
80 Research Items
6,825 Citations
[Chart: citations per year, 2016–2022]
Additional affiliations
April 2016 - present
Ulm University
Position
  • Professor
October 2013 - April 2016
Center for Interdisciplinary Research (ZiF), Bielefeld
Position
  • Managing Director
January 2011 - April 2016
Bielefeld University
Position
  • Professor

Publications

Publications (274)
Article
Full-text available
Being able to perform adept goal-directed actions requires predictive, feed-forward control, including a mapping between the visually estimated target locations and the motor commands reaching for them. When the mapping is perturbed, e.g., due to muscle fatigue or optical distortions, we are quickly able to recalibrate the sensorimotor system to up...
Article
Full-text available
Early visual deprivation typically results in spatial impairments in other sensory modalities. It has been suggested that, since vision provides the most accurate spatial information, it is used for calibrating space in the other senses. Here we investigated whether sight restoration after prolonged early onset visual impairment can lead to the dev...
Article
Multiple cues contribute to the discrimination of slip motion speed by touch. In our previous study, we demonstrated that masking vibrations at various frequencies impaired the discrimination of speed. In this study, we extended the previous results to evaluate this phenomenon on a smooth glass surface, and for different values of contact force and...
Article
Full-text available
With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies comp...
Article
Full-text available
Over the last few years online platforms for running psychology experiments beyond simple questionnaires and surveys have become increasingly popular. This trend has especially increased after many laboratory facilities had to temporarily avoid in-person data collection following COVID-19-related lockdown regulations. Yet, while offering a valid al...
Preprint
Full-text available
Being able to perform goal-directed actions requires predictive, feed-forward control, including a mapping between the visually estimated target locations and the motor commands reaching for them. When the mapping is perturbed, e.g., due to muscle fatigue or optical distortions, we are quickly able to recalibrate the sensorimotor system to update t...
Article
Full-text available
Adult humans make effortless use of multisensory signals and typically integrate them in an optimal fashion [1]. This remarkable ability takes many years for normally sighted children to develop [2,3]. Would individuals born blind or with extremely low vision still be able to develop multisensory integration later in life when surgically treated for sigh...
Article
During a smooth pursuit eye movement of a target stimulus, a briefly flashed stationary background appears to move in the direction opposite to the eye's motion, an effect known as the Filehne illusion. Similar illusions occur in audition, in the vestibular system, and in touch. Recently, we found that the movement of a surface perceived from tact...
Article
Full-text available
Whenever we grasp and lift an object, our tactile system provides important information on the contact location and the force exerted on our skin. The human brain integrates signals from multiple sites for a coherent representation of object shape, inertia, weight, and other material properties. It is still an open question whether the control of g...
Preprint
Full-text available
Early studies have shown that the localization of a sound source in the vertical plane can be accomplished with only a single ear and was thus assumed to be based on monaural spectral cues. Such cues consist of notches and peaks in the perceived spectrum, which vary systematically with the elevation of sound sources. This poses several problems to the...
Article
Full-text available
In our daily life, we often interact with objects using both hands, raising the question of to what extent information is shared between the hands. It has, for instance, been shown that curvature adaptation aftereffects can transfer from the adapted hand to the non-adapted hand. However, this transfer only occurred for dynamic exploration,...
Article
Full-text available
Adaptation to the statistics of sensory inputs is an essential ability of neural systems and extends their effective operational range. A broad operational range makes it possible to react to sensory inputs of different granularities and is thus a crucial factor for survival. The computation of auditory cues for spatial localization of sound sources, par...
Article
Full-text available
While interacting with the world our senses and nervous system are constantly challenged to identify the origin and coherence of sensory input signals of various intensities. This problem becomes apparent when stimuli from different modalities need to be combined, e.g., to find out whether an auditory stimulus and a visual stimulus belong to the sa...
Preprint
Full-text available
Visual landmarks provide crucial information for human navigation. But what defines a landmark? To be uniquely recognized, a landmark should be distinctive and salient, while providing precise and accurate positional information. It should also be permanent; e.g., to find your car again, a nearby church seems a better landmark than a truck,...
Preprint
Full-text available
When we act, our brains register the visual feedback of the outcome of our actions only approximately 150 ms after sending the motor commands to the muscles. By anticipating the sensory consequences of our actions, temporal prediction mechanisms help us to accurately perform time-critical motor actions, such as catching a ball, and to experience th...
Preprint
Full-text available
The development of spatially registered auditory maps in the external nucleus of the inferior colliculus (ICx) in young owls and their maintenance in adult animals is visually guided and evolves dynamically. To investigate the underlying neural mechanisms of this process, we developed a model of stabilized neoHebbian correlative learning which is a...
Preprint
Full-text available
Adaptation to the statistics of sensory inputs is an essential ability of neural systems and extends their effective operational range. A broad operational range makes it possible to react to very different sensory inputs and is thus a crucial factor for survival. Adaptation mechanisms are found at many stages of sensory pat...
Article
Full-text available
In vision, the perceived velocity of a moving stimulus differs depending on whether we pursue it with the eyes or not: A stimulus moving across the retina with the eyes stationary is perceived as being faster compared with a stimulus of the same physical speed that the observer pursues with the eyes, while its retinal motion is zero. This effect is...
Article
Full-text available
Feedback is essential for skill acquisition, as it helps to identify and correct performance errors. Nowadays, Virtual Reality can be used as a tool to guide motor learning and to provide innovative types of augmented feedback that exceed real-world opportunities. Concurrent feedback has been shown to be especially beneficial for novices. Moreover, w...
Preprint
Full-text available
Motion encoding in touch relies on multiple cues, such as displacements of traceable texture elements, friction-induced vibrations, and gross fingertip deformations by shear force. We evaluated the role of deformation and vibration cues in tactile speed discrimination. To this end, we tested the discrimination of speed of a moving smooth glass plat...
Article
Full-text available
The plasticity of the human nervous system allows us to acquire an open-ended repository of sensorimotor skills in adulthood, such as the mastery of tools, musical instruments or sports. How novel sensorimotor skills are learned from scratch is yet largely unknown. In particular, the so-called inverse mapping from goal states to motor states is und...
Data
Reach performance for different test targets (human population). For each individual, the reach-endpoints for the median postures across the last three test blocks are plotted. The ellipses are the across-participant covariance ellipses. (TIFF)
Data
Friedman ANOVA to test for differences in the relative use of the q2 joint in the first Principal Component (contrast conditions H1 and H2) across Time (8 levels). (DOCX)
Data
Individual learning curves. Mean distance $\bar{d}$ across training blocks. Odd-numbered participants started in Condition H1, even-numbered participants started in Condition H2. (TIFF)
Data
Average reach performance for different test targets across time. Lines represent across individual mean. Left: condition H1, right: condition H2. Top: simulated agents, bottom: human participants. Solid lines represent interpolation targets, dotted lines represent extrapolation targets. The brightness represents distance from the home posture (the...
Data
ANOVA table for the two-way repeated-measures ANOVA (rmANOVA) testing for differences in the mean distance to the targets (error) with factors Time (block number 1–24) and Condition (H1, H2). SSq. stands for the sum of squares, DF for degrees of freedom, Mean Sq. for the mean squared error, F for the F statistic, and p-value for the probability that the...
Data
ANOVA table for the two-way repeated-measures ANOVA (rmANOVA) testing for differences in the summed variance in motor space with factors Time (8 levels) and Condition (H1, H2). SSq. stands for the sum of squares, DF for degrees of freedom, Mean Sq. for the mean squared error, F for the F statistic, and p-value for the probability that the null hypothesis...
Data
A clip of the task as seen through the eyes of a participant (training phase). (MP4)
Data
A .zip file with all data reported in the manuscript. The 220 .json files contain the data (20 for the participants, 200 for simulation runs), the text file legend_json.txt explains the data format. (ZIP)
Data
Reach performance for different test targets (agent population). For each individual, the reach-endpoints during the last test block are plotted. The ellipses are the across-agent covariance ellipses. (TIFF)
Data
ANOVA table for the two-way repeated-measures ANOVA (rmANOVA) testing for differences in the relative distance of solutions to the two home positions with factors Time (8 levels) and Condition (H1, H2). SSq. stands for the sum of squares, DF for degrees of freedom, Mean Sq. for the mean squared error, F for the F statistic, and p-value for the probabilit...
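A minimal sketch of how such a two-way repeated-measures ANOVA can be computed with statsmodels' AnovaRM. The long-format data frame, column names, and values below are hypothetical placeholders, not the study's actual data:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per participant x time level x condition,
# with the dependent variable (e.g., mean distance to the target) in column 'error'.
rng = np.random.default_rng(0)
rows = [
    {"participant": p, "time": t, "condition": c,
     "error": 10 - 0.5 * t + rng.normal(scale=1.0)}
    for p in range(1, 11) for t in range(1, 9) for c in ("H1", "H2")
]
data = pd.DataFrame(rows)

# Two within-subject factors, Time and Condition, as in the tables above.
result = AnovaRM(data, depvar="error", subject="participant",
                 within=["time", "condition"]).fit()
print(result.anova_table)  # F statistic, degrees of freedom, p-value per effect
```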
Article
To estimate object speed with respect to the self, retinal signals must be summed with extraretinal signals that encode the speed of eye and head movement. Prior work has shown that differences in perceptual estimates of object speed based on retinal and oculomotor signals lead to biased percepts such as the Aubert-Fleischl phenomenon (AF), in whic...
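One way to make this summation explicit is the following linear sketch (an assumed simplification, not the paper's specific model), in which an extraretinal gain below the retinal gain yields the Aubert-Fleischl underestimation during pursuit:

```latex
% Linear-summation sketch (assumed form) of head-centred object speed:
%   \hat{v}_r = estimated retinal slip speed, \hat{v}_e = estimated eye speed,
%   g_r, g_e  = gains on the two signals.
\hat{v}_{\mathrm{object}} = g_r \hat{v}_r + g_e \hat{v}_e
% Fixation: \hat{v}_e \approx 0 \;\Rightarrow\; \hat{v}_{\mathrm{object}} \approx g_r v
% Pursuit:  \hat{v}_r \approx 0 \;\Rightarrow\; \hat{v}_{\mathrm{object}} \approx g_e v
% With g_e < g_r, the pursued target appears slower (Aubert-Fleischl phenomenon).
```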
Article
Full-text available
In environments where orientation is ambiguous, the visual system uses prior knowledge about lighting coming from above to recognize objects, determine which way is up, and reorient the body. Here we investigated the extent to which assumed light-from-above preferences are affected by body orientation and the orientation of the retina re...
Article
In a basic cursor-control task, the perceived positions of the hand and the cursor are biased towards each other. We recently found that this phenomenon conforms to the reliability-based weighting mechanism of optimal multisensory integration. This indicates that optimal integration is not restricted to sensory signals originating from a single sou...
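A minimal sketch of the reliability-based weighting rule referenced here, i.e., standard maximum-likelihood cue combination under assumed independent Gaussian noise; the numeric values are hypothetical illustrations:

```python
import numpy as np

def integrate(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) combination of redundant estimates.

    Each cue is weighted by its reliability 1/sigma^2; the combined estimate has a
    smaller variance than any single cue.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined_estimate = np.dot(weights, estimates)
    combined_sigma = np.sqrt(1.0 / reliabilities.sum())
    return combined_estimate, combined_sigma

# Hypothetical hand and cursor position estimates (cm) with different noise levels.
pos, sigma = integrate(estimates=[10.0, 12.0], sigmas=[1.0, 2.0])
print(pos, sigma)  # biased towards the more reliable estimate: 10.4, ~0.89
```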
Conference Paper
Spatial navigation tasks often require us to memorize the relation between visual landmarks and the goal. Landmarks might only provide ambiguous or unreliable cues, making it necessary to make predictions about landmark reliability when learning a new location. These predictions might be enhanced with prior knowledge. How does anticipation about la...
Article
Full-text available
Memories of places often include landmark cues, i.e., information provided by the spatial arrangement of distinct objects with respect to the target location. To study how humans combine landmark information for navigation, we conducted two experiments in which participants were either provided with auditory landmarks while walking in a large...
Article
Full-text available
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattl...
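The physical relation behind this cue can be stated compactly under an assumed idealization (a corrugation with a single spatial period):

```latex
% Rattle rate generated by a corrugated surface of spatial period \lambda
% sliding at speed v (idealized, single-period corrugation):
f_{\mathrm{AM}} = \frac{v}{\lambda}
```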
Article
Full-text available
Sensory information about the state of the world is generally ambiguous. Understanding how the nervous system resolves such ambiguities to infer the actual state of the world is a central quest for sensory neuroscience. However, the computational principles of perceptual disambiguation are still poorly understood: What drives perceptual decision-ma...
Data
Effect of noise and previous response on the luminance kernel. Classification images for luminance calculated separately for sound presence/absence and for trials following a stream/bounce response (aggregate observer) in Experiment 1. The bottom-right panel corresponds to the classification image presented in Fig 2. (EPS)
Data
Disambiguated stimuli. To better appreciate the effect, it is recommended to look at one display at a time (either the upper or lower one), while covering the other: the upper display should appear to bounce more often than the lower display. We recommend using VLC player. (AVI)
Data
Data matrix. The first column contains the ID of the observer (CP = 1; CG = 2; VL = 3). The second column indicates the presence/absence of the sound (1 = present). The third column contains the response of the previous trial (1 = bounce). The fourth column indicates the order of the trial within each block (given that we were interested in the eff...
Data
Alternative model. Luminance and contrast kernels calculated from the alternative model. This model is sensitive to contrast but not to motion, and it is unable to replicate the empirical classification images (see Fig 2B). (TIF)
Data
Effect of noise and previous response on the contrast kernel. Classification images for contrast calculated separately for sound presence/absence and for trials following a stream/bounce response (aggregate observer) in Experiment 1. The bottom-right panel corresponds to the classification image presented in Fig 2. (EPS)
Data
The motion energy model. The stimulus matrix is convolved with a series of spatiotemporally tuned filters, whose outputs are combined, squared, and normalized to calculate leftward and rightward motion energy. The rightward and leftward energy matrices are subtracted to compute the opponent energy matrix (see Fig 3). (EPS)
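A compact sketch of this opponent motion-energy computation using Adelson-Bergen-style quadrature filter pairs. The filter parameters and the drifting-bar stimulus below are arbitrary illustrations, not those used in the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def st_gabor(direction, phase, size=31, sigma=6.0, f=0.08):
    """Space-time Gabor tuned to leftward (-1) or rightward (+1) motion."""
    x = np.arange(size) - size // 2
    xx, tt = np.meshgrid(x, x, indexing="ij")      # space x time
    envelope = np.exp(-(xx**2 + tt**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * f * (xx - direction * tt) + phase)
    return envelope * carrier

def opponent_energy(stimulus):
    """Rightward minus leftward motion energy of a space-time luminance matrix."""
    energies = {}
    for direction in (+1, -1):
        quad = [fftconvolve(stimulus, st_gabor(direction, ph), mode="same")
                for ph in (0.0, np.pi / 2)]        # quadrature pair
        energies[direction] = quad[0]**2 + quad[1]**2
    return energies[+1] - energies[-1]

# Hypothetical stimulus: a bright bar drifting rightward across space over time.
space, time = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
stimulus = (np.abs(space - time) < 3).astype(float)
print(opponent_energy(stimulus).sum() > 0)  # True: net rightward energy
```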
Data
Stream-bounce display. Switch the audio volume on and off to assess the effect of sound on perception; we recommend using VLC player. (AVI)
Data
Single observer analyses. Results of Experiment 1 for each individual observer (from top CP, CG, VL) and for the aggregate observer (bottom). See Fig 2 for further details. (EPS)
Data
Stimuli used in Experiment 1. A complete description of how the stimuli were generated can be found in Fig 1. Switch the audio volume on and off to assess the effect of sound on perception; we recommend using VLC player. (AVI)
Data
Stimuli used in Experiment 2. The upper display has more motion energy than the lower display. Dark and light moving bars alternate in the movie. To better appreciate the effect of motion energy on perception, it is recommended to look at one display at a time (either the upper or lower one), while covering the other: the upper display should appea...
Article
Full-text available
Because of the complex anatomy of the human hand, in the absence of external constraints a large number of postures and force combinations can be used to attain a stable grasp. Motor synergies provide a viable strategy to solve this problem of motor redundancy. In this study, we exploited the technical advantages of an innovative sensorized object...
Article
Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to...
Conference Paper
Latency between a user's movement and visual feedback is inevitable in every Virtual Reality application, as signal transmission and processing take time. Unfortunately, a high end-to-end latency impairs perception and motor performance. While it is possible to reduce feedback delay to tens of milliseconds, these delays will never completely vanish...
Article
Human touch is an inherently active sense: to estimate an object’s shape humans often move their hand across its surface. This way the object is sampled both in a serial (sampling different parts of the object across time) and parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and...
Article
Unisensory information is, of course, processed before integration with signals from other senses. Sensory inputs in the visual and auditory systems are segregated into ON and OFF pathways that respond differentially to changes in input intensity. Specifically, an increase in intensity leads to an increment in neural response in the ON pathway, and de...
Article
Spatial navigation tasks require us to memorize the location of a goal, using sensory cues from multiple sources. Information about one's position in relation to the goal comes from the kinaesthetic and proprioceptive senses of the human body. External reference points, such as landmarks or beacons, also provide information about the spatial positio...
Article
To estimate object speed in the world, retinal motion must be summed with extra-retinal signals that tell about the speed of eye and head movement. Prior work has shown that differences in perceptual estimates of retinal and oculomotor (eye) speed lead to effects such as the Aubert-Fleischl phenomenon (AF) in which pursued targets are typically per...
Conference Paper
Movement of a limb substantially decreases the intensity and sensitivity with which tactile stimuli on that limb are perceived. This movement-related tactile suppression likely interferes with performance in motor tasks that require the precise evaluation of tactile feedback, such as the adjustment of grip forces during grasping. Therefore, we hypo...
Article
Full-text available
The brain efficiently processes multisensory information by selectively combining related signals across the continuous stream of multisensory inputs. To do so, it needs to detect correlation, lag and synchrony across the senses; optimally integrate related information; and dynamically adapt to spatiotemporal conflicts across the senses. Here we sh...
Data
Auditory and visual stimuli. The first three visual and auditory stimuli correspond to the three stimuli represented in Figure 2A. Note that the actual experiment did not use a computer screen but a white, sound-transparent fabric disk back-lit by a white LED. Auditory stimuli were delivered by a speaker placed behind the fabric disc, next to t...
Data
Matlab implementation of the MCD model. See comments for instructions.
Data
Supplementary Figures 1-9, Supplementary Table 1, Supplementary Notes 1-4 and Supplementary References
Data
MCD model. The top part of the movie represents the MCD model. The luminance of the input and output units represents the strength of the input and the output, respectively. The bottom-left part of the movie represents the input (S_V(t), S_A(t)) and output (MCD_corr(t) and MCD_lag(t); Equations 4-5) signals. The moving bar represents time. As th...
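The structure described here can be sketched roughly as follows: a simplified correlation-detector sketch with assumed exponential low-pass filters and arbitrary time constants; for the actual equations, see the Matlab implementation listed above:

```python
import numpy as np
from scipy.signal import lfilter

def lowpass(signal, tau, dt=0.001):
    """First-order exponential low-pass filter (assumed filter form)."""
    alpha = dt / (tau + dt)
    return lfilter([alpha], [1, -(1 - alpha)], signal)

def mcd(s_v, s_a, tau_fast=0.05, tau_slow=0.15, dt=0.001):
    """Simplified multisensory correlation detector (illustrative sketch).

    Each subunit multiplies a fast-filtered version of one modality with a
    slow-filtered version of the other; the product of the subunits tracks
    audiovisual correlation, their difference tracks temporal lag.
    (Time constants are illustrative, not the published parameters.)
    """
    u1 = lowpass(s_v, tau_fast, dt) * lowpass(s_a, tau_slow, dt)
    u2 = lowpass(s_v, tau_slow, dt) * lowpass(s_a, tau_fast, dt)
    return u1 * u2, u2 - u1     # MCD_corr(t), MCD_lag(t)

# Hypothetical input: an auditory pulse lagging a visual pulse by 100 ms.
t = np.arange(0, 1, 0.001)
s_v = (np.abs(t - 0.40) < 0.01).astype(float)
s_a = (np.abs(t - 0.50) < 0.01).astype(float)
corr, lag = mcd(s_v, s_a)
print(corr.max(), lag.sum())
```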
Article
Full-text available
Humans, many animals, and certain robotic hands have deformable fingertip pads [1, 2]. Deformable pads have the advantage of conforming to the objects that are being touched, ensuring a stable grasp for a large range of forces and shapes. Pad deformations change with finger displacements during touch. Pushing a finger against an external surface ty...
Chapter
According to classical studies in physiology, muscle spindles and other receptors from joints and tendons provide crucial information on the position of our body and our limbs. Cutaneous cues also provide an important contribution to our sense of position. For example, it is possible to induce a vivid sensation of movement in the anesthetized finge...
Chapter
Grasping is a complex motor task which requires a fine control of the multiple degrees of freedom of the hand, in both the position and the force domain. In this chapter, we investigated the coordinated control of digit position and force in the human hand while grasping and holding a moving object. We observed a substantial variability between par...
Article
The term ‘synergy’ – from the Greek synergia – means ‘working together’. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-reha...
Article
Full-text available
Crossmodal judgments of relative timing commonly yield a non-zero point of subjective simultaneity (PSS). Here, we test whether subjective simultaneity is coherent across all pair-wise combinations of the visual, auditory, and tactile modalities. To this end, we examine PSS estimates for transitivity: if stimulus A has to be presented x ms before s...
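The transitivity prediction being tested can be stated compactly (the notation below is assumed for illustration, not taken from the paper):

```latex
% If A must lead B by PSS_{AB}, and B must lead C by PSS_{BC}, to appear
% simultaneous, then coherent (transitive) subjective simultaneity requires
PSS_{AC} = PSS_{AB} + PSS_{BC}
```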
Conference Paper
It is known since the early prism adaptation experiments that humans can adapt to spatial perturbations of visual feedback. After training with prism goggles that spatially displace the visual field, people adapt their behaviour and perception to compensate for the visuomotor mismatch. This adaptation persists for some time after the goggles are re...
Article
Full-text available
Recent studies have proposed that some cross-modal illusions might be expressed in what were previously thought of as sensory-specific brain areas. Therefore, one interesting question is whether auditory-driven visual illusory percepts respond to manipulations of low-level visual attributes (such as luminance or chromatic contrast) in the same way...
Article
Full-text available
The relative motion between the surface of an object and our fingers produces patterns of skin deformation like stretch, indentation, and vibrations. Here, we hypothesized that motion-induced vibrations are combined with other tactile cues for the discrimination of tactile speed. Specifically, we hypothesized that vibrations provide a critical cue...
Article
Full-text available
We continually move our body and our eyes when exploring the world, causing our sensory surfaces, the skin and the retina, to move relative to external objects. In order to estimate object motion consistently, an ideal observer would transform estimates of motion acquired from the sensory surface into fixed, world-centered estimates, by taking the...