Erwan David's research while affiliated with Goethe-Universität Frankfurt am Main and other places

Publications (17)

Preprint
Full-text available
Visual search is a ubiquitous challenge in natural vision, including daily tasks such as finding a friend in a crowd or searching for a car in a parking lot. Humans rely heavily on relevant target features to perform goal-directed visual search. Meanwhile, context is of critical importance for locating a target object in complex scenes as it helps n...
Chapter
How can the needs of different users be taken into account when designing new, environmentally friendly mobility services? The conversion or new planning of transport infrastructure and infrastructural buildings usually extends over long periods of time, up to 20 years. For the success of such measures, it is very valuable to tes...
Article
Full-text available
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating, in virtual reality, on-screen gaze-contingent experiments that remove the central or peripheral field of view, and identifying visuo-motor biases spe...
Article
Full-text available
We wish to make the following correction to the published paper “Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment” [...]
Article
Full-text available
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. The segregation of the visual fields has been established primarily by on-screen experiments. We conducted a gaze-contingent experiment in virtual reality in order to test how the perceived roles of centra...
Article
Full-text available
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants lookin...
Article
Full-text available
Virtual reality (VR) headsets offer a large and immersive workspace for displaying visualizations with stereoscopic vision, as compared to traditional environments with monitors or printouts. The controllers for these devices further allow direct three-dimensional interaction with the virtual environment. In this paper, we make use of these advanta...
Article
Visual field defects are a worldwide concern, and the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying and characterizing visual field losses from gaze alone could prove crucial in the future for screening tests, rehab...
Conference Paper
Full-text available
Research on visual attention in 360° content is crucial to understand how people perceive and interact with this immersive type of content, to develop efficient techniques for processing, encoding, delivering and rendering it, and to offer a high quality of experience to end users. The availability of public datasets is essential to support an...
Article
Research on understanding how people watch/explore Virtual Reality (VR) and 360° content is crucial to develop appropriate technologies for processing, encoding, delivering and rendering media content in order to provide high-quality immersive experiences to users. In this sense, the work described in this paper aims at supporting the research on v...

Citations

... For other general gaze movement measures that might be of interest in panoramic scenes, see Bischof et al. (2020) and David et al. (2022). ...
... When the sampled information from the periphery and the fovea is discrepant (e.g., when the saccade target is displaced or exchanged with another object), the visual system can segregate pre- and postsaccadic information (e.g., Atsma et al., 2016; Demeyer et al., 2010; Laurin et al., 2021; Tas et al., 2012; Tas et al., 2021). Furthermore, in some cases, the visual system samples object information with only peripheral vision (Treisman, 1986), and visual search is surprisingly unaffected by blocking foveal vision (David et al., 2021; Nuthmann, 2014; Nuthmann & Canas-Bajo, 2022). According to the transsaccadic feature prediction mechanism (Herwig & Schneider, 2014), object recognition and visual search are supported by predictions based on previous associations of peripheral and foveal information of objects. ...
... One way to mitigate this is to mount the cables to the ceiling and use VR controllers or wireless keyboards for manual responses. It remains an interesting open question whether it makes a difference in how participants attend to VR scenes if they are seated (as in the vast majority of our studies) or standing (as in, for example, David et al., 2020, 2021). ...
... Satriadi et al. [52] proposed hierarchical multi-view layouts for geospatial data analysis in VR, where participants were observed to prefer a spherical cap layout around themselves, and the views were often reorganized during the tasks. Derived from the concept of Multiple Coordinated Views (MCV), Spur et al. [57] designed a vertical stack layout in VR while Mahmood et al. [41] proposed Multiple Coordinated Spaces (a 3D counterpart) using AR, both for geospatial data analysis. Moreover, prior works have highlighted contextual visualization and in-situ data analysis by considering the physical surroundings as a factor for virtual content placement [19]. ...
... Current efforts in creating three-dimensional geospatial visualization through immersive experiences have focused on a range of topics, from 3D visualization techniques to interaction techniques (Weise et al., 2019) to the design of embodied maps (Newbury et al., 2021). Examples of 3D visualization techniques are spherical interaction and 3D edge bundling to visualize geospatial networks (Zhang et al., 2018), structure from motion (SfM) to produce a photorealistic visualization of a volcano (Zhao, Wallgrün, et al., 2019), multiple and coordinated displays for geovisualization of different layers (Spur et al., 2020), and 3D prism maps in VR (Yang et al., 2020). For interaction techniques, Giannopoulos et al. (2017) compared a hybrid technique consisting of touch and button controls with an all-gesture technique that uses gazing and head movement. ...
... In the same vein, participants in the gaze-contingent condition exhibited fewer false-positive spacebar presses than in the full-view condition, suggesting a more conservative decision threshold and overall more passive behavior. Previous studies using similar gaze-contingent paradigms concur that fixation durations are longer when only (para)foveal information is available as compared to having a full view of the scene (Bertera & Rayner, 2000; David et al., 2019; De Winter et al., 2022; Loschky & McConkie, 2002; Nuthmann, 2014). ...
... Previous studies have explored the effect of simulated scotomas on eye movements during visual exploration (David et al., 2018, 2019; Laubrock et al., 2013; McIlreavy et al., 2012). Laubrock and colleagues characterized eye movements during visual exploration of natural scenes, using gaze-contingent displays to apply different filters to the foveal or peripheral stimulation. ...
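As an illustrative sketch only (not the cited authors' implementation), a gaze-contingent filter of this kind can be approximated by blending a low-pass-filtered copy of the current frame with the original, either inside the region around the gaze point (a simulated central scotoma) or outside it (simulated peripheral loss). The window radius, edge softness, and blur strength below are assumptions.

```python
# Hedged sketch of a gaze-contingent foveal/peripheral filter; all parameter
# values (radius, transition width, blur sigma) are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter


def gaze_contingent_frame(frame, gaze_xy, radius_px=60, mode="central_loss"):
    """frame: (H, W, C) float array; gaze_xy: (x, y) gaze position in pixels."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])

    # Soft-edged window centred on the current gaze sample (20 px transition).
    fovea = np.clip(1.0 - (dist - radius_px) / 20.0, 0.0, 1.0)[..., None]

    # Low-pass-filtered ("degraded") version of the frame.
    degraded = gaussian_filter(frame, sigma=(8, 8, 0))

    if mode == "central_loss":        # degrade what falls on the fovea
        return fovea * degraded + (1.0 - fovea) * frame
    # "peripheral_loss": keep the fovea intact, degrade everything else
    return fovea * frame + (1.0 - fovea) * degraded
```

In an actual experiment the same blend would be recomputed every frame from the latest eye-tracker sample, and the Gaussian blur could be swapped for any other filter applied to the foveal or peripheral region.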
... However, existing saliency map (a 2-D image that contains the saliency value at each pixel) prediction is mainly for perspective projection cameras, and the performance for 360° images is fairly limited [7]-[10]. To overcome this limitation, we propose a simple but effective new data augmentation method using a random rotation on a unit sphere, which deals with the scarce, strongly biased training data of existing single 360° image saliency prediction datasets [11], [12]. ...
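A minimal sketch of that kind of augmentation, assuming equirectangular images and nearest-neighbour resampling (not the authors' code; the helper name and array shapes are made up for illustration): each output pixel is mapped to a direction on the unit sphere, the inverse of a uniformly random rotation is applied, and the resulting direction is converted back to source pixel coordinates; the identical remapping is applied to the saliency map so image and ground truth stay aligned.

```python
# Hedged sketch of random-rotation-on-the-sphere augmentation for an
# equirectangular 360-degree image and its saliency map (hypothetical helper).
import numpy as np
from scipy.spatial.transform import Rotation


def random_sphere_rotation_augment(image, saliency):
    """image: (H, W, C) array; saliency: (H, W) array aligned with `image`."""
    h, w = saliency.shape
    rot = Rotation.random()                            # uniform random 3-D rotation

    # Spherical coordinates of every output pixel centre.
    jj, ii = np.meshgrid(np.arange(w), np.arange(h))   # column / row indices, (H, W)
    lon = (jj + 0.5) / w * 2.0 * np.pi - np.pi         # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (ii + 0.5) / h * np.pi         # latitude in (-pi/2, pi/2)

    # Unit direction vectors on the sphere.
    dirs = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1).reshape(-1, 3)

    # Pull-back: where does each output direction come from in the source image?
    src = rot.inv().apply(dirs)
    src_lon = np.arctan2(src[:, 1], src[:, 0])
    src_lat = np.arcsin(np.clip(src[:, 2], -1.0, 1.0))

    # Convert back to source pixel indices (nearest neighbour, wrap longitude).
    src_j = ((src_lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    src_i = np.clip(((np.pi / 2.0 - src_lat) / np.pi * h).astype(int), 0, h - 1)

    rotated_image = image[src_i, src_j].reshape(h, w, -1)
    rotated_saliency = saliency[src_i, src_j].reshape(h, w)
    return rotated_image, rotated_saliency
```

Because the augmentation only permutes pixels on the viewing sphere, the saliency annotation stays consistent with the image while the spatial bias of a small training set is spread over the full sphere.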
... For experiments, we utilize a dataset of head and eye movements in virtual reality to evaluate the proposed approach under actual conditions [43]. After the client has processed the video, it will be delivered as an adapted video to the screen. ...
... Identifying the salient areas of visual stimuli that are likely to attract viewers' attention is a challenging task due to the complex nature of cognitive behaviours in the brain (Gutiérrez et al., 2018). Researchers have made significant contributions to this field (Xu et al., 2020;Zhu et al., 2018). ...