MIREVI

About the lab

https://mirevi.de/

Featured projects (1)

Featured research (7)

Repository published with this contribution: https://github.com/mati3230/IndividualMotionAnalysis
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs). A significant portion of a participant's face is hidden, and facial expressions are difficult to perceive. Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware. In this paper, we propose one of the first low-cost systems for this task, which uses only free, open-source software and affordable hardware. Our approach is to track the user's face underneath the HMD with a Convolutional Neural Network (CNN) and to generate corresponding expressions with Generative Adversarial Networks (GANs) that produce RGBD images of the person's face. We use commodity hardware with low-cost extensions such as 3D-printed mounts and miniature cameras. Our approach learns end-to-end without manual intervention, runs in real time, and can be trained and executed on an ordinary gaming computer. We report evaluation results showing that our low-cost system does not achieve the same fidelity as research prototypes that use high-end hardware and closed-source software, but it is capable of creating individual facial avatars with person-specific characteristics in movements and expressions.
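
Purely for illustration, the following minimal sketch (PyTorch, with hypothetical module names and layer sizes; not the system's actual architecture) shows the kind of pipeline the abstract describes: a CNN encodes a frame from an HMD-mounted mouth camera into an expression code, and a GAN generator decodes that code into an RGBD image of the face.

import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """CNN that tracks the visible lower face and outputs an expression code (sketch)."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, code_dim)

    def forward(self, mouth_frame):              # (B, 3, H, W) RGB from the HMD-mounted camera
        x = self.features(mouth_frame).flatten(1)
        return self.head(x)                      # (B, code_dim) expression code

class RGBDGenerator(nn.Module):
    """GAN generator that decodes an expression code into an RGBD face image (sketch)."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(code_dim, 128, 4), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 4, 4, stride=2, padding=1), nn.Tanh(),  # 3 color channels + depth
        )

    def forward(self, code):
        return self.decode(code[:, :, None, None])  # (B, 4, H', W') RGBD output

encoder, generator = ExpressionEncoder(), RGBDGenerator()
frame = torch.rand(1, 3, 128, 128)               # one synthetic camera frame
rgbd = generator(encoder(frame))                 # personalized RGBD face image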
In this contribution, point clouds are segmented with the help of deep reinforcement learning (DRL). We want to create interactive virtual reality (VR) environments from point cloud scans as fast as possible. These VR environments are used for safe and immersive training for serious real-life applications such as extinguishing a fire. To create interactions in VR, the point cloud scans have to be segmented. Existing geometric and semantic point cloud segmentation approaches are not powerful enough to automatically segment point cloud scenes that consist of diverse unknown objects. Hence, we tackle this problem by treating point cloud segmentation as a Markov decision process and applying DRL. More specifically, a deep neural network (DNN) receives a point cloud as its state, estimates the parameters of a region-growing algorithm, and obtains a reward value. The point cloud scenes originate from virtual mesh scenes that were transformed into point clouds. Thus, a point-to-segment relationship exists that is used in the reward function. Moreover, the reward function is designed for our case, in which the true segments do not correspond to the assigned segments. This case results from, but is not limited to, the use of the region-growing algorithm. Several experiments with different point cloud DNN architectures, such as PointNet [13], are conducted. We show promising results for future directions in point cloud segmentation with DRL.
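
As a rough, hypothetical sketch of the Markov decision process described above (names, network sizes, and the reward function are assumptions, not the lab's implementation): a PointNet-style policy receives a point cloud as its state, predicts region-growing parameters as its action, and obtains a reward computed from the point-to-segment ground truth.

import numpy as np
import torch
import torch.nn as nn

class ParameterPolicy(nn.Module):
    """PointNet-style policy: shared per-point MLP, max pooling, parameter head (sketch)."""
    def __init__(self, n_params=2):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_params), nn.Sigmoid())

    def forward(self, points):                   # state: (N, 3) point cloud
        features = self.point_mlp(points).max(dim=0).values
        return self.head(features)               # action: e.g. angle and distance thresholds in [0, 1]

def region_growing(points, params):
    """Placeholder for a region-growing segmentation parameterized by `params`
    (the actual algorithm is omitted in this sketch)."""
    return np.zeros(len(points), dtype=int)      # dummy: one segment for all points

def reward(predicted, ground_truth):
    """Assumed overlap-based reward; the true segments need not match the assigned ones."""
    best = 0.0
    for gt in np.unique(ground_truth):
        for pr in np.unique(predicted):
            inter = np.sum((ground_truth == gt) & (predicted == pr))
            union = np.sum((ground_truth == gt) | (predicted == pr))
            best = max(best, inter / union)
    return best

policy = ParameterPolicy()
points = torch.rand(1024, 3)                     # state: a synthetic point cloud scan
params = policy(points)                          # action: region-growing parameters
segments = region_growing(points.numpy(), params.detach().numpy())
r = reward(segments, np.zeros(1024, dtype=int))  # reward from the point-to-segment ground truth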

Lab head

Christian Geiger
Department
  • Department of Media
About Christian Geiger
  • Christian Geiger currently works at the Department of Media, Hochschule Düsseldorf. His research covers Human-Computer Interaction, Computer Graphics, and Artificial Intelligence.

Members (11)

Marcel Tiator
  • Hochschule Düsseldorf
Jochen Feitsch
  • Hochschule Düsseldorf
Philipp Ladwig
  • Hochschule Düsseldorf
Daniel Drochtert
  • Hochschule Düsseldorf
Patrick Pogscheba
  • Hochschule Düsseldorf
Roman Wiche
  • Hochschule Düsseldorf
Ivana Druzetic
  • Hochschule Düsseldorf
Laurin Gerhardt
  • Hochschule Düsseldorf
Fabian Büntig
  • Not confirmed yet
Ben Fischer
  • Not confirmed yet
Anastasia Treskunov
  • Not confirmed yet
Christoph Vogel
  • Not confirmed yet
Alexander Pech
  • Not confirmed yet
Emil Gerhardt
  • Not confirmed yet
Eric J. Jansen
  • Not confirmed yet
Kester Evers
  • Not confirmed yet