TransparentHMD: Revealing the HMD
User’s Face to Bystanders
Christian Mai, Lukas Rambold, Mohamed Khamis
LMU Munich, Germany
Christian.Mai@ifi.lmu.de, Lukas.Rambold@campus.lmu.de,
Mohamed.Khamis@ifi.lmu.de
Figure 1: Our approach allows bystanders to perceive the HMD user's face. (A) shows the situation as it is today. In (B), we show a 2D image of the user's face on a smartphone screen mounted on an HMD. In (C), we show a 3D face model that is projected according to the bystander's head position, which is detected using the front-facing camera of the smartphone (the perspective mismatch in (C) is caused by the difference between the camera position and the tracked head position).
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
Copyright held by the owner/author(s).
MUM 2017, November 26–29, 2017, Stuttgart, Germany
ACM 978-1-4503-5378-6/17/11.
https://doi.org/10.1145/3152832.3157813
Abstract
While the eyes are very important in human communication, once a user puts on a head-mounted display (HMD), the face is obscured from the outside world's perspective. This leads to communication problems when bystanders approach or collaborate with an HMD user. We introduce TransparentHMD, which employs a head-coupled perspective technique to produce an illusion of a transparent HMD to bystanders. We created a self-contained system based on a mobile device mounted on the HMD with the screen facing bystanders. By tracking the relative position of the bystander using the smartphone's camera, we render an adapting perspective view in real time that creates the illusion of a transparent HMD. By revealing the user's face to bystanders, our easy-to-implement system opens up opportunities to investigate a plethora of research questions, particularly related to collaborative VR systems.
Author Keywords
Virtual Reality; Head-mounted Displays; Eyes; Face; Gaze
ACM Classification Keywords
H.5.m [Information interfaces and presentation (e.g., HCI)]:
Miscellaneous
Introduction and Related Work
The recent fall in prices of head-mounted displays (HMDs) has encouraged a wider adoption of VR applications. However, while the new HMDs promise more immersive experiences, particularly in VR applications, they hide the user's face. The human eyes and face are strong tools for human communication. They are needed, for example, to regulate conversation flow, provide feedback, communicate emotional information, communicate the nature of interpersonal relationships, and avoid distraction by restricting visual input [6]. Additionally, the face reflects emotions [2].
Figure 2: The aim of our system is
to render the user’s face on the
front of the HMD in a way that
would let bystanders perceive the
HMD as a transparent one.
In this work, we report on our implementation of TransparentHMD. We 3D-printed a mount that holds a mobile device on the outward-facing surface of an HTC Vive. The mobile device shows the user's face in a way that gives an illusion of a transparent HMD. We showcase three versions. Figure 1A shows what the interaction is like today, with nothing presented on the HMD's outward-facing surface. Figure 1B shows the first version of our approach: a static 2D illustration of the user's face. Figure 1C demonstrates the second version of our approach, which leverages a head-coupled perspective technique that lets the illustration appear differently depending on the perspective of the bystander. More specifically, we track the head of the bystander through the front-facing camera of the mounted mobile device. This information is then used to render a 3D face model projection in real time on the image plane, such that the observer has the impression of a transparent display (see Figure 2).
Previous work looked into how eye tracking can be used to animate the eyes presented on a front-facing display [3] or even fully cover the user's head to hide the HMD with a view on the virtual scene [9]. In contrast, our work looks into the use of depth illusion to improve the impression of actually looking at the HMD user's face. This way, our approach has the potential to improve communication and interaction between HMD users and bystanders. There are many situations where such interaction can be useful. For example, recent work demonstrated the benefits of including bystanders in experiences of HMD users; Gugenheimer et al. found that this increases enjoyment, presence and social interaction [8]. Additionally, there is a plethora of work about collaborative environments in which non-HMD users collaborate with HMD users. This can be in a scientific context, for example VR training where the trainee wears an HMD while the trainer uses a PC [7, 13], building 3D scenes where the designer is a non-HMD user and an HMD user perceives the scene [4, 10], and collaborative virtual environments (CVEs) [5]. It can also have a commercial background, such as presenting cars in an HMD-based configuration tool [1].
Prototype Concept and Implementation
TransparentHMD at its core describes the idea of making HMDs invisible by adding a display to the outside. It renders an augmentation of the obstructed area of the user's face as if there were no headset in the first place. For this to happen, the system actively tracks the position of the bystander to render a 3D face model projection that is adjusted to the image plane according to the bystander's perspective. This way, the bystander has the impression of a coherent scene (see Figure 2).
Design Requirements & Considerations
This system is designed to foster communication between HMD and non-HMD users. As a result, the design of the proposed method has to meet not only purely functional requirements; it should also: (1) integrate seamlessly into the visual appearance of the hardware, (2) not impose additional face obstruction, (3) be affordable and portable, ideally without additional hardware for tracking the bystander's head, allowing a fast setup and cost-effective deployment in various contexts, and (4) be independent of VR platforms and affiliated systems for a future-proof implementation.
To achieve these goals, we decided to leverage a mobile device to display the user's face. We were able to integrate it seamlessly into the HMD by using a 3D-printed holder (requirement 1). The mobile device's size is comparable to that of the HMD, hence it does not obscure the user's face further (requirement 2). Recent advancements in processing power and visual computing allow performing head tracking directly on commodity mobile devices; hence, our solution is portable and does not need additional hardware (requirement 3). Finally, being a separate platform, our system can be connected to other hardware and software by, for example, integrating an eye tracker in the HMD, allowing for interaction with the virtual scene, or communicating between the mobile device and the software running the HMD (in our case, Unity 5) (requirement 4).
Furthermore, with double-sided smartphones becoming popular (http://abcn.ws/192Nc1v), HMDs that leverage mobile devices as displays, such as Google Cardboard and Samsung Gear VR, can adopt our solution without any additional mobile devices.
Implementation
Figure 3: Our system architecture links different services with a Unity scene containing the face model. Everything is combined into one Android package.
The general Android application architecture can be seen in Figure 3 and is described in the following. For easier installation, and to prevent the issues created by having two apps running at the same time on the Android device (e.g., background apps being prevented from accessing the camera, or being slowed down or paused to save battery), we decided to produce a single installable Android package. The components are a Unity 5.6.0f3 instance that includes the 3D model of the face and the Main Camera viewport that renders the perspective view presented on the screen. The relative X and Y positions of the bystander's face in the camera picture are updated every frame by the FaceTrackerLauncher. This script also handles the activity of the FaceTrackerPlugin within our custom Android library, which starts the camera, handles its permissions, and provides the variables given by the Mobile Vision API (https://developers.google.com/vision/) included in the Google Play Services. To detect the position of the bystander's face, we use the center of the bounding box provided by the Mobile Vision API, which marks the center between the eyes, at an update rate of 30 Hz.
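
To illustrate how this tracking data can drive the rendering, the following Unity C# sketch shows a minimal bridge between the face tracker and the virtual camera. The class, method, and field names (e.g., OnFacePosition, horizontalRange) and the mapping ranges are illustrative assumptions, not the exact interface of our FaceTrackerLauncher.

using UnityEngine;

// Illustrative sketch (not the original FaceTrackerLauncher): maps the
// normalized face position delivered by the face-tracking plugin to a
// local offset of the Unity camera that renders the face model.
public class FaceTrackerBridgeSketch : MonoBehaviour
{
    public Camera faceCamera;             // camera rendering the 3D face model
    public float horizontalRange = 0.15f; // assumed maximum lateral camera offset in metres
    public float verticalRange = 0.10f;   // assumed maximum vertical camera offset in metres

    // Called once per frame with the face centre reported by the plugin,
    // given in normalized camera-image coordinates in [0, 1].
    public void OnFacePosition(float normX, float normY)
    {
        // Centre the coordinates around zero; the sign of x may need to be
        // flipped depending on whether the front camera image is mirrored.
        float x = -(normX - 0.5f) * 2f * horizontalRange;
        float y =  (normY - 0.5f) * 2f * verticalRange;

        Vector3 p = faceCamera.transform.localPosition;
        faceCamera.transform.localPosition = new Vector3(x, y, p.z);
    }
}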
The 3D model integrated into the Unity scene uses PhilipFace03 and PhilipEyes03 from the Genesis2Male model from DAZ3D (https://www.daz3d.com/), which is optimized for running on a smartphone and customized to match the impression of the user's face using DAZ Studio Pro 4.9.3. We animated the face using "RandomEyes" (http://crazyminnowstudio.com/unity-3d/lip-sync-salsa/) to generate random movements of the eyeballs, eyebrows, and eyelids. The random movements were focused around the bystander's direction in order to create the feeling of being looked at. The 3D model also includes the inner part of the HTC Vive (https://sketchfab.com/models/4cee0970fe60444ead77d41fbb052a33). Furthermore, we are able to let the avatar look in certain directions, such as downwards or upwards, on key press. The Unity scene also acts as a host for all other components on the smartphone, offering a basic settings UI for switching between the modes shown in Figure 1, rendering the model on the screen, and applying the head-coupled perspective projection based on the tracking data from the front-facing camera, which gives the bystander's position relative to the user.
In order to facilitate recreating our system, we describe our implementation of the head-coupled perspective (HCP) and the smoothing of the tracking data in the following.
Head-coupled Perspective (HCP)
An HCP cannot be achieved by the mere camera movements applied by the FaceTrackerLauncher script. Moving the camera alone does not account for the changes in the virtual camera frustum created by the detected position of the bystander's head relative to the boundaries of the smartphone's screen. When the face is observed from an angle, for example, the display in the real world is not a rectangle but a trapezoid, meaning that distances on the display further away from the eye appear compressed, while distances closer to the eye appear longer. Without adaptation, however, the virtual camera would still have a symmetrical view frustum. To achieve this perspective correction, we adapt the virtual camera's frustum during the vertex operations in Unity's rendering pipeline.
Projection in Rendering Pipeline. To create a mapping from 3D coordinates to 2D coordinates that can be displayed on a device, four steps are applied to each vertex: First, model transformations are applied to local model vertices, resulting in (x, y, z) world coordinates. Second, world coordinates are transformed to camera coordinates, where the camera is the origin of the coordinate system. Next, a projection matrix and a normalization matrix are applied to transform all coordinates inside the frustum to normalized device coordinates (NDC). To display these on the target display, the normalized coordinates are mapped to the actual viewport with a simple translation. The projection step is where changes in the perspective distortion can be made. The default projection matrix in OpenGL, and therefore in Unity ('GL_PROJECTION'), is defined by the dimensions of the camera frustum as follows:
\[
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]
where n and f are the near and far culling planes, and r (right), l (left), t (top), and b (bottom) are the offsets of the respective frustum edges from the main camera ray at the depth of the near clipping plane. The angles of the viewing volume and the depth of field are implicitly defined by the position n of the near clipping plane [12].
The C# script ObliqueFrustum applies the current projection matrix via the Camera.projectionMatrix attribute. To define all variables in the base case of a frontal camera position, a fixed height and width are retrieved from the dimensions of a proxy object that represents the opening of the HMD and the near clipping plane. The left bound on the near clipping plane is, for example, computed as l = −width/2, while the right bound is r = width/2.
To support camera movement caused by the face tracker, the frustum values and the resulting matrix are updated every frame (60 fps). For the updated values of l and r, we subtract the (horizontal) x-translation of the camera (localPosition) from the previously calculated fixed values. The variables t and b behave analogously with the vertical translation. For the distance to the near clipping plane n, TransparentHMD uses the distance to the proxy object, and for the far clipping plane f a fixed value large enough to include every vertex in the scene. The result of different projection matrices can be observed in Figure 1, where the nose is significantly distorted. When this is observed from the upper left corner, the perspective distortion of the matrix cancels out the perspective distortion of the real world, resulting in the illusion that the paper or screen has depth.
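
As a sketch of how this per-frame update can be realized in Unity, the following C# script combines the frustum bounds described above with the standard OpenGL-style off-center projection matrix. The field names (proxyWidth, proxyHeight, proxyDistance, farPlane) and their default values are assumptions for illustration; this is not the original ObliqueFrustum script.

using UnityEngine;

// Illustrative sketch of the per-frame off-axis projection update described above.
[RequireComponent(typeof(Camera))]
public class ObliqueFrustumSketch : MonoBehaviour
{
    public float proxyWidth = 0.11f;    // width of the HMD-opening proxy in metres (assumed)
    public float proxyHeight = 0.06f;   // height of the HMD-opening proxy in metres (assumed)
    public float proxyDistance = 0.05f; // camera-to-proxy distance, used as the near plane (assumed)
    public float farPlane = 100f;       // large enough to include every vertex in the scene

    Camera cam;

    void Awake() { cam = GetComponent<Camera>(); }

    // LateUpdate runs after the tracker has moved the camera for this frame.
    void LateUpdate()
    {
        // Camera offset produced by the face tracker (localPosition).
        Vector3 offset = transform.localPosition;

        float n = proxyDistance;
        float f = farPlane;
        // Shift the frustum bounds opposite to the camera translation so the
        // image plane (the smartphone screen) stays fixed in space.
        float l = -proxyWidth  / 2f - offset.x;
        float r =  proxyWidth  / 2f - offset.x;
        float b = -proxyHeight / 2f - offset.y;
        float t =  proxyHeight / 2f - offset.y;

        cam.projectionMatrix = OffAxisProjection(l, r, b, t, n, f);
    }

    // Standard OpenGL-style asymmetric (off-center) projection matrix.
    static Matrix4x4 OffAxisProjection(float l, float r, float b, float t, float n, float f)
    {
        var m = new Matrix4x4();
        m[0, 0] = 2f * n / (r - l);   m[0, 2] = (r + l) / (r - l);
        m[1, 1] = 2f * n / (t - b);   m[1, 2] = (t + b) / (t - b);
        m[2, 2] = -(f + n) / (f - n); m[2, 3] = -2f * f * n / (f - n);
        m[3, 2] = -1f;
        return m;
    }
}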
Smoothing. If the face tracking data were directly applied to the perspective correction, quick jumps in the values would result in an unnatural flow of the face animation. Hence, we smooth the face tracking data with a regulation technique, a P-filter. It can be described as

u(t) = K_p · e(t),

where t is the time, measured in frames, e(t) is the input signal (i.e., the change in the tracking data), and u(t) is the regulating signal and thus the change that is applied to the rendering. 0 < K_p ≤ 1 denotes the amplification of the filter [11].

For K_p = 1, changes are directly applied to the outgoing signal. The smaller the value, the steadier the output signal becomes. However, steadiness also results in a slower propagation of actual changes to the output. Hence, a trade-off has to be made between responsiveness and stability. In this work, we used a value of K_p = 0.05.
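
A minimal C# sketch of such a proportional filter, applied once per frame to a single tracking axis, could look as follows; the class and member names are illustrative and not taken from our implementation.

// Illustrative sketch of the proportional (P) smoothing applied to the
// tracking data: only the fraction Kp of the remaining deviation is
// propagated to the output each frame.
public class PFilterSketch
{
    readonly float kp;   // amplification, 0 < Kp <= 1; Kp = 1 passes changes through unfiltered
    float current;       // smoothed output value

    public PFilterSketch(float kp, float initialValue = 0f)
    {
        this.kp = kp;
        current = initialValue;
    }

    // Called once per frame with the raw tracking value (e.g., the target
    // x-offset of the camera); returns the smoothed value used for rendering.
    public float Update(float target)
    {
        float e = target - current; // e(t): deviation of the input from the current output
        current += kp * e;          // apply u(t) = Kp * e(t) to the output
        return current;
    }
}

// Usage: var filterX = new PFilterSketch(0.05f); float smoothedX = filterX.Update(rawX);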
Technical Limitations
Our work comes with some limitations. First, the head-coupled projection in our implementation is restricted by the field of view of the smartphone's front-facing camera. However, advances in smartphone hardware promise front-facing cameras with wider lenses, and future systems can employ a dedicated wide-angle camera (e.g., a fisheye lens). Second, our system supports only a single non-HMD user. This means that if the user is surrounded by multiple bystanders, it renders the face according to only one of them. Future systems could choose which bystander to render the user's face to depending on the bystanders' gaze; if a bystander gazes at the user, the system would adapt the animated face according to that bystander's position.
Conclusion and Future work
In this work we introduced TransparentHMD, a prototype that renders the face of HMD users to bystanders. Rather than showing a static 2D picture of the eyes as in Figure 1B, we render a 3D animation (Figure 1C) that reacts to the position of the bystander as detected by the front-facing camera of a smartphone mounted on the HMD. In future work, we intend to use eye tracking from within the HMD to animate the shown eyes accordingly. We also plan on building a complete pipeline that starts with a 3D scan of the user's face (e.g., using a Kinect) and then generates a 3D model of the face to use in TransparentHMD.
REFERENCES
1. AUDI AG. 2016. Audi Digital Illustrated - Audi VR
experience. (2016). https://audi-illustrated.com/
en/CES-2016/Audi-VR-experience
2. John Bassili. 1979. Emotion recognition: The role of
facial movement and the relative importance of upper
and lower areas of the face. Journal of Personality and
Social Psychology 37, 11 (1979), 2049–2058. DOI:
http://dx.doi.org/10.1037/0022-3514.37.11.2049
3. Liwei Chan and Kouta Minamizawa. 2017. FrontFace:
Facilitating Communication Between HMD Users and
Outsiders Using Front-facing-screen HMDs. In
Proceedings of the 19th International Conference on
Human-Computer Interaction with Mobile Devices and
Services (MobileHCI ’17). ACM, New York, NY, USA,
Article 22, 5 pages. DOI:
http://dx.doi.org/10.1145/3098279.3098548
4. Karin Coninx, Frank Van Reeth, and Eddy Flerackers.
1997. A Hybrid 2D / 3D User Interface for Immersive
Object Modeling. In Proceedings of the 1997
Conference on Computer Graphics International (CGI
’97). IEEE Computer Society, Washington, DC, USA.
http://dl.acm.org/citation.cfm?id=792756.792856
5. Thierry Duval and Cedric Fleury. 2009. An Asymmetric
2D Pointer/3D Ray for 3D Interaction Within
Collaborative Virtual Environments. In Proceedings of
the 14th International Conference on 3D Web
Technology (Web3D ’09). ACM, New York, NY, USA,
33–41. DOI:
http://dx.doi.org/10.1145/1559764.1559769
6. Maia Garau, Mel Slater, Simon Bee, and
Martina Angela Sasse. 2001. The impact of eye gaze
on communication using humanoid avatars. In
Proceedings of the SIGCHI conference on Human
factors in computing systems - CHI ’01. ACM Press,
New York, New York, USA, 309–316. DOI:
http://dx.doi.org/10.1145/365024.365121
7. Dominic Gorecky, Mohamed Khamis, and Katharina
Mura. 2017. Introduction and establishment of virtual
training in the factory of the future. International Journal
of Computer Integrated Manufacturing 30, 1 (2017),
182–190. DOI:
http://dx.doi.org/10.1080/0951192X.2015.1067918
8. Jan Gugenheimer, Evgeny Stemasov, Julian Frommel,
and Enrico Rukzio. 2017a. ShareVR: Enabling
Co-Located Experiences for Virtual Reality Between
HMD and Non-HMD Users. In Proceedings of the 2017
CHI Conference on Human Factors in Computing
Systems (CHI ’17). ACM, New York, NY, USA,
4021–4033. DOI:
http://dx.doi.org/10.1145/3025453.3025683
9. Jan Gugenheimer, Evgeny Stemasov, Harpreet
Sareen, and Enrico Rukzio. 2017b. FaceDisplay:
Enabling Multi-User Interaction for Mobile Virtual
Reality. In Proceedings of the 2017 CHI Conference
Extended Abstracts on Human Factors in Computing
Systems (CHI EA ’17). ACM, New York, NY, USA,
369–372. DOI:
http://dx.doi.org/10.1145/3027063.3052962
10. Roland Holm, Erwin Stauder, Roland Wagner, Markus
Priglinger, and Jens Volkert. 2002. A combined
immersive and desktop authoring tool for virtual
environments. In Proceedings IEEE Virtual Reality
2002. 93–100. DOI:
http://dx.doi.org/10.1109/VR.2002.996511
11. Jan Lunze. 2016. Regelungstechnik 1. Springer Berlin
Heidelberg, Berlin, Heidelberg. DOI:
http://dx.doi.org/10.1007/978-3-662-52678-1
12. Alfred Nischwitz, Max Fischer, Peter Haberäcker, and
Gudrun Socher. 2007. Computergrafik und
Bildverarbeitung. (2007), 608. DOI:
http://dx.doi.org/10.1007/978-3-8348-9190-7
13. Jauvane Oliveira, Xiaojun Shen, and Nicolas
Georganas. 2000. Collaborative Virtual Environment for
Industrial Training and e-Commerce. In IEEE VRTS.
Virtual environments and other applications of computer-generated scenes are emerging. This increases the demand for tools for efficient creation of the virtual worlds and interaction with the components in the virtual world. Introduction of immersiveness in the modeling process seems promising, but a number of issues related to immersive modeling still have to be investigated. The authors present a framework for an immersive modeling system with a hybrid 2D/3D user interface. The designer is immersed in the design space, which improves his awareness of the design objects under consideration. 2D and 3D human-computer interaction techniques are combined in the user interface to improve the efficiency of the modeler. While D interaction is natural for a number of design tasks, 2D interaction is particularly helpful for interaction with menus and dialog boxes, for precise manipulations and for editing operations that are subject to constraints