Yvonne Jung’s research while affiliated with Darmstadt University of Applied Sciences and other places


Publications (75)


Object-Specific and Generic Difference Detection for Non-Destructive Testing Methods
  • Conference Paper

October 2024 · 7 Reads · Yvonne Jung


Visualization of deviations between different geometries using a multi-level voxel-based representation

July 2023 · 27 Reads · 1 Citation

We present an approach for visualizing deviations between a 3D printed object and its digital twin. The resulting 3D visualization allows, for instance, highlighting particularly critical sections with high deviations and annotating them accordingly. To enable this comparison, the printed object first needs to be reconstructed in 3D. However, since the original 3D model that served as the blueprint for the printer typically differs in topology from the reconstructed model, the two geometries cannot simply be compared on a per-vertex basis. To easily compare two topologically different geometries, we therefore use a multi-level voxel-based representation for both data sets. Besides using different appearance properties to show deviations, a quantitative comparison of the voxel sets based on statistical methods serves as input for the visualization. These methods are also compared to determine which best captures the shape differences and how the results differ when comparing either voxelized volumes or hulls. The VoxMesh application integrates these concepts and can persistently save the results as voxel sets, meshes, and point clouds, which can then be used either by third-party software or by VoxMesh itself to efficiently reproduce and visualize the results of the shape analysis.
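To illustrate the core idea of the voxel-level comparison, the following minimal sketch (not the VoxMesh implementation) assumes both geometries have already been voxelized onto one common occupancy grid; it computes per-voxel differences and a simple Jaccard-style overlap score, while the alignment and multi-level aspects described in the abstract are omitted. The function name and return structure are illustrative assumptions.

```python
import numpy as np

def compare_voxel_grids(grid_a: np.ndarray, grid_b: np.ndarray):
    """Compare two boolean occupancy grids sampled on the same lattice.

    grid_a, grid_b: 3D boolean arrays of identical shape, where True marks
    an occupied voxel (e.g., digital twin vs. 3D-reconstructed print).
    """
    assert grid_a.shape == grid_b.shape, "grids must share one common lattice"

    only_a = grid_a & ~grid_b      # voxels present in the twin but missing in the print
    only_b = grid_b & ~grid_a      # excess material in the print
    both = grid_a & grid_b

    union = np.count_nonzero(grid_a | grid_b)
    jaccard = np.count_nonzero(both) / union if union else 1.0

    return {
        "missing": np.count_nonzero(only_a),
        "excess": np.count_nonzero(only_b),
        "jaccard": jaccard,
        "difference_mask": only_a | only_b,   # could be color-coded in a visualization
    }

# Toy usage: two 32^3 grids that differ in a small corner region.
a = np.zeros((32, 32, 32), dtype=bool)
a[4:28, 4:28, 4:28] = True
b = a.copy()
b[24:28, 24:28, 24:28] = False
print(compare_voxel_grids(a, b)["jaccard"])
```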




Updating 3D Planning Data based on Detected Differences between Real and Planning Data of Building Interiors

January 2021 · 20 Reads · 3 Citations

This paper presents a system to determine differences between 3D reconstructed interiors and their corresponding 3D planning data, with the aim of correcting identified differences and updating the 3D planning data based on these deviations. To this end, a point-based comparison algorithm was developed with which deviations can be recognized regardless of the topology of the data used. Usually, the resolution and topology of a 3D reconstruction do not match the CAD data. Our solution overcomes this problem by segmenting and extracting the objects relevant for comparison (e.g., doors, windows) from the reconstruction and planning data separately, followed by an analysis of the proximity of these objects to connected walls within the corresponding data set. Starting from the connection points of a segmented object to its walls, adjacent spatial data is located to correct detected differences and update the 3D planning data. The quality of the results is shown in different examples that localize doors and windows to find deviations. In addition, detected differences between the planning and the measurement data are visualized and compared with the ground-truth state of the building interior.
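The topology-independent, point-based deviation check at the heart of such a comparison can be sketched roughly as follows. This is a simplified illustration, not the paper's pipeline: the segmentation of doors and windows and the wall-proximity analysis are assumed to have happened already, and the function name and threshold are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviating_points(reconstruction: np.ndarray,
                     planning: np.ndarray,
                     threshold: float = 0.05):
    """Flag reconstructed points that deviate from the planning geometry.

    reconstruction, planning: (N, 3) and (M, 3) arrays of 3D points sampled
    from the scan and from the CAD planning data, respectively.
    threshold: maximum tolerated point-to-model distance in scene units.
    Returns per-point distances and a boolean mask of deviating points.
    """
    tree = cKDTree(planning)
    distances, _ = tree.query(reconstruction)   # nearest planning point per scan point
    return distances, distances > threshold

# Toy usage: a planar wall vs. a scan with a local bump (e.g., a misplaced frame).
plan = np.random.rand(1000, 3) * [4.0, 3.0, 0.0]            # wall in the z=0 plane
scan = plan + np.random.normal(scale=0.005, size=plan.shape)
scan[:50, 2] += 0.2                                          # local deviation of 20 cm
dist, mask = deviating_points(scan, plan, threshold=0.05)
print(f"{mask.sum()} of {len(scan)} points deviate by more than 5 cm")
```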



Enhancing the AR Experience with Machine Learning Services

July 2019 · 105 Reads · 4 Citations · Kai Weber · [...] · Yvonne Jung

In this paper, we present and evaluate a web service that offers cloud-based machine learning services to improve Augmented Reality applications on mobile and web clients, with special regard to tracking quality and the registration of complex scenes that require an application-specific coordinate frame. Specifically, our service aims at reducing the camera drift that still occurs in modern AR frameworks and helps with the initial camera alignment in a known scene by estimating the absolute camera pose using a configurable context-based image segmentation in combination with an adaptive image classification. We demonstrate real-world applications that utilize our web service and evaluate the performance and accuracy of the underlying image segmentation and camera pose estimation. We also discuss the initial configuration along with the semi-automatic process of generating training data, and the training of the machine learning models for the corresponding tasks.
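As a rough illustration of how a client might talk to such a cloud service, the following Python sketch sends a JPEG camera frame to a hypothetical endpoint and expects an absolute camera pose in return. The URL, payload format, and response fields are illustrative assumptions, not the actual interface of the service described in the paper.

```python
import base64
import requests

SERVICE_URL = "https://example.org/ar-service/estimate_pose"  # hypothetical endpoint

def request_absolute_pose(jpeg_bytes: bytes, timeout: float = 2.0):
    """Send a camera frame to the (hypothetical) cloud service and return the
    estimated absolute camera pose in the application-specific coordinate
    frame, or None if the scene could not be recognized in time."""
    payload = {"image": base64.b64encode(jpeg_bytes).decode("ascii")}
    try:
        response = requests.post(SERVICE_URL, json=payload, timeout=timeout)
        response.raise_for_status()
    except requests.RequestException:
        return None   # fall back to purely local tracking for this frame
    data = response.json()
    # Assumed response schema: a 4x4 camera pose as nested lists plus a confidence score.
    return data.get("pose"), data.get("confidence")
```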



Improving mobile MR applications using a cloud-based image segmentation approach with synthetic training data

June 2018 · 39 Reads · 3 Citations

In this paper, we show how the quality of augmentation in mobile Mixed Reality applications can be improved using a cloud-based image segmentation approach with synthetic training data. Many modern Augmented Reality frameworks are based on visual inertial odometry on mobile devices and therefore have limited access to tracking hardware (e.g., a depth sensor). Consequently, tracking still suffers from drift, which makes it difficult to use in cases that require higher precision. To improve tracking quality, we propose a cloud tracking approach that uses machine-learning-based image segmentation to recognize known objects in a real scene, which allows us to estimate a precise camera pose. Augmented Reality applications that utilize our web service can use the resulting camera pose to correct drift from time to time, while still using local tracking between key frames. Moreover, the device's position in the real world when starting the application is usually used as the reference coordinate system. We therefore simplify the authoring of MR applications significantly thanks to a well-defined coordinate system that is context-based and not dependent on the starting position of a user. We present all steps from web-based initialization through the generation of synthetic training data to usage in production. In addition, we describe the underlying algorithms in detail. Finally, we show a mobile Mixed Reality application that is based on this novel approach and discuss its advantages.
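The idea of correcting drift at key frames while keeping local tracking in between can be sketched as a rigid correction transform: at a key frame, the cloud-estimated absolute pose and the local (drifting) pose describe the same physical camera, so their relation maps the local frame into the absolute one. The sketch below, using plain 4x4 homogeneous matrices, is a simplified assumption of how such a correction could work, not the paper's actual algorithm.

```python
import numpy as np

def correction_from_keyframe(pose_cloud: np.ndarray, pose_local: np.ndarray) -> np.ndarray:
    """Compute a rigid correction that maps the local (drifting) tracking frame
    into the absolute, context-based coordinate system at a key frame.

    Both arguments are 4x4 homogeneous camera-to-world matrices describing the
    same physical camera position: pose_cloud from the cloud service,
    pose_local from the on-device tracker.
    """
    return pose_cloud @ np.linalg.inv(pose_local)

def corrected_pose(correction: np.ndarray, pose_local_now: np.ndarray) -> np.ndarray:
    """Re-express a later local pose in the absolute coordinate system."""
    return correction @ pose_local_now

# Toy usage: local tracking drifted by 3 cm along x relative to the true pose.
true_pose = np.eye(4)
drift = np.eye(4)
drift[0, 3] = 0.03
local_pose = drift @ true_pose
C = correction_from_keyframe(true_pose, local_pose)
print(np.allclose(corrected_pose(C, local_pose), true_pose))  # True
```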



Citations (51)


... For the use case of construction site monitoring, Dietze et al. presented a collaborative web-based XR system, focusing mainly on the automatic detection of deviations during construction [10]. Other interesting approaches for various types of XR collaboration in a construction use case are [11] and [12], which consider a similar use case (furniture and interior design) from different perspectives: while Zhang et al. focus mostly on the transparency of changes in the environment, including asynchronous ones, Prabhakaran concentrates mostly on the real-time interactions between users. ...

Reference:

Exploring the Role of XR Collaboration in the Construction Industry
Supporting Web-based Collaboration for Construction Site Monitoring
  • Citing Conference Paper
  • November 2021

... This correlates with the fact that due to advances in augmented reality (AR), 3d sensing, and 3d scanning, many new mobile devices, such as the Samsung S21 Ultra or Apple's iPad, have advanced technologies for acquiring 3d data integrated into their product lineup. In terms of mobile devices, the technologies used (e.g., SfM, ToF, LiDAR) primarily serve to enrich a real scene with digital content, but are also used in the field of 3d reconstruction, as can be seen in the example of Microsoft's HoloLens mixed reality headset or Apple's LiDAR sensor, which are used, for example, in the context of digital construction monitoring and surveying [WWWH21,DGJ21]. ...

Updating 3D Planning Data based on Detected Differences between Real and Planning Data of Building Interiors
  • Citing Conference Paper
  • January 2021

... The acquisition, analysis and processing of 3d data based on depth data is still a current research area, as examples in the fields of visual computing like digital construction monitoring [DGJ20] show. This correlates with the fact that due to advances in augmented reality (AR), 3d sensing, and 3d scanning, many new mobile devices, such as the Samsung S21 Ultra or Apple's iPad, have advanced technologies for acquiring 3d data integrated into their product lineup. ...

Visualization of Differences between Spatial Measurements and 3D Planning Data
  • Citing Conference Paper
  • November 2020

... Moreover, an implementation of ML Reinforcement Learning algorithms which effectively reduce the mobile AR computing latency in a channel fading environment is proposed [13], but it has no impact on the elasticity of AR to frontal scene changes. The introduction of cloud-based ML services to improve mobile AR applications orients around reducing camera drift using context-based image segmentation [20]. This research points more to a combination of Computer Vision algorithms rather than a CML approach as our methods. ...

Enhancing the AR Experience with Machine Learning Services
  • Citing Conference Paper
  • July 2019

... However, geometric simplification may also affect the resolution of the geometries [16]. The second is data streaming [17][18][19]. To adopt data streaming technology, both the data format and the visualization tools must be carefully designed and upgraded on top of streaming data standards. ...

Optimized streaming of large web 3D applications
  • Citing Conference Paper
  • October 2017

... There have been many works in the domain of pre-viz that have addressed and attempted to solve different sub-problems. For example, the ANSWER framework [8,25] and recent mixed-reality based tools by [23,24,52,54] represent advancements in GUI-based story-boarding and augmented reality for pre-visualization, respectively. The "One Man Movie" [15,16] is another significant leap which incorporates aspects of VR as part of a comprehensive pre-viz scene authoring system, although animating 3D virtual characters, which often is an integral part of any compelling 3D/VR content, remains challenging. ...

A Notation Based Approach To Film Pre-vis
  • Citing Conference Paper
  • January 2010

... This new technology generated more realistic images, thus making it possible to depict anatomical details more accurately. Englert et al. proposed a streaming framework to optimize large-scale 3D applications and allowed real-time interaction independent of network bandwidth or rendering units [12]. ...

A streaming framework for instant 3D rendering and interaction
  • Citing Conference Paper
  • November 2015

... In this study, X3DOM was chosen mainly because it allows the result to be displayed without plugins. X3DOM itself is a framework, as in [41] and [42], which allows users to display the world in a compatible browser without having to install a plug-in. X3DOM uses the DOM in HTML5 and utilizes WebGL through a JavaScript library [43]. ...

Stable dynamic webshadows in the X3DOM framework
  • Citing Article
  • May 2015

Expert Systems with Applications

... The discrepancy between the life spans of long-lasting products such as buildings and the software tools used for their creation should not be reflected in the digital product representations, which are needed during the whole product life cycle. X3DOM technology can be used to represent geometry in web browsers and help with the plant layout design process [39]. They use a dynamic representation to store geometry information by recording the modelling history. ...

Enhancing the plant layout design process using X3DOM and a scalable web3D service architecture
  • Citing Article
  • August 2014