Project

iKPT4.0 – interaktive körpernahe Produktionstechnik (English: Interactive Body-Near Production Technology)

Updates: 0 · Recommendations: 0 · Followers: 3 · Reads: 53

Project log

Marcel Tiator
added a research item
The objective of our research is to enhance the creation of interactive environments, such as those used in VR applications. An interactive environment can be produced from a point cloud acquired by 3D-scanning a scene. Segmentation is needed to extract the objects in that point cloud so that, for example, physical properties can be assigned to them in a later step. Doing this manually takes considerable effort, as individual objects have to be extracted and post-processed. Our research aim is therefore real-world, cross-domain, automatic semantic segmentation without estimating specific object classes.
Philipp Ladwig
added 2 research items
Machines used in industry often require dedicated technicians to fix them in case of defects. This involves travel expenses and a certain amount of time, both of which may be significantly reduced by installing small extensions on a machine, as we describe in this paper. The goal is that an authorized local worker, guided by a remote expert, can fix the problem on the real machine himself. Our approach is to equip a machine with multiple inexpensive LEDs (light-emitting diodes) and a simple internet-connected microcontroller, so that certain LEDs close to machine parts of interest can be lit remotely. A remote expert can trigger the blinking of an LED from within a virtual reality application showing a virtual 3D model (digital twin) of the machine, drawing the local worker's attention to certain areas. We conducted an initial user study on this concept with 36 participants and found significantly shorter completion times and fewer errors for our approach compared to voice guidance alone with no visual LED feedback.
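The remote-guidance loop described above can be sketched in a few lines: the expert selects a machine part in the VR view, and a small command is sent to the machine's microcontroller to blink the LED nearest that part. The command format, function names, and part-to-LED mapping below are illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical mapping from machine parts to LED indices on the controller
# (assumed for illustration; the paper does not specify a wire format).
PART_TO_LED = {
    "valve_block": 0,
    "main_fuse": 1,
    "belt_tensioner": 2,
}

def blink_command(part: str, duration_s: float = 5.0) -> str:
    """Build a text command such as 'BLINK 1 5.0' for the microcontroller."""
    led = PART_TO_LED[part]
    return f"BLINK {led} {duration_s:.1f}"

# A remote expert highlighting the main fuse would produce:
print(blink_command("main_fuse"))  # BLINK 1 5.0
```

In a real deployment this string would be sent over the network connection to the internet-connected microcontroller, which parses the LED index and blink duration.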
Marcel Tiator
added a research item
We propose a method to segment a real-world point cloud as a perceptual grouping task (PGT) performed by a deep reinforcement learning (DRL) agent. For the PGT, a point cloud is divided into groups of points, called superpoints. These superpoints should be grouped into objects by a deep neural network policy that is optimised by a DRL algorithm. During the PGT, a main and a neighbour superpoint are selected by one of two proposed strategies: superpoint growing or smallest superpoint first. Concretely, the agent has to decide whether the two selected superpoints should be grouped together, and it receives a reward if one is determinable during the PGT. We optimised a policy with proximal policy optimisation (PPO) [SWD+17] and with the dueling double deep Q-learning algorithm [HMV+18], each combined with both proposed superpoint selection strategies, on a scene of the ScanNet data set [DCS+17]. The scene is transformed from a labelled mesh scene to a labelled point cloud. Our intermediate results leave room for improvement, but they show that the agent is able to improve its performance during training. Additionally, we suggest using the PPO algorithm with one of the proposed selection strategies for more stability during training.
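The "smallest superpoint first" strategy mentioned above can be illustrated with a minimal sketch: at each step the smallest not-yet-processed superpoint becomes the main superpoint, and one of its adjacent superpoints becomes the neighbour the agent must decide to merge with or not. The data structures and the tie-breaking rule are assumptions made for illustration.

```python
def smallest_superpoint_first(sizes, adjacency, processed):
    """Return a (main, neighbour) pair of superpoint ids, or None if done.

    sizes:     dict superpoint id -> number of points
    adjacency: dict superpoint id -> set of adjacent superpoint ids
    processed: set of superpoint ids already handled
    """
    candidates = [s for s in sizes if s not in processed and adjacency.get(s)]
    if not candidates:
        return None
    # Smallest superpoint first; ties broken by id (an assumption).
    main = min(candidates, key=lambda s: (sizes[s], s))
    neighbour = min(adjacency[main], key=lambda s: (sizes[s], s))
    return main, neighbour

# Three superpoints: 1 is smallest, so it is selected first with neighbour 0.
sizes = {0: 120, 1: 15, 2: 60}
adjacency = {0: {1, 2}, 1: {0}, 2: {0}}
print(smallest_superpoint_first(sizes, adjacency, processed=set()))  # (1, 0)
```

The agent's grouping decision for each such pair is then what the DRL policy learns.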
Marcel Tiator
added a research item
In this contribution, the segmentation of point clouds is conducted with the help of deep reinforcement learning (DRL). We want to create interactive virtual reality (VR) environments from point cloud scans as fast as possible. These VR environments are used for safe and immersive training of serious real-life tasks such as extinguishing a fire. To create interactions in VR, it is necessary to segment the point cloud scans. Existing geometric and semantic point cloud segmentation approaches are not powerful enough to automatically segment point cloud scenes that consist of diverse unknown objects. Hence, we tackle this problem by considering point cloud segmentation as a Markov decision process and applying DRL. More specifically, a deep neural network (DNN) sees a point cloud as the state, estimates the parameters of a region growing algorithm, and earns a reward value. The point cloud scenes originate from virtual mesh scenes that were transformed into point clouds. Thus, a point-to-segment relationship exists that is used in the reward function. Moreover, the reward function is designed for our case, in which the true segments do not correspond to the assigned segments. This case results from, but is not limited to, the use of the region growing algorithm. We conducted several experiments with different point cloud DNN architectures such as PointNet [13] and show promising results for future directions in the segmentation of point clouds with DRL.
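One way to reward an agent when the assigned segments do not correspond one-to-one to the true segments is to score each assigned segment by its best overlap with any ground-truth segment (a purity-style measure). The sketch below is an illustrative stand-in under that assumption, not the reward function from the paper.

```python
from collections import Counter

def segment_reward(assigned, truth):
    """Average best-overlap purity of assigned segments, in [0, 1].

    assigned, truth: equal-length lists of per-point segment labels.
    """
    points_per_segment = Counter(assigned)
    # Count (assigned label, true label) co-occurrences over all points.
    overlap = Counter(zip(assigned, truth))
    reward = 0.0
    for seg, size in points_per_segment.items():
        best = max(c for (a, _), c in overlap.items() if a == seg)
        reward += best / size  # fraction of this segment matching one truth label
    return reward / len(points_per_segment)

# Segment 0 covers two truth labels (purity 2/3); segment 1 is pure (1.0).
assigned = [0, 0, 0, 1, 1]
truth    = [7, 7, 8, 8, 8]
print(round(segment_reward(assigned, truth), 3))  # 0.833
```

A measure of this shape stays well defined regardless of how the region growing algorithm splits or merges objects, which is the situation the abstract describes.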
Philipp Ladwig
added a research item
Mixed Reality is defined as a combination of Reality, Augmented Reality, Augmented Virtuality and Virtual Reality, and this innovative technology can aid the transition between these stages. The enhancement of reality with synthetic images allows us to perform tasks more easily, such as collaborating with people who are at different locations. Collaborative manufacturing, assembly tasks or education can be conducted remotely, even if the collaborators never physically meet. This paper reviews both past and recent research, identifies benefits and limitations, and extracts design guidelines for the creation of collaborative Mixed Reality applications in technical settings.
Philipp Ladwig
added a research item
More than three decades of ongoing research in immersive modelling has revealed many advantages of creating objects in virtual environments. Despite these benefits, the potential of immersive modelling has only been partly exploited due to unresolved issues such as ergonomics, numerous challenges with user interaction, and the inability to perform exact, fast and progressive refinements. This paper explores past research, shows alternative approaches and proposes novel interaction tools for these pending problems. An immersive modelling application for polygon meshes is created from scratch and tested by professional users of desktop modelling tools, such as Autodesk Maya, in order to assess the efficiency, comfort and speed of the proposed application in direct comparison to professional desktop modelling tools.