Conference Paper

Towards Precise, Fast and Comfortable Immersive Polygon Mesh Modelling: Capitalising the Results of Past Research and Analysing the Needs of Professionals

Authors: Ladwig, Herder and Geiger

Abstract

More than three decades of ongoing research in immersive modelling has revealed many advantages of creating objects in virtual environments. Despite these benefits, the potential of immersive modelling has only been partly exploited due to unresolved issues such as ergonomic problems, numerous challenges with user interaction and the inability to perform exact, fast and progressive refinements. This paper explores past research, shows alternative approaches and proposes novel interaction tools for pending problems. An immersive modelling application for polygon meshes is created from scratch and tested by professional users of desktop modelling tools, such as Autodesk Maya, in order to assess the efficiency, comfort and speed of the proposed application in direct comparison with professional desktop modelling tools.

... We have developed an application in order to apply recent research outcomes, and we want to share the lessons we learned from combining two tracking systems. Our application is an immersive 3D mesh modeling tool which we have developed and evaluated previously [13]. Our tool allows creating 3D meshes with the aid of an HMD and two 6-Degree-of-Freedom controllers and is inspired by common desktop modeling applications such as Blender and Autodesk Maya. ...
... However, comfort and usability are important if long-term applications are required, but research on comfort is scarce. Ladwig, Herder and Geiger [13] consider and evaluate comfort for MR applications. Lubos et al. [15] revealed important outcomes for comfortable interaction and took first steps in this direction. ...
... Consider usability and comfort: if long-term usage is desired, take a comfortable interface for the user into account and consider human factors [13,15,27]. ...
Conference Paper
Full-text available
Mixed Reality is defined as a combination of Reality, Augmented Reality, Augmented Virtuality and Virtual Reality. This innovative technology can aid with the transition between these stages. The enhancement of reality with synthetic images allows us to perform tasks more easily, such as the collaboration between people who are at different locations. Collaborative manufacturing, assembly tasks or education can be conducted remotely, even if the collaborators do not physically meet. This paper reviews both past and recent research, identifies benefits and limitations, and extracts design guidelines for the creation of collaborative Mixed Reality applications in technical settings.
... The fact that any changes made to a model in the VR environment cannot be transferred back to the CAD file may result in data redundancy and inconsistencies (Zorriassatine, 2003). In this regard, some authors have explored 3D modeling as a VR-based process where geometry is created directly in the virtual environment (Chu et al., 1997; Arangarasan & Gadh, 2000; Cappello et al., 2007; Ladwig et al., 2017). Several commercial tools such as Masterpiece Studio, Gravity Sketch, or Google Tilt Brush have also been developed, mainly for artistic purposes. ...
Article
Recent advances in graphics hardware technology are enabling the development of increasingly sophisticated virtual reality (VR) applications. However, the representation of 3D virtual models in this medium requires finding a proper balance between visual quality and computational complexity. Virtual reality-based software should be optimized so that the functions responsible for representing and rendering the 3D scene do not execute tasks that may delay the rendering process. In this paper, we describe, implement, and validate a multi-agent architecture for VR-based geometric modeling that enables managing user interaction and the geometric database independently from the representation of the virtual model. Our results show that the proposed approach can significantly improve rendering refresh rates when compared to conventional methods.
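As a rough illustration of the decoupling described above (and not the paper's actual multi-agent implementation), the following minimal Python sketch applies queued user edits to a geometry database on a worker thread, so the render loop only ever reads a snapshot and is never blocked by an edit; all class and function names are invented for illustration.

    import queue
    import threading

    class GeometryDatabase:
        """Holds the editable model; updated outside the render loop."""
        def __init__(self):
            self.vertices = []
            self.lock = threading.Lock()

        def apply(self, edit):
            with self.lock:
                edit(self.vertices)

    def interaction_agent(edits, db):
        """Consumes user edits from a queue and applies them to the database."""
        while True:
            edit = edits.get()
            if edit is None:          # sentinel: shut down
                break
            db.apply(edit)

    def render_loop(db, frames=3):
        """Renders a snapshot of the database each frame; never waits for edits."""
        for frame in range(frames):
            with db.lock:
                snapshot = list(db.vertices)
            print(f"frame {frame}: drawing {len(snapshot)} vertices")

    edits = queue.Queue()
    db = GeometryDatabase()
    worker = threading.Thread(target=interaction_agent, args=(edits, db))
    worker.start()
    edits.put(lambda verts: verts.append((0.0, 0.0, 0.0)))
    edits.put(None)                   # stop the interaction agent
    worker.join()
    render_loop(db)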
Article
Full-text available
Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to numerous factors such as the lack of suitable menu and system controls, inability to perform precise manipulations, lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. The focus of our research is on the building of virtual world applications that can go beyond the demo and can be used to do real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of VR interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We also discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller targeted at achieving the same, and the new avenues that this collocation creates.
Conference Paper
Full-text available
We present an immersive 3D modeling application with stereoscopic graphics, head tracking, and 3D input devices. The application was built in three weeks on top of Blender, an open source 3D modeling software, and relies solely on affordable, off-the-shelf hardware like PlayStation Move controllers. Our goal was to create an easy-to-use 3D modeling environment that employs both 2D and 3D interaction techniques and contains several modeling tools. We conducted a basic user study where novice and professional 3D artists created 3D models with our application. The study participants thought that the application was fun and intuitive to use, but accurate posing of objects was difficult. We also examined the participants' beliefs about future use of immersive technology in 3D modeling. The short implementation time of the application, its many features, and the 3D models created by the study participants set an example of what can be achieved with open source software and off-the-shelf hardware.
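Since the system above extends Blender, a minimal sketch of how geometry can be added through Blender's Python API (bpy) is shown below, e.g. from a handler for a 3D input device; the object and mesh names are illustrative, the snippet assumes Blender 2.8+ and only runs inside Blender, and it is not the cited implementation.

    import bpy

    # Build a quad from raw vertex and face data, as an input-device handler might.
    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
    faces = [(0, 1, 2, 3)]

    mesh = bpy.data.meshes.new("WandQuad")      # illustrative name
    mesh.from_pydata(verts, [], faces)
    mesh.update()

    obj = bpy.data.objects.new("WandQuad", mesh)
    bpy.context.collection.objects.link(obj)    # Blender 2.8+ collection API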
Conference Paper
Full-text available
CaveCAD is our in-house developed 3D modeling tool, which runs in immersive virtual reality environments, such as CAVEs. We built it from the ground up, in collaboration with architects, to explore how immersive 3D interaction systems can support 3D modeling tasks. CaveCAD offers typical 3D modeling functions, such as geometry creation, modification of existing geometry, assignment of surface materials and textures, the use of libraries of 3D components, geographical placement functions, and shadows. CaveCAD goes beyond traditional 3D modeling tools by utilizing direct 3D interaction methods. We evaluated our modeling system by running a small pilot study with four participants: two novice users and two expert users were tasked to build Disney World's magic castle.
Conference Paper
Full-text available
MakeVR is an intuitive and accessible digital sandbox for making 3D objects and scenes, with game-like simplicity for beginners and advanced tools for experts. It presents a professional CAD engine through a natural immersive two-handed interface. Users reach into space to move themselves through a geometric playground, placing primitive shapes and more complex objects into the scene and then reaching out to modify them via booleans, sweeps, deformation, and other CAD operations. We conducted a preliminary user evaluation of four participant case studies and plan to use this evaluation to improve the system.
Article
Full-text available
This paper presents an approach for the integration of Virtual Reality (VR) and Computer-Aided Design (CAD). Our general goal is to develop a VR–CAD framework making possible intuitive and direct 3D edition on CAD objects within Virtual Environments (VE). Such a framework can be applied to collaborative part design activities and to immersive project reviews. The cornerstone of our approach is a model that manages implicit editing of CAD objects. This model uses a naming technique of B-Rep components and a set of logical rules to provide straight access to the operators of Construction History Graphs (CHG). Another set of logical rules and the replay capacities of CHG make it possible to modify in real-time the parameters of these operators according to the user’s 3D interactions. A demonstrator of our model has been developed on the OpenCASCADE geometric kernel, but we explain how it can be applied to more standard CAD systems such as CATIA. We combined our VR–CAD framework with multimodal immersive interaction (using 6 DoF tracking, speech and gesture recognition systems) to gain direct and intuitive deformation of the objects’ shapes within a VE, thus avoiding explicit interactions with the CHG within a classical WIMP interface. In addition, we present several haptic paradigms specially conceptualized and evaluated to provide an accurate perception of B-Rep components and to help the user during his/her 3D interactions. Finally, we conclude with some issues for future research in the field of VR–CAD integration.
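The replay mechanism of a Construction History Graph can be pictured with the following minimal Python sketch, in which each node stores an operator and its parameters and a parameter edit triggered by 3D interaction re-executes the chain; the B-Rep naming technique and logical rules of the paper are not modelled, and all class and operator names are hypothetical.

    class HistoryNode:
        """One parametric operation in a (linear) construction history."""
        def __init__(self, name, operator, **params):
            self.name = name
            self.operator = operator
            self.params = params

    class ConstructionHistory:
        """Replays the operator chain whenever a parameter is edited."""
        def __init__(self):
            self.nodes = []

        def add(self, node):
            self.nodes.append(node)
            return node

        def set_param(self, node, key, value):
            node.params[key] = value
            return self.replay()      # re-run the chain from scratch

        def replay(self):
            shape = None
            for node in self.nodes:
                shape = node.operator(shape, **node.params)
            return shape

    # Stand-ins for operators of a real geometric kernel such as OpenCASCADE.
    def make_box(_, width, height, depth):
        return {"kind": "box", "size": (width, height, depth)}

    def fillet(shape, radius):
        return {"kind": "fillet", "base": shape, "radius": radius}

    history = ConstructionHistory()
    box = history.add(HistoryNode("box", make_box, width=2.0, height=1.0, depth=1.0))
    history.add(HistoryNode("fillet", fillet, radius=0.1))
    # A 3D drag on a box face would map to a parameter edit like:
    print(history.set_param(box, "width", 3.0))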
Conference Paper
Designing spatial user interfaces for virtual reality (VR) applications that are intuitive, comfortable and easy to use while at the same time providing high task performance is a challenging task. This challenge is even harder to solve since perception and action in immersive virtual environments differ significantly from the real world, causing natural user interfaces to elicit a dissociation of perceptual and motor space as well as levels of discomfort and fatigue unknown in the real world. In this paper, we present and evaluate a novel method that leverages joint-centered kinespheres for interactive spatial applications. We introduce kinespheres within arm's reach that envelop the reachable space for each joint such as shoulder, elbow or wrist, thus defining 3D interactive volumes with the boundaries given by 2D manifolds. We present a Fitts' Law experiment in which we evaluated the spatial touch performance on the inside and on the boundary of the main joint-centered kinespheres. Moreover, we present a confirmatory experiment in which we compared joint-centered interaction with traditional spatial head-centered menus. Finally, we discuss the advantages and limitations of placing interactive graphical elements relative to joint positions and, in particular, on the boundaries of kinespheres.
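Two ingredients of the evaluation above can be sketched briefly in Python: the Shannon formulation of Fitts' Law, MT = a + b * log2(D/W + 1), and a deliberately simplified test of whether a target lies inside or on the boundary of a joint-centred kinesphere, modelled here as a sphere around the joint; the regression coefficients, reach and tolerance are placeholder values, not those of the study.

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
        a and b are placeholder regression coefficients (seconds)."""
        index_of_difficulty = math.log2(distance / width + 1.0)
        return a + b * index_of_difficulty

    def classify_against_kinesphere(target, joint, reach, tolerance=0.02):
        """Simplified kinesphere: a sphere of radius `reach` around the joint.
        Reports whether the target is inside, on the boundary, or outside."""
        d = math.dist(target, joint)
        if abs(d - reach) <= tolerance:
            return "on boundary"
        return "inside" if d < reach else "outside"

    print(fitts_movement_time(distance=0.30, width=0.05))                      # placeholder model
    print(classify_against_kinesphere((0.3, 0.0, 0.2), (0.0, 0.0, 0.0), 0.36))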
Article
In recent decades, "post-WIMP" interactions have revolutionized user interfaces (UIs) and led to improved user experiences. However, accounts of post-WIMP UIs typically do not provide theoretical explanations of why these UIs lead to superior performance. In this article, we use Norman's 1986 model of interaction to describe how post-WIMP UIs enhance users' mental representations of UI and task. In addition, we present an empirical study of three UIs; in the study, participants completed a standard three-dimensional object manipulation task. We found that the post-WIMP UI condition led to enhancements of mental representation of UI and task. We conclude that the Norman model is a good theoretical framework to study post-WIMP UIs. In addition, by studying post-WIMP UIs in the context of the Norman model, we conclude that mental representation of task may be influenced by the interaction itself; this supposition is an extension of the original Norman model.
Article
Three-dimensional modeling has long been regarded as an ideal application for virtual reality (VR), but current VR-based 3D modeling tools suffer from two problems that limit creativity and applicability: (1) the lack of control for freehand modeling, and (2) the difficulty of starting from scratch. To address these challenges, we present Lift-Off, an immersive 3D interface for creating complex models with a controlled, handcrafted style. Artists start outside of VR with 2D sketches, which are then imported and positioned in VR. Then, using a VR interface built on top of image processing algorithms, 2D curves within the sketches are selected interactively and "lifted" into space to create a 3D scaffolding for the model. Finally, artists sweep surfaces along these curves to create 3D models. Evaluations are presented for both long-term users and for novices who each created a 3D sailboat model from the same starting sketch. Qualitative results are positive, with the visual style of the resulting models of animals and other organic subjects as well as architectural models matching what is possible with traditional fine art media. In addition, quantitative data from logging features built into the software are used to characterize typical tool use and suggest areas for further refinement of the interface.
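The final sweeping step described above can be pictured with a minimal Python sketch that performs a naive translational sweep of a circular cross-section along a sampled 3D curve and stitches consecutive rings into quads; it ignores Lift-Off's image-processing pipeline and proper sweep frames, and all names are illustrative.

    import math

    def sweep_circle_along_curve(curve, radius=0.1, segments=8):
        """Naive translational sweep: place a circle (parallel to the XY plane)
        at every curve sample and stitch consecutive rings into quad faces."""
        vertices, faces = [], []
        for cx, cy, cz in curve:
            for s in range(segments):
                angle = 2.0 * math.pi * s / segments
                vertices.append((cx + radius * math.cos(angle),
                                 cy + radius * math.sin(angle),
                                 cz))
        for ring in range(len(curve) - 1):
            base, nxt = ring * segments, (ring + 1) * segments
            for s in range(segments):
                s2 = (s + 1) % segments
                faces.append((base + s, base + s2, nxt + s2, nxt + s))
        return vertices, faces

    # A "lifted" curve rising along z, standing in for a curve picked from a 2D sketch.
    curve = [(0.0, 0.0, 0.1 * i) for i in range(10)]
    verts, quads = sweep_circle_along_curve(curve)
    print(len(verts), "vertices,", len(quads), "quad faces")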
Article
This paper describes a two-handed interface that enables intuitive interaction with 3D multimedia environments. Two user studies demonstrated the effectiveness of the two-handed interface in fundamental 3D object manipulation and viewpoint manipulation tasks. Trained participants docked and constructed 3D objects 4.5–4.7 times as fast as a standard mouse interface and 1.3–2.5 times as fast as a standard one-handed wand interface. 19 of 20 participants preferred the two-handed interface over the mouse and wand interfaces. 16 participants felt very comfortable with the two-handed interface and 4 felt comfortable. No statistically significant differences in performance were found between monoscopic and stereoscopic displays although 17 of 20 participants preferred the stereoscopic display over the monoscopic display.
Conference Paper
3dm is a three-dimensional (3D) surface modeling program that draws techniques of model manipulation from both CAD and drawing programs and applies them to modeling in an intuitive way. 3dm uses a head-mounted display (HMD) to simplify the problem of 3D model manipulation and understanding. An HMD places the user in the modeling space, making three-dimensional relationships more understandable. As a result, 3dm is easy to learn to use and encourages experimentation with model shapes.
Article
An experimental system for computer-aided design of free-form surfaces in three dimensions is described. The surfaces are represented in the system as parametric basis splines. The principal features of the system are: (1) the surfaces are rendered as isoparametric line drawings on a head-mounted display, and they are designed with the aid of a three-dimensional “wand,” which allows 3-D movements of the points controlling the shapes of the surfaces, (2) all of the interactions with the surfaces are in real-time, and (3) the mathematical formulations used assume no knowledge of them by the user of the system. Also examined are some of the features that should be part of a practical 3-D system for designing space-forms.
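For reference, a tensor-product B-spline surface of the kind the abstract describes can be written in LaTeX notation as

    S(u, v) = \sum_{i=0}^{m} \sum_{j=0}^{n} N_{i,p}(u)\, N_{j,q}(v)\, P_{i,j}

where the P_{i,j} are the control points moved with the 3D wand and N_{i,p}, N_{j,q} are the B-spline basis functions of degrees p and q; this is the standard formulation, not necessarily the exact notation of the original system.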
Article
This article describes HoloSketch, a virtual reality-based 3D geometry creation and manipulation tool. HoloSketch is aimed at providing nonprogrammers with an easy-to-use 3D “What-You-See-Is-What-You-Get” environment. Using head-tracked stereo shutter glasses and a desktop CRT display configuration, virtual objects can be created with a 3D wand manipulator directly in front of the user, at very high accuracy and much more rapidly than with traditional 3D drawing systems. HoloSketch also supports simple animation and audio control for virtual objects. This article describes the functions of the HoloSketch system, as well as our experience so far with more-general issues of head-tracked stereo 3D user interface design.
Article
A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can be deformed interactively.
OpenMesh: A Generic and Efficient Polygon Mesh Data Structure
  • M. Botsch
  • S. Steinberg
  • S. Bischoff
  • L. Kobbelt
BOTSCH M., STEINBERG S., BISCHOFF S., KOBBELT L.: OpenMesh: A Generic and Efficient Polygon Mesh Data Structure. In OpenSG Symposium 2002 (2002).
Towards Immersive Modeling - Challenges and Recommendations: A Workshop Analyzing the Needs of Designers
  • J. Deisinger
  • R. Blach
  • G. Wesche
  • R. Breining
  • A. Simon
[DBW*00] DEISINGER J., BLACH R., WESCHE G., BREINING R., SIMON A.: Towards Immersive Modeling - Challenges and Recommendations: A Workshop Analyzing the Needs of Designers. In Proc. of the 6th Eurographics Workshop on Virtual Environments (2000), pp. 145-156.
(Figure 7: Model created with the proposed system.)
CaveCAD: Architectural design in the CAVE
  • C. E. Hughes
  • L. Zhang
  • J. P. Schulze
  • E. Edelstein
  • E. Macagno
[HZS*13] HUGHES C. E., ZHANG L., SCHULZE J. P., EDELSTEIN E., MACAGNO E.: CaveCAD: Architectural design in the CAVE. In 2013 IEEE Symposium on 3D User Interfaces (3DUI) (March 2013), pp. 193-194. doi:10.1109/3DUI.2013.6550244.
The Design and Evaluation of Marking Menus
  • G. P. Kurtenbach
KURTENBACH G. P.: The Design and Evaluation of Marking Menus. PhD thesis, Toronto, Ont., Canada, 1993. UMI Order No. GAXNN-82896.