2351-9789 © 2017 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
Peer-review under responsibility of the scientific committee of the 27th International Conference on Flexible Automation and Intelligent Manufacturing
doi: 10.1016/j.promfg.2017.07.255
Procedia Manufacturing 11 (2017) 1279–1287
27th International Conference on Flexible Automation and Intelligent Manufacturing, FAIM2017,
27-30 June 2017, Modena, Italy
Human-machine collaboration in virtual reality for adaptive
production engineering
Andrea de Giorgio*, Mario Romero, Mauro Onori, Lihui Wang
KTH Royal Institute of Technology, Brinellvägen 68, SE-100 44, Stockholm, Sweden
This paper outlines the main steps towards an open and adaptive simulation method for human-robot collaboration (HRC) in
production engineering supported by virtual reality (VR). The work is based on the latest software developments in the gaming
industry, in addition to already commercially available hardware that is robust and reliable. This allows the VR limitations of the
industrial software provided by manufacturing machine producers to be overcome. Being based on an open-source community
programming approach, the method also brings significant advantages, such as interfacing with the latest hardware for a realistic
user experience in immersive VR and the possibility to share adaptive algorithms. A practical implementation in Unity is
provided as a functional prototype for feasibility tests. However, at the time of this paper, no controlled human-subject studies of
the implementation have been conducted; it is provided solely as a preliminary proof of concept. Future work will formally
address the questions raised in this first run.
Keywords: Virtual Reality; Augmented Reality; Unity Game Engine; Human-Robot Collaboration; Industry 4.0; Robotics; Adaptive Production.
* Corresponding author. Tel.: +46-8790-9065.
E-mail address:
1. Introduction
Virtual reality (VR) is not a new concept, but some VR technologies recently introduced for the gaming industry are quite
innovative, and the manufacturing industry is starting to test them and eventually adopt them.
This paper investigates the relevance of an approach that considers both new software and hardware solutions.
Using software that has been developed for a different purpose may be misleading and lead to unsolved problems. The
aim of this paper is, however, not to solve all problems at once, but to address advantages and disadvantages,
together with the open questions that arise by proceeding along one promising direction of research: using
game engines as simulation software to control AR/VR hardware and industrial machines.
A special focus is placed on the opportunities that VR offers for human-robot collaboration (HRC), which the authors
consider a promising premise for a physical approach to HRC with augmented reality (AR). As HRC increases
daily and the industry moves from mass production to personalized production, looking for new software
solutions backed by a large community of developers, such as game developers, may prove useful not only for
industrial production simulation but also for adaptive improvement of the production process itself, for both
machines and humans. The advantages of an open-source, cross-producer development approach are self-evident.
To give just one example, machine learning algorithms could play a main role in adapting a given
process when shared in the form of add-on libraries (which may be further enhanced through cloud-based solutions).
This paper also proposes a simple Unity implementation of a collaborative robot manipulator as a prototype
that allows a human operator to interact with the VR environment through an HTC Vive™ tracked headset
and hand controllers. Advantages and disadvantages of this approach are discussed and will be fully validated in
future work.
1.1. Virtual reality
Virtual reality has been a research topic for a long time. The term VR itself was used for the first time by Jaron
Lanier back in 1989 [1]. In 1992 the Commission of European Communities (CEC) recognized VR as a viable
technology to be included in future calls for proposals [2]. Since then, hundreds of research projects have explored
its potential use in different disciplines, but only in 2012 did the Kickstarter project named “Oculus Rift” bring it
to the general public by raising funds and developing an affordable, high-quality head-mounted display (HMD). The
popularity of HMDs has started what has been called a second wave of VR, which is still propagating through
academic and industrial researchers and the gaming community [3].
The latest technology has generated HMDs such as the already mentioned Oculus Rift, now at its second version,
the HTC Vive, the PlayStation VR, the Open Source Virtual Reality (OSVR) or Zeiss VR One, plus a series of
mobile HMDs that enhance mobile phone screens into HMD devices for VR such as Google Cardboard, Samsung
GearVR Innovator Edition and Gameface Mark IV. This means that immersive VR has finally become accessible to
everyone. Together with HMDs, there exist input/output (I/O) devices that can be adopted for almost any kind of
user interactions in VR: from haptic devices to tracking devices, controllers and depth cameras.
Another essential part of VR, together with the hardware described above, is the software that enables such
paradigm to be experienced. The main research in this topic has been formulated around the use of Computer Aided
Design (CAD) to reproduce real world items as models in a virtual environment or cyber workspace [4]. However,
reproductions of real objects as computer models do not yet qualify as VR. The determining factor is the possibility
to exploit the virtual representation for a simulation. In fact, a comprehensive definition of VR is given as
“computer-generated simulations of three-dimensional objects or environments with seemingly real, direct, or
physical user interaction” [5]. This is where the various roads start diverging from one another: the gaming industry
has focused on virtual worlds, so that players could interact with them by controlling an avatar, while general-purpose
industrial and academic research has focused on simulation software, in which users experience the VR as a
means to visualize the results of such simulations.
1.2. Human-machine collaboration in virtual reality
Human-robot collaboration (HRC) in production engineering is a research topic for which augmented reality
(AR) and VR have provided interfaces that, respectively, expand the number of features that operators can see
in their field of view [6] or replace it completely with a virtual world. Typical industrial production applications span
from manufacturing process simulation [7], [8], able to provide real-time enhanced information used for
inspection [9] or with a focus on training [10], [11], to collaborative factory planning during product design [12] or re-
design [13]. In fact, VR can be used for collaborative (re)designing of production systems when analyzing and
evaluating changes prior to implementation, which makes it possible to prevent costly design mistakes [13].
Even though some works have considered the human factor as part of the industrial process and adjusted the VR
to accurately include the operator movements in the simulation [10], [14], [15], it is often the case that the operator
experiences the VR/AR only from a static position, e.g. standing still or seated, where the input sensors are located.
Another approach is to immerse the operator in a CAVE [16], a room of projected VR walls, a space that is obviously limited
by physical constraints. The gaming industry on the other hand has worked on methods to embody the sensors and
free the player from physical constraints in the VR/AR environment, providing an enhanced sense of immersion: the
ability to move in the virtual world and interact with anything that appears around the player. Even large
collaborative teams with one or more operators wearing the VR HMDs are supported. Gaming hence becomes the
field in which innovation in AR/VR is being pushed hardest, especially thanks to the large number of end-users, or
players, who are willing to accept beta versions and become early testers in order to experience it first. A
similar innovation in industry requires years of testing, changes of industrial standards, replacement of machines and
upgrade of factory design, not to mention costly investments and management decisions.
1.3. A production engineering perspective on virtual reality
The aim of this article is to explore the latest advancements in the gaming industry that can be adopted in
production engineering in order to overcome the main limitations of the industrial software provided by
manufacturing machine producers, with a particular focus on VR immersive applications based on an open-source
community approach.
The remainder of this paper is divided into four sections. In section 2, related work is presented. The ideal steps
toward the use of game engines in production engineering are presented in section 3. A practical application is
presented and discussed in section 4. Finally, section 5 presents conclusions and future work.
2. Related work
While most of the related work on VR and HRC for production engineering has been cited in the introduction to
lay the groundwork for this paper, most of these works adopt hardware that comes from the gaming
industry, but never its simulation software. Only a few cases include hybrid approaches, so they are mentioned below.
“BeWare of the robot” is a Virtual Reality Training System (VRTS) developed in the Unity™ game engine
[10] that simulates a shop-floor environment accessible through an HMD, also using a Microsoft Kinect™ sensor to
capture the operator’s movements and virtualize them. An avatar is used to render the operator’s body in the virtual
environment.
Unity™ has also been used, together with the Robot Operating System (ROS) [17], as middleware for immersive
VR teleoperation by driving a mobile robot [18] or as a real-time simulator for multi-unmanned-aerial-vehicle local
planning [19], [20], therefore approaching industrial robot simulation from the gaming perspective.
3. The contribution of game engines toward human-machine collaboration in virtual reality
3.1. Key elements for immersive VR collaboration
The advantage of immersive VR lies in increased focus and longer attention. For example, a study reveals that learners
with low spatial ability are more positively affected by VR learning [21]. The key elements to obtain an immersive
VR experience are the following [22]:
A virtual world. A collection of objects in a space governed by rules and relationships. Here, the objects are
CAD models of industrial machines and other industrial equipment that complete a factory scene; the rules are
defined using Unity classes called GameObjects; and the relationships are defined by components and scripts
that can be attached to any GameObject. Together, these objects form a virtual world.
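The object-component relationship described above can be illustrated with a minimal sketch. Python is used here for brevity; Unity itself uses C# and its own GameObject/Component classes, so all names below are illustrative, not Unity API:

```python
class Component:
    """Base class for behaviours attached to scene objects (cf. Unity components)."""
    def update(self, obj, dt):
        pass

class Spin(Component):
    """An example rule: rotate the owning object at a constant angular speed."""
    def __init__(self, deg_per_s):
        self.deg_per_s = deg_per_s
    def update(self, obj, dt):
        obj.rotation = (obj.rotation + self.deg_per_s * dt) % 360.0

class SceneObject:
    """Analogue of a GameObject: a named scene node holding components/scripts."""
    def __init__(self, name):
        self.name = name
        self.rotation = 0.0
        self.components = []
    def add(self, component):
        self.components.append(component)
        return self
    def update(self, dt):
        for c in self.components:
            c.update(self, dt)

# A base joint of a robot model spinning at 90 deg/s for one simulated second:
robot = SceneObject("IRB120_base").add(Spin(deg_per_s=90.0))
for _ in range(60):                 # 60 frames at 60 fps
    robot.update(1.0 / 60.0)
print(round(robot.rotation, 3))     # → 90.0
```

In the real engine, the rules live in scripts attached to scene objects in exactly this plug-in fashion, which is what makes them shareable as assets.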
Immersion. This refers to the status of the industrial operator being involved in an activity within a virtual space, to
the extent that their mind is separated from the physical space they are living in. A game engine such as Unity
provides full support for HMD devices, making the fully immersive modality a default setting for the simulation.
Feedback. This element gives the operator the ability to observe the results (outputs) of their activities (inputs) in
the VR. The standard feedback with HMD devices is visual/auditory, but the use of haptic devices that can provide a
sense of touch has also become common [23]. Taste and smell remain difficult to explore.
Interactivity. The ability to interact with the virtual world is fundamental. Sensors and devices capture the
operator’s body actions and transform them into virtual actions. Navigation and direct manipulation are the two
main aspects of an immersive virtual reality. HMDs such as the HTC Vive™ are often sold bundled with
controllers that operators can hold in their hands. Voice and gesture commands can also be captured by microphones
and cameras mounted in the environment.
Participants. Human operators are an essential element of the VR experience. They can be grouped by
experience and offered a different VR representation based on their capacity to interact with the virtual objects. A
good advantage of VR is that it allows an operator to be trained simply by using it. This is achieved by
structuring the virtual world in different levels that are loaded, in turn, each time the operator acquires enough
knowledge to perform more complex operations, exactly as a player advances level by level in a videogame.
3.2. Objects representation with game engines
CAD-VR data exchange is an important issue raised by the VR community, because the CAD systems used by
industry to develop product models are in most cases unsuitable for producing optimal object
representations for VR simulations. In fact, VR graphic engines make use of scene graphs for visualization, e.g.
OpenSceneGraph, OpenSG or OpenGL Performer: hierarchical data structures holding triangulated mesh
geometry, spatial transforms, lighting, material properties, etc., whose renderers provide methods to
exploit this data structure at interactive frame rates. Converting CAD data into a scene graph consists of producing
several polygonal representations of each part, and during this translation process the parametric information of the
CAD model and pre-existing texture maps generally do not even get imported into the VR application.
In virtual assembly simulations there are generally two representations of the same model: one for visualization and another
for the constraint-modeling algorithms used to perform the assembly task; in the game engine these are unified
under prefab objects that can be easily instantiated and destroyed using object programming. Similarly, physics
modeling applications also use dual model representations: a high-fidelity model for visualization and a coarser
representation for interactive physics calculations. The most important and challenging task is the improvement
of the physical simulations [24], which can be aided by the use of game engines.
Even if the conversion of CAD models threatens to slow down processes, advantages are gained by using
game engines as VR software, because they consist of physics and visualization simulators that are
integrated and optimized for a realistic user experience.
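The dual-representation idea bundled in a reusable template can be sketched as follows. This is a Python sketch: the class names and vertex counts are illustrative assumptions, not Unity API:

```python
import copy

class Prefab:
    """Sketch of the dual representation bundled in a reusable template
    (cf. Unity prefabs): one dense mesh for rendering, one coarse proxy for
    interactive physics. Vertex counts are illustrative only."""
    def __init__(self, name, render_vertices, physics_vertices):
        self.name = name
        self.render_vertices = render_vertices    # high-fidelity visual mesh
        self.physics_vertices = physics_vertices  # coarse collision proxy
        self.position = (0.0, 0.0, 0.0)
    def instantiate(self, position):
        """Clone the template into the scene, as prefabs are instantiated."""
        inst = copy.deepcopy(self)
        inst.position = position
        return inst

gripper = Prefab("gripper", render_vertices=48000, physics_vertices=64)
a = gripper.instantiate((0.0, 0.0, 0.0))
b = gripper.instantiate((0.5, 0.0, 0.0))
print(a.physics_vertices, b.render_vertices)   # → 64 48000
```

Every instance carries both representations, so the renderer and the physics solver each query the model suited to their frame budget.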
3.3. Open community approach
A peculiar aspect of the game engines is that they come with a set of tools that are developed and constantly
improved with the help of a community of end-users. All the objects collected or created by one user can be
shared in online libraries, e.g. the Unity Asset Store. This allows other users to quickly reuse and eventually
redesign existing solutions instead of implementing them from scratch.
Having large libraries of existing solutions available in production engineering has the potential of making the
design and development of new processes easier, not to mention facilitating the knowledge sharing.
3.4. Remote collaborative tasks
Another interesting application that can be enhanced by VR experience is remote collaboration. It is known that
3D models have been used to guide remote robotic assemblies [25], and collaborative robot monitoring promises an
increased sustainability of the manufacturing process [26], [27]. VR can take this further by turning the
remote-control interface into an immersive telepresence application, where all the advantages of the gaming style can
be exploited to give the operator a realistic experience and full control of their remote actions.
A promising technology that could contribute to this is the ability to capture and stream a real object
representation through a point cloud in the VR. Because a streamed point cloud is lighter than a full video stream,
it helps keep the number of transmitted frames per second high, even over a slow connection [25]. A point cloud can be
modelled with scripts in the game engine to represent objects that enter the operator’s virtual world, making the
virtual telepresence realistic and giving enough feedback to guide the operator’s simultaneous responses.
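A back-of-the-envelope comparison illustrates why a segmented point cloud can be lighter than raw video. All numbers in this Python sketch (resolution, point count, bytes per point) are illustrative assumptions:

```python
def frame_bytes_video(width, height, bytes_per_pixel=3):
    """Size of one raw (uncompressed) colour frame."""
    return width * height * bytes_per_pixel

def frame_bytes_cloud(n_points, bytes_per_point=15):
    """Size of one point-cloud frame: e.g. three 4-byte floats (x, y, z)
    plus 3 colour bytes per point."""
    return n_points * bytes_per_point

# Illustrative numbers: a 640x480 colour frame versus ~30k foreground points
# left after segmenting the tracked object out of the scene.
video = frame_bytes_video(640, 480)    # 921600 bytes per frame
cloud = frame_bytes_cloud(30000)       # 450000 bytes per frame
print(video, cloud, round(video / cloud, 1))   # → 921600 450000 2.0
```

The gap widens further once the cloud is downsampled to only the points the operator actually needs for guidance.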
4. A practical application: collaborative robot manipulator in virtual reality
In order to showcase the possible advantages of using a game engine as simulation software for both an industrial
machine and the VR environment, a practical application has been designed as a functional prototype. The
application has been used for a preliminary pilot study and will be improved and verified by means of controlled
human-subject studies in our future work.
The project aims to reproduce an ABB IRB 120 robot manipulator in the VR that can be moved by the operator
wearing an HMD, with the goal of performing a simple collaborative assembly task. The choice of HMD fell on
the HTC Vive™ tracked headset, together with its hand controllers, because of its high tracking accuracy and
state-of-the-art VR display.
The virtual ABB IRB 120 robot manipulator used in the simulations (see fig. 1 left) is controlled through an
algorithm imported into Unity as an asset: a hybridization of genetic algorithms and particle swarm
optimization for inverse kinematics [28]. It allows virtual arms composed of any number of joints to be animated
with natural, human-like movements. The end effector (EE) of the robot manipulator is tied to the spatial position
and orientation of the controller that the operator holds in the real world. Once the operator moves their arm, the
position of the EE is recomputed through the algorithm, together with all the other joint positions, so that the
movement leading to the new position appears smooth and as natural as possible. If the robot manipulator were a
human arm, even though with a different number of degrees of freedom (DOF), its movement would look as close
as possible to a human action.
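As a deliberately simplified stand-in for the GA/PSO hybrid of [28], the following Python sketch solves the same kind of inverse-kinematics problem for a planar joint chain using cyclic coordinate descent (CCD); the planar restriction and the link lengths are illustrative assumptions:

```python
import math

def fk(angles, lengths):
    """Forward kinematics of a planar joint chain with cumulative joint
    angles; returns the end-effector (EE) position."""
    x = y = total = 0.0
    for a, l in zip(angles, lengths):
        total += a
        x += l * math.cos(total)
        y += l * math.sin(total)
    return x, y

def ccd_ik(angles, lengths, target, sweeps=50):
    """Cyclic Coordinate Descent: rotate each joint in turn so the current EE
    moves toward the target. A simple stand-in for the GA/PSO hybrid of [28];
    no joint limits are modelled here."""
    angles = list(angles)
    for _ in range(sweeps):
        for i in reversed(range(len(angles))):
            jx, jy = fk(angles[:i], lengths[:i])      # position of joint i
            ex, ey = fk(angles, lengths)              # current EE position
            a_e = math.atan2(ey - jy, ex - jx)        # joint-to-EE direction
            a_t = math.atan2(target[1] - jy, target[0] - jx)  # joint-to-target
            angles[i] += a_t - a_e                    # rotate EE toward target
    return angles

lengths = [0.29, 0.27, 0.07]          # illustrative link lengths in metres
sol = ccd_ik([0.1, 0.1, 0.1], lengths, target=(0.4, 0.2))
ex, ey = fk(sol, lengths)
print(round(math.hypot(ex - 0.4, ey - 0.2), 4))   # residual error, near zero
```

Population-based solvers such as the hybrid of [28] address what CCD does not: natural-looking joint distributions and redundancy resolution for arms with many DOF.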
Fig. 1. (left) Simulated robot manipulator. The robot manipulator, specifically an ABB IRB 120, is simulated in the virtual reality using Unity.
(right) Joint chain. The joints of the virtual robot manipulator are attached to a joint chain in Unity that allows the kinematics to be defined.
Unity has a built-in kinematics system, but it is dedicated and limited to the simulation of humanoids, i.e. human-
like characters. Nevertheless, Unity can do half the job by easily defining a structure for the joints to move, namely a
joint chain, as shown in fig. 1 (right). The joint chain provides a structure for the CAD model parts of the robot
manipulator and allows the attached script to act on it. Unity also provides some basic settings for each joint that can
be used as defaults when the dynamics are not assigned to an external script. Joint limitations are defined in Unity,
while maximum speeds are defined at script level. The latter choice is due to the quality of the dynamics being
negatively affected by gaming optimizations included in Unity to make the simulation of humanoids more
realistic. For this reason, it is advisable to use external scripts for both the kinematics and the dynamics of
robot manipulators, or of any machine composed of multiple joints with a high number of DOF.
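The split described above, with joint limits defined per joint and a speed cap enforced at script level, can be sketched for a single joint as follows. The IRB 120 figures used (roughly a ±165 degree range and 250 deg/s for axis 1) are quoted from memory and should be treated as assumptions:

```python
def step_joint(current_deg, target_deg, limits_deg, max_speed_deg_s, dt):
    """Advance one joint toward its target angle, clamped by the joint limits
    (cf. the per-joint settings defined in Unity) and by a maximum speed
    enforced at script level, as described above."""
    lo, hi = limits_deg
    target_deg = max(lo, min(hi, target_deg))        # respect the joint limits
    max_step = max_speed_deg_s * dt                  # respect the speed cap
    delta = max(-max_step, min(max_step, target_deg - current_deg))
    return current_deg + delta

# Axis 1 of an IRB 120: roughly a +/-165 degree range at up to 250 deg/s
# (figures quoted from memory; treat them as assumptions).
angle = 0.0
for _ in range(30):                                  # half a second at 60 fps
    angle = step_joint(angle, 200.0, (-165.0, 165.0), 250.0, 1.0 / 60.0)
print(round(angle, 2))                               # → 125.0
```

Keeping this clamping in an external script, rather than in the engine's humanoid system, is exactly what shields the robot dynamics from the gaming optimizations mentioned above.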
As stated earlier, the operator’s arm movements do not correspond to a joint-to-joint mapping with the robot
manipulator movements. Instead, the operator suggests the desired EE position for the robot manipulator by holding
the controller in their hand, and the computation is left to the kinematics algorithm, which solves the following issues:
• Evaluate the relative EE position corresponding to the operator’s arm extension from the headset;
• Adapt the robot space to the reachability of the user’s arm extension, so that the maximum distance reachable by
the robot is comparable to the maximum extent of the operator’s arm;
• Find a way to let the robot move to positions which the operator’s arm cannot easily reach, by smoothing the
robot movements over a quick change of the user’s pose to increase their spread.
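The second issue, adapting the robot space to the user's reach, amounts to a simple scaling of the controller position relative to the headset. In this Python sketch, the 0.7 m arm reach and the roughly 0.58 m robot reach are illustrative assumptions:

```python
def scale_to_robot(controller_pos, headset_pos, arm_reach_m, robot_reach_m):
    """Map the controller position, measured relative to the headset, into the
    robot workspace so that a fully extended operator arm corresponds to the
    robot's maximum reach (the second issue above)."""
    rel = [c - h for c, h in zip(controller_pos, headset_pos)]
    k = robot_reach_m / arm_reach_m
    return [r * k for r in rel]

# A full 0.7 m arm extension maps onto a ~0.58 m robot reach (both illustrative):
ee_target = scale_to_robot((0.7, 0.0, 0.0), (0.0, 0.0, 0.0), 0.7, 0.58)
print([round(v, 2) for v in ee_target])   # → [0.58, 0.0, 0.0]
```

Measuring the controller relative to the headset also covers the first issue: the EE target follows the operator's arm extension rather than an absolute room position.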
The first problem is solved by delivering tactile feedback to the user, in the form of vibration, whenever the
inverse kinematics (IK) system is unable to reach the target position.
In order to solve the second problem, valid trajectories are calculated between the solutions provided by the IK
system. This might give the user the impression that the robot makes unnecessary or automatic movements, but the
robot simply changes the pose to avoid singularities and so circumvent the joint limitations.
The third and final problem is solved by allowing the user, rather than the robot, to reposition themselves. The
chosen interaction mode thus becomes direct control by relative positions on command: the robot follows the
relative movement of the controller, but only while the operator presses a trigger button on the controller held in their hand.
A tradeoff between speed and accuracy must be made. Repositioning of either the robot or the user makes it
difficult to keep the EE in a stable position, especially when performing precise actions. Slowing down
the robot motion attenuates the effect of subtle operator movements and thus increases the robot’s accuracy. Conversely,
a full-speed robot could be interesting for the operator to experience, but makes the control very hard to
manage. Different speeds can also be set by using a variable actuator, such as a pressure sensor on the controller,
instead of a binary on/off trigger button, regulating the speed based on the operator’s needs.
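Mapping an analogue trigger value to a commanded robot speed can be sketched as follows; the quadratic response curve is an assumption chosen to give fine control at light pressure, not the implemented mapping:

```python
def robot_speed(trigger_pressure, v_max):
    """Variable-speed control: scale the robot's commanded speed with the
    analogue trigger value in [0, 1] instead of using a binary on/off button.
    The quadratic curve (an assumption) favours fine control at light pressure."""
    p = max(0.0, min(1.0, trigger_pressure))
    return v_max * p * p

print(robot_speed(0.5, 1.0), robot_speed(1.0, 1.0))  # → 0.25 1.0
```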
Direct control of the robotic arm proved difficult when all the DOF were available, because the control is
not performed joint by joint: there is no correspondence between the human and robotic arm in terms of DOF. The
application became much more usable, in particular for novices, when the wrist (the last three joints) was locked to a
specified direction, for example pointing down toward the target to be grabbed. It turns out that one
effective trick to improve control consists of assigning a trigger button to lock/unlock the position of the three last
joints. This keeps the orientation of the EE fixed, allowing the operator to easily perform precise movements
such as insertions and grabbing.
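The lock/unlock trick can be sketched as follows (Python sketch; the joint values are illustrative):

```python
def apply_wrist_lock(joint_angles, locked, locked_wrist):
    """While the lock trigger is held, keep the last three joints (the wrist)
    at a stored pose so the EE orientation stays fixed during fine motions."""
    if locked:
        return joint_angles[:-3] + list(locked_wrist)
    return joint_angles

ik_solution = [10.0, 40.0, -20.0, 5.0, 12.0, 3.0]   # illustrative six-DOF pose
pointing_down = [0.0, 90.0, 0.0]                    # illustrative wrist pose
print(apply_wrist_lock(ik_solution, True, pointing_down))
# → [10.0, 40.0, -20.0, 0.0, 90.0, 0.0]
```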
The overall VR interaction with the robot manipulator has been divided into three levels: posing, recording and
playing. In posing mode, the operator guides the robot manipulator freely toward a certain pose. Once the recording
mode is started, all the movements are saved as a trajectory. When the playing mode is active, the robot follows its
recorded trajectory in a loop, independently of the operator’s movements, allowing them to observe the process
without interfering with it.
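The three interaction levels can be sketched as a small state machine (Python sketch; the pose format and class name are illustrative):

```python
class TrajectoryModes:
    """Sketch of the three interaction levels: posing follows the operator,
    recording follows and stores poses, playing loops the stored trajectory
    while ignoring the operator."""
    def __init__(self):
        self.mode = "posing"
        self.trajectory = []
        self._i = 0
    def update(self, operator_pose):
        if self.mode == "posing":
            return operator_pose
        if self.mode == "recording":
            self.trajectory.append(operator_pose)
            return operator_pose
        pose = self.trajectory[self._i]        # "playing": loop the recording
        self._i = (self._i + 1) % len(self.trajectory)
        return pose

m = TrajectoryModes()
m.mode = "recording"
for p in [(0, 0), (1, 0), (1, 1)]:
    m.update(p)
m.mode = "playing"
print([m.update(None) for _ in range(4)])   # → [(0, 0), (1, 0), (1, 1), (0, 0)]
```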
It is interesting to observe the advantage brought by the user perspective in VR. The difference between a first
person interaction with the robotic manipulator and the use of a teach pendant is that VR allows for the space
coordinates to be aligned with the relative position of the operator instead of the robot manipulator origin in the
space. For example, in VR, every time an operator moves the EE of the robot manipulator with a
gesture to their right, “right” is interpreted as the direction to the right of the operator’s gaze. This means
that “right” is a unit vector constantly updated with the operator’s gaze movements, and the robotic movement
corresponding to the command is perfectly intuitive.
On the other hand, the teach pendant will always interpret a movement of the control-pad joystick to the right as a
movement along a specific direction of the robot manipulator’s axes, independently of the operator’s position. This
method requires the operator to be aligned with the robot axes in order to make sense of the commands, or even to
perform a mental transformation of the desired movement direction into the corresponding joystick direction that
would perform such movement. Either the operator knows the transformation a priori, or it must be
guessed with some test movements, thereby slowing down the operator’s work with the machine.
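The gaze-relative mapping can be made concrete for a 2D top-down view; the convention of a counterclockwise-positive yaw measured from the world x-axis in this Python sketch is an assumption:

```python
import math

def gaze_right_vector(yaw_rad):
    """Unit 'right' vector for an operator whose gaze yaw is measured
    counterclockwise from the world x-axis (2D top-down view)."""
    # Gaze direction is (cos yaw, sin yaw); 'right' is the gaze rotated -90 deg.
    return (math.cos(yaw_rad - math.pi / 2), math.sin(yaw_rad - math.pi / 2))

r = gaze_right_vector(0.0)              # facing world +x
print(round(r[0], 6), round(r[1], 6))   # → 0.0 -1.0
r = gaze_right_vector(math.pi / 2)      # after turning 90 degrees left
print(round(r[0], 6), round(r[1], 6))   # → 1.0 0.0
```

A teach pendant, by contrast, would map "right" to a fixed robot-frame axis regardless of the yaw, which is precisely the mental transformation the VR interface removes.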
Fig. 2. Assembly task as a game. The virtual ABB IRB 120 robot manipulator is presented as an assembly game where students can test their
skills as operators. The student controlling the movement wears the head-mounted display for VR and holds a controller in their hand. The robot
manipulator end effector follows the movement and direction of the controller, adapting to the closest natural movement allowed by its six DOF.
Other students can watch the scene from the operator’s perspective on the external monitor.
5. Conclusions and future work
The practical application, although very simple and meant as an informal pilot study, has led to observations that
look promising for production engineering simulations based on VR. For example, it helped in understanding how to
manage the correspondence between different degrees of freedom when controlling a robotic arm with a human arm. The
first-person perspective in the control of machines makes it very different from a typical teach pendant activation of the
actions. Advantages can be seen in the operator’s ability to embody the machines and learn “on their skin”
how to perform the manufacturing process. HRC thus assumes a different and far richer form already in VR,
even though only with an AR application can the operator obtain full physical HRC.
It is worth considering the advantage of using open libraries for assets, including scripts. As seen in this paper,
each feature in the game engines can be shared as assets which include CAD models, materials, textures, scripts,
renderings, sound effects, animations, etc. If features such as CAD models have been shared for years between
production software users, new possibilities arise when sharing scripts in simulation software. This is not a new idea
since programmers already share portions of code through specific websites, e.g. GitHub, SourceForge or
BitBucket; what is new is that scripts can be shared as plug-and-play items, attached directly to a VR
environment in a simulation software and distributed together with the object that is affected by the script.
They will be jointly loaded by the simulator, ready to be used. This amounts to an open community for
industrial applications that uses shared tools with customized behaviors, running in the same simulation software.
The approach presented is destined to face great challenges in making gaming software fully
compatible with the software standards needed in production engineering simulation. However, the simple
application presented, together with the outlined advantages, encourages further studies in this direction.
The future work that will be carried out includes several open questions presented in this paper, including a
whole new set of possibilities. For example, when an industrial machine is modeled as a prefab in a game engine
such as Unity, it can be exported as an asset, which includes both the CAD parts and the scripts that regulate the
machine’s complete behavior in the VR. Adaptive production planning could exploit such scripts to simulate a design
quickly adaptable to the given manufacturing task.
Fundamental for the remote control of machines is ensuring that actions in the VR correspond to real-
world actions. For example, a mobile robot manipulator could take the place of the operator who is interacting with
the VR environment and perform their actions as output in the real world, especially from a remote location.
More advanced practical cases will be designed and developed to apply and further test the observations and open
questions posed by this paper.
Acknowledgments

The authors would like to thank Andreas Linn, Haisheng Yu, Lisa Schmitz, Mathilde Caron and Rodrigo Roa
Rodríguez for their contributions throughout the development of the virtual robot manipulator application [29].
References

[1] H. Rheingold, Virtual Reality: The Revolutionary Technology of Computer-Generated Artificial Worlds - and How It Promises to Transform
Society, 1st ed. Summit Books, 1991.
[2] J. Encarnacao, M. Gobel, and L. Rosenblum, “European activities in virtual reality,” IEEE Comput. Graph. Appl., vol. 14, no. 1, pp. 66–74,
Jan. 1994.
[3] C. Anthes, R. J. Garcia-Hernandez, M. Wiedemann, and D. Kranzlmuller, “State of the art of virtual reality technology,” in 2016 IEEE
Aerospace Conference, 2016, pp. 1–19.
[4] L. Wang, B. Wong, W. Shen, and S. Lang, “A Java 3d-enabled cyber workspace,” Commun. ACM, vol. 45, no. 11, pp. 45–49, Nov. 2002.
[5] J. D. N. Dionisio, W. G. B. III, and R. Gilbert, “3D Virtual worlds and the metaverse,” ACM Comput. Surv., vol. 45, no. 3, pp. 1–38, Jun.
[6] W. Schreiber, T. Alt, and M. Edelmann, “Augmented reality for industrial applications—a new approach to increase productivity,” Proc.,
[7] T. S. Mujber, T. Szecsi, and M. S. J. Hashmi, “Virtual reality applications in manufacturing process simulation,” J. Mater. Process. Technol.,
vol. 155, pp. 1834–1838, 2004.
[8] D. V. Dorozhkin, J. M. Vance, G. D. Rehn, and M. Lemessi, “Coupling of interactive manufacturing operations simulation and immersive
virtual reality,” Virtual Real., vol. 16, no. 1, pp. 15–23, Mar. 2012.
[9] J. Zhou, I. Lee, B. Thomas, R. Menassa, A. Farrant, and A. Sansome, “Applying spatial augmented reality to facilitate in-situ support for
automotive spot welding inspection,” in Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications
in Industry - VRCAI ’11, 2011, p. 195.
[10] E. Matsas and G.-C. Vosniakos, “Design of a virtual reality training system for human–robot collaboration in manufacturing tasks,” Int. J.
Interact. Des. Manuf., pp. 1–15, Feb. 2015.
[11] F. Lin, L. Ye, V. G. Duffy, and C.-J. Su, “Developing virtual environments for industrial training,” Inf. Sci. (Ny)., vol. 140, no. 1, pp. 153–
170, 2002.
[12] N. Menck, X. Yang, C. Weidig, P. Winkes, C. Lauer, H. Hagen, B. Hamann, and J. C. Aurich, “Collaborative Factory Planning in Virtual
Reality,” Procedia CIRP, vol. 3, pp. 317–322, 2012.
[13] E. Lindskog, J. Vallhagen, and B. Johansson, “Production system redesign using realistic visualisation,” Int. J. Prod. Res., vol. 55, no. 3, pp.
858–869, 2016.
[14] S. Qiu, X. Fan, D. Wu, Q. He, and D. Zhou, “Virtual human modeling for interactive assembly and disassembly operation in virtual reality
environment,” Int. J. Adv. Manuf. Technol., vol. 69, no. 9–12, pp. 2355–2372, Dec. 2013.
[15] X. Wang, S. K. Ong, and A. Y. C. Nee, “Real-virtual components interaction for assembly simulation and planning,” Robot. Comput. Integr.
Manuf., vol. 41, pp. 102–114, 2016.
[16] C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti, “Surround-screen projection-based virtual reality,” in Proceedings of the 20th annual
conference on Computer graphics and interactive techniques - SIGGRAPH ’93, 1993, pp. 135–142.
[17] M. Quigley, K. Conley, B. Gerkey, and J. Faust, “ROS: an open-source Robot Operating System,” in ICRA Workshop on Open Source Software, 2009.
[18] R. Codd-Downey, P. M. Forooshani, A. Speers, H. Wang, and M. Jenkin, “From ROS to unity: Leveraging robot and virtual environment
middleware for immersive teleoperation,” in 2014 IEEE International Conference on Information and Automation (ICIA), 2014, pp. 932–
[19] Y. Hu and W. Meng, “ROSUnitySim: Development and experimentation of a real-time simulator for multi-unmanned aerial vehicle local
planning,” Simulation, vol. 92, no. 10, pp. 931–944, Oct. 2016.
[20] W. Meng, Y. Hu, J. Lin, F. Lin, and R. Teo, “ROS+Unity: An efficient high-fidelity 3D multi-UAV navigation and control simulator in GPS-denied environments,” in IECON 2015 - 41st Annual Conference of the IEEE Industrial Electronics Society, 2015, pp. 2562–2567.
[21] E. A.-L. Lee and K. W. Wong, “Learning with desktop virtual reality: Low spatial ability learners are more positively affected,” Comput.
Educ., vol. 79, pp. 49–58, 2014.
[22] M. A. Muhanna, “Virtual reality and the CAVE: Taxonomy, interaction challenges and research directions,” J. King Saud Univ. - Comput.
Inf. Sci., vol. 27, no. 3, pp. 344–361, 2015.
[23] D. Wang, Y. Zhang, W. Zhou, H. Zhao, and Z. Chen, “Collocation Accuracy of Visuo-Haptic System: Metrics and Calibration,” IEEE
Trans. Haptics, vol. 4, no. 4, pp. 321–326, Oct. 2011.
[24] B. Frohlich, H. Tramberend, A. Beers, M. Agrawala, and D. Baraff, “Physically-based manipulation on the Responsive Workbench,” in
Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048), pp. 5–11.
[25] L. Wang, A. Mohammed, and M. Onori, “Remote robotic assembly guided by 3D models linking to a real robot,” CIRP Ann. - Manuf.
Technol., vol. 63, no. 1, pp. 1–4, 2014.
[26] L. Wang, “Collaborative robot monitoring and control for enhanced sustainability,” Int. J. Adv. Manuf. Technol., 2015.
[27] L. Wang, P. Orban, A. Cunningham, and S. Lang, “Remote real-time CNC machining for web-based manufacturing,” Robot. Comput. Integr.
Manuf., vol. 20, no. 6, pp. 563–571, 2004.
[28] S. Starke, N. Hendrich, S. Magg, and J. Zhang, “An Efficient Hybridization of Genetic Algorithms and Particle Swarm Optimization for
Inverse Kinematics.”
[29] A. Linn, H. Yu, L. Schmitz, M. Caron, and R. Roa Rodríguez, “ViRobot.” [Online]. Available:
Assembly motion simulation is crucial during conceptual product design to identify potential assembly issues and enhance assembly efficiency. This paper presents a novel assembly simulation system incorporating real-virtual components interaction in an augmented-reality based environment. With this system, users can manipulate and assemble virtual components to real components during assembly simulation using natural gestures through an enhanced bare-hand interface. Accurate and realistic movements of virtual components can be obtained based on constraints analysis, determination of resultant forces from contacts with real components and manipulation from the user's hands (forces and torques applied, etc.) during assembly simulation. A prototype system has been developed, and a case study of an automobile clutch was conducted to validate the effectiveness and intuitiveness of the system.