
MINERVA e-Learning Innovative Methods and VR for Enterprises

Authors:
  • JO Group - Digital Transformation and European Projects
  • JO Group, Catania, Italy

Abstract

The project aims to realize an innovative platform for corporate learning, based on e-learning systems capable of integrating with training courses realized in Virtual Reality (VR). The purpose is to offer users an innovative and stimulating learning environment in which the digital training contents are based on objects and tools that simulate real-life situations. This new platform allows manufacturing enterprises to test new modes of corporate training, enabling the creation of immersive environments for enterprises' training processes on machines and equipment, so that employees can simulate both simple operations, such as assembling a machine's equipment, and more complex ones that imply potential criticalities in the manufacturing process. Through this report, we will explore the technical choices deployed for the implementation of the project. This is just the initial part of the MINERVA project, whose realization will require medium-term work. The functions described, as well as the issues and problems detected, are part of an ongoing check aimed at ensuring the project runs smoothly.
https://www.jogroup.eu/en/
https://www.pmf-research.eu/en/
https://ht-apps.eu/en/
Table of Contents
INTRODUCTION
  Basic information and scope of the research
  Definition & development of VR modules methodology and virtual courses
  Datasets & tests
  Experimental procedure: procedural description & methodology
  VR system comparison
1. TECHNICAL DESCRIPTION
  1.1. glTF 2.0 for 3D models: a solution for developing VR and AR apps
  1.2. Workflow
  1.3. glTF 2.0 format
  1.4. Materials & Maps (Textures)
  1.5. Blender & node programming
  1.6. Principled BSDF and material output
  1.7. Texture mapping of MINERVA project 3D models
  1.8. Bake & exporting of .glTF (embedded)
  1.9. glTF viewer
  1.10. Issues in the usage of .glTF files and final considerations
2. Project MINERVA: Photovoltaic system
  2.1. Implementation of the project
  2.2. Electric meter & texturing
  2.3. The MINERVA logo & the photovoltaic system
  2.4. Issues in the Implementation & Concluding Remarks
REFERENCES
MATERIALS INDEX
INTRODUCTION
Basic information and scope of the research
The project aims to realize an innovative platform for corporate learning, based on e-learning systems capable of integrating with training courses realized in Virtual Reality (VR). The purpose is to offer users an innovative and stimulating learning environment in which the digital training contents are based on objects and tools that simulate real-life situations. This new platform allows manufacturing enterprises to test new modes of corporate training, enabling the creation of immersive environments for enterprises' training processes on machines and equipment, so that employees can simulate both simple operations, such as assembling a machine's equipment, and more complex ones that imply potential criticalities in the manufacturing process. Through certain devices (smartphones or tablets, goggles and 3D glasses) it will be possible to guide the worker within an environment that faithfully reproduces reality. The advantages are in terms of employees' safety, since dangerous operations can be learned at zero risk to their health, and in terms of operating costs, given that it eliminates costs linked to the typical human mistakes made by unskilled workers.
The VR project will concern the start-up process of a small photovoltaic system, through the activation of two disconnectors and the connection of a cable inside an inverter. These steps will activate a lamp, which in turn draws energy from a solar panel (an island/stand-alone type system).
Through this report, we will explore the technical choices deployed for the implementation of the project. This is just the initial part of the MINERVA project, whose realization will require medium-term work. The functions described, as well as the issues and problems detected, are part of an ongoing check aimed at ensuring the project runs smoothly.
Definition & development of VR modules methodology and virtual
courses
The development of VR applications is not an easy task; rather, it is one of considerable complexity. A growing interest in the tools that allow users to develop or participate in the production of this software is evident; in particular, the awareness raised by the media has increased VR's popularity and produced a higher demand for instruments that enable users to create their own virtual worlds.
The development of VR applications needs the involvement of experts in the fields of VR and augmented reality: 3D model developers, animators, graphic designers, sound technicians and writers. Some of these experts may collaborate with programmers to create virtual-world prototypes. An important aspect is that both project designers and evaluators must possess expertise in human-machine interaction, given that VR is a technology in which, more than in any other conventional application, the human factor is a decisive feature.
In order to successfully realize the virtual courses, after an accurate literature review and an analysis of important experiences and empirical evidence, the following will be defined:
  • the VR product life cycle
  • a methodology for the creation of virtual worlds
  • a methodology on human-machine interaction
The innovative e-learning 4.0 is a significant step for the implementation of training in different contexts. In particular, for those who face issues and difficulties, it contributes to the reduction of the 'digital divide' and becomes an enabler also for 'heavy' enterprise sectors, which today are considered outside the reach of traditional e-learning.
Corporate training in many cases prescribes tracking of the completed activities according to internationally known standards, and the fulfilment of training obligations often has to be attested by certificates. This function, normally available in the field of e-learning, has not yet reached VR contexts. For this reason, the characteristics of the e-learning and VR modules will be taken into account, and the mechanisms for sending and receiving data will be established; these will have to be interfaced through the use of the interchange bridge, a fundamental tool to ensure the correctness of the data and compliance with parameters such as response times, jitter, etc. On the basis of the decisions taken in the previous phases, it will be established whether the data interchange will take place only at the beginning and at the end of the course, or in real time during the entire execution of the virtual reality activities.
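To make this exchange concrete, here is a minimal sketch, in Python, of how the VR module might notify the e-learning platform through the interchange bridge. The endpoint, payload schema and field names are hypothetical placeholders (loosely xAPI-like), not the project's actual API.

```python
import requests  # assumed available; any HTTP client would do

# Hypothetical endpoint of the e-learning platform's interchange bridge.
BRIDGE_URL = "https://elearning.example.com/api/vr-bridge/events"

def send_vr_event(user_id, course_id, verb, score=None):
    """Send one tracking event from the VR module to the e-learning side."""
    payload = {
        "actor": {"id": user_id},
        "verb": verb,               # e.g. "started", "completed"
        "object": {"id": course_id},
    }
    if score is not None:
        payload["result"] = {"score": score}
    # Response time matters for the bridge (latency, jitter), hence the timeout.
    response = requests.post(BRIDGE_URL, json=payload, timeout=2.0)
    response.raise_for_status()

# Sent once at the start and once at the end of the course, or streamed in
# real time during the VR activity, depending on the design decision above.
send_vr_event("worker-42", "minerva-photovoltaic-101", "started")
```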
Datasets & tests
A few tests will be run for every single system unit with the aim of verifying the correctness of the functioning of the entire process; afterwards, documents and reports will be produced, reporting tabs, features, the inputs (both correct ones and possible mistakes) and the results that we want to achieve. The model used will be Black Box Testing; specifically, the functional testing technique will be adopted, and the tables used will follow the logic of Decision Table Testing.
To test the individual components of the system, datasets with pre-determined characteristics will be prepared, distinguished so that the expected output is known: that is, the operations to be carried out correctly with respect to the expected activity and those that generate errors. These datasets will cover both the software aspect (input data) and the hardware aspect (gestures to be performed, connection to the device network). Test activities with these datasets will then be planned to verify how far the results obtained deviate from those expected. For each table mentioned above, a figure will be designated with the task of testing the result against both the correct and the wrong dataset, and will be provided with the information necessary to fill these tables with the results obtained.
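As an illustration of the Decision Table Testing logic described above, here is a minimal sketch in Python; the tested function and the table rows are invented stand-ins for the project's real test tables.

```python
import unittest

def activate_system(cable_connected, panel_closed):
    """Toy stand-in for one system unit: the inverter powers on only if the
    cable is connected and the inverter panel has been screwed back on."""
    return cable_connected and panel_closed

# Each row is one column of the decision table:
# (condition 1, condition 2, expected action).
DECISION_TABLE = [
    (True,  True,  True),   # correct input -> system powers on
    (True,  False, False),  # panel left open -> error expected
    (False, True,  False),  # cable not connected -> error expected
    (False, False, False),  # both wrong -> error expected
]

class DecisionTableTest(unittest.TestCase):
    def test_table(self):
        for cable, panel, expected in DECISION_TABLE:
            with self.subTest(cable=cable, panel=panel):
                self.assertEqual(activate_system(cable, panel), expected)

if __name__ == "__main__":
    unittest.main()
```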
Experimental procedure: procedural description & methodology
Evaluating the teaching material used in VR and e-learning systems is very complex,
because it is a question of analysing (and then integrating) aspects related to cognitive
and dynamic psychology, pedagogy and training sciences, technology and business
organization. Testing is one of the main components of course evaluation.
Therefore, the following aspects will be taken into account and set forth to run the experiment:
  • quantitative aspects: length of the training course, time of use, equivalent time of in-presence training
  • qualitative aspects: contents, methodology, technology, usability, coherence
  • methodology: considered as the way in which the user is actively involved in increasingly complex experiences, with an educational system that manifests a growing "intelligence" (tests, interactive exercises, path and dynamic simulations, etc.)
  • reporting: courses developed in a VR environment must take into account the data collection methodology of the e-learning module; therefore they will have to produce one or more outputs consistent with the overall system
The project will be developed in two phases. In the first, a tutorial will be made available in order to understand the usage, function and aim of the project. The second phase will deal with the actual execution.
FIRST PHASE: TUTORIAL
1. The user, who will find himself immediately immersed inside the room, will follow the 2D texts and arrows placed inside the 3D environment. These will direct him to the inverter, allowing him to remove the panel and to connect the only disconnected cable (next to the display). After that, the user can close the inverter with the previously removed panel (the panel is removed by unscrewing 4 screws at the corners and re-screwing them once the action is completed);
2. After that, the user will have to go out of the room to activate the disconnector inside the electrical panel placed on a wall next to the photovoltaic panel. Once the disconnector is activated (without it, the inverter would not work), the user will enter the room again (by making this journey, he will become familiar with the environment and with VR movements). Here the inverter with its LED panel will turn on;
3. At this point, the next step will be to activate the second disconnector positioned
near the inverter;
4. By doing this, the user will be able to move and press the switch positioned on the
wall, which will turn on the neon lamp.
SECOND PHASE: EXECUTION
In the second phase, the user will be inside the 3D room, as in phase one. The difference is that the disconnector and inverter elements will be placed on the floor in no particular order. The user must place the various elements on the correct walls and activate the whole system in the correct order (as learned in phase 1), without any indications. The order of the elements can be deduced from the position of the cables, but also from the arrangements learned in the phase 1 tutorial.
VR system comparison
In the following section, a list of different VR headsets is illustrated. Each system under consideration includes a list of features, technical specifications and development possibilities. Possible pros and cons for the development of applications on each system are also considered. Other models were examined but not included in the list, as they were deemed not to comply with the requirements of the application currently under development.
  • OCULUS QUEST
  • OCULUS RIFT S
  • HTC Vive
  • Valve Index
OCULUS QUEST
The main advantage of this device is that it is all-in-one: the VR headset doesn't require a PC connection to function, given that it possesses an internal memory of 64 or 128 GB (non-expandable) that can be used to save applications and files directly on the device. Nevertheless, the headset must be synchronized with a mobile device (Android or iOS) to be used.
The system includes a complex tracking system capable of detecting the movements of the wearer and reproducing them in the virtual world. OCULUS QUEST also provides two controllers that can be used to track and reproduce hand movements to handle elements such as the library, the settings tab and the Oculus store. Moreover, according to the producer, in the near future we might see an extension of these features to other apps and functionalities.
OCULUS RIFT S
In terms of requirements and features, Oculus Rift S is similar to Oculus Quest. It has the same controllers as the Quest and is able to reproduce hand movements, and it offers an excellent high-definition screen, a high refresh rate and a wider field of view. Notwithstanding the efficiency of these goggles, some disadvantages must be listed:
  • it has to be wired
  • it requires a connection to a high-performance PC able to power the device
  • there is no interaction with mobile devices
Although they are not crippling deficiencies, these disadvantages can impact the functioning of the project. The use of a cable greatly restricts the user's movements; for this reason there would be a need to simulate a use case.
HTC VIVE
This device is very similar to the Oculus Rift S. It has two precision controllers, two laser tracking devices that warn the user of possible collisions with obstacles, and a high refresh rate of 90 Hz. Just like the Oculus Rift S, this device is fully wired and requires the use of a high-performance PC. As the laser tracking devices are also wired, many cables can be expected to be scattered around the room.
VALVE INDEX
This device was developed entirely by Valve and conceived from the beginning to provide an astonishing VR experience. Equipped with a 130° field of view, a very high refresh rate of 144 Hz and an IPS panel, it is able to give higher image quality than the headsets examined so far. Unfortunately, both controllers and laser trackers are sold separately. The model is also wired, and more expensive, given the high-quality features of the goggles.
1. TECHNICAL DESCRIPTION
1.1. glTF 2.0 for 3D models: a solution for developing VR and AR
apps
The development of VR and AR apps for the MINERVA project through A-Frame (an open-source web framework for creating VR apps) required the export of the project's 3D models in .gltf format. After several considerations on the format, we agreed, at the end of the modeling process, that it was more convenient for the developer if the modeler packed meshes, UV maps, textures and animations into a single file, a .gltf (Embedded).
The process was performed for all 3D models of the AR/VR project, which include the following elements: the terrace of a skyscraper, a wall with glass, a photovoltaic panel, an electricity meter, a cable channel, windows, an inverter, a switch, a fire door and a huge neon sign representing the logo of the "MINERVA" project.
1.2. Workflow
Hard Surface modeling (a type of procedural modeling that consists in modifying primitive solids up to the three-dimensional design of inanimate and non-organic objects, such as machines, armor, weapons, architectural elements, etc.) was used for all the models, in Autodesk Maya; each resulting 3D model was exported to .fbx, so that the meshes, divided into several layers, could be packed with their respective UV maps within the file. This division was useful for the texturing of the individual models and, subsequently, for the spatial repositioning of the various elements within A-Frame's real-time rendering.
The single .fbx file obtained was then imported into Adobe Substance Painter, where a texture-painting process (a process that uses UV maps to draw the textures of a 3D model) was carried out for every single element contained within the .fbx file. Once the texturing process was completed, folders were created for the individual texture maps of each element of the .fbx file, so that they could then be implemented and packed through node programming in Blender.
In Blender it was possible to study the mapping of the various textures of the models, in relation to the problem of compatibility with the .gltf format.
After a series of tests, it was possible to obtain a .gltf (Embedded) file for each of the 3D models made, in which meshes, animations, UV maps and textures (already fused and still modifiable) are contained inside the file and compatible with the A-Frame framework.
1.3. glTF 2.0 format
glTF™ (GL Transmission Format) is a format used to transmit and load 3D models on the web and in native applications. The .gltf format reduces the size of 3D models and the boot processes necessary to "unpack and process" the models contained within it. In short, it is a format containing models optimized for real-time rendering on the web, able to hold all the information necessary to display a model entirely (including mesh, UV map, textures and animations).
During the research and development for the MINERVA project, .gltf 2.0 was used as the main format for the realization of the 3D models.
From now on we will consider working in the Blender software, thus taking into account the rendering features available from the glTF 2.0 exporter of that program.
Here are its main features, which emerged during the tests and subsequently led to the realization of the complete files.
USABILITY
Blender includes a glTF 2.0 importer/exporter that supports the following features:
  • Mesh
  • Material (Principled BSDF) and Shadeless (Unlit)
  • Texture (two-dimensional image of a material)
  • Camera
  • Point-like lights (point, spot and directional lights) [1]
  • Animations (insertion of keyframes, shape keys and possible rigging)
MESH & OPTIMIZATION
The internal structure of the .gltf mimics the memory buffers used by graphics chips during real-time rendering, so that the resources contained within it can be exploited in the best way by desktop, web or mobile apps and displayed as optimally as possible. It is therefore useful to point out that in the export to .gltf the number of polygons is automatically reduced, and faces with n sides (n ≥ 4) are triangulated [2], so that models have as few polygons as possible, all triangular in shape, suitable for real-time rendering.
MATERIALS
The .gltf Basic Material system (Figure 1) supports a Physically Based Rendering (PBR) [3] workflow with the following channels of information:
  • Base Colour
  • Metallic
  • Roughness
  • Baked Ambient Occlusion
  • Normal Map
  • Emissive
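Since a .gltf (Embedded) file is plain JSON, the presence of these channels can be verified programmatically. A minimal sketch in Python, assuming a hypothetical file name:

```python
import json

def list_material_channels(path):
    """Print which PBR channels each material of a .gltf file carries."""
    with open(path, encoding="utf-8") as f:
        gltf = json.load(f)
    for material in gltf.get("materials", []):
        pbr = material.get("pbrMetallicRoughness", {})
        channels = {
            "Base Colour": "baseColorTexture" in pbr or "baseColorFactor" in pbr,
            "Metallic/Roughness": "metallicRoughnessTexture" in pbr,
            "Normal Map": "normalTexture" in material,
            "Baked Ambient Occlusion": "occlusionTexture" in material,
            "Emissive": "emissiveTexture" in material or "emissiveFactor" in material,
        }
        present = [name for name, found in channels.items() if found]
        print(material.get("name", "<unnamed>"), "->", ", ".join(present) or "none")

list_material_channels("inverter.gltf")  # hypothetical file name
```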
Blender's import system for .gltf works differently from its exporter, but it is not useful for the purposes of this project, so it will not be dealt with in this report.

[1] Environmental light is excluded because it would be given by any HDR to be implemented on the A-Frame framework.
[2] Curves and other data that are not part of the mesh are not retained within the .gltf file after export.
[3] A texture reworking that simulates how light reacts on a model, to imitate real-life materials.
1.4. Materials & Maps (Textures)
In order for a 3D model to be as realistic as possible, in addition to the high detail dictated by mesh geometry (number of polygons, shape and arrangement in space in harmony with the different elements), the materials associated with individual 3D objects must have properties as similar as possible to those of real materials (PBR).
In principle, materials can be divided into natural materials (i.e. used as they are found in nature, e.g. stone), modified natural materials (i.e. materials that preserve their internal composition unchanged but are partially transformed by man, e.g. wood), and artificial materials (i.e. materials whose composition has been completely altered by transformation processes, e.g. rubber).
All materials in turn differ in chemical-structural, physical, mechanical and technological properties.
In the texture-painting process, these properties translate into interaction with natural and/or artificial light, and are:
  • Transparency
  • Reflectivity and/or shine
  • Roughness and/or smoothness
  • Colour and/or texture [4]
The maps used during the MINERVA project are:
  • Diffuse Map (also called Albedo Map or Base Color), i.e. the image of the material without any information related to light
  • Normal Map (a map whose purpose is to improve the details of the material, allowing depressions or elevations such as holes, cracks, scratches, etc. to be observed)
  • Specular Map (a black-and-white map in which the darker areas are poorly reflective, while the lighter ones are more reflective)
  • Bump Map (a map used to highlight any imperfections)
  • Alpha (Opacity) Mask Map (a clipping mask indicating the transparency channel)
  • Metalness Map (a black-and-white map in which black pixels define an insulating surface, while white pixels define a metal surface)
  • Roughness Map (a black-and-white map that allows the roughness of a material to be changed)
  • Height Map (a black-and-white map where light areas represent embossed areas, while dark areas represent bas-relief areas)

[4] In fact, it is useful to remember that PBR is based on lighting techniques that simulate the behavior of light as it happens in the real world. The process by which a texture (or map) is associated with an object is called texture mapping.
1.5. Blender & node programming
Before looking at what was done at the practical level to pack the meshes with textures, it is useful to give a brief introduction to Blender's Shader Editor.
The system used for "packing" textures into the .gltf, associated with the UV maps of the various meshes, is the node programming of the Blender Shader Editor, a visual, node-based programming system in which a tree structure consists of two fundamental substructures: the node, which contains the information, and the arc, which establishes a hierarchical link between two nodes.
When the arc establishes a hierarchical link between two nodes, we speak of a parent node (the information container), from which an oriented arc (the passage of information) connects it to a child node (the receiver of the information).
Fig. 2 - Texture map examples on a sphere (left to right): Base Colour, Displacement Map, Reflection Map, Normal Map
(source: https://help.poliigon.com/en/articles/1712652-what-are-the-different-texture-maps-for)
The Shader Editor is used to modify the materials used within rendering. Within Blender, these take on properties assigned by Blender's standard graphics engines, Cycles and Eevee. When you proceed with the export to .gltf, the properties will change depending on the graphics engine in which the file is implemented. To begin with, you need to assign a Material to the model and then edit that Material using the Shader Editor.
1.6. Principled BSDF and material output
To connect textures to the associated meshes, the Principled BSDF was exploited. This is a standard Blender parent node (usually, when you create a material in Blender, it is automatically assigned to the reference model), which contains all the basic properties that most texture maps need in order to exploit the graphics engine within which the reference 3D model is placed. In fact, if textures are not assigned to the Principled BSDF and/or the values of the various maps are not changed, no changes will be noticed in the model, since the base material associated with it at the beginning is not altered.
But how do we make the graphics engine understand that our 3D model must possess those requirements, not yet assigned? Simply by connecting the Principled BSDF parent node through an arc to the Material Output child node, which, as the name says, is the output node that will contain the information of all materials applied to the mesh.
Fig. 3 - Principled BSDF and Material Output
(source: https://docs.blender.org/manual/it/2.80/addons/io_scene_gltf2.html)
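The same parent/child link can also be built programmatically with Blender's Python API. A minimal sketch of the node structure described above (an illustration, not the project's actual pipeline):

```python
import bpy  # run inside Blender

material = bpy.data.materials.new(name="MinervaMaterial")
material.use_nodes = True
nodes = material.node_tree.nodes
links = material.node_tree.links

nodes.clear()
bsdf = nodes.new("ShaderNodeBsdfPrincipled")    # parent node (information container)
output = nodes.new("ShaderNodeOutputMaterial")  # child node (receiver)

# The oriented arc: the BSDF surface information flows into the Material Output.
links.new(bsdf.outputs["BSDF"], output.inputs["Surface"])

# Assign the material to the active object (e.g. the selected mesh).
bpy.context.active_object.data.materials.append(material)
```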
1.7. Texture mapping of MINERVA project 3D models
Given the right premises about what a Material is, how Blender's Material standards work, and how node programming works, it is now possible to take a closer look at the texture mapping and packing process of the .gltf.
After a series of tests, it was found that:
  • to set the Albedo Colour Map, it is enough to apply it, with Colour Space fixed to sRGB, to the Base Color of the Principled BSDF; in this way the Material assumes that colour
  • the Metalness, Specular and Roughness maps have to be set respectively on the Metallic, Specular and Roughness inputs of the Principled BSDF, with Colour Space fixed on Non-Colour; in this way the Principled BSDF understands that the Material must be assigned only the information necessary to create the desired light effect, and not the colour as well
  • as for the Opacity Map, after several tests it became clear that it was necessary to set the Material to Alpha Blend mode and to export the texture as a .png, so that the .gltf also reads the alpha channel of the image; here, obviously, the Opacity Map has to be connected to the Alpha channel, again with Colour Space fixed on Non-Colour
  • with regard to the Height and Normal Maps, it is necessary to add the two vector nodes, Bump and Normal Map, leaving the latter set to Tangent Space; the Colour of the Height Map is connected to the Height input of the Bump node, the Colour of the Normal Map texture to the Colour input of the Normal Map node, and the Normal output of the Normal Map node to the Normal input of the Bump node; finally, the normal thus obtained is connected to the Normal of the Principled BSDF (a code sketch of this wiring follows Fig. 4)
Fig. 4 - Shader Editor of the 3D skyscraper door asset used in MINERVA project testing
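The wiring rules above can be condensed into a short Blender Python sketch; the material name and texture file names are hypothetical placeholders (the Specular and Opacity channels are wired analogously):

```python
import bpy  # run inside Blender

mat = bpy.data.materials["MinervaMaterial"]  # assumes the material exists
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

def add_texture(path, non_color):
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load(path)
    # sRGB for the Albedo map, Non-Colour for every data map.
    tex.image.colorspace_settings.name = "Non-Color" if non_color else "sRGB"
    return tex

# Albedo -> Base Color (sRGB).
links.new(add_texture("albedo.png", False).outputs["Color"], bsdf.inputs["Base Color"])
# Metalness / Roughness -> Metallic / Roughness (Non-Colour).
links.new(add_texture("metalness.png", True).outputs["Color"], bsdf.inputs["Metallic"])
links.new(add_texture("roughness.png", True).outputs["Color"], bsdf.inputs["Roughness"])

# Height and Normal combined through a Bump node; the Normal Map node stays
# in Tangent Space (its default).
normal_map = nodes.new("ShaderNodeNormalMap")
bump = nodes.new("ShaderNodeBump")
links.new(add_texture("normal.png", True).outputs["Color"], normal_map.inputs["Color"])
links.new(add_texture("height.png", True).outputs["Color"], bump.inputs["Height"])
links.new(normal_map.outputs["Normal"], bump.inputs["Normal"])
links.new(bump.outputs["Normal"], bsdf.inputs["Normal"])
```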
1.8. Bake & exporting of .glTF (embedded)
After having completed the texture mapping process of the individual 3D elements, we proceeded with the export of the individual elements, selecting them one at a time and paying attention when selecting the hierarchies of each object.
When the export format is chosen, it is important to select Embedded glTF (.gltf), in such a way as to create a single file that includes meshes, UV maps, textures and animations; to do so, it is clearly necessary to select all the options mentioned above (Figure 5).
Fig. 5 - Example of a .gltf export in Blender
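The equivalent export call from Blender's Python console, as a sketch (option names can vary slightly between Blender versions; the output file name is a placeholder):

```python
import bpy  # run inside Blender

bpy.ops.export_scene.gltf(
    filepath="//inverter.gltf",      # hypothetical output name ("//" = .blend folder)
    export_format="GLTF_EMBEDDED",   # one file: meshes, UV maps, textures, animations
    use_selection=True,              # only the selected element and its hierarchy
    export_animations=True,
)
```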
Once this was done for all meshes, the developer was able to proceed with the testing
of the implementation of the models within the framework.
1.9. glTF viewer
To test the various 3D models, the glTF Viewer was used; it also made it possible to verify the actual success of the packing of features through the "More details" button. The viewer is available at the web address:
https://gltf-viewer.donmccurdy.com
Here is how the skyscraper (Figure 6) appeared, on the whole, with all its elements, at
the end of the elaboration of the project:
Fig. 6 - 3D model test of the skyscraper on glTF Viewer
1.10. Issues in the usage of .glTF files and final considerations
Overall, the models exported in the .gltf 2.0 format turned out to be excellent from the point of view of image quality, but excessively heavy, and impossible to load into the cache when opening the executable in A-Frame on the Oculus Quest device with 2K textures (2048x2048 px per texture, a size usually used for apps that take advantage of real-time rendering, such as video games or simulators).
Using the .gltf format proved particularly insidious due to several limitations imposed by the format. The self-optimization process imposed by the .gltf export makes some features inaccessible, incompatible or unavailable (features which, for example, in a .fbx file, more suitable for developing desktop 3D apps, can be made intrinsic to the object).
Here is a list of format-imposed limitations that may suggest poor employability of the format, according to the developer who tested it on the A-Frame framework:
  • excessive compression of the polygonal model and loss of texture quality
  • excessive size for the web, despite file compression
  • complications in exporting, since a .gltf conversion tool is present in only a few software packages
  • lack of versatility in editing animations
  • incompatibility with relatively large models, such as buildings, even low-poly ones
  • lag with the maximum texture resolution of 2048x2048 px
  • excessively high minimum device requirements for use on the web and mobile
2. Project MINERVA: Photovoltaic
system
2.1. Implementation of the project
This part of the document explains step by step how we implemented the MINERVA project, its characteristics and the precise objective to be achieved with it, but also what is needed to achieve it. The following report contains a list of physical objects to be digitally modelled in 3D.
Let's start with the inverter modelling (Figure 7), because it presents a problem related to the number of polygons this type of model can afford. We absolutely cannot model all the circuits inside the inverter, because we would risk an excessive polygon count for the mesh; the more polygons a model has, the harder it will be for the PC or other devices to compute it.
Fig. 7 - Inverter modelling
Therefore, after the modelling of the inverter, we move on to the creation of the UV mapping (the coordinates used to apply textures), and afterwards to the texturing phase (this is where we give colour and realism to everything).
Thanks to textures we get a fairly realistic 3D simulation of an inverter, without the use of too many polygons.
The model in question must react to the interactions it has with the user. For this purpose, animations have been created to simulate the opening and closing of the lid of the inverter; as for the cable the user has to connect, it required a structure of bones/joints inside, plus a system that exploits a curve (passing through the joints) to obtain a faithful animation of the translation of the cable with which the user will have to interact.
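One possible reading of this setup, sketched with Blender's Python API (all object and bone names are hypothetical): a Spline IK constraint makes the joint chain follow the curve.

```python
import bpy  # run inside Blender

armature = bpy.data.objects["Cable_Armature"]  # hypothetical bone/joint chain
curve = bpy.data.objects["Cable_Curve"]        # hypothetical curve through the joints

bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode="POSE")

tip = armature.pose.bones["Joint_Tip"]         # last joint of the chain
ik = tip.constraints.new("SPLINE_IK")
ik.target = curve
ik.chain_count = len(armature.pose.bones)      # the whole chain follows the curve
```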
Once the inverter animations were finished, a compatibility issue arose regarding the rig (the bones/joints added to give motion) of the models imported into Unity, which is why a solution had to be found through a more in-depth study of animation export.
After various research and export tests, we managed to get the animated inverter model with the finished textures, ready to be implemented by the programmer.
2.2. Electric meter & texturing
The electric meter did not present particular problems with regard to modelling; however, one very important rule determines how realistic a model looks, and that is the amount of polygons the model needs in order to reach that realism.
The meter (Figure 8) has various bevels located under the opening of the plastic door; the bevels give that extra touch of realism, but since we cannot recreate them perfectly without using many polygons, textures represent the solution to the problem.
Through a technique that consists in exporting the same model but with more polygons, it was possible to use its mesh to recreate the Normal map (an RGB image that simulates how far an extrusion goes, or particular details that only a model with many polygons possesses).
Fig. 8 - Counter bevels interface
Fig. 9 - UV mapping
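The high-to-low-poly technique described above corresponds to Blender's "selected to active" baking. A minimal sketch with Cycles, assuming hypothetical object names and an image texture node already selected as bake target:

```python
import bpy  # run inside Blender

bpy.context.scene.render.engine = "CYCLES"
bpy.context.scene.render.bake.use_selected_to_active = True

high = bpy.data.objects["Meter_HighPoly"]  # detailed mesh with real bevels
low = bpy.data.objects["Meter_LowPoly"]    # game-ready mesh receiving the map
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low  # bake high-poly detail onto the low-poly

bpy.ops.object.bake(type="NORMAL")
```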
Obviously, before starting to texture the 3D model as usual, one of the things that takes some time is the creation of the UV mapping (Figure 9), since it literally requires opening the mesh and flattening it onto a plane (the UV matrix). Having a perfect UV mapping means taking advantage of every single space of the matrix in order to have quality textures.
The lid of the meter is closed, so the user will have to open it in order to interact with the levers inside it. To open the lid, it is necessary to create a special animation that exploits rotation around a pivot; so, with the use of keyframes, we set its starting and ending points. Through the interpolation of these two points we obtain an animation curve (controllable in the Graph Editor) that allows better control of speed and acceleration.
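A minimal sketch of such a two-keyframe animation with Blender's Python API (the object name and angle are hypothetical); Blender interpolates between the keyframes, producing the curve editable in the Graph Editor:

```python
import bpy  # run inside Blender
from math import radians

lid = bpy.data.objects["Meter_Lid"]  # hypothetical lid object, pivot at the hinge

lid.rotation_euler[0] = 0.0          # starting point: lid closed
lid.keyframe_insert(data_path="rotation_euler", frame=1)

lid.rotation_euler[0] = radians(110) # ending point: lid open
lid.keyframe_insert(data_path="rotation_euler", frame=30)
```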
The switch (Figure 10) was very easy to implement and model, as it is nothing more than a simple cubic model with a button to be pressed by the user for the ignition of the neon sign (the MINERVA logo).
Even in this case the model goes through the creation of UV mapping and textures, but also the animation of the key, since it will have to respond visually with a rotation that simulates the on/off action.
Clearly, the models will not always achieve maximum realism, as this depends on the number of polygons and the quality of the textures. The computing power of the device (which will run the simulator) will determine the resolution of the textures.
The maximum resolution of each individual texture is 2K, but it could most likely vary.
Fig. 10 - Switch model
2.3. The MINERVA logo & the photovoltaic system
Modelling the neon sign was not an easy task, not for the complexity of the model per se, but because of the amount of polygons it needs so as not to look too edgy.
The neon represents the MINERVA logo (Figure 11), so an official logo in .svg format was used in order to create a faithful extrusion from the file itself.
The neon polygons were reduced as far as possible; then some lighting tests were carried out, taking advantage of a rendering system that is not real-time. The image of the logo, as shown in Fig. 11, does not represent the definitive result that will be seen in real time during the simulation.
Fig. 11 - Lighting tests on logo.
Finally, we move on to the modelling of the protagonist of the scene, namely the photovoltaic panel (Figure 12). Even in this case the processes are basically the same; changes in the procedure depend on the object being created.
Fig. 12 - Photovoltaic system
MODELLING
The modelling process starts with a reference photo, which allows every single piece of the panel to be copied while keeping within the limit of the polygon count. When possible, an image plane can be used (an image that serves as a tracing guide for the creation of any model).
UV CREATION
When the model is ready, it is time to unwrap all the pieces into a UV map, which holds the coordinates of the model, without losing the resolution of the pieces. A UV-mapped model can be compared to a chocolate surprise egg, in which the paper that wraps it acts like the UV map (the coordinates); the print on the paper is nothing more than the texture that colours the model.
TEXTURING
There are different types of textures (which serve to give the model realism), and they are named based on context.
In this case, having to consider Unity as the engine, we have available: Albedo Colour, Metallic, Roughness, Height Map and Normal Map.
Fig. 13 - Set of textures deployed and HDR 360° image for lights’ application
The skyscraper initially posed some issues. The question we asked ourselves was: "How do we model the whole outdoor environment of a thousand buildings?", and the answer was simple: we can't.
Let's imagine having models that exceed 100,000 polygons just to recreate all the buildings; we have to take into account that reproducing a simulator of such size on the web is not possible, since gigabytes of files would accumulate that would greatly slow down the initial loading, or maybe even prevent the simulator from starting.
To solve the problem, we opted for the use of an HDR, a 360-degree image that, projected onto a sphere (a bit like the globe, but seen from the inside), allows the surrounding environment to be observed. The HDR is also used to recreate the light and the various reflections on the 3D models.
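In Blender terms, this corresponds to plugging an Environment Texture into the World's Background shader. A minimal sketch, with a hypothetical HDR file name:

```python
import bpy  # run inside Blender

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//city_skyline_360.hdr")  # hypothetical HDR

# The environment drives both the visible backdrop and lighting/reflections.
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```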
Once this problem was solved, we started modelling all the elements needed to recreate the terrace.
The last things on the list to model are: the room, the floor, the fence wall of the skyscraper, the canopy (for the protection of the meter), the cable channel, the door, the frame and the windows (to let in some light).
Fig. 14 - Terrace model reproduction
The processes needed to create this environment have already been described (modelling, UV creation, texturing). From the number of models to create, it is easy to guess how much work lies behind it.
After many hours spent between model creation, UV mapping, 3D animation and texturing of each individual element, the result is a realistic test rendering of the terrace, which is not in real time.
Obviously, the real-time textures will be less realistic, without too many light reflections to be calculated by the PC or other devices. The reason, as already explained, is related to performance and to operation on multiple devices.
At this point, slight changes and improvements were made regarding texture optimization and the HDR (for better light rendering). The models are there, and the textures as well; the animations are not missing, but neither are the problems. Basically, on the design side, which deals only with the creative part, there is nothing left to do, but there are minor things still to be solved.
2.4. Issues in the Implementation & Concluding Remarks
The remaining issues are related to the incompatibility of the models with the web. In Unity they work great using the .fbx export format, but on the web this is not possible. For this reason, it was important to look for compatible formats, and among them we find JSON and glTF.
For some time we tried to use JSON, but the problems persisted. In fact, the only software able to export to JSON, through an external plugin, is Blender, and only an older version, not the current one. JSON should allow the import of textures and animations, but when importing the JSON model on the web these do not appear.
The JSON format simply does not work. We then moved on and tried the glTF format, which allows everything to be packed inside a single exported file. So we can export (in addition to the model) the animations and textures in one file. To do this, however, you need to use software that supports glTF export.
The aim of this report was to explore the steps taken in the implementation of the MINERVA project. The project is ongoing; for that reason, changes might occur and additional materials might be included.
After having explored the procedures and the methodology, as well as the tools and devices listed so far, we have accurately described the technical choices adopted to develop the VR/AR training course.
The development of VR applications needs the involvement of experts in the fields of VR and augmented reality: 3D model developers, animators, graphic designers, sound technicians and writers. Some of these experts may collaborate with programmers to create virtual-world prototypes. An important aspect is that both project designers and evaluators must possess expertise in human-machine interaction, given that VR is a technology in which, more than in any other conventional application, the human factor is a decisive feature.
REFERENCES [5]

Abulrub, A. G., Attridge, A. N., & Williams, M. A. (2011). Virtual reality in engineering education: the future of creative learning. In Global Engineering Education Conference (EDUCON), 2011 IEEE (pp. 751-757).

Bailenson, J. N., Yee, N., Blascovich, J., Beall, A. C., Lundblad, N., & Jin, M. (2008). The use of immersive virtual reality in the learning sciences: digital transformations of teachers, students, and social context. The Journal of the Learning Sciences, 17(1), pp. 102-141.

Chittaro, L., & Ranon, R. (2007). Web3D technologies in learning, education and training: Motivations, issues, opportunities. Computers & Education, 49(1), pp. 3-18.

Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323(5910), pp. 66-69.

Gamberini, L., Cottone, P., Spagnolli, A., Varotto, D., & Mantovani, G. (2003). Responding to a fire emergency in a virtual environment: Different patterns of action for different situations. Ergonomics, 46(8), pp. 842-858.

Jou, M., & Wang, J. (2013). Investigation of effects of virtual reality environments on learning performance of technical skills. Computers in Human Behavior, 29(2), pp. 433-438.

Mantovani, F., & Castelnuovo, G. (2003). The sense of presence in virtual training: enhancing skills acquisition and transfer of knowledge through learning experience in virtual environments. In G. D. F. Riva (Ed.), Being there: concepts, effects and measurement of user presence in synthetic environments (pp. 167-182). Amsterdam: IOS Press.

Martirosov, S., & Kopecek, P. (2017). Virtual Reality and its Influence on Training and Education. Annals of DAAAM & Proceedings, vol. 28, pp. 708-717.

Mikropoulos, T. A. (2006). Presence: a unique characteristic in educational virtual environments. Virtual Reality, 10(3-4), pp. 197-206.

Mitchell, P., Parsons, S., & Leonard, A. (2007). Using virtual environments for teaching social understanding to 6 adolescents with autistic spectrum disorders. Journal of Autism and Developmental Disorders, 37(3), pp. 589-600.

Moskaliuk, J., Bertram, J., & Cress, U. (2013a). Impact of virtual training environments on the acquisition and transfer of knowledge. Cyberpsychology, Behavior, and Social Networking, 16(3), pp. 210-214.

Moskaliuk, J., Bertram, J., & Cress, U. (2013b). Training in virtual environments: Putting theory into practice. Ergonomics, 56(2), pp. 195-204.

[5] The literature reference is a collection of recommended papers, reports and texts useful to comprehend the entire spectrum in which we envisage our project. For those who find it interesting to explore this reality and the application of VR/AR in social and training contexts, we suggest an in-depth consultation of the references.
MATERIALS INDEX