From Photo to 3D to Mixed Reality: A Complete Workflow for Cultural
Heritage Visualisation and Experience
Abstract:
The domain of cultural heritage is on the verge of adopting immersive technologies; not only to enhance
user experience and interpretation but also to satisfy the more enthusiastic and tech-savvy visitors and
audiences. However, contemporary academic discourse seldom provides any clearly defined and
versatile workflows for digitising 3D assets from photographs and deploying them to a scalable 3D
mixed reality (MxR) environment; especially considering non-experts with limited budgets. In this
paper, a collection of open access and proprietary software and services are identified and combined via
a practical workflow which can be used for 3D reconstruction to MxR visualisation of cultural heritage
assets. Practical implementations of the methodology has been substantiated through workshops and
participants’ feedback. This paper aims to be helpful to non-expert but enthusiastic users (and the
GLAM sector) to produce image-based 3D models, share them online, and allow audiences to
experience 3D content in a MxR environment.
1. Introduction
Digital documentation and preservation of historical and cultural artefacts has increasingly become an
international priority in recent years, because of concerns regarding the destruction and damage inflicted
on internationally recognised heritage assets located in Afghanistan, Syria, Iraq, and most recently, in
Brazil. On the other hand, the emergence of cultural computing (CC) (Haydar et al., 2011; Wang, 2009)
and advances in computer technologies have helped to streamline the production of
3D documentation, representation and dissemination of cultural heritage data (Barsanti et al., 2014;
Portalés et al., 2009). In particular, the rise of affordable techniques (such as image-based photo
modelling) and free and open source software (FOSS) is remarkable.
The domain of cultural heritage has also extended its application of immersive technologies; with
augmented reality (AR), virtual reality (VR) and mixed reality (MxR) technologies supporting sensory
experiences through a combination of real and digital content. As a reflection of the present trends in
the digitisation of 3D heritage assets, we can quickly find studies and research on 3D modelling and
their application in AR/VR (Bruno et al., 2010; Rua and Alvito, 2011). Ample studies have been
published on image-based modelling software analysing their performance (Durand et al., 2011;
Grussenmeyer and Al Khalil, 2008; Wang, 2011), accuracy in 3D production (Bolognesi et al., 2014;
Deseilligny et al., 2011; Oniga et al., 2017), algorithms (Knapitsch et al., 2017), and scalability
(Knapitsch et al., 2017; Nguyen et al., 2012; Santagati et al., 2013). Additionally, several studies examine the application and use of VR/AR in museums and exhibitions from various perspectives, such as learning in the classroom (Wu et al., 2013), group use of AR in a
public space (Barry et al., 2012), developing virtual exhibitions (Anderson et al., 2010), supporting
interactive experiences through 3D reconstructions (Gkion et al., 2011), and providing low-cost
solutions for 3D interactive museum exhibitions (Monaghan et al., 2011) etc.
However, it is rare to find a complete production pipeline, or guide, for non-experts who are interested in digitising, sharing, and viewing 3D content in a mixed reality (MxR) environment, particularly those working with a restricted budget. To address this, we present a methodology spanning 3D digitisation to MxR visualisation of cultural heritage assets, based on proprietary and open access software and services. We present two cases to explain the workflow. Photographs were taken with both a mobile phone and a digital camera. An image-based modelling technique was used for point cloud generation (with Regard3D). A 3D mesh was generated and optimised from the resulting point cloud (with MeshLab) and later uploaded online for public sharing and AR/VR visualisation (with Sketchfab). Finally, an interactive visualisation in a MxR environment (Microsoft HoloLens) was developed with a game engine (Unity 3D and MS Visual Studio). The aim of this paper is to help non-expert users understand the methodology and follow the workflow to produce image-based 3D models, share them online, and experience the digital assets in VR/AR and even in a MxR environment such as the HoloLens.
2. Proposed methodology and detailed workflow
The reality-virtuality continuum presented by Milgram and Kishino (1994) provided a conceptual
spectrum of visualisation technologies spanning the real world and virtual world (figure 1), and
introduced the core concepts of Virtual Reality (VR), Augmented Reality (AR), Augmented Virtuality (AV) and Mixed Reality (MxR). AR enhances users’ perception and understanding of the real world by superimposing virtual information and objects on top of the view of the real world. VR, on the other hand, completely detaches the viewer from the real world within a computer-generated environment and offers an artificial presence in that virtual world (Carmigniani et al., 2010). Milgram and Kishino (1994) defined MxR as
a subclass of VR that merges the real and the virtual worlds. However, there are instances where the
terms AR and MxR are used interchangeably (Papagiannakis et al., 2018; Raptis et al., 2018).
Figure 1. The reality-virtuality continuum presented by Milgram and Kishino (1994).
In this paper, we present a methodology for producing 3D digital assets from photographs and later deploying them, with interactivity, in a MxR environment to support cultural heritage visualisation and learning. The 3D models are obtained using image-based photo modelling techniques (photogrammetry), and the MxR environment is developed for the HoloLens. We demonstrate a detailed workflow (figure 2) starting from photo acquisition all the way to 3D reconstruction and AR/VR/MxR visualisation. Additionally, we point out best practices and explain how to avoid some common pitfalls.
Figure 2. The overall workflow of 3D modelling to mixed reality visualisation.
3. Photogrammetry/Image based 3D modelling
Depending on the software/service, image-based 3D modelling can be carried out with the support of remote/cloud computing or on a local machine. Software or services such as ARC3D and Autodesk ReMake use the power of cloud computing to carry out the data processing. In contrast, Regard3D, PhotoScan, Aspect3D, 3DF Zephyr, 3D SOM Pro etc. process the data on local client machines. The scope of this paper does not cover the cloud processing method; the workflow focuses only on software/applications that run on a local PC workstation. Generally, these software packages follow six steps to produce 3D reconstructions/3D models: (1) Image acquisition (or adding photos), (2) Feature detection, matching and triangulation (or aligning photos), (3) Sparse reconstruction and bundle adjustment (or point cloud generation), (4) Dense correspondence matching (or dense cloud generation), (5) Mesh/surface generation, and (6) Texture generation. A few software packages also offer cloud/mesh editing within a single package.
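For readers who prefer a scriptable route over a GUI, the same six stages can also be run headlessly. The sketch below, given for illustration only, drives COLMAP (one of the packages listed in Table 1) from a small C# program; the command and flag names follow COLMAP's documented command-line interface, but they, the placeholder paths, and the assumption that the colmap executable is on the system PATH should all be verified against the installed version.

using System.Diagnostics;

// Sketch: the six-stage image-based modelling pipeline run headlessly with COLMAP.
// Assumes the 'colmap' executable is on the system PATH; all paths are placeholders.
class SfmPipeline
{
    static void Run(string args)
    {
        // Each stage is a separate COLMAP invocation; wait for it to finish before starting the next.
        using (var process = Process.Start("colmap", args))
        {
            process.WaitForExit();
        }
    }

    static void Main()
    {
        Run("feature_extractor --database_path project.db --image_path images");                 // (2) feature detection
        Run("exhaustive_matcher --database_path project.db");                                    // (2) feature matching
        Run("mapper --database_path project.db --image_path images --output_path sparse");       // (3) sparse reconstruction / bundle adjustment
        Run("image_undistorter --image_path images --input_path sparse/0 --output_path dense");  // prepare the workspace for densification
        Run("patch_match_stereo --workspace_path dense");                                        // (4) dense correspondence matching
        Run("stereo_fusion --workspace_path dense --output_path dense/fused.ply");               // (4) dense point cloud
        Run("poisson_mesher --input_path dense/fused.ply --output_path dense/mesh.ply");         // (5) mesh/surface generation
    }
}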
3.1 Image Acquisition:
Image-based 3D reconstruction software creates a 3D point cloud, together with camera poses, from uncalibrated photographs. The software determines the geometric properties of objects from the photographic images. This process therefore requires comparing reference points, or matching pixels, across a series of photographs. A sufficient number of good-quality photographs is consequently needed to allow the software to process, match and triangulate visual features and generate the 3D point cloud. A mobile phone camera (iPhone 6, 8 megapixels) was used for case 1, for which twenty-two photos were taken; for case 2, fifty photographs were captured with a Canon 600D camera. Photos were taken with the right amount of overlap, repositioning the camera for every photo (more information about image acquisition and the associated settings can be found in the University of Michigan 3D Lab guide (Lab, 2018)).
3.2 Point Cloud Generation:
3.2.1 Selection of the software:
Structure from Motion (SfM) is just one of many techniques for 3D reconstruction of objects and artefacts, but it is the one most often recommended (Nikolov and Madsen, 2016). A wide variety of 3D
modelling programs are available based on SfM; ranging from simple home-brew systems to high-end
professional packages.
Table 1: Benchmarking of FOSS against PhotoScan (adapted from Rahaman and Champion, 2019). The table shows the point clouds produced by PhotoScan, VisualSFM, PPT GUI, COLMAP and Regard3D, together with the CloudCompare comparison results; the PhotoScan cloud was used as ground-truth data (N/A for comparison), and one of the clouds failed to compare because it was too noisy. [Point-cloud images not reproduced here.]
A study by Rahaman and Champion (2019) compared the quality and accuracy of the point clouds produced by four free and open source (FOSS) packages against a popular commercial one. The study shows that FOSS can create point clouds of a quality comparable to the commercial product (table 1), and it recommends Regard3D. As visualisation of cultural heritage is the primary objective of this study, priority has been given to acquiring a visually appealing 3D model from a low-cost solution. Regard3D has therefore been used for creating the 3D point clouds from the datasets (i.e. the sets of photographs).
3.2.2 Point cloud generation
Regard3D is a free and open source structure-from-motion program that supports multiple platforms
(Windows, OS X, and Linux). A simple and straightforward graphical user interface (GUI) presents the details of each produced result, highlighted in a tree view on the left. Experimenting with settings is thereby much easier, since the user only has to click on a result to see a list of the arguments used to generate it, as well as the running time of that step. As in other software, the user needs to
set a project path first and input a name to start a project (figure 3).
Figure 3. Workflow offered by Regard3D GUI.
Photographs first need to be added (Step 1) and matches computed (Step 2). Next comes camera registration, i.e. the process of determining each camera's position and orientation in the scene (Step 3), which is achieved by selecting the match results and clicking ‘Triangulation’. Based on this sparse point cloud, users can “densify” the triangulation result (Step 4): from the tree view, highlight the result of the last step and choose ‘Create dense point cloud’. The dense cloud (*.ply, *.pcd) can be exported at this stage. Users can also generate a mesh by clicking ‘Create Surface’ (Step 5). If CMVS/PMVS was used in the previous stage, ‘Poisson reconstruction’ becomes the only option to create the surface. The colourisation method can be selected as either coloured vertices or texture. At this stage, the user can export the generated surface as an *.obj file (with its accompanying *.mtl material file) or export it directly to MeshLab (Step 6).
Figure 4. Dense point cloud created by Regard3D.
3.2.3 Result/Outcome
The computation details and output of the two case studies are presented in table 2.
Table 2: Point cloud generation with Regard3D (images of the original objects and the generated point clouds are not reproduced here).
Case study 01 (22 photos): image matching 2m 12.649s; triangulation 34.892s; densification (CMVS/PMVS) 4m 27.982s; surface generation 1m 2.725s; total time approx. 8.5 min.
Case study 02 (50 photos): image matching 27m 41.151s; triangulation 2m 50.710s; densification (CMVS/PMVS) 20m 38.443s; surface generation 1m 4.381s; total time 52m 21s.
Nearby vegetation, trees and other buildings/structures often obstruct the views needed for photographing a heritage building. Making a 3D model of a whole building based on the image-based photo modelling technique is therefore often challenging; only isolated buildings and structures are well suited to this technique. Additionally, since larger datasets take more time to compute, we avoided larger file sizes in this study. The 3D model of the ‘Frog’ sculpture was smaller and easier to use for testing the workability of the methodology and for conducting the workshop.
4. Mesh Generation and Editing
A mesh is a discrete representation of a geometric model in terms of its geometry, topology and associated attributes (Comes et al., 2014). We used MeshLab (http://meshlab.net), free and open source software, to develop a mesh from the point cloud generated by Regard3D. After the point cloud is imported into the workspace, it requires cleaning (removal of noise, outliers and irrelevant points). MeshLab provides various tools for selecting and removing points/vertices.
Figure 5. New surface (Poisson mesh) applied to the point cloud.
MeshLab also offers various tools for surface reconstruction (or mesh generation), such as Ball Pivoting (Bernardini et al., 1999), VCG (Curless and Levoy, 1996) (ISTI Visual Computing Lab), and Screened Poisson Surface Reconstruction (Kazhdan and Hoppe, 2013). We used the Poisson algorithm to generate the mesh (figure 5). Additional cleaning of the surface may be required at this stage if the process creates unintended surfaces. The acquired mesh can be exported as is, or ‘mesh simplification’ and ‘cap holes’ steps can be applied to enhance the 3D model. Mesh simplification reduces the number of polygons while keeping the shape as close as possible to the original; as the number of polygons is reduced, the processing time decreases accordingly. ‘Cap holes’, on the other hand, is self-explanatory: it closes the holes where the previous mesh generation failed to create any surface/polygon.
Figure 6. 3D model after manual cleaning.
Texturing is the operation that applies a visual skin to the 3D model, so that the virtual object resembles the original. MeshLab can export to a wide range of file formats that support textures (e.g. *.x3d, *.obj) or vertex/point colours (*.ply). Although .obj, .vrml, .3dxml and .dae are most commonly used for AR applications (Comes et al., 2014), we have exported the mesh in the Stanford Polygon File Format (*.ply) for further use.
5. Sharing and visualizing the 3D models in AR/VR
5.1 Storing the 3D model
At the time of writing this article, we could not find a clear and foolproof way to preserve/store 3D
assets. There are remarkable commercial, public, and hobbyist 3D repositories, ranging from local
institutions to international ones; such as CARARE, Europeana, Smithsonian, TurboSquid, Sketchfab
etc. It is also clear that, despite recent EU and North American moves to create archives and digital heritage infrastructure, 3D models are still not fully accessible to the general public (Champion, 2018). Most of the institutional repositories only allow content to be downloaded in restricted file formats; in some cases, only *.pdf files with embedded 3D content (such as in CARARE) are allowed. Additionally, it is
often difficult to find models from specialised cultural heritage institutional repositories, as they are
typically not connected with external sites or portals. On the other hand, commercial repositories often
lack data provenance and metadata. However, commercial repositories can provide consistent formats
and protocols, and 3D models are relatively easier to find and access. But most of these portals (both
commercial and non-commercial) rarely provide other related information and resource links for further
study or use.
Table 3: Selected 3D repositories with common features

Public/institutional repositories:
Smithsonian (http://3d.si.edu). Supported file formats: STL, OBJ, single ASCII point cloud. Fees: free. Accessibility: with few exceptions, SIx3D offers access to the data sets. Data size: download limit not known. 3D model display: 3D.
Three D Scans (http://threedscans.com/info). Supported file formats: OBJ, STL. Fees: free. Accessibility: no copyright restrictions. Data size: unlimited. 3D model display: 2D, 3D, animated GIF.
CyArk (http://www.cyark.org). Supported file formats: LiDAR, point cloud, photogrammetric imagery. Fees: free, requires an online application. Accessibility: licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Data size: varies, prior permission required. 3D model display: 2D, 3D.
Europeana (http://www.europeana.eu/portal/en). Supported file formats: JPEG, GIF, PNG, PDF, plain ASCII, MP3, MPEG, AVI, FBX, MTL, OBJ. Fees: free. Accessibility: most databases are not accessible anymore. Data size: not known. 3D model display: 2D, 3D.
EPOCH (http://epoch-net.org/site). Supported file formats: PDF. Fees: free. Accessibility: not known. Data size: unlimited. 3D model display: 2D.
CARARE (http://pro.carare.eu). Supported file formats: PDF, 3D PDF. Fees: free. Accessibility: not known. Data size: unlimited. 3D model display: 3D inside PDF.
NASA 3D Resources (https://nasa3d.arc.nasa.gov/). Supported file formats: .stl, .3ds. Fees: free. Accessibility: non-commercial use only. Data size: unlimited. 3D model display: 2D.

Commercial repositories:
Sketchfab (https://sketchfab.com). Supported file formats: 50 popular file formats. Fees: basic and education access are free. Accessibility: varies between paid and free models. Data size: upload limit based on membership, unlimited download. 3D model display: 2D, 3D, AR, VR.
MyMiniFactory (https://www.myminifactory.com). Supported file formats: 54 popular file formats. Fees: free/paid options to download and print. Data size: unlimited uploads. 3D model display: 2D.
Blendswap (https://www.blendswap.com). Supported file formats: 37 popular file formats. Fees: free. Accessibility: varying Creative Commons licences. Data size: free 200 MB download/month, 90 MB upload limit. 3D model display: 2D.
3D Warehouse (https://3dwarehouse.sketchup.com). Supported file formats: .skp. Fees: free. Accessibility: General Model License Agreement. Data size: 50 MB (max) upload. 3D model display: 2D, 3D.
TurboSquid (https://www.turbosquid.com). Supported file formats: 16 popular file formats. Fees: free and paid. Accessibility: various model licences. Data size: no restriction. 3D model display: 2D, 3D.
It is apparent that, at the time of writing this paper, Sketchfab is a commercial platform with flexible and versatile hosting that can support the general public, small institutes and non-profit organisations in hosting, lightly editing, sharing, trading and showcasing their 3D models. These models can later be shared online and viewed in AR/VR with the supplied application. The other commercial repositories, such as TurboSquid, ShareCG, MyMiniFactory and Blendswap, are mostly for the trading of 3D models and are not intended for preservation. Additionally, they charge fees on trading and may not be interested in archiving (as their archiving policies are not clear).
5.2 Selection of the visualisation (AR/VR) platform
There are a number of software frameworks at present that support AR/VR, especially suited for cultural heritage. The first criterion is the choice of a single package or solution that supports non-expert users with a limited or restricted budget. Additionally, it is difficult to find a compelling platform or set of tools that accepts a wide variety of file formats and supports cross-platform deployment and visualisation.
There are certain points of overlap between AR and VR, since some existing development platforms are suitable for both experiences. A study by Bekele et al. (2018) on the most commonly used AR/VR frameworks features their strengths and weaknesses in various settings. However, that study is limited to exploring the tools and not the whole pipeline. Comes et al. (2014) studied 3D AR, AndAR and VaD AR; however, their study focused on simplification of the 3D models rather than evaluating any AR/VR platform. The study by Van Krevelen and Poelman (2010) presents the technicalities, development history and characteristics of a wide range of AR technologies.
Portalés et al. (2009) explored various AR platforms and later adopted BazAR (a vision-based open source library) to deploy their 3D content. However, to use BazAR the user needs relevant technical knowledge of vision-based programming. The workflow adopted by Mancera-Taboada et al. (2011) for visualising 3D point-cloud data, in VR only, used the OGRE 3D game engine. Studies by Amin and Govilkar (2015) and Guidazzoli et al. (2017) present feature comparisons of various software development kits (SDKs) for AR visualisation. Recently, AR toolkits based on visual-inertial odometry tracking have been gaining attention; in particular, Google ARCore and Apple ARKit look promising. However, it is often difficult for a non-technical person to overcome the steep learning curve required to use their SDKs.
Although X3D and three.js models can run without hindrance in HTML-formatted web pages, the issue of choosing the 3D file formats best suited for archiving or display is still a big challenge in the digital heritage domain. 3DHOP (Potenziani et al., 2015) and Sketchfab are prominent among the relatively few popular services that provide storing, viewing and exhibiting of 3D models online. The open source and free 3DHOP is, however, restricted to the *.NXS and *.PLY file formats only. Sketchfab, on the other hand, supports more than 50 formats, and the free ‘basic access’ allows the upload of 3D models with a maximum file size of 50 MB (at the time of writing this paper). A comprehensive study on 3D web applications by Guidazzoli et al. (2017) also reveals that Sketchfab competes very well against the others in its documentation, ease of learning, GUI, reliability and overall graphics quality. We have adopted Sketchfab in the workflow for the online sharing and visualisation of the above-mentioned 3D models.
5.3 AR/VR visualisation with Sketchfab
Sketchfab supports 50 file extensions (as of 30 October 2017), including compressed archives such as ZIP, RAR and 7z. The GUI offered by Sketchfab for the user dashboard is simple and easy to use. First, the user is required to create a user account in order to upload 3D content. An account can also be created by signing up with Facebook, Google or Twitter, bypassing the default online form. Sketchfab generally compresses the file first and then starts uploading it to the server. We have used the *.ply file saved previously from MeshLab.
The system asks for user input regarding the model name, description, categories and keywords as metadata. Next, it starts processing the model and takes the user to the ‘3D Settings’ mode. Sketchfab’s GUI for 3D Settings offers various options to adjust/control the model (figure 7). After getting the desired output, the user can press the ‘Save Settings’ and ‘Publish’ buttons to finalise the process.
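For users who want to script this step instead of using the web dashboard, the upload can also be performed through Sketchfab's Data API. The minimal C# sketch below is illustrative only: the endpoint, header and field names ('modelFile', 'name', 'description') follow Sketchfab's public v3 API as documented at the time of writing and should be checked against the current documentation, and the API token, file path and metadata values are placeholders.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: uploading the exported *.ply to Sketchfab via its Data API (v3).
// Token, file path and metadata values are placeholders.
class SketchfabUpload
{
    static async Task Main()
    {
        const string apiToken = "YOUR_SKETCHFAB_API_TOKEN";   // personal token from the account settings
        const string modelPath = "model.ply";                 // mesh exported from MeshLab (section 4)

        using (var client = new HttpClient())
        using (var form = new MultipartFormDataContent())
        {
            client.DefaultRequestHeaders.Add("Authorization", "Token " + apiToken);

            // The model file plus the same metadata that would be entered in the dashboard.
            form.Add(new ByteArrayContent(File.ReadAllBytes(modelPath)), "modelFile", Path.GetFileName(modelPath));
            form.Add(new StringContent("Frog sculpture"), "name");
            form.Add(new StringContent("Image-based 3D reconstruction (Regard3D + MeshLab)"), "description");

            HttpResponseMessage response = await client.PostAsync("https://api.sketchfab.com/v3/models", form);
            Console.WriteLine("Upload returned HTTP " + (int)response.StatusCode); // 201 means the model is queued for processing
        }
    }
}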
Figure 7. GUI for 3D settings offered by Sketchfab.
Figure 8. View the 3D in AR with Sketchfab app.
The universal 3D/VR viewer supported by Sketchfab works on most operating systems (Windows, Mac, Linux, iOS and Android) without requiring any plugin. A user can embed 3D or VR models in any website or forum, or even in Facebook, to share their content online. Peers can browse in 3D or VR without leaving the user’s own website. The 3D models can also be viewed in VR using various HMDs, such as the Vive, Rift, Gear VR, or Cardboard navigation modes. Most interestingly, via the Sketchfab app installed on a compatible mobile device, one can also view the 3D model in AR (figure 8).
6. Developing interactivity and visualising in a MxR environment
Mixed Reality (MxR) blends the real world with the virtual world. It combines interactivity and immersion and offers an immersive-interactive experience of the combined real-virtual world (Papagiannakis et al., 2018). MxR, therefore, aims to unite different properties of Milgram and Kishino’s (1994) continuum into a single immersive reality experience. The Magic Leap, Meta 2 and HoloLens are but a few of the many currently popular standalone head-mounted displays (HMDs) that offer a MxR experience. There are some other, cheaper alternatives on the market, such as Holoboard and Mira, which use a smartphone for data processing and visualisation. However, they are still at the development stage, and a comparison with the more established group is outside the scope of the present study.
Microsoft HoloLens is an optical see-through head-mounted display (HMD) developed mainly for AR/MxR experiences. The device supports natural interface commands through gaze, gesture, and voice. Gaze commands, based on head-tracking, allow the user to bring the application focus to whatever the user is perceiving. Various gestures, such as bloom, air tap and pinch, are supported for interacting with the virtual object or interface. Any virtual object or button can be selected using the air-tap method, similar to clicking an imaginary computer mouse. The tap and pinch can also be used to simulate dragging, to move a virtual object. Users can access the shell/interface through a “bloom” gesture, similar to pressing the Windows key on a Windows keyboard or tablet. Voice commands can also be used to activate actions (source: https://www.microsoft.com/en-us/hololens/hardware, access date 04 April 2019).
A large number of domains are utilising the HoloLens for diverse applications. Although the utilisation of the HoloLens is not yet marked in the Cultural Heritage (CH) domain, the last two years have witnessed a few Virtual Heritage (VH) applications developed using the HoloLens. These exemplary HoloLens-based applications include Baskaran (2018), Bottino et al. (2017), Pollalis et al. (2017), Pollalis, Gilvin, et al. (2018), Pollalis, Minor, et al. (2018), and Scott et al. (2018). However, these articles focus on the experiential aspect of MxR in VH rather than the technical and procedural details that could greatly benefit the domain’s professionals in developing similar experiences. In this regard, our paper discusses the major steps required to deploy 3D models into the HoloLens for a mixed reality experience. The discussion below has been organised for non-expert users of the technology.
6.1 Setting the environment with Unity 3D and importing 3D model
As briefly discussed in the introductory section, ‘Unity 3D’ (or Unity) is a popular cross-platform game
engine widely used to develop games. Due to the game engine’s popularity, most AR/VR headsets use
Unity as a development platform. Similarly, HoloLens uses Unity to develop the intended AR/MxR
experiences.
The first step to transferring 3D models to the HoloLens is configuring the Unity development
environment. There are two ways to achieve this. The first option is to use the Unity standard
configuration procedures. The second option is to use the Mixed Reality Toolkit, which is a Unity
package consisting of a collection of custom tools developed by Microsoft HoloLens team to facilitate
the development and deployment of AR/MxR experiences to the device (HoloLens). This article uses
the Mixed Reality Toolkit (figure 9).
The Mixed Reality Toolkit was downloaded from a Microsoft HoloLens GitHub repository and
imported into a Unity project as an asset package. After importing the Toolkit, configuring the project
environment was performed at two levels. The first level of configuration involves applying changes to
the “Project Settings” using the “Apply Mixed Reality Project Settings” option from the Mixed Reality
Toolkit menu bar (figure 9, step 2). This setting configures Unity at the project level. The user needs to make sure that ‘Settings for Universal Windows Platform’ is selected and that the ‘Virtual Reality Supported’ box in the XR Settings list is checked (figure 9, steps 3 & 4). This configuration covers the scripting backend, rendering quality, and player settings, and applies to the scenes created in this specific Unity project.
Figure 9. Configuring Unity project and scene settings to ensure compatibility with HoloLens.
The second level of configuration applies changes to a specific scene created ‘under a project’. This has
been achieved using the “Apply Mixed Reality Scene Settings” option from the Mixed Reality Toolkit
menu bar (figure 9, step 2). For the scene level configuration, we used the toolkit to set the camera
position, add the custom HoloLens camera, and configure background colour and rendering settings.
After the proper configurations were applied to the “Project Settings” and “Scene Settings”, the 3D
model generated in section 4 was imported into the Unity project using the platform’s asset importing
option. Finally, gestural interactivity was implemented on the 3D model to allow users to interact with
and manipulate the model via the default gestures recognised by the HoloLens. The Mixed Reality
Toolkit has a number of scripts and tools for adding interaction mechanisms to the MxR experience. We
have used the toolkit to add gestural and gaze-based interaction mechanisms.
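As an illustration of what such an interaction script can look like, the minimal C# sketch below rotates the imported model by a fixed step whenever the user gazes at it and performs an air-tap. The interface and namespace names (IInputClickHandler, HoloToolkit.Unity.InputModule) are those of the HoloToolkit-era Mixed Reality Toolkit current when this workflow was prepared and may differ in later toolkit versions; the script assumes the model carries a collider so that gaze and tap events can be raised against it.

using UnityEngine;
using HoloToolkit.Unity.InputModule; // input module of the HoloToolkit-era Mixed Reality Toolkit (name may differ in later versions)

// Sketch: rotate the imported heritage model by a fixed step on each air-tap.
// Attach this script to the 3D model; the model needs a collider for gaze/tap to register.
public class TapToRotate : MonoBehaviour, IInputClickHandler
{
    [SerializeField]
    private float degreesPerTap = 30f; // rotation step per air-tap (illustrative value)

    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Spin the model around the world up-axis so users can inspect it from all sides.
        transform.Rotate(Vector3.up, degreesPerTap, Space.World);
    }
}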
6.2 Building with Universal Windows Platform (UWP)
Unity can build projects for a number of platforms. In this paper, however, the project was built for the
Universal Windows Platform (UWP). UWP is an open source API developed by Microsoft and first
introduced in Windows 10 (source: https://visualstudio.microsoft.com/vs/features/universal-windows-
platform, dated: 1st March 2019). The purpose of this platform is to help develop universal apps that run
on Windows 10, Windows 10 Mobile, Xbox One and HoloLens without the need to write code for each target device. Hence, a single build can be deployed to multiple target devices. The steps followed before building the project for UWP were: adding a scene to the ‘Build Settings’, enabling C# debugging, and specifying HoloLens as the target device.
Figure 10 shows how to configure the build environment and the steps for building the project for UWP. First, the user selects ‘Build Settings’ from the File menu (figure 10, steps 2 & 3), clicks ‘Add Open Scenes’ and selects the scenes opted for deployment (making sure the scenes are listed in the ‘Scenes In Build’ box). The user then needs to make sure ‘Universal Windows Platform’ is selected as the Platform, select HoloLens as the ‘Target Device’ (step 4), check the ‘Unity C# Projects’ box to enable C# debugging, and finally click ‘Build’ (steps 5 & 6). At this stage, all the files (including
*.sln) required for project deployment to the HoloLens will be created and stored at a location specified
by the user.
Figure 10. Building the project for Universal Windows Platform.
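The same build step can also be scripted inside the Unity Editor rather than clicked through the Build Settings dialog. The sketch below is illustrative only: the editor-scripting names (BuildTarget.WSAPlayer, EditorUserBuildSettings.wsaSubtarget, wsaGenerateReferenceProjects) correspond to the Unity releases current at the time of writing and should be verified against the installed Unity version, and the scene and output paths are placeholders.

using UnityEditor;

// Sketch: scripted equivalent of the Build Settings steps shown in figure 10.
// Scene path and output folder are placeholders.
public static class HoloLensBuild
{
    [MenuItem("Build/Build for HoloLens (UWP)")]
    public static void Build()
    {
        EditorUserBuildSettings.wsaSubtarget = WSASubtarget.HoloLens;   // target device: HoloLens
        EditorUserBuildSettings.wsaGenerateReferenceProjects = true;    // the 'Unity C# Projects' box, enabling C# debugging

        BuildPipeline.BuildPlayer(
            new[] { "Assets/Scenes/Main.unity" },   // scene(s) listed under 'Scenes In Build'
            "Builds/UWP",                           // output folder that will receive the generated *.sln
            BuildTarget.WSAPlayer,                  // Universal Windows Platform
            BuildOptions.None);
    }
}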
6.3 Debugging with Microsoft Visual Studio and deploying to HoloLens
At this stage, the *.sln file from the “Building with UWP” step discussed above is imported into
Microsoft Visual Studio for debugging and deploying to HoloLens. Figure 11 shows the output of the
deployment process. Deployment can commence by connecting the HoloLens via WiFi or using a USB
cable and launching the deployment process from the Debug menu.
Figure 11. Debugging and deploying the solution to HoloLens using Visual Studio.
The HoloLens must be connected to the computer (e.g. via USB) and the *.sln file created during the build process opened. To start the process, the user selects ‘Start Without Debugging’ from the ‘Debug’ pull-down menu (figure 11, step 1). The system will show the output details of the deployment
process (figure 11, step 2). The user must make sure that the ‘Deploy’ count is 1 and ‘failed’ count is
zero. If the deployment is successful, the HoloLens will launch the deployed application by itself.
6.4 Mixed reality experience in the HoloLens
Usually, MxR experiences are developed to be used by a single user unless the experience is developed
for collaborative use. After the application has been deployed to the HoloLens, a user can also connect the device to a bigger screen over WiFi to stream the experience to others. It is difficult to
imagine what the HoloLens user is experiencing unless the experience is shared. As a workaround to
this issue, HoloLens allows the capturing and streaming of the users’ view to another screen, as long as
both devices are connected to the same WiFi network. In our case, we used this Mixed Reality Capture
capability to stream the HoloLens user’s experience (shown in figure 12 below). There is a lag of a few seconds between the actual experience and the content streamed to the other person’s screen.
Figure 12. A user interacting with a 3D model via HoloLens.
7. Demonstration and user feedback
To validate the methodology, a workshop was conducted at the Curtin library makerspace, Curtin
University, Australia, on 23 November 2018. Fourteen participants attended the workshop, ranging from
novice to expert computer users with an age range of 18 to 60 years. During the workshop, the data sets
were supplied to the participants, and they were asked to follow the steps from the instructors. Most of
the participants managed to produce the 3D model and reach the final level of deploying the content to
the HoloLens. Due to the limited number of HoloLens units (one) and the permitted workshop time (two hours), only one 3D asset was deployed, and interactions were set with simple gestures. The
environment with the embedded 3D assets was then shared with the participants to visualise and interact
with them. The participants experienced the MxR environment and provided feedback in an informal
post-experience discussion. This discussion, however, gave us some notable points to ponder for the future:
The partial workflow (i.e. 3D modelling, editing and AR/VR visualisation with FOSS) is supported across platforms. However, deployment of 3D assets to the HoloLens (for MxR) requires the Windows 10 operating system, which prevented Mac users from immediately participating in, or following, that part of the workflow.
The workflow was found to be workable and easy to follow. Learning the gesture controls to interact with the 3D models in a HoloLens-based MxR environment requires time and practice for first-time users; however, they managed to learn the gestures within a short period.
The workshop duration was limited to two hours, which drew complaints from the participants, and we were advised to host an extended session.
8. Conclusion
In this paper, we present a complete workflow, based on proprietary and open access software and services, for capturing 3D models of real-world objects and experiencing them in a MxR environment. The workflow starts with digitising 3D artefacts through image-based photo modelling (photogrammetry), then converts the 3D point cloud to a 3D mesh, saves and shares the 3D model on an online repository, views the 3D model in VR/AR, and finally deploys the 3D content to a MxR environment (MS HoloLens) where users can interact with the virtual content.
The workflow was demonstrated to fourteen participants in a workshop session, and the users' feedback
was collected. User feedback validates the workflow as easy to learn, workable and effective; with a
few minor issues. We therefore believe this paper will help non-expert users, as well as small museums,
heritage institutes, interested communities and local groups who are interested in digitising their 3D
collections, sharing them online and visualising the 3D contents in an AR/VR/MxR environment;
especially if their budget is limited and they do not have extensive experience in photogrammetry,
modelling, or programming.
References
Amin, D., Govilkar, S., 2015. Comparative Study of Augmented Reality Sdks. International Journal on
Computational Science & Applications 5, 11-26.
Anderson, E.F., McLoughlin, L., Liarokapis, F., Peters, C., Petridis, P., De Freitas, S., 2010. Developing
Serious Games for Cultural Heritage: A State-of-the-art Review. Virtual Reality 14, 255-275.
Barry, A., Thomas, G., Debenham, P., Trout, J., 2012. Augmented Reality in a Public Space: The Natural
History Museum, London. Computer 45, 42-47.
Barsanti, S.G., Remondino, F., Fenández-Palacios, B.J., Visintini, D., 2014. Critical Factors and Guidelines
for 3D Surveying and Modelling in Cultural Heritage. International Journal of Heritage in the Digital
Era 3, 141-158.
Baskaran, A., 2018. Holograms and History, Inside the Collection. Museum of Applied Arts & Sciences.
Bekele, M.K., Pierdicca, R., Frontoni, E., Malinverni, E.S., Gain, J., 2018. A Survey of Augmented, Virtual,
and Mixed Reality for Cultural Heritage. Journal on Computing and Cultural Heritage 11, 1-36.
Bernardini, F., Mittleman, J., Rushmeier, H., Silva, C., Taubin, G., 1999. The Ball-Pivoting Algorithm for
Surface Reconstruction. IEEE transactions on visualization and computer graphics 5, 349-359.
Bolognesi, M., Furini, A., Russo, V., Pellegrinelli, A., Russo, P., 2014. Accuracy of Cultural Heritage 3D
Models by RPAS and Terrestrial Photogrammetry, pp. 113-119.
Bottino, A.G., García, A.M., Occhipinti, E., 2017. Holomuseum: A Prototype of Interactive Exhibition with
Mixed Reality Glasses HoloLens.
Bruno, F., Bruno, S., De Sensi, G., Luchi, M.-L., Mancuso, S., Muzzupappa, M., 2010. From 3D
Reconstruction to Virtual Reality: A Complete Methodology for Digital Archaeological Exhibition.
Journal of Cultural Heritage 11, 42-49.
Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M., 2010. Augmented reality
technologies, systems and applications. Multimedia Tools and Applications 51, 341-377.
Champion, E., 2018. The Role of 3D Models in Virtual Heritage Intrastructures, in: Benardou, A., Champion,
E., Dallas, C., Hughes, L.M. (Eds.), Cultural Heritage Infrastructures in Digital Humanities. NY
Routledge, Abingdon, Oxon New York, p. 172.
Comes, R., Neamţu, C., Buna, Z., Badiu, I., Pupeză, P., 2014. Methodology to Create 3D Models for
Augmented Reality Applications Using Scanned Point Clouds. Mediterr Archaeol Ar 14, 35-44.
Curless, B., Levoy, M., 1996. A Volumetric Method for Building Complex Models From Range Images,
Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. ACM,
pp. 303-312.
Deseilligny, M.P., Luca, L.D., Remondino, F., 2011. Automated Image-Based Procedures for Accurate
Artifacts 3D Modeling And Orthoimage Generation, Proc. CIPA.
Durand, H., Engberg, A., Pope, S.T., 2011. A Comparison of 3d Modeling Programs. ATON
Project/CREATE, Department of Music, University of California, Santa Barbara, USA, pp.1-9.
Gkion, M., Patoli, M., White, M., 2011. Museum Interactive Experiences Through a 3D Reconstruction of
the Church of Santa Chiara, IASTED Computer Graphics and Virtual Reallity Conference,
Cambridge, United Kingdom.
Grussenmeyer, P., Al Khalil, O., 2008. A Comparison of Photogrammetry Software Packages for the
Documentation of Buildings.
Guidazzoli, A., Liguori, M.C., Chiavarini, B., Verri, L., Imboden, S., De Luca, D., Ponti, F.D., 2017. From
3D Web to VR historical scenarios: A Cross-media Digital Heritage Application for Audience
Development, Virtual System & Multimedia (VSMM), 2017 23rd International Conference on. IEEE,
pp. 1-8.
Haydar, M., Roussel, D., Maïdi, M., Otmane, S., Mallem, M., 2011. Virtual and Augmented Reality for Cultural Computing and Heritage: A Case Study of Virtual Exploration of Underwater Archaeological Sites (preprint). Virtual Reality 15, 311-327.
Kazhdan, M., Hoppe, H., 2013. Screened Poisson Surface Reconstruction. ACM Transactions on Graphics
(ToG) 32, 29.
Knapitsch, A., Park, J., Zhou, Q.-Y., Koltun, V., 2017. Tanks and Temples: Benchmarking Large-scale Scene
Reconstruction. ACM Transactions on Graphics (TOG) 36, 78.
Lab, D., 2018. Getting Started: A Guide to Photogrammetry, 3D Capturing Technology. University of
Michigan 3D Lab, Michigan, USA, p. 16.
Mancera-Taboada, J., Rodríguez-Gonzálvez, P., González-Aguilera, D., Finat, J., San José, J., Fernández,
J.J., Martínez, J., Martínez, R., 2011. From the Point Cloud to Virtual and Augmented Reality: Digital
Accessibility for Disabled People in San Martin’s Church (Segovia) and its Surroundings,
International Conference on Computational Science and Its Applications. Springer, pp. 303-317.
Milgram, P., Kishino, F., 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on
Information and Systems 77, 1321-1329.
Monaghan, D., O'Sullivan, J., O'Connor, N.E., Kelly, B., Kazmierczak, O., Comer, L., 2011. Low-cost
Creation of a 3D Interactive Museum Exhibition, Proceedings of the 19th ACM international
conference on Multimedia. ACM, pp. 823-824.
Nguyen, M.H., Wünsche, B., Delmas, P., Lutteroth, C., 2012. 3D Models from the Black Box: Investigating
the Current State of Image-Based Modeling, Proceedings of the 20th International Conference on
Computer Graphics, Visualisation and Computer Vision (WSCG 2012), Pilsen, Czech Republic, June.
Nikolov, I., Madsen, C., 2016. Benchmarking Close-range Structure from Motion 3D Reconstruction
Software Under Varying Capturing Conditions, Euro-Mediterranean Conference. Springer, pp. 15-26.
Oniga, E., Chirilă, C., Stătescu, F., 2017. Accuracy Assessment of a Complex Building 3d Model
Reconstructed from Images Acquired with a Low-Cost Uas, ISPRS-International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences, Nafplio, Greece, pp. 551-558.
Papagiannakis, G., Geronikolakis, E., Pateraki, M., López-Menchero, V.M., Tsioumas, M., Sylaiou, S.,
Liarokapis, F., Grammatikopoulou, A., Dimitropoulos, K., Grammalidis, N., 2018. Mixed Reality,
Gamified Presence, and Storytelling for Virtual Museums. Encyclopedia of Computer Graphics
Games, 1-13.
Pollalis, C., Fahnbulleh, W., Tynes, J., Shaer, O., 2017. HoloMuse: Enhancing Engagement With
Archaeological Artifacts Through Gesture-based Interaction with Holograms, Proceedings of the
Eleventh International Conference on Tangible, Embedded, and Embodied Interaction. ACM, pp. 565-
570.
Pollalis, C., Gilvin, A., Westendorf, L., Futami, L., Virgilio, B., Hsiao, D., Shaer, O., 2018. ARtLens:
Enhancing Museum Visitors' Engagement with African Art, Proceedings of the 19th International
ACM SIGACCESS Conference on Computers and Accessibility. ACM, pp. 195-200.
Pollalis, C., Minor, E., Westendorf, L., Fahnbulleh, W., Virgilio, I., Kun, A.L., Shaer, O., 2018. Evaluating
Learning with Tangible and Virtual Representations of Archaeological Artifacts, Proceedings of the
Twelfth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, pp. 626-
637.
Portalés, C., Lerma, J.L., Pérez, C., 2009. Photogrammetry and Augmented Reality for Cultural Heritage
Applications. The Photogrammetric Record 24, 316-331.
Potenziani, M., Callieri, M., Dellepiane, M., Corsini, M., Ponchio, F., Scopigno, R., 2015. 3DHOP: 3D
Heritage Online Presenter. Computers & Graphics 52, 129-141.
Rahaman, H., Champion, E., 2019. From Images to 3D Reconstruction: A Feature Comparison on Proprietary
and Open Access Photogrammetry Workflow for Cultural Heritage. Digital Humanities Quarterly,
(manuscript submitted for publication).
Raptis, G.E., Fidas, C., Avouris, N., 2018. Effects of mixed-reality on players’ behaviour and immersion in
a cultural tourism game: A cognitive processing perspective. International Journal of Human-
Computer Studies 114, 69-79.
Rua, H., Alvito, P., 2011. Living the Past: 3D models, Virtual Reality and Game Engines as Tools for
Supporting Archaeology and the Reconstruction of Cultural Heritage – The Case-study of the Roman
Villa of Casal de Freiria. Journal of Archaeological Science 38, 3296-3308.
Santagati, C., Inzerillo, L., Di Paola, F., 2013. Image Based Modeling Techniques for Architectural Heritage
3D Digitalization: Limits and Potentialities, International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, XXIV International CIPA Symposium, Strasbourg, France.
Scott, M.J., Parker, A., Powley, E., Saunders, R., Lee, J., Herring, P., Brown, D., Krzywinska, T., 2018.
Towards an Interaction Blueprint for Mixed Reality Experiences in GLAM Spaces: The Augmented
Telegrapher at Porthcurno Museum.
Van Krevelen, D., Poelman, R., 2010. A survey of Augmented Reality Technologies, Applications and
Limitations. International journal of virtual reality 9, 1.
Wang, F.-Y., 2009. Is Culture Computable? IEEE Intelligent Systems, 2-3.
Wang, Y.-F., 2011. A Comparison Study of Five 3D Modeling Systems Based on the SfM Principles.
Technical Report 2011-01. Visualize Inc., Goleta, USA, pp. 1-30.
Wu, H.-K., Lee, S.W.-Y., Chang, H.-Y., Liang, J.-C., 2013. Current Status, Opportunities and Challenges of
Augmented Reality in Education. Computers & education 62, pp. 41-49.