From Photo to 3D to Mixed Reality: A Complete Workflow for Cultural
Heritage Visualisation and Experience
Abstract:
The domain of cultural heritage is on the verge of adopting immersive technologies; not only to enhance
user experience and interpretation but also to satisfy the more enthusiastic and tech-savvy visitors and
audiences. However, contemporary academic discourse seldom provides any clearly defined and
versatile workflows for digitising 3D assets from photographs and deploying them to a scalable 3D
mixed reality (MxR) environment; especially considering non-experts with limited budgets. In this
paper, a collection of open access and proprietary software and services are identified and combined via
a practical workflow which can be used for 3D reconstruction to MxR visualisation of cultural heritage
assets. Practical implementations of the methodology have been substantiated through workshops and
participants’ feedback. This paper aims to be helpful to non-expert but enthusiastic users (and the
GLAM sector) to produce image-based 3D models, share them online, and allow audiences to
experience 3D content in a MxR environment.
1. Introduction
Digital documentation and preservation of historical and cultural artefacts have increasingly become an
international priority in recent years, because of concerns regarding the destruction and damage inflicted
on internationally recognised heritage assets located in Afghanistan, Syria, Iraq and, most recently, in
Brazil. At the same time, the emergence of cultural computing (CC) (Haydar et al., 2011; Wang, 2009)
and advancements in computer technologies have helped to streamline the production of 3D
documentation, representation and dissemination of cultural heritage data (Barsanti et al., 2014;
Portalés et al., 2009). In particular, the rise of affordable techniques (such as image-based photo
modelling) and free and open source software (FOSS) is remarkable.
The domain of cultural heritage has also extended its application of immersive technologies, with
augmented reality (AR), virtual reality (VR) and mixed reality (MxR) technologies supporting sensory
experiences through a combination of real and digital content. As a reflection of the present trends in
the digitisation of 3D heritage assets, we can quickly find studies and research on 3D modelling and
their application in AR/VR (Bruno et al., 2010; Rua and Alvito, 2011). Ample studies have been
published on image-based modelling software analysing their performance (Durand et al., 2011;
Grussenmeyer and Al Khalil, 2008; Wang, 2011), accuracy in 3D production (Bolognesi et al., 2014;
Deseilligny et al., 2011; Oniga et al., 2017), algorithms (Knapitsch et al., 2017), and scalability
(Knapitsch et al., 2017; Nguyen et al., 2012; Santagati et al., 2013). Additionally, we can also find
several studies on approaches in application and use of VR/AR in museums and exhibitions; applied
from various perspectives such as learning in the classroom (Wu et al., 2013), group use of AR in a
public space (Barry et al., 2012), developing virtual exhibitions (Anderson et al., 2010), supporting
interactive experiences through 3D reconstructions (Gkion et al., 2011), and providing low-cost
solutions for 3D interactive museum exhibitions (Monaghan et al., 2011) etc.
However, it is rare to find a complete production pipeline or guide for non-experts who are interested
in digitising, sharing and viewing 3D content in a mixed reality (MxR) environment, especially on a
restricted budget. To address these issues, we present a methodology spanning 3D digitisation to MxR
visualisation of cultural heritage assets, based on proprietary and open access software and services. We
present two cases to explain the workflow. Photographs are taken with both a mobile phone and a digital
camera. An image-based modelling technique is used for point cloud generation (with Regard3D). A 3D
mesh is generated and optimised from the obtained point cloud (with MeshLab) and later uploaded
online for public sharing and visualisation in AR/VR (with Sketchfab). Finally, interactive visualisation
in a MxR environment (Microsoft HoloLens) is developed with a game engine (Unity 3D) and MS
Visual Studio. The aim of this paper is to help non-expert users to understand
the methodology and follow the workflow to produce image-based 3D models, share them online, and
experience the digital assets in VR/AR, and even in a MxR environment such as the HoloLens.
2. Proposed methodology and detailed workflow
The reality-virtuality continuum presented by Milgram and Kishino (1994) provided a conceptual
spectrum of visualisation technologies spanning the real world and virtual world (figure 1), and
introduced the core concept of Virtual Reality (VR), Augmented Reality (AR), Augmented Virtuality
(AV) and Mixed Reality (MxR). AR enhances users’ perception and understanding of the real world by
superimposing virtual information and objects on top of the view of the real world. VR, on the other
hand, completely detaches the viewer from the real world, placing them in a computer-generated
environment and offering an artificial sense of presence in that virtual world (Carmigniani et al., 2010).
Milgram and Kishino (1994) defined MxR as
a subclass of VR that merges the real and the virtual worlds. However, there are instances where the
terms AR and MxR are used interchangeably (Papagiannakis et al., 2018; Raptis et al., 2018).
Figure 1. The reality-virtuality continuum presented by Milgram and Kishino (1994).
In this paper, we present a methodology for producing 3D digital assets from photographs and later
deploying them in an interactive MxR environment to support cultural heritage visualisation and
learning. The 3D models are obtained using image-based photo modelling techniques
(photogrammetry), and the MxR environment is developed for the HoloLens. We demonstrate a detailed
workflow (figure 2) from photo acquisition all the way to 3D reconstruction and AR/VR/MxR
visualisation. Additionally, we point out best practices and explain how to avoid some common pitfalls.
Figure 2. The overall workflow of 3D modelling to mixed reality visualisation.
3. Photogrammetry/Image based 3D modelling
Depending on the software/service, image-based 3D modelling can be carried out with the support of
remote/cloud computing or on a local machine. Software or services such as ARC3D and Autodesk
Remake use the power of cloud computing to carry out the data processing. In contrast, Regard3D,
PhotoScan, Aspect3D, 3DF Zephyr, 3D SOM Pro etc. process the data on local client machines. The
scope of this paper does not cover the cloud processing method; the workflow focuses only on those
software applications that run on a local PC workstation. Generally, these software packages follow six
steps to produce 3D reconstructions/3D models, which include: (1) Image acquisition (or adding photos), (2) Feature
detection, matching, triangulation (or align photos), (3) Sparse reconstruction, bundle adjustment (or
point cloud generation), (4) Dense correspondence matching (or dense cloud generation), (5)
Mesh/surface generation, and (6) Texture generation. A few software packages also offer cloud/mesh
editing within a single package.
3.1 Image Acquisition:
Image-based 3D reconstruction software creates a 3D point cloud with camera poses from uncalibrated
photographs. The software determines the geometric properties of objects from photographic images.
This process therefore requires comparable reference points, or matching pixels, across a series of
photographs. A sufficient number of good-quality photographs is consequently needed to allow the
software to process, match and triangulate visual features and then generate the 3D point cloud. A
mobile phone camera (iPhone 6, 8 megapixels) was used for case-1, for which twenty-two photos were
taken. The fifty photographs for case-2 were captured with a Canon 600D camera. Photos are taken with
the right amount of overlap while repositioning the camera for every photo (more information about
image acquisition and associated settings can be found in the guide by the University of Michigan 3D
Lab (Lab, 2018)).
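Since the required number of photographs depends on the camera's field of view and the desired overlap, a rough capture plan can be computed in advance. The following Python sketch is an illustrative planning heuristic we add here (it is not part of the original workflow); the 63° field of view is an assumed, typical phone-camera value.

```python
import math

def photos_per_ring(horizontal_fov_deg: float, overlap: float) -> int:
    """Rough number of photos for one ring around an object.

    Assumes the camera orbits the object and that consecutive frames
    share `overlap` (e.g. 0.6 to 0.8) of the field of view. A planning
    heuristic only; it does not guarantee a successful reconstruction.
    """
    new_angle_per_shot = horizontal_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / new_angle_per_shot)

if __name__ == "__main__":
    # ~63 degrees is a typical phone-camera horizontal FOV (assumed value).
    for overlap in (0.6, 0.7, 0.8):
        n = photos_per_ring(63.0, overlap)
        print(f"{overlap:.0%} overlap -> about {n} photos per ring")
```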
3.2 Point Cloud Generation:
3.2.1 Selection of the software:
Structure from Motion (SfM) is just one of many techniques for 3D reconstruction of objects and
artefacts, but it is the most often recommended (Nikolov and Madsen, 2016). A wide variety of 3D
modelling programs based on SfM are available, ranging from simple home-brew systems to high-end
professional packages.
Table 1: Benchmarking of FOSS with PhotoScan (adapted from Rahaman and Champion, 2019).

Packages compared: PhotoScan, Visual SfM, PPT GUI, COLMAP and Regard3D. For each package, the
table shows the generated point cloud and its CloudCompare comparison result; PhotoScan has no
CloudCompare result (N/A), as its point cloud was used as the ground truth data. [The point-cloud
images from the original table are not reproduced here.]
A study by Rahaman and Champion (2019) measured the quality and accuracy of the point clouds
produced by four Free and Open Source (FOSS) packages against a popular commercial one. The study
shows that FOSS can produce point clouds of comparable quality to a commercial product (table 1), and
it recommends Regard3D. As visualisation of cultural heritage is the primary objective of this study,
acquisition of a visually appealing 3D model generated from a low-cost solution has therefore been
given priority. Hence, Regard3D has been used for creating a 3D point cloud from the datasets (i.e. a set
of photographs).
3.2.2 Point cloud generation
Regard3D is a free and open source structure-from-motion program that supports multiple platforms
(Windows, OS X and Linux). A simple and straightforward graphical user interface (GUI) presents the
details of each produced result in a tree view on the left. Experimenting with settings is thereby much
easier, since the user only has to click on a result to see a list of the arguments used to generate it, as
well as the running time of that step. As in other software, the user first needs to set a project path and
input a name to start a project (figure 3).
Figure 3. Workflow offered by Regard3D GUI.
Photographs are first added as a picture set (Step 1) and matches are computed (Step 2). Next comes
camera registration, i.e. the process of determining each camera's position and orientation in the scene
(Step 3), which is achieved by selecting the match results and clicking 'Triangulation'. Based on the
resulting sparse point cloud, users can densify the triangulation result (Step 4): from the tree view,
highlight the result of the last step and choose 'Create dense pointcloud'. The dense cloud (*.ply, *.pcd)
can be exported at this stage. Users can also generate a mesh by clicking 'Create Surface' (Step 5). If
CMVS/PMVS was used in the previous stage, then Poisson reconstruction becomes the only option for
creating the surface. The colourisation method can be selected as either coloured vertices or texture. At
this stage, the user can export the generated surface as an *.obj file (with its *.mtl material file) or open
it directly in MeshLab (Step 6).
Figure 4. Dense point cloud created by Regard3D.
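Regard3D itself is GUI-driven, but it builds on the OpenMVG library, so readers who later want to automate the same steps can drive an equivalent command-line pipeline from Python. The sketch below is illustrative only: the binary names and flags follow OpenMVG v1.x and should be checked against the installed version, and the paths and focal-length value are assumptions.

```python
import subprocess
from pathlib import Path

IMAGES = Path("photos")          # input photographs (hypothetical path)
OUT = Path("recon")
MATCHES = OUT / "matches"
RECON = OUT / "sfm"

def run(*args: object) -> None:
    """Run one pipeline stage; raise if the tool reports an error."""
    cmd = [str(a) for a in args]
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

MATCHES.mkdir(parents=True, exist_ok=True)
RECON.mkdir(parents=True, exist_ok=True)

# Steps mirror the Regard3D GUI: picture set, matching, triangulation,
# densification. The focal length in pixels (-f) is a made-up example;
# normally it is resolved from a camera sensor-width database.
run("openMVG_main_SfMInit_ImageListing", "-i", IMAGES, "-o", MATCHES, "-f", 4000)
run("openMVG_main_ComputeFeatures", "-i", MATCHES / "sfm_data.json", "-o", MATCHES)
run("openMVG_main_ComputeMatches", "-i", MATCHES / "sfm_data.json", "-o", MATCHES)
run("openMVG_main_IncrementalSfM", "-i", MATCHES / "sfm_data.json", "-m", MATCHES, "-o", RECON)
run("openMVG_main_openMVG2PMVS", "-i", RECON / "sfm_data.bin", "-o", OUT)  # hand-off to CMVS/PMVS
```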
3.2.3 Result/Outcome
The computation details and output of the two cases are presented in table 2.
Table 2: Point cloud generation with Regard3D

Case Study 01 (number of photos: 22)
- Image matching: 2m 12.649s
- Triangulation: 34.892s
- Densification (CMVS/PMVS): 4m 27.982s
- Surface generation: 1m 2.725s
- Total time: 8.5 min

Case Study 02 (number of photos: 50)
- Image matching: 27m 41.151s
- Triangulation: 2m 50.710s
- Densification (CMVS/PMVS): 20m 38.443s
- Surface generation: 1m 4.381s
- Total time: 52m 21s

[The original table also shows, for each case, the original object and the generated 3D point cloud; the
images are not reproduced here.]
Nearby vegetation, trees and other buildings/structures often obstruct the views needed to photograph a
heritage building; making a 3D model of a whole building with this image-based photo modelling
technique is therefore often challenging, and only isolated buildings and structures are well suited to it.
Additionally, since a larger dataset takes more computation time, we avoided larger file sizes in this
study. The 3D model of the 'Frog' sculpture was smaller and easier for testing the workability of the
methodology and for conducting the workshop.
4. Mesh Generation and Editing
A mesh is a discrete representation of a geometric model in terms of its geometry, topology and
associated attributes (Comes et al., 2014). We used MeshLab (http://meshlab.net), a free and open source
software package, to generate a mesh from the point cloud produced by Regard3D. After the point cloud
is imported into the workspace, it requires cleaning (removal of noise, outliers and irrelevant points).
MeshLab provides various tools for selecting and removing points/vertices.
Figure 5. New surface (Poisson mesh) applied to the point cloud.
MeshLab also offers various tools for surface reconstruction (or mesh generation), such as Ball Pivoting
(Bernardini et al., 1999), VCG (Curless and Levoy, 1996; ISTI Visual Computing Lab) and Screened
Poisson Surface Reconstruction (Kazhdan and Hoppe, 2013). We used the Poisson algorithm to generate
the mesh (figure 5). Additional cleaning of the surface may be required at this stage if the process creates
unintended surfaces. The acquired mesh can be exported, or 'mesh simplification' and 'cap holes' steps
can be applied to enhance the 3D model. Mesh simplification reduces the number of polygons while
keeping the shape as close as possible to the original; as the number of polygons is reduced, the
processing time decreases accordingly. 'Cap holes', on the other hand, is self-explanatory: it closes the
holes where the previous mesh generation failed to create any surface/polygon.
Figure 6. 3D model after manual cleaning.
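These MeshLab steps can also be scripted with pymeshlab, MeshLab's official Python bindings, which is convenient for batch processing. The sketch below mirrors the sequence described above; the filter names follow recent pymeshlab releases and may differ in older versions, and the file names and parameter values are assumptions.

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("dense_cloud.ply")          # point cloud exported from Regard3D

# Poisson reconstruction needs per-point normals.
ms.compute_normal_for_point_clouds(k=10)

# Screened Poisson surface reconstruction (Kazhdan and Hoppe, 2013).
ms.generate_surface_reconstruction_screened_poisson(depth=9)

# Mesh simplification: fewer polygons while keeping the shape close to the original.
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=100_000)

# 'Cap holes': close gaps the reconstruction left open.
ms.meshing_close_holes(maxholesize=30)

ms.save_current_mesh("model.ply")            # *.ply keeps vertex colours
```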
Texturing is the operation that gives the 3D model a visual skin/membrane, so that the virtual object
resembles the original. MeshLab can export a wide range of file formats that support textures (e.g. *.x3d,
*.obj) or vertex/point colours (*.ply). Although .obj, .vrml, .3dxml and .dae are most commonly used
for AR applications (Comes et al., 2014), we exported the mesh in the Stanford Polygon File Format
(*.ply) for further use.
5. Sharing and visualizing the 3D models in AR/VR
5.1 Storing the 3D model
At the time of writing this article, we could not find a clear and foolproof way to preserve/store 3D
assets. There are remarkable commercial, public and hobbyist 3D repositories, ranging from local
institutions to international ones, such as CARARE, Europeana, Smithsonian, TurboSquid, Sketchfab
etc. It is also clear that, despite recent EU and North American moves to create archives and digital
heritage infrastructure, 3D models are still not fully accessible to the general public (Champion, 2018).
Most of the institutional repositories only allow content to be downloaded in restricted file formats; in
some cases, only *.pdf files with embedded 3D (such as in CARARE) are offered. Additionally, it is
often difficult to find models in specialised cultural heritage institutional repositories, as they are
typically not connected with external sites or portals. Commercial repositories, on the other hand, often
lack data provenance and metadata. However, commercial repositories can provide consistent formats
and protocols, and their 3D models are relatively easier to find and access. Still, most of these portals
(both commercial and non-commercial) rarely provide related information and resource links for further
study or use.
Table 3: Selected 3D repositories with common features

Name | Supported file format | Fees | Accessibility | Data size | 3D model display

Public/institutional repositories
Smithsonian (http://3d.si.edu) | STL, OBJ, single ASCII point cloud | Free | With few exceptions, SIx3D offers access to the data sets | Download limit is not known | 3D
Three D Scans (http://threedscans.com/info) | OBJ, STL | Free | No copyright restrictions | Unlimited | 2D, 3D, animated GIF
CyArk (http://www.cyark.org) | LiDAR, point cloud, photogrammetric imagery | Free, requires online application | Licensed under a Creative Commons Attribution-NonCommercial 4.0 International License | Varies, prior permission required | 2D, 3D
Europeana (http://www.europeana.eu/portal/en) | JPEG, GIF, PNG, PDF, plain ASCII, MP3, MPEG, AVI, FBX, MTL, OBJ | Free | Most databases are not accessible anymore | Not known | 2D, 3D
EPOCH (http://epoch-net.org/site) | PDF | Free | Not known | Unlimited | 2D
CARARE (http://pro.carare.eu) | PDF, 3D PDF | Free | Not known | Unlimited | 3D inside PDF
NASA 3D Resources (https://nasa3d.arc.nasa.gov/) | .stl, .3ds | Free | Non-commercial use only | Unlimited | 2D

Commercial repositories
Sketchfab (https://sketchfab.com) | 50 popular file formats | Basic & Education access are free | Varies between paid and free models | Limit based on membership; unlimited download | 2D, 3D, AR, VR
MyMiniFactory (https://www.myminifactory.com) | 54 popular file formats | Free/paid option to download and print | - | Unlimited uploads | 2D
Blendswap (https://www.blendswap.com) | 37 popular file formats | Free | Varying Creative Commons | Free 200MB download/month; 90MB upload limit | 2D
3D Warehouse (https://3dwarehouse.sketchup.com) | .skp | Free | General Model License Agreement | 50MB (max) upload | 2D, 3D
TurboSquid (https://www.turbosquid.com) | 16 popular file formats | Free and paid | Model licenses: various | No restriction | 2D, 3D
It is relatively apparent that, at the time of writing this paper, Sketchfab is a commercial platform with
flexible and versatile hosting that supports the general public, small institutes and non-profit
organisations in hosting, lightly editing, sharing, trading and showcasing their 3D models. These models
can later be shared online and viewed in AR/VR with the supplied application. The other commercial
repositories, such as TurboSquid, ShareCG, MyMiniFactory and Blendswap, are mostly for the trading
of 3D models and are not intended for preservation. Additionally, they charge fees on trading and may
not be interested in archiving (their archiving policies are not clear).
5.2 Selection of the visualisation (AR/VR) platform
There are a number of software frameworks at present that support AR/VR, some especially suited to
cultural heritage. The first criterion is the choice of a single package or solution that supports non-expert
users with a limited or restricted budget. Additionally, it is difficult to find a compelling platform or
toolset that accepts a wide variety of file formats and supports cross-platform deployment and
visualisation.
There are certain points of overlap between AR and VR since some existing development platforms are
suitable for both experiences. A study by Bekele et al. (2018) on the most commonly used current
AR/VR frameworks has examined their strengths and weaknesses in various settings. However, that
study is limited to exploring the tools and not the whole pipeline. Comes et al. (2014) studied 3D AR,
AndAR and VaD AR; however, their study focused on simplification of the 3D models rather than
evaluating any AR/VR platform. The study by Van Krevelen and Poelman (2010) presents the
technicalities, development history and characteristics of a wide range of AR technologies.
Portalés et al. (2009) explored various AR platforms and later adopted BazAR (a vision-based open
source library) to deploy their 3D content. However, to use BazAR, a user needs relevant technical
knowledge of vision-based programming. The workflow adopted by Mancera-Taboada et al. (2011) for
visualising 3D point cloud data, in VR only, used the OGRE 3D game engine. Studies from Amin and
Govilkar (2015) and Guidazzoli et al. (2017) have presented feature comparisons of various software
development kits (SDKs) for AR visualisation. Recently, AR toolkits based on visual-inertial odometry
tracking have been gaining attention; in particular, Google ARCore and Apple ARKit look promising.
However, it is often difficult for a non-technical person to overcome the steep learning curve required
to use their SDKs.
Although X3D and three.js models can run without hindrance in HTML-formatted web pages, choosing
the 3D file formats best suited for archiving or display is still a big challenge in the digital heritage
domain. 3DHOP (Potenziani et al., 2015) and Sketchfab are prominent among the relatively few popular
services that provide storage, viewing and exhibition of 3D models online. The open source and free
3DHOP is, however, restricted to the *.NXS and *.PLY file formats only. Sketchfab, on the other hand,
supports more than 50 formats, and the free 'basic access' also allows uploads of 3D models up to a
maximum of 50MB per file (at the time of writing this paper). A comprehensive study on 3D web
applications by Guidazzoli et al. (2017) also reveals that Sketchfab competes very well against others
in its documentation, ease of learning, GUI, reliability and overall graphics quality. We have adopted
Sketchfab in the workflow for online sharing and visualisation of the above-mentioned 3D models.
5.3 AR/VR visualisation with Sketchfab
Sketchfab supports 50 file extensions (as of 30 October 2017), including compressed archives such as
ZIP, RAR and 7z. The GUI offered by Sketchfab for the user dashboard is simple and easy to use. First,
the user is required to create a user account in order to upload 3D content; an account can also be created
by signing up with Facebook, Google or Twitter, bypassing the default online form. Sketchfab generally
compresses the file first and then starts uploading it to the server. We have used the *.ply file saved
previously from MeshLab.
The system asks the user for the model name, description, categories and keywords as metadata. Next,
it starts processing the model and takes the user to the '3D Settings' mode. Sketchfab's GUI for 3D
Settings offers various options to adjust/control the model (figure 7). After getting the desired output,
the user can press the 'Save Settings' and 'Publish' buttons to finalise the process.
Figure 7. GUI for 3D settings offered by Sketchfab.
Figure 8. View the 3D in AR with Sketchfab app.
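Uploading can also be automated through Sketchfab's Data API (v3 at the time of writing), which accepts the model file and its metadata in a single POST request. The following Python sketch assumes a personal API token from the user's account settings; the token, file name and metadata values are placeholders.

```python
import requests

API_TOKEN = "your-sketchfab-api-token"   # placeholder; found in account settings
MODEL_FILE = "model.ply"                 # the mesh exported from MeshLab

with open(MODEL_FILE, "rb") as f:
    response = requests.post(
        "https://api.sketchfab.com/v3/models",
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"modelFile": f},
        data={
            "name": "Frog sculpture",
            "description": "Photogrammetric reconstruction (Regard3D + MeshLab).",
        },
    )
response.raise_for_status()
print("Uploaded; model uid:", response.json().get("uid"))
```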
The universal 3D/VR viewer supported by Sketchfab works on most operating systems (Windows,
Mac, Linux, iOS and Android) without any required plugin. A user can embed 3D or VR models on any
website or forum, or even in Facebook, to share their content online; peers can then browse them in 3D
or VR without leaving the user's own website. The 3D models can also be viewed in VR using various
HMDs, such as Vive, Rift, Gear VR or Cardboard. Most interestingly, via the Sketchfab app installed
on a compatible mobile device, one can also view the 3D model in AR (figure 8).
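For embedding, Sketchfab implements the oEmbed standard, so the ready-made iframe snippet for any public model can also be retrieved programmatically. A minimal sketch, assuming a placeholder model URL:

```python
import requests

MODEL_URL = "https://sketchfab.com/3d-models/your-model-uid"  # placeholder URL

# Sketchfab's oEmbed endpoint returns ready-made <iframe> markup
# for a public model, which can be pasted into any web page.
resp = requests.get("https://sketchfab.com/oembed", params={"url": MODEL_URL})
resp.raise_for_status()
print(resp.json()["html"])
```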
6. Developing interactivity and visualising in a MxR environment
Mixed Reality (MxR) blends the real world with the virtual world. It combines interactivity and
immersion and offers an immersive-interactive experience of the combined real-virtual world
(Papagiannakis et al., 2018). MxR therefore aims to unite different parts of Milgram and Kishino's
(1994) continuum into a single immersive reality experience. Magic Leap, Meta 2 and HoloLens are but
a few of the many currently popular standalone head-mounted displays (HMDs) that offer a MxR
experience. There are other, cheaper alternatives on the market, such as Holoboard and Mira, which use
a smartphone for data processing and visualisation. However, they are still at the development stage,
and comparison with the more established group is beyond the scope of the present study.
Microsoft HoloLens is an optical see-through head-mounted display (HMD) developed mainly for
AR/MxR experiences. The device supports natural interface commands through gaze, gesture and voice.
Gaze commands, such as head-tracking, allow the user to bring application focus to whatever the user
is perceiving. Various gestures, such as bloom, air tap and pinch, are supported for interactions with the
virtual objects or interface. Any virtual object or button can be selected using an air tap, similar to
clicking an imaginary computer mouse; the tap and pinch can also be used to drag and move a virtual
object. Users can access the shell/interface through a 'bloom' gesture, similar to pressing the Windows
key on a Windows keyboard or tablet. Voice commands can also be used to activate actions (source:
https://www.microsoft.com/en-us/hololens/hardware, accessed 04 April 2019).
A large number of domains are utilising HoloLens for diverse applications. Even though the use of
HoloLens is not yet markedly observed in the cultural heritage (CH) domain, the last two years have
witnessed a few virtual heritage (VH) applications developed with HoloLens. Exemplary HoloLens-
based applications include Baskaran (2018), Bottino et al. (2017), Pollalis et al. (2017), Pollalis, Gilvin,
et al. (2018), Pollalis, Minor, et al. (2018) and Scott et al. (2018). However, these articles focus on the
experiential aspect of MxR in VH rather than on the technical and procedural details that could be of
huge benefit to the domain's professionals in developing similar experiences. In this regard, our paper
discusses the major steps required to deploy 3D models into HoloLens for a mixed reality experience.
The discussion below has been organised for non-expert users of the technology.
6.1 Setting the environment with Unity 3D and importing 3D model
As briefly discussed in the introductory section, 'Unity 3D' (or Unity) is a popular cross-platform game
engine widely used to develop games. Due to the game engine's popularity, most AR/VR headsets use
Unity as a development platform; similarly, HoloLens uses Unity to develop the intended AR/MxR
experiences.
The first step to transferring 3D models to the HoloLens is configuring the Unity development
environment. There are two ways to achieve this. The first option is to use the Unity standard
configuration procedures. The second option is to use the Mixed Reality Toolkit, which is a Unity
package consisting of a collection of custom tools developed by the Microsoft HoloLens team to facilitate
the development and deployment of AR/MxR experiences to the device (HoloLens). This article uses
the Mixed Reality Toolkit (figure 9).
The Mixed Reality Toolkit was downloaded from a Microsoft HoloLens GitHub repository and
imported into a Unity project as an asset package. After importing the Toolkit, configuring the project
environment was performed at two levels. The first level of configuration involves applying changes to
the “Project Settings” using the “Apply Mixed Reality Project Settings” option from the Mixed Reality
Toolkit menu bar (figure 9, step 2). This setting configures Unity at a project level. Users need to make
sure that 'Settings for Universal Windows Platform' is selected, and that the 'Virtual Reality Supported'
box in the XR settings list is checked (figure 9, steps 3 & 4). This configuration includes the scripting
backend, rendering quality, and player settings. All configurations have been applied to the present
scene created in this specific Unity project.
Figure 9. Configuring Unity project and scene settings to ensure compatibility with HoloLens.
The second level of configuration applies changes to a specific scene created under a project. This has
been achieved using the “Apply Mixed Reality Scene Settings” option from the Mixed Reality Toolkit
menu bar (figure 9, step 2). For the scene level configuration, we used the toolkit to set the camera
position, add the custom HoloLens camera, and configure background colour and rendering settings.
After the proper configurations were applied to the “Project Settings” and “Scene Settings”, the 3D
model generated in section 4 was imported into the Unity project using the platform’s asset importing
option. Finally, gestural interactivity was implemented on the 3D model to allow users to interact with
and manipulate the model via the default gestures recognised by the HoloLens. The Mixed Reality
Toolkit has a number of scripts and tools for adding interaction mechanisms to the MxR experience. We
have used the toolkit to add gestural and gaze-based interaction mechanisms.
6.2 Building with Universal Windows Platform (UWP)
Unity can build projects for a number of platforms. In this paper, however, the project was built for the
Universal Windows Platform (UWP). UWP is an open source API developed by Microsoft and first
introduced in Windows 10 (source: https://visualstudio.microsoft.com/vs/features/universal-windows-
platform, dated: 1st March 2019). The purpose of this platform is to help develop universal apps that run
on Windows 10, Windows 10 Mobile, Xbox One and HoloLens without the need to write code for each
target device. Hence, a single build can be deployed to multiple target devices. The steps followed
before building the project for UWP were: adding a scene to the 'Build Settings', enabling C#
debugging, and specifying HoloLens as the target device.
Figure 10 shows how to configure the build environment and the steps for building the project for UWP.
First, the user selects 'Build Settings' from the File menu (figure 10, steps 2 & 3), clicks 'Add Open
Scenes' and selects the scenes intended for deployment (making sure the scenes are listed in the 'Scenes
In Build' box). The user then needs to make sure that 'Universal Windows Platform' is selected as the
platform and that HoloLens is selected as the 'Target Device' (step 4). Checking the 'Unity C# Projects'
box enables C# debugging; finally, click 'Build' (steps 5 & 6). At this stage, all the files (including the
*.sln) required for deploying the project to the HoloLens will be created and stored at a location specified
by the user.
Figure 10. Building the project for Universal Windows Platform.
6.3 Debugging with Microsoft Visual Studio and deploying to HoloLens
At this stage, the *.sln file from the “Building with UWP” step discussed above is imported into
Microsoft Visual Studio for debugging and deploying to HoloLens. Figure 11 shows the output of the
deployment process. Deployment can commence by connecting the HoloLens via WiFi or using a USB
cable and launching the deployment process from the Debug menu.
Figure 11. Debugging and deploying the solution to HoloLens using Visual Studio.
The HoloLens must be connected to the computer via USB, and the *.sln file created during the build
process opened. To start the process, a user needs to select 'Start Without Debugging' from the 'Debug'
pull-down menu (figure 11, step 1). The system will show the output details of the deployment process
(figure 11, step 2). The user must make sure that the 'Deploy' count is 1 and the 'Failed' count is zero.
If the deployment is successful, the HoloLens will launch the deployed application by itself.
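As an alternative to Visual Studio, a built app package can also be side-loaded through the HoloLens Device Portal, which exposes a REST API. The sketch below follows Microsoft's Device Portal core API reference (package-manager endpoint); the device address, credentials and package name are placeholders, and the 'auto-' username prefix for bypassing CSRF protection should be verified against the Device Portal documentation for your OS build.

```python
import requests

DEVICE = "https://192.168.0.42"        # Device Portal address (placeholder)
# The 'auto-' prefix asks Device Portal to skip CSRF checks for scripted
# clients (assumption; check the Device Portal documentation).
AUTH = ("auto-username", "password")   # placeholder credentials
APPX = "CulturalHeritageMxR_x86.appx"  # package from the UWP build (placeholder)

with open(APPX, "rb") as f:
    r = requests.post(
        f"{DEVICE}/api/app/packagemanager/package",
        params={"package": APPX},
        files={APPX: f},
        auth=AUTH,
        verify=False,  # Device Portal uses a self-signed certificate
    )
r.raise_for_status()
print("Install request accepted:", r.status_code)
```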
6.4 Mixed reality experience in the HoloLens
Usually, MxR experiences are developed for a single user unless the experience is designed for
collaborative use. It is difficult to imagine what the HoloLens user is experiencing unless the experience
is shared. As a workaround, once the application has been deployed, HoloLens allows the capturing and
streaming of the user's view to another, bigger screen, as long as both devices are connected to the same
WiFi network. In our case, we used this Mixed Reality Capture capability to stream the HoloLens user's
experience (figure 12). There is a lag of a few seconds between the actual experience and the streamed
content on the other person's screen.
Figure 12. A user interacting with a 3D model via HoloLens.
7. Demonstration and user feedback
To validate the methodology, a workshop was conducted at the Curtin Library Makerspace, Curtin
University, Australia, on 23 November 2018. Fourteen participants attended the workshop, ranging from
novice to expert computer users and aged from 18 to 60 years. During the workshop, the datasets were
supplied to the participants, and they were asked to follow the steps demonstrated by the instructors.
Most of the participants managed to produce the 3D model and reach the final stage of deploying the
content to the HoloLens. Due to the limited number of HoloLens devices (one set) and the permitted
time of the workshop (two hours), only one 3D asset was deployed, and interactions were set with simple
gestures. The environment with the embedded 3D assets was then shared with the participants to
visualise and interact with. The participants experienced the MxR environment and provided feedback
in an informal post-experience discussion. This discussion gives us some remarkable points to ponder
for the future:
- The partial workflow (i.e. 3D modelling, editing and AR/VR visualisation with the FOSS) is
  supported across platforms. However, deployment of 3D assets to the HoloLens (for MxR) requires
  the Windows 10 operating system, which prevented Mac users from immediately participating in
  or following that part of the workflow.
- The workflow was found to be workable and easy to follow. Learning the gesture controls to
  interact with the 3D models in a HoloLens-based MxR environment requires time and practice for
  first-time users; however, they managed to learn the gestures within a short period.
- The workshop duration was limited to two hours, which evoked complaints from the participants,
  and we were advised to host an extended session.
8. Conclusion
In this paper, we present a complete workflow for experiencing, in a MxR environment, a 3D model
captured from real-world objects, using proprietary and open access software and services. The
workflow covers digitising 3D artefacts based on image-based photo modelling (photogrammetry),
converting the 3D point cloud to a 3D mesh, saving and sharing the 3D model in an online repository,
viewing the 3D model in VR/AR, and finally deploying the 3D content to a MxR environment (MS
HoloLens) and interacting with the virtual content.
The workflow was demonstrated to fourteen participants in a workshop session, and the users' feedback
was collected. User feedback validates the workflow as easy to learn, workable and effective, with a
few minor issues. We therefore believe this paper will help non-expert users, as well as small museums,
heritage institutes, interested communities and local groups who are interested in digitising their 3D
collections, sharing them online and visualising the 3D contents in an AR/VR/MxR environment;
especially if their budget is limited and they do not have extensive experience in photogrammetry,
modelling, or programming.
References
Amin, D., Govilkar, S., 2015. Comparative Study of Augmented Reality SDKs. International Journal on
Computational Science & Applications 5, 11-26.
Anderson, E.F., McLoughlin, L., Liarokapis, F., Peters, C., Petridis, P., De Freitas, S., 2010. Developing
Serious Games for Cultural Heritage: A State-of-the-art Review. Virtual Reality 14, 255-275.
Barry, A., Thomas, G., Debenham, P., Trout, J., 2012. Augmented Reality in a Public Space: The Natural
History Museum, London. Computer 45, 42-47.
Barsanti, S.G., Remondino, F., Fenández-Palacios, B.J., Visintini, D., 2014. Critical Factors and Guidelines
for 3D Surveying and Modelling in Cultural Heritage. International Journal of Heritage in the Digital
Era 3, 141-158.
Baskaran, A., 2018. Holograms and History, Inside the Collection. Museum of Applied Arts & Sciences.
Bekele, M.K., Pierdicca, R., Frontoni, E., Malinverni, E.S., Gain, J., 2018. A Survey of Augmented, Virtual,
and Mixed Reality for Cultural Heritage. Journal on Computing and Cultural Heritage 11, 1-36.
Bernardini, F., Mittleman, J., Rushmeier, H., Silva, C., Taubin, G., 1999. The Ball-Pivoting Algorithm for
Surface Reconstruction. IEEE transactions on visualization and computer graphics 5, 349-359.
Bolognesi, M., Furini, A., Russo, V., Pellegrinelli, A., Russo, P., 2014. Accuracy of Cultural Heritage 3D
Models by RPAS and Terrestrial Photogrammetry, pp. 113-119.
Bottino, A.G., García, A.M., Occhipinti, E., 2017. Holomuseum: A Prototype of Interactive Exhibition with
Mixed Reality Glasses HoloLens.
Bruno, F., Bruno, S., De Sensi, G., Luchi, M.-L., Mancuso, S., Muzzupappa, M., 2010. From 3D
Reconstruction to Virtual Reality: A Complete Methodology for Digital Archaeological Exhibition.
Journal of Cultural Heritage 11, 42-49.
Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M., 2010. Augmented reality
technologies, systems and applications. Multimedia Tools and Applications 51, 341-377.
Champion, E., 2018. The Role of 3D Models in Virtual Heritage Intrastructures, in: Benardou, A., Champion,
E., Dallas, C., Hughes, L.M. (Eds.), Cultural Heritage Infrastructures in Digital Humanities. NY
Routledge, Abingdon, Oxon New York, p. 172.
Comes, R., Neamţu, C., Buna, Z., Badiu, I., Pupeză, P., 2014. Methodology to Create 3D Models for
Augmented Reality Applications Using Scanned Point Clouds. Mediterr Archaeol Ar 14, 35-44.
Curless, B., Levoy, M., 1996. A Volumetric Method for Building Complex Models From Range Images,
Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. ACM,
pp. 303-312.
Deseilligny, M.P., Luca, L.D., Remondino, F., 2011. Automated Image-Based Procedures for Accurate
Artifacts 3D Modeling And Orthoimage Generation, Proc. CIPA.
Durand, H., Engberg, A., Pope, S.T., 2011. A Comparison of 3d Modeling Programs. ATON
Project/CREATE, Department of Music, University of California, Santa Barbara, USA, pp.1-9.
Gkion, M., Patoli, M., White, M., 2011. Museum Interactive Experiences Through a 3D Reconstruction of
the Church of Santa Chiara, IASTED Computer Graphics and Virtual Reallity Conference,
Cambridge, United Kingdom.
Grussenmeyer, P., Al Khalil, O., 2008. A Comparison of Photogrammetry Software Packages for the
Documentation of Buildings.
Guidazzoli, A., Liguori, M.C., Chiavarini, B., Verri, L., Imboden, S., De Luca, D., Ponti, F.D., 2017. From
3D Web to VR historical scenarios: A Cross-media Digital Heritage Application for Audience
Development, Virtual System & Multimedia (VSMM), 2017 23rd International Conference on. IEEE,
pp. 1-8.
Haydar, M., Roussel, D., Maïdi, M., Otmane, S., Mallem, M., 2011. Virtual and Augmented Reality for
Cultural Computing and Heritage: A Case Study of Virtual Exploration of Underwater Archaeological
Sites. Virtual Reality 15, 311-327.
Kazhdan, M., Hoppe, H., 2013. Screened Poisson Surface Reconstruction. ACM Transactions on Graphics
(ToG) 32, 29.
Knapitsch, A., Park, J., Zhou, Q.-Y., Koltun, V., 2017. Tanks and Temples: Benchmarking Large-scale Scene
Reconstruction. ACM Transactions on Graphics (TOG) 36, 78.
Lab, D., 2018. Getting Started: A Guide to Photogrammetry, 3D Capturing Technology. University of
Michigan 3D Lab, Michigan, USA, p. 16.
Mancera-Taboada, J., Rodríguez-Gonzálvez, P., González-Aguilera, D., Finat, J., San José, J., Fernández,
J.J., Martínez, J., Martínez, R., 2011. From the Point Cloud to Virtual and Augmented Reality: Digital
Accessibility for Disabled People in San Martin’s Church (Segovia) and its Surroundings,
International Conference on Computational Science and Its Applications. Springer, pp. 303-317.
Milgram, P., Kishino, F., 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on
Information and Systems 77, 1321-1329.
Monaghan, D., O'Sullivan, J., O'Connor, N.E., Kelly, B., Kazmierczak, O., Comer, L., 2011. Low-cost
Creation of a 3D Interactive Museum Exhibition, Proceedings of the 19th ACM international
conference on Multimedia. ACM, pp. 823-824.
Nguyen, M.H., Wünsche, B., Delmas, P., Lutteroth, C., 2012. 3D Models from the Black Box: Investigating
the Current State of Image-Based Modeling, Proceedings of the 20th International Conference on
Computer Graphics, Visualisation and Computer Vision (WSCG 2012), Pilsen, Czech Republic, June.
Nikolov, I., Madsen, C., 2016. Benchmarking Close-range Structure from Motion 3D Reconstruction
Software Under Varying Capturing Conditions, Euro-Mediterranean Conference. Springer, pp. 15-26.
Oniga, E., Chirilă, C., Stătescu, F., 2017. Accuracy Assessment of a Complex Building 3d Model
Reconstructed from Images Acquired with a Low-Cost Uas, ISPRS-International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences, Nafplio, Greece, pp. 551-558.
Papagiannakis, G., Geronikolakis, E., Pateraki, M., López-Menchero, V.M., Tsioumas, M., Sylaiou, S.,
Liarokapis, F., Grammatikopoulou, A., Dimitropoulos, K., Grammalidis, N., 2018. Mixed Reality,
Gamified Presence, and Storytelling for Virtual Museums. Encyclopedia of Computer Graphics
Games, 1-13.
Pollalis, C., Fahnbulleh, W., Tynes, J., Shaer, O., 2017. HoloMuse: Enhancing Engagement With
Archaeological Artifacts Through Gesture-based Interaction with Holograms, Proceedings of the
Eleventh International Conference on Tangible, Embedded, and Embodied Interaction. ACM, pp. 565-
570.
Pollalis, C., Gilvin, A., Westendorf, L., Futami, L., Virgilio, B., Hsiao, D., Shaer, O., 2018. ARtLens:
Enhancing Museum Visitors' Engagement with African Art, Proceedings of the 19th International
ACM SIGACCESS Conference on Computers and Accessibility. ACM, pp. 195-200.
Pollalis, C., Minor, E., Westendorf, L., Fahnbulleh, W., Virgilio, I., Kun, A.L., Shaer, O., 2018. Evaluating
Learning with Tangible and Virtual Representations of Archaeological Artifacts, Proceedings of the
Twelfth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, pp. 626-
637.
Portalés, C., Lerma, J.L., Pérez, C., 2009. Photogrammetry and Augmented Reality for Cultural Heritage
Applications. The Photogrammetric Record 24, 316-331.
Potenziani, M., Callieri, M., Dellepiane, M., Corsini, M., Ponchio, F., Scopigno, R., 2015. 3DHOP: 3D
Heritage Online Presenter. Computers & Graphics 52, 129-141.
Rahaman, H., Champion, E., 2019. From Images to 3D Reconstruction: A Feature Comparison on Proprietary
and Open Access Photogrammetry Workflow for Cultural Heritage. Digital Humanities Quarterly,
(manuscript submitted for publication).
Raptis, G.E., Fidas, C., Avouris, N., 2018. Effects of mixed-reality on players’ behaviour and immersion in
a cultural tourism game: A cognitive processing perspective. International Journal of Human-
Computer Studies 114, 69-79.
Rua, H., Alvito, P., 2011. Living the Past: 3D models, Virtual Reality and Game Engines as Tools for
Supporting Archaeology and the Reconstruction of Cultural Heritage The Case-study of the Roman
Villa of Casal de Freiria. Journal of Archaeological Science 38, 3296-3308.
Santagati, C., Inzerillo, L., Di Paola, F., 2013. Image Based Modeling Techniques for Architectural Heritage
3D Digitalization: Limits and Potentialities, International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, XXIV International CIPA Symposium, Strasbourg, France.
Scott, M.J., Parker, A., Powley, E., Saunders, R., Lee, J., Herring, P., Brown, D., Krzywinska, T., 2018.
Towards an Interaction Blueprint for Mixed Reality Experiences in GLAM Spaces: The Augmented
Telegrapher at Porthcurno Museum.
Van Krevelen, D., Poelman, R., 2010. A survey of Augmented Reality Technologies, Applications and
Limitations. International journal of virtual reality 9, 1.
Wang, F.-Y., 2009. Is Culture Computable? IEEE Intelligent Systems 24, 2-3.
Wang, Y.-F., 2011. A Comparison Study of Five 3D Modeling Systems Based on the SfM Principles.
Technical Report 2011-01. Visualize Inc., Goleta, USA, pp. 1-30.
Wu, H.-K., Lee, S.W.-Y., Chang, H.-Y., Liang, J.-C., 2013. Current Status, Opportunities and Challenges of
Augmented Reality in Education. Computers & Education 62, 41-49.
... The presentation of three-dimensional data is now widely used in other fields, such as archaeology, demonstrating the clear added value of such presentation [69,70]. The ongoing development of visualization software, which optimizes the effects of light, shadow, and texture, means that the models and scenes created are becoming ever more realistic, which can only benefit third parties. ...
Article
Full-text available
Facial soft tissue reconstruction is an important tool in forensic investigations, especially when conventional identification methods are unsuccessful. This paper presents a digital workflow for facial reconstruction and identity verification using computer vision techniques applied to two forensic cases. The first case involves a cold case from 1993, in which a manual reconstruction by Prof. Helmer was conducted in 1994. We digitally reconstructed the same individual using CAD software (Blender), enabling a direct comparison between manual and digital techniques. To date, the deceased remains unidentified. The second case, from 2021, involved a digitally reconstructed face that was later matched to a missing person through DNA analysis. Here, comparison material was available, including an official photograph. A police officer involved in the case noted a “striking resemblance” between the reconstruction and the photograph. To evaluate this subjective impression, we performed quantitative analyses using three face recognition models (Dlib-based method, VGG-Face, and GhostFaceNet). The models did not indicate significant similarity, highlighting a gap between human perception and algorithmic assessment. These findings suggest that current face recognition algorithms may not yet be fully suited to evaluating reconstructions, which tend to deviate in subtle but critical facial features. To achieve better facial recognition results, further research is required to generate more anatomically accurate and detailed reconstructions that align more closely with the sensitivity of AI-based identification systems.
... This research was used to find whether using portrait photos or landscape photos makes the 3D object better based by topology and environmental conditions in waruga areas. This approach is taken because, while we aim to create high-quality and realistic 3D objects, it's crucial to recognize that these 3D models will ultimately be required for distribution and integration into applications [26]. Indeed, those applications will play a vital role in promoting and safeguarding the cultural heritage of Waruga, ensuring that it remains relevant and sustainable for generations to come. ...
Article
Full-text available
Waruga is a distinctive cultural artifact found exclusively in the Minahasa region. Despite its historical and cultural significance, efforts to preserve Waruga remain inadequate. Many structures have been left neglected, covered in fungi, or even damaged over time. Additionally, government-led relocation initiatives have contributed to the loss of their original form, further threatening this invaluable Minahasa cultural heritage. This study aims to examine the impact of photographic orientation in the creation of 3D models using close-range photogrammetry techniques. The resulting 3D models will be displayed on a digital platform to support the preservation and promotion of Minahasa culture. The photography process was divided into two categories: point-of-view shots and high-angle shots. Findings indicate that the optimal angle for point-of-view shots is 15 degrees downward, while for high-angle shots, it is 30 degrees downward. Furthermore, comparative analysis of Waruga structures with varying shapes demonstrates that portrait orientation yields 3D models that more accurately resemble the original objects compared to landscape orientation when using the same number of images. The study concludes that portrait orientation is the most effective approach for 3D reconstruction of Waruga, offering advantages such as faster processing times and reduced file sizes. In contrast, landscape orientation presents challenges, including difficulties in capturing intricate details, increased processing time, and larger file sizes. These findings provide valuable insights into optimizing digital preservation techniques for Waruga and other cultural heritage artifacts.
... By including information sheets or descriptive apparatuses, viewers or users of the models can access more specific knowledge about the route, such as historical context, architectural details, cultural significance, or any other relevant information. These additional resources contribute to a deeper understanding and appreciation of the artefact being represented, enhancing the overall experience and educational value of the models [38]. ...
Article
Full-text available
This paper focuses on a research project for the acquisition and post-production of digital data to create informative virtual representations and digital twins of different European Cultural Heritage sites. The goal was to establish a reliable database for a multi-scalar web platform, also accessible through extended reality (XR) tools. This initiative aims to support the promotion and management of cultural and historical monuments within the context of European Cultural Routes supported by the Council of Europe. The project involves different case studies spanning European geographic regions, such as the Upper Kama in Russia, the Valencian Routes of Jaime I in Spain, and the Gdańsk fortresses in Poland. The methodology employed in this effort primarily relies on integrated rapid survey techniques. Unmanned aerial vehicles (UAVs) and simultaneous localization and mapping (SLAM) technologies were used for data collection. These methods contribute to the creation of accurate 3D databases and models that transform the cultural routes into a digital format accessible via an informative platform. The actions presented in this paper are part of the European project “PROMETHEUS”, which is funded by the Horizon 2020 program of the European Union. The project involves collaboration between universities and enterprises, fostering inter-sectoral cooperation. Various techniques such as photographic archives, census analysis, and scan-to-BIM (building information modeling) processes are employed to develop this method further. In fact, the ultimate goal of the project is to establish a framework that can be replicated in other cultural contexts, enhancing the digital documentation and valorization of heritage sites.
... These factors are capable of hindering the research agreement on a common synergy involving all the decision-makers, professional organisations and likewise local communities. A specific approach can be achieved by means of heritage extension through immersive technologies (Rahaman, Champion and Bekele, 2019), which are increasingly becoming a fundamental part of the CH and archaeology field. Architectural surveying is essential for understanding the physical properties and behaviour of an existing building and for recommending the required measures (Pocobelli et al., 2018). ...
Article
Full-text available
Cultural heritage (CH) conveys values through every physical element and its intrinsic essence, necessitating careful attention to its preservation and longevity. In an era increasingly shaped by technology, digital conversion is an essential component of relevant research and is consistently considered for application in future CH strategies. This study addresses the unavailability of historical, graphical and technical records, in addition to the disparities in responsibilities that hinder the recognition and management of heritage in Algeria. This highlights the capability of digital exploration to initiate innovative approaches aimed at valorising heritage assets through documentation based on advanced technologies. An effective solution to represent and safeguard built heritage for professional application and dissemination via an implementation-based study that elucidates the proposed workflow across the National Theatre building in Algiers. Demonstrating that replication through a digital environment (DE) generates new ways for both physical and intangible interpretation using the digital twin (DT). Acknowledging the potential of the enhanced HBIM, this paper first describes digital surveying using 3D laser scanning. Secondly, it explores the pioneering application of artificial intelligence (AI) to process point clouds and improve semantic recognition, segmentation, and outputs. Finally, virtual reality (VR) combined with a software suite enriches the DT. The contribution to the body of knowledge lies in establishing a robust framework to investigate the relationship between AI, automated data preprocessing, and postprocessing to enhance the Scan-to-BIM process and support conservation of historical buildings. It demonstrates efficient time and resource consumption, accuracy, and overall effectiveness. The main findings include the virtual extension of existing assets by linking various representation tools throughout a comprehensive prototype, where machine learning (ML) reinforced connections between reality capture (RC), cloud processing, and ultimately BIM. Meanwhile, VR provided an immersive experience that directly impacted user engagement and stakeholder interactions, thus facilitating decision-making related to building management and enabling remote assessment.
Chapter
The Cultural Heritage sector is moving fast in the direction of innovative research, increasingly connected to the issues and opportunities of digitization, resilience, and sustainability. Together with ‘big’ companies like Google or Microsoft, UNESCO, ICOMOS and ICCROM are just some of the most important institutions and organizations that are currently investing in these directions. Cultural Heritage organizations are joining forces and embracing new technologies to preserve and exchange information about our fragile shared heritage. By digitizing their valuable collections and making the information available to conservation experts, museums, libraries, and archives are helping to protect our delicate heritage sites. Digital technologies have numerous practical applications for Cultural Heritage, from cloud use to academic research, preservation, and conservation, with more innovations. For example, the European Commission, through Horizon Europe program, offers ongoing support for research and innovation in the field of Cultural Heritage.
Thesis
Full-text available
In the last two decades, the impact of 3D imaging technologies on paleontology has been transformative. Techniques such as laser scanning, computed tomography, photogrammetry and finite element analysis (FEA) have enabled detailed morphological and functional studies. FEA is widely used to understand the mechanical performance of biological structures. However, the models are sensitive to input parameters, (eg. material properties), and given the incomplete fossil record, it is essential to validate them. A novel non-contact technique, 3D scanning Laser Doppler Vibrometry (LDV), was used to measure displacement patterns from an alligator mandible, and compute full-field dynamic strain. This data was compared with FE simulations. LDV results are matched by the FE simulations, with high spatial resolution. However, absolute values vary greatly, due to modeling differences and geometric complexities. LDV may not be suitable for validating biological finite element models; or may be better when combined with other strain estimation techniques. Material properties of alligator teeth were examined in detail using nanoindentation, providing detailed ranges of estimates of hardness and Young's modulus across enamel, dentine, and the enamel-dentine junction. Significant variations in mechanical properties, within and between tooth layers were observed, which were then contextualized within the largest comparative framework of dental material properties from 62 taxa, offering insights into the biomechanical diversity and evolutionary adaptations of vertebrates. Results suggest that the material properties vary greatly across different groups and that the functional implications of dental material evolution is likely a complex interplay between phylogeny, dietary adaptations and habitat. Further testing in other groups, and detailed functional studies might help explain how dental tissue complexes contribute to the overall function of the jaw. Photogrammetry has seen significant expansion due to advances in digital imaging and computing, allowing precise 3D reconstructions from 2D images. An app called ‘Pratima3D’ was developed in this study, utilizing Apple's Object Capture API, which streamlines, automates and democratizes the 3D modeling process. Future advancements of the algorithm are expected to further enhance the impact of 3D digitization in research, education, and outreach, providing interactive and immersive learning experiences and broadening public engagement with scientific inquiry.
Article
The forensic application of photogrammetry: scientifically grounded methods for camera-based 3D image capture. Abstract: By summarising findings from the literature and practical experience, this study aims to serve as a methodological guide to forensic three-dimensional image capture and model building by photogrammetry with a camera, primarily for crime-scene technicians and forensic experts. The study places photogrammetry-based three-dimensional image capture in a legal and practical context as a potential tool for on-site and laboratory documentation. The simple, fast process, which anyone can learn, primarily requires photographic skills; the guide therefore places great emphasis on summarising the most important practical photographic knowledge, helping readers select the camera and shooting settings best suited to the purpose. It also presents the methodology for producing the image series required for model building, for scenes and objects of various sizes, with particular attention to the method's limitations arising from various factors, as well as software-based photographic reconstruction and its steps. Current and future forensic application areas of the method are also described. Keywords: forensic photography, photogrammetry, three-dimensional image capture, methodology
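The camera-settings advice such guides give rests on two standard photographic calculations: ground sample distance (how much real-world detail one pixel resolves) and depth of field (the range of acceptable sharpness). A minimal sketch follows, not taken from the study; all numbers are illustrative, and the thin-lens formulas assume a standard circle of confusion.

```python
import math

def ground_sample_distance(pixel_pitch_mm: float, distance_mm: float,
                           focal_length_mm: float) -> float:
    """Size of one pixel projected onto the subject, in mm."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

def depth_of_field(focal_mm: float, f_number: float, distance_mm: float,
                   coc_mm: float = 0.03) -> tuple[float, float]:
    """Near and far limits of acceptable sharpness (thin-lens approximation)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    far = (distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
           if distance_mm < hyperfocal else math.inf)
    return near, far

# e.g. a 24 MP full-frame camera (~0.006 mm pixel pitch), 50 mm lens at f/8, 1 m away:
print(ground_sample_distance(0.006, 1000, 50))  # ~0.12 mm of detail per pixel
print(depth_of_field(50, 8.0, 1000))            # sharp from roughly 0.92 m to 1.10 m
```

Calculations like these explain why forensic and heritage photogrammetry guides favour small apertures and controlled camera-to-subject distances: both the resolved detail and the sharp zone depend directly on them.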
Preprint
Cultural heritage, a testament to human history and civilization, has gained increasing recognition for its significance in preservation and dissemination. The integration of immersive technologies has transformed how cultural heritage is presented, enabling audiences to engage with it in more vivid, intuitive, and interactive ways. However, the adoption of these technologies also brings a range of challenges and potential risks. This paper presents a systematic review, with an in-depth analysis of 177 selected papers. We comprehensively examine and categorize current applications, technological approaches, and user devices in immersive cultural heritage presentations, while also highlighting the associated risks and challenges. Furthermore, we identify areas for future research in the immersive presentation of cultural heritage. Our goal is to provide a comprehensive reference for researchers and practitioners, enhancing understanding of the technological applications, risks, and challenges in this field, and encouraging further innovation and development.
Chapter
The success of virtual heritage projects, through the careful inspection, contextualization and modification of 3D digital heritage models with virtual reality technology, is still problematic. Models are hard to find, impossible to download and edit, and often stored in unusual, unwieldy or obsolete formats. Many of the freely available models are stand-alone 3D meshes with no accompanying metadata or information on how the data were acquired. Few indicate whether or how the models can be shared (and whether they are editable). Fewer still quantify the accuracy of the scanning or modelling process, or make available the scholarly documents, field reports, photographs and site plans that allowed the designers to extract enough information for their models.
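A minimal sketch of the kind of provenance record the chapter argues is missing from freely shared heritage models appears below. Every field name is a hypothetical suggestion for illustration, not an established metadata standard.

```python
import json

# A stand-alone mesh becomes far more reusable when shipped with a
# record covering acquisition, accuracy, licensing, and source documents.
model_record = {
    "title": "Example temple facade",
    "acquisition": {
        "method": "terrestrial photogrammetry",
        "capture_date": "2018-06-01",
        "equipment": "DSLR, 24 MP, 35 mm lens",
    },
    "accuracy": {"mean_error_mm": 4.2, "method": "check-point comparison"},
    "licence": "CC BY 4.0",
    "editable": True,
    "sources": ["field report DOI", "site plan archive reference"],
}

print(json.dumps(model_record, indent=2))
```

Even a lightweight record like this would answer the chapter's three complaints at once: how the data were acquired, how accurate the model is, and whether it may be shared and edited.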
Article
A multimedia approach to the diffusion, communication, and exploitation of Cultural Heritage (CH) is a well-established trend worldwide. Several studies demonstrate that the use of new and combined media enhances how culture is experienced. The benefit is in terms of both number of people who can have access to knowledge and the quality of the diffusion of the knowledge itself. In this regard, CH uses augmented-, virtual-, and mixed-reality technologies for different purposes, including education, exhibition enhancement, exploration, reconstruction, and virtual museums. These technologies enable user-centred presentation and make cultural heritage digitally accessible, especially when physical access is constrained. A number of surveys of these emerging technologies have been conducted; however, they are either not domain specific or lack a holistic perspective in that they do not cover all the aspects of the technology. A review of these technologies from a cultural heritage perspective is therefore warranted. Accordingly, our article surveys the state-of-the-art in augmented-, virtual-, and mixed-reality systems as a whole and from a cultural heritage perspective. In addition, we identify specific application areas in digital cultural heritage and make suggestions as to which technology is most appropriate in each case. Finally, the article predicts future research directions for augmented and virtual reality, with a particular focus on interaction interfaces and explores the implications for the cultural heritage domain.
Conference Paper
Technological advances offer new methods of representing physical objects in tangible and virtual forms. This study compares learning outcomes from 61 students as they interact with ancient Egyptian sculptures using three increasingly popular educational technologies: HoloLens AR headset, 3D model viewing website (SketchFab), and plastic extrusion 3D prints. We explored how differences in interaction styles affect the learning process, quantitative and qualitative learning outcomes, and critical analysis.
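For quantitative learning outcomes of the kind this study compares, a one-way ANOVA across the three conditions is a standard first analysis. The sketch below is not the study's analysis code; the score arrays are synthetic placeholders sized to its 61 participants.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ar_scores = rng.normal(7.5, 1.2, 20)     # hypothetical quiz scores, AR headset group
web_scores = rng.normal(7.0, 1.2, 20)    # hypothetical scores, 3D web viewer group
print_scores = rng.normal(6.8, 1.2, 21)  # hypothetical scores, 3D print group

# Test whether mean scores differ across the three interaction styles.
f_stat, p_value = stats.f_oneway(ar_scores, web_scores, print_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```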
Conference Paper
We present HoloMuse, an AR application for the HoloLens wearable device, which allows users to actively engage with archaeological artifacts from a museum collection in ways that are otherwise not possible. We designed HoloMuse to facilitate learning and engagement with museum collections without taking away from the experience of viewing an original artifact within the context of an exhibit. HoloMuse can be used inside the gallery or in the classroom. It enables users to pick up, rotate, scale, and alter a hologram of an original archaeological artifact using in-air gestures. Users can also curate their own exhibit or customize an existing one by selecting artifacts from a virtual gallery and placing them within the physical world so that they are viewable only using the device. We intend to study the impact of HoloMuse on learning and engagement with college-level art history and archaeology students.
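Behind "pick up, rotate, scale" gestures sits ordinary transform arithmetic: each gesture updates a 4x4 model matrix applied to the hologram's vertices. The sketch below illustrates that generic 3D maths; it is not HoloMuse source code.

```python
import numpy as np

def scale(s: float) -> np.ndarray:
    """Uniform scale about the model origin."""
    return np.diag([s, s, s, 1.0])

def rotate_y(angle_rad: float) -> np.ndarray:
    """Rotation about the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def translate(x: float, y: float, z: float) -> np.ndarray:
    """Placement of the hologram in room coordinates."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Compose: scale first, then rotate, then place in the room.
model_matrix = translate(0.5, 1.2, -2.0) @ rotate_y(np.pi / 4) @ scale(1.5)
vertex = np.array([0.1, 0.0, 0.0, 1.0])  # homogeneous model-space vertex
print(model_matrix @ vertex)             # resulting world-space position
```

A gesture recogniser then only needs to map hand motion to small increments in the scale factor, rotation angle, or translation, and recompose the matrix each frame.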
Article
Nowadays, Unmanned Aerial Systems (UASs) are a widely used acquisition technique for creating 3D building models, providing a high number of very-high-resolution images or video sequences in a very short time. Since low-cost UASs are preferred, the accuracy of a building 3D model created with these platforms must be evaluated. For this purpose, the dean's office building of the Faculty of "Hydrotechnical Engineering, Geodesy and Environmental Engineering" in Iasi, Romania, was chosen: a complex-shaped building whose roof is formed of two hyperbolic paraboloids. Seven points were placed on the ground around the building, three of them used as Ground Control Points (GCPs) and the remaining four as Check Points (CPs) for accuracy assessment. Additionally, the coordinates of 10 natural CPs representing characteristic points of the building were measured with a Leica TCR 405 total station. The building 3D model was created as a point cloud automatically generated from the digital images acquired with the low-cost UASs, using image-matching algorithms and software packages such as 3DF Zephyr, Visual SfM, PhotoModeler Scanner and Drone2Map for ArcGIS. Except for the PhotoModeler Scanner software, the interior and exterior orientation parameters were determined simultaneously by solving a self-calibrating bundle adjustment. Based on the UAS point clouds automatically generated with the above-mentioned software and on GNSS data, respectively, the parameters of the east-side hyperbolic paraboloid were calculated using the least-squares method and statistical blunder detection. Then, to assess the accuracy of the building 3D model, several comparisons were made for the facades and the roof against reference data considered to have minimal errors: a terrestrial laser scanning (TLS) mesh for the facades and a GNSS mesh for the roof. Finally, the front facade of the building was modelled in 3D from its characteristic points using the PhotoModeler Scanner software, resulting in a CAD (Computer-Aided Design) model. The results showed the high potential of low-cost UASs for building 3D model creation, and that accuracy improves significantly when the 3D model is created from the building's characteristic points.
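The surface fitting this study describes reduces to linear least squares: a hyperbolic paraboloid is a quadric, z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2, so its coefficients drop out of one matrix solve. The sketch below, with synthetic data rather than the study's measurements, fits such a surface to roof points and reports the RMS residual, analogous to the check-point accuracy assessment described above.

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(-5, 5, 200), rng.uniform(-5, 5, 200)
z = 0.2 * x**2 - 0.3 * y**2 + rng.normal(0, 0.02, 200)  # noisy hyperbolic paraboloid

# Design matrix: one column per quadric coefficient c0..c5.
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

# Residuals between measured heights and the fitted surface.
residuals = z - A @ coeffs
print("coefficients:", np.round(coeffs, 3))
print(f"RMS residual: {np.sqrt(np.mean(residuals**2)):.4f} m")
```

In practice the residuals would also be screened for blunders (e.g., discarding points beyond three standard deviations and refitting), which is the statistical blunder detection the abstract mentions.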
Conference Paper
We present ARtLens, an Augmented Reality application for the Microsoft HoloLens, which allows museum visitors to actively interact with and learn about artifacts. We designed ARtLens to enhance learning and engagement with museum collections while keeping the focus on the original artifact. ARtLens provides context for an artifact by supplying audio and visual information, and guides visitors in exploring the original artifact. It also allows users to directly manipulate, using gesture-based interactions, holographic representations of related artifacts next to original artifacts in the gallery. We intend to study the impact of ARtLens on object-based learning and engagement of museum visitors in an African Art gallery.
Article
Mixed-reality environments introduce innovative human-computer interaction paradigms, assisted by enhanced visual content presentation, which require end-users to perform demanding cognitive tasks related to visual attention, search, processing, and comprehension. In such visually enriched interaction realms, individual differences in perception and visual information processing might affect users' behaviour and immersion, given that such effects are known to exist in conventional computer environments such as desktop or mobile. In an attempt to shed light on whether, how, and why such effects persist within mixed-reality contexts, we conducted a between-subjects eye-tracking study (N=73) in which users interacted within either a conventional or a mixed-reality technological context, and adopted an accredited cognitive style theory to interpret the derived results. The analysis showed that mixed-reality interaction realms amplified the effects of human cognitive style on game-specific interaction behaviour and visual behaviour. The findings further support the added value of incorporating human cognitive factors at both design time and run time, aiming to provide adaptive and personalised features to end-users within mixed-reality interaction contexts. Such practical implications are also discussed in this paper.
Article
We present a benchmark for image-based 3D reconstruction. The benchmark sequences were acquired outside the lab, in realistic conditions. Ground-truth data was captured using an industrial laser scanner. The benchmark includes both outdoor scenes and indoor environments. High-resolution video sequences are provided as input, supporting the development of novel pipelines that take advantage of video input to increase reconstruction fidelity. We report the performance of many image-based 3D reconstruction pipelines on the new benchmark. The results point to exciting challenges and opportunities for future work.
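Benchmarks of this kind typically score a reconstructed point cloud against the laser-scanned ground truth with distance-threshold precision and recall. The sketch below illustrates that idea in a minimal form; it is not the benchmark's own evaluation code, and the point clouds are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def precision_recall(reconstruction: np.ndarray, ground_truth: np.ndarray,
                     threshold: float) -> tuple[float, float]:
    gt_tree = cKDTree(ground_truth)
    rec_tree = cKDTree(reconstruction)
    # Precision: fraction of reconstructed points lying near the ground truth.
    d_rec, _ = gt_tree.query(reconstruction)
    precision = float(np.mean(d_rec < threshold))
    # Recall: fraction of ground-truth points covered by the reconstruction.
    d_gt, _ = rec_tree.query(ground_truth)
    recall = float(np.mean(d_gt < threshold))
    return precision, recall

rng = np.random.default_rng(3)
gt = rng.uniform(0, 1, (1000, 3))                # stand-in ground-truth scan
rec = gt[:800] + rng.normal(0, 0.002, (800, 3))  # partial, noisy reconstruction
print(precision_recall(rec, gt, threshold=0.01))
```

Precision penalises spurious geometry while recall penalises missing surface, which is why the two are usually reported together (often combined into an F-score) when ranking reconstruction pipelines.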