Linked Open Data as universal markers for Mobile
Augmented Reality Applications in Cultural heritage
John Aliprantis, Eirini Kalatha, Markos Konstantakis, Kostas Michalakis,
George Caridakis
University of the Aegean,
81100 Mytilene, Greece
Abstract. Many projects have already analyzed the current limitations and challenges of integrating the Linked Open Data (LOD) cloud into mobile augmented reality (MAR) applications for cultural heritage, and have outlined future directions and capabilities. The majority of these works rely on the geo-location of the user or their device, detected by various sensors (GPS – Global Positioning System, accelerometer, camera, etc.) or derived from geo-based linked data, while others use marker-based techniques to link various locations with labels and descriptions of specific geodata. However, in indoor environments (museums, libraries), where accurately tracking the user's position and orientation is challenging due to the lack of valid GPS sensor data, complex and costly technological systems need to be deployed to identify the user's object of interest (OoI). This paper describes a concept based on image identification and matching between frames from the user's camera and images stored on the Europeana platform, which can link the LOD cloud of cultural institutes around Europe with mobile augmented reality applications in cultural heritage without requiring the user's accurate location, and discusses the challenges and future directions of this approach.
Keywords: Linked Open Data, Augmented Reality, Cultural Heritage, Image Matching, Mobile Applications
1 Introduction
Today, cultural heritage institutions such as libraries, archives and museums (LAMs) are eager to find new ways to attract and educate visitors, while also preserving and promoting their cultural heritage. They are therefore turning to solutions like ubiquitous mobile applications and augmented reality techniques in order to enhance the user's perception, deliver a more complete touristic experience, and promote interaction among users or between the user and a cultural artifact.
However, augmented reality applications in culture also have their limitations and drawbacks: their consistent use has made them quite familiar and widely accepted, steadily raising expectations and requirements for new interactive and visual discovery methods. Current applications are rather static and cannot be updated easily, due to the nature of the data that they process and display. Many of them use closed databases of information that are built and accessed only by a single application, while others depend on open but isolated databases, disconnected from others, called channels (like Wikipedia). Nowadays, this is perceived by users as a limitation and drawback, as they demand access to more personalized, dynamic and adaptive information that satisfies their personal interests and desires [1].
A possible solution to this issue is the use of dynamic databases like the Linked Open Data (LOD) cloud, which uses the infrastructure of the World Wide Web to publish and link datasets that can then be examined and processed by applications accessing them, while these datasets are also continuously updated and linked with new assets [2]. Linked Data [3] is one of the most practical parts of the Semantic Web [4], as it is a model for representing and accessing data that is flexible enough to allow adding or changing data sources, linking with other databases, and processing information according to its meaning. As a structured data model, linked data is also readable and editable by search engines and software agents, which increases its importance for data interchange and integration. Despite that, the integration of LOD databases into augmented reality applications is not seamless, as developers have to address further issues, mostly derived from its open-world consumption, leading to trust and data quality assessment concerns [1].
In recent years, many cultural institutes have converted their cultural heritage
information that was previously only used for internal purposes, to linked data in
distinctive datasets which are linked to the LOD cloud, usually with the aid of web
aggregators such as Europeana. In this way, structured data can benefit both the
institution and the larger community by expanding the semantic web and establishing
an institution as a trusted source of high quality data, while also giving the
opportunity to develop new interfaces for users to experience cultural heritage, like
mobile augmented reality applications [5]. Mobile tourist guides, enriched storytelling and digital representations of cultural heritage monuments are only a few of the many applications that have been developed to enhance the user experience during cultural interaction, successfully combining augmented reality techniques with large and open linked data resources. Nevertheless, the majority of these applications depend strongly on the geo-location of the user or on geo-tagged data, as their function relies on identifying points of interest (PoIs) in the user's field of view and adjusting the context-sensitive information displayed on screen from geo-based linked data [2][6].
This research aims at proposing a prototype that uses linked cultural data from cultural institutions (museums, libraries, art galleries) in Europe, which is stored in specific structured data standards (Resource Description Framework – RDF) and is freely available to the public for further processing (open data). Its main focus is cultural artifacts held in indoor, unprepared environments where user tracking is almost impossible. One of the main objectives of this research is the implementation and optimization of an image identification algorithm for mobile devices. Several works present techniques to identify the user's field of view (FOV) by matching frames from the camera against existing image databases [7], while others use point cloud databases to match images with 3D models (2D-to-3D matching) [8]. In the proposed prototype, geographical information from the user's device (GPS sensor) is used only to narrow the number of databases to be searched, by considering the latest sensor value and a certain range around it, thus improving efficiency and accuracy. This procedure not only decreases the computation time of the search algorithm, but also determines the ontologies to be matched by identifying the subject of the nearby cultural institutions. The software then queries the server for images of cultural objects located in this area (using the SPARQL language), and if the image matching against the camera's frames succeeds, the object in the user's field of view is identified and the appropriate description and metadata from the LOD database (the Europeana aggregator) are shown. A key attribute of this prototype is the use of the Europeana API (Application Programming Interface), which allows developers to use these cultural data and implement applications for cultural heritage.
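To illustrate the geo-narrowing step described above, the following is a minimal sketch of filtering candidate institutions by distance from the last known GPS fix; the institution names and coordinates here are hypothetical examples, not data from the prototype.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearby_institutions(last_fix, institutions, radius_km=2.0):
    """Keep only institutions within radius_km of the last known GPS fix."""
    lat, lon = last_fix
    return [name for name, (ilat, ilon) in institutions.items()
            if haversine_km(lat, lon, ilat, ilon) <= radius_km]

# Hypothetical institutions (coordinates are approximate)
institutions = {
    "Acropolis Museum": (37.9684, 23.7285),
    "National Archaeological Museum": (37.9890, 23.7325),
    "Museum of Byzantine Culture (Thessaloniki)": (40.6253, 22.9549),
}
print(nearby_institutions((37.9715, 23.7267), institutions, radius_km=3.0))
```

Only the institutions within the radius remain as candidate datasets, which bounds the number of images the later matching step must process.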
The paper is organized as follows. In Section 2, we present current projects in culture that integrate linked data in augmented reality applications, whose basic characteristic is a location-based approach to displaying their content. Section 3 analyzes the benefits of the LOD – AR integration and describes the issues that must be addressed. In Section 4, we illustrate the proposed concept, define the current issues in the LOD – AR integration that it addresses, and discuss the challenges that we have to overcome. Finally, in Section 5, we discuss our future work based on this concept.
2 Related Works
The integration of Linked Open Data sources into augmented reality applications has emerged as an effective solution to the static datasets of mobile applications, and many projects have already built prototypes. Vert and Vasiu [1] developed a prototype that integrates user-generated linked open data and open government data in a mobile augmented reality tourist guide for Timisoara, Romania. In this model, identification is accomplished with the aid of GPS sensors on users' devices, and PoIs are stored in linked open databases like DBpedia and LinkedGeoData and verified with Google Fusion Tables.
ARCAMA-3D [2] is a mobile platform that overlays the user's surroundings with a 3D representation highlighting the objects of interest (OoIs), allowing users to interact with them and seek more information through the Linked Open Data cloud. This platform also depends strongly on geo-location data and embedded sensors such as GPS and the accelerometer in order to align the 3D content with the physical environment, using methods like the Kalman filter [9]. Furthermore, its data is context-sensitive, as it highlights buildings in different colors if they contain information relevant to the user's interests.
Van Aart et al. [6] envision a mobile tourist guide application able to dynamically combine information, navigation and entertainment. To this end, they propose a mobile application that constructs dynamic walking tours in Amsterdam based on the user's interests and the geo-location of nearby PoIs, whose data is stored in linked datasets like DBpedia and LinkedGeoData. The proposed algorithm is quite efficient in outdoor environments, even though it has to deal with significant discrepancies between coordinates for the same location, but its accuracy might not be sufficient for an indoor augmented reality application.
Hegde et al. [10] argue for the importance of linked data and how it can enrich PoIs in order to enhance the browsing experience in augmented reality applications. They claim that PoIs cannot provide information by themselves, and as long as there are no links to additional information in common augmented reality applications, the offered experience may not satisfy demanding users. Semantically linking additional content to enrich PoIs enables users to discover more information based on their interests.
In conclusion, the above research projects show the potential of using the Linked Open Data cloud as the main data source for a mobile application where information access is based on the user's geo-location. However, there is insufficient research on mobile applications for indoor environments such as museums and libraries, where the integration of LOD is not straightforward, as the user's location and orientation relative to the artifacts are much more difficult to track, especially in an unprepared environment.
3 LOD – AR Integration: Benefits and Issues
As Vert and Vasiu [11] claim in their work on the LOD – AR integration, not only do augmented reality applications benefit from the Linked Open Data cloud, but the reverse is equally true. As already mentioned, integrating linked open data in mobile applications has certain benefits, most of them related to replacing the static and closed classical databases widely used in augmented reality applications; on the other side, the LOD cloud can also profit from the rising amount of content created by MAR applications that can be linked to it, increasing its diversity and size, which in turn favors Linked Data application developers.
In recent years, augmented reality applications have enjoyed enduring popularity and wide acceptance by users, but this growth seems to be reaching its limits, as there are some issues that need to be addressed directly. This is also due to the fact that state-of-the-art augmented reality browsers like Layar, Wikitude, Junaio and Tagwhat are based on the current version of the web (Web 2.0), which is to be replaced by the Semantic Web (or Web of Data, Web 3.0) [4], the new web version that changes the way we conceive of and interact with web data, and therefore affects the functioning of these AR browsers.
Reynolds et al. [12] examine the limitations of current MAR applications with regard to the present web version, and conclude that there are three major issues in the architectures of existing augmented reality applications, as follows:
Selection and integration of data sources. As mentioned before, current MAR applications are based on static and closed databases. As a result, any update to an application's dataset has to be done manually, which is a rather demanding or sometimes impossible task given the massive amount of information that many applications may contain. Furthermore, there is little or no interaction between individual databases, as they are usually designed and constructed exclusively for the needs of particular applications. This means there are no linked datasets (for example, no hyperlinks or symbolic relationships between different but associated data), and developers have to create a new dataset if they want to combine the purposes of two or more applications.
Utilization of contextual information. The increasing amount of information that MAR applications can handle and display nowadays can satisfy even the most demanding users. Meanwhile, a mobile device such as a smartphone or tablet has a small screen that usually cannot display all the necessary information, or worse, may make users feel overwhelmed by the amount of data on their screen. In recent years, though, new mobile devices have been equipped with a range of sensors like GPS and the accelerometer, which facilitate the use of contextual information. Augmented reality applications can use the information from these sensors to infer users' recent activity and preferences, in order to narrow down the data users may require, thus minimizing the possibility of displaying irrelevant or redundant information.
Browsing experience. Web browsers link pages of data through hyperlinks, allowing users to navigate and explore new content based on their interests. Augmented reality browsers and applications do not support such functionality, as their content does not include any links to data outside their own datasets. As a consequence, current AR applications provide a rather fixed experience with regard to the information they can display.
Linked Open Data principles are well suited to addressing the above issues that hinder the ongoing progress of MAR applications. As of August 2017, the LOD cloud includes more than 1100 datasets from various fields such as social networking, science and government data, which are linked with each other [13]. Furthermore, it contains plenty of geo-location data entries, such as those found in datasets like GeoNames and LinkedGeoData, which provide not only the coordinates of many locations but also facts and events that characterize them. Meanwhile, many of the included datasets are linked to DBpedia, which stores general information about very common concepts in structured form, extracted from the data created in Wikipedia.
Integrating linked data in MAR applications can enhance augmented reality by addressing the three issues mentioned above. With linked databases that are also open to updates from developers and users, augmented reality applications are no longer static, as data can be dynamically selected and processed from, in theory, the entire LOD cloud. Furthermore, linked data can better exploit the sensor data from users' devices by specifying the query for the desired data more efficiently, thus enabling the utilization of contextual data. Also, owing to the way linked open data items function, augmented reality applications can display additional data inside the application's view, without requiring the user to close the application and open a new one such as a mobile browser, which enhances the user's browsing experience [11].
Equally, the LOD cloud also benefits from its integration with augmented reality. The increasing amount of information that AR applications produce and handle, such as 3D models, can be linked to the LOD cloud and help it grow in size and diversity, which can be capitalized on by developers as well as other users or linked datasets. Furthermore, linked data applications lack user-friendly interfaces, as they have to display structured data in complex RDF triples; augmented reality applications can thus act as an already familiar interface that allows users to interact with linked data in a way they already know and favor.
Even though this integration has many benefits, some further issues need to be taken into consideration. First of all, the size of the LOD cloud may be an advantage, but it has to be used wisely, especially on mobile devices like smartphones with small screens, as the massive amount of displayable data may upset or confuse the user. Filtering the displayed data according to context and the user's interests can be an effective countermeasure. Also, the fact that linked data is mostly open data, which everyone can update or even extend and link to, raises trust, quality and privacy issues that developers must deal with. Furthermore, queries over linked data are handled by powerful technologies such as RDF and SPARQL, which are also complex and time-consuming. As a result, they may slow down a mobile augmented reality application or, even worse, introduce insurmountable barriers to its design and normal features. Finally, not every linked dataset uses the same vocabulary and ontology, which leads to matching issues, while data may be duplicated, have wrong coordinates, or overlap across different datasets, so developers have to implement more sophisticated queries and algorithms [12, 14].
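To make the SPARQL side of this discussion concrete, the following sketch assembles a query for object images from a given data provider. The property names follow the public Europeana Data Model (EDM) vocabulary, but both the schema details and the endpoint should be treated as assumptions to verify against current documentation.

```python
def build_image_query(provider: str, limit: int = 50) -> str:
    """Build a SPARQL query for object images held by a given data provider.
    Property names follow the Europeana Data Model (EDM); verify them
    against the actual endpoint schema before use."""
    return f"""
PREFIX edm: <http://www.europeana.eu/schemas/edm/>
PREFIX dc:  <http://purl.org/dc/elements/1.1/>

SELECT ?cho ?title ?image WHERE {{
  ?agg edm:aggregatedCHO ?cho ;
       edm:dataProvider "{provider}" ;
       edm:isShownBy ?image .
  ?cho dc:title ?title .
}} LIMIT {limit}
""".strip()

print(build_image_query("Acropolis Museum", limit=10))
```

The query string would then be sent to a SPARQL endpoint; the complexity and latency of such queries is exactly the performance concern raised above.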
In conclusion, linked open data seems to be an effective way to support the growth of augmented reality applications, thanks to its quantity of interlinked information and its dynamic selection and integration of data, while it can in turn benefit from the data produced by AR applications that can be linked to it. Furthermore, this integration answers today's users' expectation of dynamic, adaptive and personalized applications that help them explore and interact with content efficiently.
4 Proposed Concept Overview
This work aims at proposing a new way of integrating linked data into mobile augmented reality applications that guide visitors in a cultural institution. One of its basic features is that it targets indoor environments, or locations where accurate GPS readings are nearly impossible to obtain. As already mentioned, many of the current projects in the LOD – AR field depend on tracking the user's precise location in order to query the appropriate geo-location data to display. That is a major constraint considering that many cultural institutions, such as museums and libraries, are enclosed buildings where it is very difficult to track and link the user's position to an internal PoI. Furthermore, there may not be concrete geo-data for every cultural artifact inside a building, so it is tricky to determine which object the user is interested in, in order to fetch the correct data from the LOD cloud.
So, how can we effectively track the user's location and orientation inside a building and recognize the PoI that the user is probably referring to? Many projects that incorporate augmented reality techniques in indoor environments, and therefore track the user's location among the cultural exhibits, require technical preparation and transformation of the area, with the installation of specialized devices. Wireless technologies and indoor GPS, combined with state-of-the-art smartphones carrying many sensors, are quite accurate but rather costly and complex to implement widely [15, 16]. RFID (Radio-Frequency Identification) and BLE (Bluetooth Low Energy) technologies can also be effective and viable solutions, but there are certain technological and practical aspects that need to be addressed [17, 18]. Furthermore, QR codes and image tracking are the lowest-cost and simplest methods, but they require additional actions from users, which may be frustrating [19]. Finally, modern mobile devices can use their powerful cameras and processing power to perform large-scale indoor 3D scene reconstruction based on cloud images and sensor data [20]. These techniques track the user's location in indoor environments effectively, but they all share one vital disadvantage: the need for a pre-prepared environment with complex and sometimes costly technological systems.
In our proposed concept, we implement an indoor tracking method based on image recognition and matching between frames retrieved from the mobile device's camera and linked open datasets. The basic idea is that accurately tracking the user inside a room is unnecessary once we identify which artifact the user is currently examining through the camera. Taking advantage of the images usually associated with a cultural artifact in the LOD cloud, we can compare the frames taken by the camera with the appropriate image dataset from the cloud. Upon successful identification, the system queries for the rest of the information about the tracked object, along with its links from the linked data, so that the user, although in an indoor environment, has access to the LOD cloud through the PoI in front of them.
4.1 Architecture Description
The architecture of the proposed concept involves four key components: the LOD cloud, the augmented reality interface, the user's mobile device, and finally the cloud database of the application. The data flow and function sequence are depicted in Fig. 1.
In the first level of the architecture, the user's mobile device connects to the LOD cloud in order to query for images of cultural artifacts located in institutions near the user's location. This procedure is accomplished with the aid of linked data online portals such as the Europeana aggregator, which allows free access to many European cultural databases. The selected images are stored in the application's cloud database, where they are converted to the appropriate format with characteristic points so that they can afterwards be compared with the camera's frames.
After the application's database is formed, the second level involves the cooperation of the mobile device's camera and the application's database in order to identify the PoI that the user is currently looking at. To achieve this, specialized algorithms that detect the same patterns in images must be implemented, taking into account various conditions that may alter the current appearance of the artifacts relative to the images stored in the LOD cloud.
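In practice this matching would be done with a computer vision library (e.g., ORB or SIFT features in OpenCV). As a library-free illustration of the core idea only, the toy sketch below matches binary feature descriptors by Hamming distance and accepts a match only when it clearly beats the runner-up (Lowe's ratio test); the descriptors here are made-up examples.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(frame_desc, stored_desc, ratio=0.75):
    """For each frame descriptor, find the closest stored descriptor and
    accept it only if it is clearly better than the second best
    (Lowe's ratio test). Returns (frame_idx, stored_idx) pairs."""
    matches = []
    for i, d in enumerate(frame_desc):
        dists = sorted((hamming(d, s), j) for j, s in enumerate(stored_desc))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 8-bit descriptors: two from a camera frame, three from the database
frame  = [0b10110010, 0b01011100]
stored = [0b10110011, 0b11100001, 0b01011000]
print(match_descriptors(frame, stored))  # → [(0, 0), (1, 2)]
```

A real pipeline would extract hundreds of such descriptors per image and declare a PoI identified when enough consistent matches accumulate.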
Finally, after the PoI has been identified, the user's device queries the LOD database for more information about the cultural artifact and its links, and displays it on the user's screen with augmented reality techniques. The user can then interact with the information shown as digital content on the screen and navigate the LOD cloud according to their interests.
4.2 Use Case Scenario
Fig. 1. The proposed architecture of the concept
Fig. 2. Proposed framework – LOD and AR integration in indoor environments
The proposed framework includes five major steps, depicted in Fig. 2. It requires mobile user devices like smartphones and tablets, and is functional only at cultural institutions that have shared their cultural heritage as linked open data in the LOD cloud, thereby making it available to the public through online aggregators. Once the user is inside the building and starts the application on their mobile device, the proposed framework follows these steps:
The software retrieves the last known GPS sensor value, which indicates the city or region the user is currently exploring. Since GPS is not available inside a building, we cannot track the user's location, but we can still narrow down the cultural institutes that the user may be visiting. That helps minimize the linked datasets that need to be queried for the forthcoming image frames. In the best case, we can assume the exact building the user is currently in, and so retrieve all the appropriate images from the corresponding institution's cultural heritage; in the worst case, we can still extract the city the user is visiting and the available linked datasets in that area, though this will certainly affect the computation time of the image-matching function later on.
After identifying the cultural institutes that are near the user and maintain their heritage as structured data in the LOD cloud, the framework queries for images representing their artifacts, and the result of the query is stored in the framework's cloud-based database. For the query, we can use the powerful SPARQL language and the Europeana API, since the Europeana online platform is the EU digital aggregator for cultural heritage [21].
The next step is image processing and preparation for the matching function. Images stored in the system's database must be in a specific form that allows the software to compare them and decide whether two images are the same. This is usually achieved by detecting the characteristic points of an image [22] and searching for the same patterns across many images.
In this step, the framework compares the frames taken by the user's camera with the images from its database, and in the case of a match it detects the artifact that the user is interested in. This seems to be the most demanding function of the whole procedure, as it requires a lot of processing power and the use of complex software. The frame rate and the latency of the result are two issues that need to be addressed, but with the necessary filters this part of the procedure can be quite efficient.
Finally, once the framework specifies the PoI from the LOD cloud that the user is
currently interested in, it uses augmented reality techniques to display more
information about it. Furthermore, the user can seek more data about the current
object or relevant artifacts by using the links from the LOD cloud.
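The image-fetching step above could be served through the Europeana Search API rather than raw SPARQL. The sketch below only assembles the request URL; the endpoint and parameter names (`wskey`, `query`, `qf`, `rows`) follow the published Search API, but should be checked against the current documentation, and the institution name is a hypothetical example.

```python
from urllib.parse import urlencode

# Endpoint as documented for the Europeana Search API; verify before use
EUROPEANA_SEARCH = "https://api.europeana.eu/record/v2/search.json"

def build_search_url(api_key: str, institution: str, rows: int = 20) -> str:
    """Assemble a Europeana Search API request for image-type objects
    held by a given institution."""
    params = {
        "wskey": api_key,                              # the developer's API key
        "query": f'DATA_PROVIDER:"{institution}"',     # restrict to one institution
        "qf": "TYPE:IMAGE",                            # only records with images
        "rows": rows,                                  # page size
    }
    return f"{EUROPEANA_SEARCH}?{urlencode(params)}"

print(build_search_url("demo-key", "Acropolis Museum", rows=5))
```

Fetching and parsing the JSON response would then populate the framework's cloud database with candidate marker images.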
The basic idea of the above framework is that it takes advantage of the large number of images in the LOD cloud and treats them as image "markers". A marker in augmented reality is an image with specific characteristics that forms a distinctive pattern easily recognizable by the software. Upon successful detection, the system usually reacts by displaying a message or digital content, while other systems simply link users to another website, as QR codes do. All the uploaded images associated with cultural artifacts are potential markers, and so these images can be tracked through image recognition.
Furthermore, the proposed framework needs to handle possible data overload on the user's screen, given the significant size of the LOD cloud and the small screen of a mobile device, as already mentioned. That is why context-awareness techniques will be applied to display information according to the user's personal interests and profile, based also on the metadata of the linked datasets.
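One simple form such interest-based filtering could take is ranking results by the overlap between their subject metadata and the user's declared interests, then capping how many are shown; the item titles and subject tags below are hypothetical, not taken from any dataset.

```python
def filter_by_interests(items, interests, max_items=5):
    """Rank LOD results by overlap between their subject tags and the
    user's declared interests, and cap the number shown on screen."""
    scored = [(len(set(item["subjects"]) & set(interests)), item) for item in items]
    scored = [(s, it) for s, it in scored if s > 0]          # drop irrelevant items
    scored.sort(key=lambda pair: pair[0], reverse=True)      # best overlap first
    return [it["title"] for _, it in scored[:max_items]]

# Hypothetical query results with subject metadata
items = [
    {"title": "Bronze helmet",   "subjects": ["military", "bronze age"]},
    {"title": "Red-figure vase", "subjects": ["pottery", "classical"]},
    {"title": "Marble kouros",   "subjects": ["sculpture", "archaic"]},
]
print(filter_by_interests(items, ["pottery", "sculpture"], max_items=2))
```

A production system would of course use richer profile models, but even this simple overlap score keeps the on-screen content bounded and relevant.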
In conclusion, the proposed framework aims at integrating linked data into augmented reality applications for indoor environments in a way that benefits both. On the one hand, there will be a cross-functional AR prototype for all cultural institutions that share their data in the LOD cloud, without any preparation of the environment (regionally restricted to Europe because of the limitations of Europeana as the chosen aggregator). On the other hand, the proposed user interface will visualize LOD with AR techniques, providing more familiar and friendly ways of interacting with linked data.
4.3 Challenges and Limitations
Like the majority of applications that integrate linked data with augmented reality techniques, the proposed framework has its own challenges. First of all, the amount of data that could be displayed on the user's screen is very large, as it is drawn from the LOD cloud, so context-awareness methods must be implemented in order to personalize the digital content and links that will be displayed. Also, the heterogeneity of the linked datasets must be addressed with specialized algorithms that consider the differences between languages and ontologies, coupled with structural ambiguities and misclassification of data.
Furthermore, this project faces a few more challenges related to its wide range of capabilities. As already mentioned, GPS sensor values are not valid in indoor environments and thus are not taken into account when tracking the user's position inside a building. But previous sensor values are quite important in order to minimize the linked datasets that should be queried and, therefore, the number of images that need to be processed and compared. A potentially incorrect GPS value may lead to the wrong datasets, or to too many images, causing endless processing time without the right outcome. A disabled or missing sensor on the mobile device will have the same effect.
Also, the computation time and the processing power needed to perform the matching function between the stored images and the camera's frames may prove a very challenging issue. That is why the GPS value is important for reducing the amount of processed data. Moreover, the matching function, which compares the characteristic points of two images, requires modern mobile devices capable of executing complex algorithms in limited time.
Moreover, the effectiveness of the proposed framework depends on the quality and reliability of the images that depict the cultural artifacts and are uploaded to the LOD cloud. Factors like brightness, the angle of the image, changes of the artifact's location, and background differences may affect the outcome of the matching function, thus misleading the user with false information.
5 Conclusion and Future Work
In this paper, we introduced a new way of integrating linked open datasets into augmented reality applications for cultural heritage institutions. We argue that images stored in the LOD cloud can serve as "markers" for augmented reality tracking techniques, and that we can exploit this feature to track the user's PoI in an indoor environment. Considering that GPS sensors are not fully functional inside a building, and that indoor tracking is otherwise accomplished only by preparing the corresponding room with complex and costly equipment, we present a framework that can still recognize the cultural artifact the user is currently interested in by using image recognition techniques. The proposed method matches images from the LOD cloud related to cultural artifacts with frames from the user's mobile device camera and, after successful detection, displays the information and links from the linked dataset. Although it presents challenges, such as the large amount of computing power required for the matching function, it has considerable potential to become a universal augmented reality tool that incorporates linked open data functionality.
Our next steps include a review of existing algorithms for matching two images and an
evaluation of their effectiveness for our project. We also plan to design the
application's database so as to minimize the computation time of the tracking
process, a critical issue for the proposed system. Finally, we plan to develop a
prototype that encompasses the promised functionality and successfully merges LOD and
AR in a mobile application for a selected cultural institution.