Linked Open Data as Universal Markers for Mobile
Augmented Reality Applications in Cultural Heritage
John Aliprantis, Eirini Kalatha, Markos Konstantakis, Kostas Michalakis,
University of the Aegean,
81100 Mytilene, Greece
email@example.com, firstname.lastname@example.org, email@example.com,
Abstract. Many projects have already analyzed the current limitations and
challenges of integrating the Linked Open Data (LOD) cloud in mobile
augmented reality (MAR) applications for cultural heritage, and have outlined
future directions and capabilities. The majority of these works rely on the
geo-location of the user or his device as detected by various sensors (GPS –
Global Positioning System, accelerometer, camera, etc.) or on geo-based linked
data, while others use marker-based techniques to link various locations with
labels and descriptions of specific geodata. But in indoor environments
(museums, libraries), where tracking the user's accurate position and orientation
is challenging due to the lack of valid GPS sensor data, complex and costly
technological systems need to be implemented to identify the user's OoI
(Object of Interest). This paper describes a concept based on image
identification and matching between frames from the user's camera and stored
images from the Europeana platform, which can link the LOD cloud of cultural
institutes around Europe with mobile augmented reality applications in cultural
heritage without needing the user's accurate location, and discusses the
challenges and future directions of this approach.
Keywords: Linked Open Data · Augmented Reality · Cultural
Heritage · Image Matching · Mobile Applications
1 Introduction
Today, cultural heritage institutions like libraries, archives and museums (LAMs)
are eager to find new ways to attract and educate new visitors, while also preserving
and promoting their cultural heritage. Therefore, they are turning to solutions like
ubiquitous mobile applications and augmented reality techniques in order to enhance
user’s perception, deliver a more complete touristic experience and promote
interaction among users or between user and cultural artifact.
However, augmented reality applications in culture also have their limitations
and drawbacks: their consistent use has made them familiar and widely
accepted, raising ever higher expectations and requirements for new
interactive and visual discovery methods. Current applications are rather
static and cannot be updated easily, due to the nature of the data that they
process and display. Many of them use closed databases of information that are built
and accessed by a single application only, while others depend on an open but
isolated database disconnected from others, called a channel (e.g. Wikipedia).
Nowadays, this is perceived by users as a limitation and drawback, as they demand
access to more personalized, dynamic and adaptive information that satisfies
their personal interests and desires.
A possible solution to this issue is the use of dynamic databases like the Linked
Open Data (LOD) Cloud, which uses the infrastructure of the World Wide Web to
publish and link datasets that can then be examined and processed by applications
accessing them, while these datasets are continuously updated and linked with
new assets. Linked Data is one of the most practical parts of the Semantic
Web, as it is a model for representing and accessing data that is flexible enough
to allow adding or changing data sources, linking with other databases and data,
and processing information according to its meaning. As a structured data model,
linked data is also readable and editable by search engines and software agents,
which increases its importance for data interchange and integration. Despite that,
the integration of LOD databases into augmented reality applications is not seamless,
as developers have to address further issues, mostly derived from its open-world
consumption, leading to trust issues and data quality assessment.
In recent years, many cultural institutes have converted the cultural heritage
information that was previously only used for internal purposes into linked data
in distinct datasets linked to the LOD cloud, usually with the aid of web
aggregators such as Europeana. In this way, structured data can benefit both the
institution and the larger community by expanding the semantic web and establishing
the institution as a trusted source of high-quality data, while also giving the
opportunity to develop new interfaces for users to experience cultural heritage,
like mobile augmented reality applications. Mobile tourist guides, enriched
storytelling and digital representation of cultural heritage monuments are only a
few of the many applications that have been developed to enhance user experience
during cultural interaction, successfully combining augmented reality
techniques with large and open linked data resources. Nevertheless, the majority
of these applications depend strongly on the geo-location of the user or on
geo-tagged data, as their function relies on identifying points of interest (PoIs)
in the user's field of view and adjusting the context-sensitive information to be
displayed on screen from geo-based linked data.
The current research proposes a prototype that uses linked cultural data from
cultural institutions (museums, libraries, art galleries) in Europe, stored in
specific structured data standards (Resource Description Framework – RDF) and
freely available to the public for further processing (open data). Its main focus
is cultural artifacts stored in indoor, unprepared environments where user
tracking is almost impossible. One of the main objectives of this research is the
implementation and optimization of the image identification algorithm for mobile
devices. Several works present various techniques to identify the field of view
(FOV) of the user by matching frames from the camera to existing image databases,
while others use point cloud databases to match images with 3D models
(2D-to-3D matching). In the proposed prototype, the geographical information of the
user's device (GPS sensor) is only used to narrow the number of possible
databases to be searched, by considering the latest value of the sensor and a
certain range around it, thus improving efficiency and accuracy. This procedure
not only decreases the computation time of the search algorithm, but also
determines the ontologies to be matched by identifying the subject of the nearby
cultural institutions. The software then queries the server for images of cultural
objects located in this area (using the SPARQL language), and if the image matching
against the camera's frames is successful, the object in the user's field of vision
is identified and the appropriate description and metadata from the LOD database
(Europeana aggregator) are shown. A key attribute of this prototype is the use of
the Europeana API (Application Programming Interface), which allows developers to
use these cultural data and implement applications for cultural heritage.
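As a sketch of how the GPS narrowing described above could translate into a request, the following snippet builds a Europeana Search API URL restricted to a bounding box around the last known position. The geographic facet names (`pl_wgs84_pos_lat`/`pl_wgs84_pos_long`) and the filter layout are assumptions for illustration and should be checked against the Europeana API documentation.

```python
from urllib.parse import urlencode

EUROPEANA_SEARCH = "https://api.europeana.eu/record/v2/search.json"

def build_image_query(api_key, lat, lon, radius_deg=0.05, media_type="IMAGE"):
    """Build a Europeana Search API URL for image objects near a GPS fix.

    The bounding-box facet names below are assumptions made for this
    sketch; consult the Europeana API reference for the exact fields.
    """
    qf = [
        f"pl_wgs84_pos_lat:[{lat - radius_deg} TO {lat + radius_deg}]",
        f"pl_wgs84_pos_long:[{lon - radius_deg} TO {lon + radius_deg}]",
        f"TYPE:{media_type}",
    ]
    params = [("wskey", api_key), ("query", "*"), ("rows", 50)]
    params += [("qf", f) for f in qf]   # one qf parameter per filter
    return EUROPEANA_SEARCH + "?" + urlencode(params)

# hypothetical call for the Mytilene area
url = build_image_query("demo-key", 39.1, 26.55)
```

The resulting URL would then be fetched over HTTP and the JSON response parsed for image links of nearby cultural objects.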
The paper is organized as follows. In Section 2, we present current projects in
culture that integrate linked data in augmented reality applications, whose basic
characteristic is the location-based approach to the display of their content.
Section 3 analyzes the benefits of the LOD – AR integration and describes the
issues that must be addressed. In Section 4, we illustrate the proposed concept
and define the current issues in the LOD – AR integration that it addresses and
the challenges that we have to overcome. Finally, in Section 5, we discuss our
future work based on this concept.
2 Related Works
The integration of Linked Open Data sources into augmented reality applications has
emerged as an effective solution to the static datasets of mobile applications,
with many projects having already built prototypes. Vert and Vasiu developed a
prototype that integrates user-generated linked open data and open government data
in a mobile augmented reality tourist guide in Timisoara, Romania. In this model,
the identification of PoIs is accomplished with the aid of GPS sensors on users'
devices, with PoIs stored in linked open databases like DBpedia and LinkedGeoData
and verified with Google Fusion Tables.
ARCAMA-3D is a mobile platform that overlays the user's surroundings with a 3D
representation that highlights the objects of interest (OoIs), allowing users to
interact with them and seek more information through the Linked Open Data
cloud. This platform is also strongly dependent on geo-location data and embedded
sensors like GPS and accelerometer in order to align the 3D content with the
physical environment, also using methods like the Kalman filter. Furthermore, its
data is context-sensitive, as it highlights buildings in different colors if they
contain information relevant to the user's interests.
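The Kalman filter mentioned above can be illustrated with a minimal one-dimensional sketch that smooths a noisy sensor reading (e.g. one GPS coordinate). Real alignment systems use multi-dimensional state vectors, but the predict/update cycle is the same; the noise parameters below are illustrative assumptions.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal 1D Kalman filter: constant-state model with process noise q
    and measurement noise r. Returns the filtered estimate after each step."""
    x, p = measurements[0], 1.0   # initial state and error covariance
    out = []
    for z in measurements:
        p += q                    # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain: trust in the new measurement
        x += k * (z - x)          # update the state with the innovation
        p *= (1 - k)              # shrink covariance after the update
        out.append(x)
    return out
```

For a noisy reading around a true value of 5.0, the filtered estimate settles close to 5.0 while damping the measurement jitter.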
Van Aart et al. envision a mobile tourist guide application that can
dynamically combine information, navigation and entertainment. To this end, they
propose a mobile application which constructs dynamic walking tours in Amsterdam
based on the user's interests and the geo-location of nearby PoIs, whose data are
stored in linked datasets like DBpedia and LinkedGeoData. The proposed algorithm
is quite efficient in outdoor environments, even though it has to deal with the
significant discrepancies that exist between coordinates for the same location,
but its accuracy might not be sufficient for an indoor augmented reality
application.
Hegde et al. argue for the importance of linked data and how it can enrich
PoIs in order to enhance the browsing experience in augmented reality applications.
They claim that PoIs cannot give information by themselves, and as long as there
are no links to additional information in common augmented reality applications,
the offered experience may not satisfy demanding users. Semantically linking
additional content to enrich PoIs enables users to discover more information based
on their interests.
In conclusion, the above research projects show the potential of using the Linked
Open Data cloud as the main source dataset for a mobile application where
information access is based on the geo-location of the user. However, there is
insufficient research on mobile applications for indoor environments such as
museums and libraries, where the integration of LOD is not straightforward, as the
user's location and orientation with respect to the artifacts is much more
difficult to track, especially in an unprepared environment.
3 LOD – AR Integration: Benefits and Issues
As Vert and Vasiu claim in their work on the LOD – AR integration, not only do
augmented reality applications benefit from the Linked Open Data cloud, but the
reverse is equally important. As already mentioned, integrating linked open data in
mobile applications has certain benefits, most of them related to the static and
closed classical databases widely used in augmented reality applications; on the
other side, the LOD cloud can also profit from the growing amount of content
created by MAR applications that can be linked to it, increasing its diversity and
size, which in turn favors Linked Data application developers.
In recent years, augmented reality applications have enjoyed enduring popularity
and wide acceptance by users, but this seems to be reaching its limits, as there
are issues which need to be addressed directly. This is also due to the fact that
state-of-the-art Augmented Reality browsers like Layar, Wikitude, Junaio and
Tagwhat are based on the current version of the web (Web 2.0), which is to be
succeeded by the Semantic Web (or Web of Data, Web 3.0), the new web version
that changes the way we conceive and interact with web data, and therefore affects
the functioning of these AR browsers.
Reynolds et al. examine the limitations of current MAR applications with
regard to the present web version, and conclude that there are three major issues
in the architectures of existing augmented reality applications, as follows:
● Selection and integration of data sources. As mentioned before, current
MAR applications are based on static and closed databases. As a result, any
update to an application's dataset has to be done manually, which is a rather
demanding or sometimes impossible task, considering the massive amount of
information that many applications may contain. Furthermore, there is little or no
interaction between individual databases, as they are usually designed
and constructed exclusively for the needs of particular applications. That means
there are no linked datasets (for example, there are no hyperlinks or symbolic
relationships between different but associated data), and developers have to
create a new dataset if they want to combine the purposes of two or more
applications.
● Utilization of contextual information. The increasing amount of information
that MAR applications can handle and display nowadays can satisfy even the
most demanding users. Meanwhile, a mobile device such as a smartphone or tablet
has a small screen that usually cannot display all the necessary information, or,
even worse, may cause users to feel overwhelmed by the amount of data that
appears on their screen. In recent years though, new mobile devices are equipped
with a range of sensors like GPS and accelerometer, which facilitate the use of
contextual information. Augmented reality applications can use the information
taken from these sensors to estimate users' recent activity and tendencies, in
order to specify the data that users may require, thus minimizing the
possibility of displaying irrelevant or redundant information.
● Browsing experience. Web browsers link pages of data through hyperlinks,
allowing users to navigate and explore new content based on their interests.
Augmented reality browsers and applications don't support such functionality, as
their content doesn't include any links to data outside their datasets. As a
consequence, current AR applications provide a rather fixed experience regarding
the information that they can display.
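To make the contextual-information point concrete, the following hypothetical sketch uses the device's position and compass heading to keep only the PoIs that fall inside the camera's field of view. The bearing uses a flat-earth approximation, which is adequate at city scale; the field-of-view angle is an illustrative assumption.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate compass bearing (degrees) from point 1 to point 2,
    using a flat-earth approximation adequate at city scale."""
    dy = lat2 - lat1
    dx = (lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(dx, dy)) % 360

def pois_in_view(user, heading_deg, pois, fov_deg=60):
    """Keep only PoIs whose bearing falls inside the camera's field of view.

    `user` is (lat, lon); `pois` is a list of (name, lat, lon) tuples."""
    visible = []
    for name, lat, lon in pois:
        diff = abs((bearing_deg(user[0], user[1], lat, lon)
                    - heading_deg + 180) % 360 - 180)  # shortest angular distance
        if diff <= fov_deg / 2:
            visible.append(name)
    return visible
```

With the device facing north, only a PoI to the north survives the filter, sparing the small screen from irrelevant entries.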
Linked Open Data principles are well-suited to addressing the above issues that
hinder the ongoing progress of MAR applications. To date (August 2017), the
LOD cloud includes more than 1100 datasets from various fields such as social
networking, science and government data, which are linked with each other.
Furthermore, it contains plenty of geo-location data entries, such as those found
in datasets like GeoNames and LinkedGeoData, which provide not only the
coordinates of many locations but also facts and events that characterize them.
Meanwhile, many of the included datasets are linked to DBpedia, which stores
general information about very common concepts in structured form, extracted from
the data created collaboratively in Wikipedia.
Integrating linked data in MAR applications can enhance augmented reality by
addressing the three issues mentioned above. With linked databases that are also
open to updates from developers and users, augmented reality applications are no
longer static, as data can theoretically be dynamically selected and processed
from the entire LOD cloud. Furthermore, linked data can better exploit the sensor
data from users' devices by specifying the query for the desired data more
efficiently, thus enabling the utilization of contextual data. Also, due to the
functioning of linked open data items, augmented reality applications can display
additional data inside the view of the application, without requiring the user to
close the application and open a new one like a mobile browser, which enhances the
user's browsing experience.
Equally, the LOD cloud also benefits from its integration with augmented reality.
The increasing amount of information that AR applications produce and handle, such
as 3D models, can be linked to the LOD cloud and help it grow in size and
diversity, which can be capitalized on by developers as well as other users or
linked datasets. Furthermore, linked data applications lack user-friendly
interfaces, as they have to display structured data in complex RDF triples;
augmented reality applications can therefore serve as an already familiar
interface which allows users to interact with linked data in a way they already
know and favor.
Even though this integration has many benefits, there are some further issues that
need to be taken into consideration. First of all, the size of the LOD cloud may
be an advantage, but it has to be used wisely, especially on mobile devices
like smartphones with small screens, as the massive amount of data that can be
displayed may upset or confuse the user. Filtering the displayed data according to
the context and the user's interests can be an effective countermeasure. Also, the
fact that linked data are mostly open data, which everyone can update or even
extend and link their own data to, raises trust, quality and privacy issues that
developers must deal with. Furthermore, queries over linked data are handled by
powerful technologies such as RDF and SPARQL, which also involve complex and
time-consuming processes. As a result, they may slow down the functioning of a
mobile augmented reality application or, even worse, introduce insurmountable
barriers to its design and normal features. Finally, not every linked dataset has
the same vocabulary and ontology, which leads to matching issues, while data may
be duplicated, have wrong coordinates or overlap across different datasets, so
developers have to implement more sophisticated queries and algorithms [12, 14].
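A simple countermeasure against duplicated or slightly misplaced entries across datasets is to collapse records whose normalized labels match and whose coordinates lie within a small radius. The sketch below shows the idea; the 50 m threshold and tuple layout are illustrative assumptions, not a production entity-resolution algorithm.

```python
import math

def _close(a, b, meters=50):
    """Rough metric distance check between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000          # ~111 km per degree of latitude
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon) <= meters

def dedupe_pois(pois):
    """Collapse records from different linked datasets that describe the
    same PoI: same normalized label and coordinates within ~50 m.

    `pois` is a list of (label, lat, lon) tuples."""
    merged = []
    for label, lat, lon in pois:
        key = label.strip().lower()
        for m in merged:
            if m[0] == key and _close((m[1], m[2]), (lat, lon)):
                break                        # duplicate of an existing record
        else:
            merged.append((key, lat, lon))
    return merged
```

Two records of the same castle with coordinates a few meters apart collapse into one, while a different museum survives as its own entry.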
In conclusion, linked open data seems to be an effective way to enhance the
emergence of augmented reality applications, due to its quantity of interlinked
information and its dynamic selection and integration of data, while it can in
turn benefit from the data produced by AR applications that can be linked to it.
Furthermore, this integration answers the expectation of today's users to be able
to use dynamic, adaptive and personalized applications that help them explore and
interact with content efficiently.
4 Proposed Concept Overview
This work proposes a new way of integrating linked data in mobile
augmented reality applications that guide visitors in a cultural institution. One
of its basic features is that it targets indoor environments, i.e. locations where
accurate GPS sensor values are nearly impossible to obtain. We already mentioned
that many of the current projects in the LOD – AR field depend on tracking the
user's precise location, in order to query the appropriate geo-location data to be
displayed. That is a major constraint if we consider that many cultural
institutions such as museums and libraries are closed buildings where it is very
difficult to track and link the user's position to an internal PoI. Furthermore,
there may not be concrete geo-data for every cultural artifact inside a building,
so it is tricky to determine which object the user is interested in, in order to
fetch the correct data from the LOD cloud.
So, how can we track the user's location and orientation inside a building
effectively and recognize the PoI that the user is probably referring to? Many
projects that incorporate augmented reality techniques in indoor environments, and
therefore track the user's location among the cultural exhibits, require the
technical preparation and transformation of the area, with the installation of
specialized devices. Wireless technologies and indoor GPS, cooperating with
state-of-the-art smartphones equipped with many sensors, are quite accurate
methods but are rather costly and complex to implement widely [15, 16]. RFID
(Radio-Frequency Identification) and BLE (Bluetooth Low Energy) technologies can
also be effective and viable solutions, but there are certain technological and
practical aspects that need to be addressed [17, 18]. Furthermore, QR codes and
image tracking are the lowest-cost and simplest methods, but they require
additional actions from users, which may be frustrating. Finally, modern mobile
devices have the capability to use their powerful camera and processing power to
perform large-scale indoor 3D scene reconstruction based on cloud images and
sensor data. These techniques track the user's location in indoor environments
effectively, but they all share one vital disadvantage: the need for a
pre-prepared environment with complex and sometimes costly equipment.
In our proposed concept we implement an indoor tracking method based on image
recognition and matching between frames retrieved from the mobile device's camera
and linked open datasets. The basic idea is that accurately tracking the user
inside a room is not necessary once we identify which artifact the user is
currently examining through the camera. Taking advantage of the images that are
usually associated with a cultural artifact in the LOD cloud, we can compare the
frames taken by the camera with the appropriate image dataset from the cloud. Upon
successful identification, the system queries for the rest of the information
about the tracked object, as well as its links from the linked data, so that the
user, although in an indoor environment, has access to the LOD cloud through the
PoI in front of them.
4.1 Architecture description
The architecture of the proposed concept involves four key components: the LOD
cloud, the augmented reality interface, the user's mobile device and finally the
cloud database of the application. The data flow and function sequence are
pictured in the corresponding figure.
In the first level of the architecture, the user's mobile device connects to the
LOD cloud in order to query for images of cultural artifacts located in
institutions near the user's location. This procedure is accomplished with the aid
of linked data online portals such as the Europeana aggregator, which allows free
access to many European cultural databases. The selected images are stored in the
application's cloud database, where they are converted to the appropriate format
with characteristic points, so that they can afterwards be compared with the
camera's frames.
After the application's database is formed, the second level involves the
cooperation of the mobile device's camera and the application's database in order
to identify the PoI that the user is currently looking at. To achieve this,
specialized algorithms that detect the same patterns in images must be
implemented, taking into account various conditions that may alter current images
of artifacts relative to the images stored in the LOD cloud.
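One common pattern-detection approach is to match local feature descriptors with Lowe's ratio test: a frame descriptor counts as a match only when its nearest stored descriptor is clearly closer than the second nearest. The sketch below applies the test to toy two-dimensional descriptors; a real system would use SIFT- or ORB-style descriptors extracted from the images, and the thresholds here are illustrative assumptions.

```python
import math

def _dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(frame_desc, stored_desc, ratio=0.75):
    """Count descriptor matches that pass Lowe's ratio test: the nearest
    stored descriptor must be clearly closer than the second nearest."""
    matches = 0
    for d in frame_desc:
        dists = sorted(_dist(d, s) for s in stored_desc)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches

def is_same_artifact(frame_desc, stored_desc, min_matches=3):
    """Declare the frame and the stored image the same artifact when
    enough descriptors match unambiguously."""
    return ratio_test_matches(frame_desc, stored_desc) >= min_matches
```

Descriptors that sit close to distinct stored points pass the test, while ambiguous descriptors equidistant from several stored points are rejected, which is exactly what makes the ratio test robust against repetitive patterns.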
Finally, after the PoI has been identified, the user's device queries the LOD
database for more information about the cultural artifact and its links, and
displays it on the user's screen with augmented reality techniques. The user can
then interact with the information shown as digital content on the screen and
navigate the LOD cloud according to his interests.
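This metadata lookup could be expressed as a SPARQL query along the following lines. The Dublin Core property names are placeholders chosen for illustration; the actual Europeana Data Model graph shape differs and should be consulted before implementation.

```python
def metadata_query(artifact_uri):
    """Build an illustrative SPARQL query for an identified artifact's
    title, description and outgoing links. The dc:/owl: properties are
    placeholders; the real EDM graph uses a richer proxy structure."""
    return f"""
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?title ?description ?link WHERE {{
      <{artifact_uri}> dc:title ?title .
      OPTIONAL {{ <{artifact_uri}> dc:description ?description }}
      OPTIONAL {{ <{artifact_uri}> owl:sameAs ?link }}
    }}"""
```

The query string would then be sent to a SPARQL endpoint, and the bindings rendered as the augmented overlay on the user's screen.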
4.2 Use case scenario
Fig. 2. Proposed framework - LOD and AR integration in indoor environments
Fig. 1. The proposed architecture of the concept
The proposed framework includes five major steps, depicted in Fig. 1. It requires
mobile user devices like smartphones and tablets, and is functional only at
cultural institutions that have shared their cultural heritage as linked open data
in the LOD cloud, hence making it available to the public through online
aggregators. Once the user is inside the building and starts the application on a
mobile device, the proposed framework follows these steps:
1. The software retraces the last known GPS sensor value, which indicates the city
or region that the user is currently exploring. Since GPS is not available inside
a building, we cannot track the user's location, but we can still narrow down the
possible cultural institutes that the user may be visiting. That helps us minimize
the linked datasets that need to be queried for the forthcoming image frames. In
the best case, we can assume the exact building the user may currently be in, so
we can fetch all the appropriate images from the corresponding cultural heritage
of the institution; in the worst case, we can still extract the city the user is
visiting and the available linked datasets in this area, but this will surely
affect the computation time of the image matching function later on.
2. After identifying the cultural institutes that are near the user and maintain
their heritage as structured data in the LOD cloud, the framework queries for
images that represent their artifacts, and the result of the query is stored in
the cloud-based database of the framework. For the query, we can use the powerful
SPARQL language and the Europeana API, since the Europeana online platform is the
EU digital aggregator for cultural heritage.
3. The next step is the image processing and preparation for the matching
function. Images stored in the system's database must be in a specific form which
allows the software to compare and decide whether two images are the same. This is
usually achieved by detecting the characteristic points of an image and searching
for the same patterns among many images.
4. In this step, the framework compares the frames taken by the user's camera with
the images from its database, and in the case of a match it detects the artifact
that the user is interested in. This seems to be the most demanding function of
the whole procedure, as it requires a lot of processing power and the use of
complex software. Frame rate and latency of the result are two issues that need to
be addressed, but with the necessary filters this part of the procedure can be
quite efficient.
5. Finally, once the framework specifies the PoI from the LOD cloud that the user
is currently interested in, it uses augmented reality techniques to display more
information about it. Furthermore, the user can seek more data about the current
object or relevant artifacts by using the links from the LOD cloud.
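The five steps above can be wired together as in the following sketch, where every step is an injected callable (stubbed here with lambdas) so that real Europeana queries, feature extraction and AR rendering could be substituted later. All names and the tuple shapes are assumptions made for this illustration.

```python
def run_pipeline(last_gps, get_nearby_datasets, fetch_images,
                 extract_features, match, fetch_metadata):
    """Wire the five framework steps together. Each step is passed in as a
    callable so real implementations can replace the stubs used below."""
    datasets = get_nearby_datasets(last_gps)            # step 1: narrow by GPS
    images = fetch_images(datasets)                     # step 2: query for images
    index = {img_id: extract_features(img)              # step 3: prepare features
             for img_id, img in images}

    def on_frame(frame):                                # steps 4-5, per camera frame
        feats = extract_features(frame)
        for img_id, stored in index.items():
            if match(feats, stored):
                return fetch_metadata(img_id)           # metadata + links to display
        return None                                     # no artifact recognized
    return on_frame

# demo wiring with stubs standing in for the real components
on_frame = run_pipeline(
    (39.1, 26.55),                       # last known GPS fix
    lambda gps: ["demo-dataset"],        # step 1 stub
    lambda ds: [("obj1", "IMG-A")],      # step 2 stub: (object id, image)
    lambda img: img.lower(),             # step 3 stub "feature extraction"
    lambda a, b: a == b,                 # step 4 stub matcher
    lambda oid: {"id": oid},             # step 5 stub metadata fetch
)
```

Returning a per-frame closure keeps the expensive steps (dataset discovery, image download, feature extraction) out of the camera loop, which matters given the latency concerns raised in step 4.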
The basic concept of the above framework is that it takes advantage of the large
number of images in the LOD cloud and treats them like image “markers”. A marker
in augmented reality is an image with specific characteristics that forms a
distinctive pattern, easily recognizable by the software. Upon successful
detection, the system usually reacts by displaying a message or digital content,
while others simply link users to another website, like QR codes. All the uploaded
images that are associated with cultural artifacts are potential markers, and can
thus be tracked through image recognition.
Furthermore, the proposed framework needs to handle the possible data overload on
the user's screen, given the significant size of the LOD cloud and the small
screen of a mobile device, as already mentioned. That is why context-awareness
techniques will be applied to display information according to the user's personal
interests and profile, based also on the metadata of the linked datasets.
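One possible form of such context-aware filtering is to rank query results by the overlap between an item's subject tags and the user's declared interests, so the small screen shows the most relevant entries first. The item dictionaries and the `subjects` field are hypothetical; real metadata would come from the linked datasets.

```python
def rank_by_interest(items, interests, top_k=5):
    """Order LOD results by overlap between each item's subject tags and
    the user's declared interests, dropping items with no overlap at all.

    `items` are dicts with a hypothetical `subjects` list."""
    def score(item):
        return len(set(item["subjects"]) & set(interests))
    ranked = sorted(items, key=score, reverse=True)
    return [it for it in ranked if score(it) > 0][:top_k]
```

A profile-derived interest list would replace the literal tags in practice, and the metadata of the linked datasets would supply the subject annotations.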
In conclusion, the proposed framework aims to integrate linked data in
augmented reality applications for indoor environments in a way that benefits
both. On the one hand, there will be a cross-functional AR prototype for
all cultural institutions that share their data in the LOD cloud, without any
preparation of the environment (regionally restricted to Europe because Europeana
is the aggregator used). On the other hand, the proposed user interface will
visualize LOD with AR techniques, providing more familiar and friendly ways of
interacting with linked data.
4.3 Challenges and limitations
Like the majority of applications that integrate linked data with augmented
reality techniques, the proposed framework has its own challenges. First of all,
the volume of data that could be displayed on the user's screen is very large, as
it is based on the LOD cloud, so context-awareness methods must be implemented in
order to personalize the digital content and links that will be displayed. Also,
the heterogeneity of the linked datasets must be addressed with specialized
algorithms that consider the differences between languages and ontologies, coupled
with structural ambiguities and misclassification of data.
Furthermore, this project faces a few more challenges related to its wide range of
capabilities. As we already mentioned, GPS sensor values are not valid in indoor
environments and thus are not taken into account in tracking the user's position
inside a building. But previous values of the sensor are quite important for
minimizing the number of linked datasets that should be queried, and therefore the
number of images that need to be processed and compared. A potentially incorrect
value of the GPS sensor may lead to the wrong datasets, or to too many images that
will cause endless processing time without the right outcomes. A disabled or
uninstalled sensor on the mobile device will cause the same result.
Also, the computation time and the processing power needed to perform the matching
function between the stored images and the camera's frames may prove a very
challenging issue. That is why the GPS value is important for decreasing the
amount of processed data. Moreover, the matching function which compares the
characteristic points of two images requires modern mobile devices capable of
executing complex algorithms in limited time.
Moreover, the effectiveness of the proposed framework depends on the quality and
reliability of the images that depict the cultural artifacts and are uploaded to
the LOD cloud. Factors like brightness, the angle of the image, changes of the
artifact's location and background differences may affect the outcome of the
matching function, thus misleading the user with false information.
5 Conclusion and Future Work
In this paper, we introduced a new way of integrating linked open datasets in
augmented reality applications for cultural heritage institutions. We argue that
images stored in the LOD cloud could be potential “markers” for augmented reality
tracking techniques, and that we can exploit this feature to track the user's PoI
in an indoor environment. Considering that GPS sensors are not fully functional
inside a building, and that indoor tracking can otherwise be accomplished only by
preparing the corresponding room with complex and costly equipment, we present a
framework that can still recognize the cultural artifact that the user is
currently interested in by using image recognition techniques. The proposed method
aims to match images from the LOD cloud that are related to cultural artifacts
with frames from the user's mobile device camera and, after a successful
detection, displays the information and links from the linked dataset. Although it
presents several issues, as it requires a great amount of computing power to
perform the matching function, it still has a lot of potential to become a
universal augmented reality tool that incorporates linked open data.
Our next steps include a review of existing algorithms for matching two images and
an evaluation of their effectiveness in our project. We also plan to design the
database of the application appropriately in order to minimize the calculation
time of the tracking process, a critical issue for the proposed system. Finally,
our future plan involves the development of a prototype that will encompass the
promised functionality and will successfully merge LOD and AR in a mobile
application for a selected cultural institution.
References
1. Vert, Silviu, and Radu Vasiu. "Integrating linked open data in mobile augmented reality
applications-a case study." TEM Journal 4.1 (2015): 35-43.
2. Aydin, Betül, et al. "Extending Augmented Reality Mobile Application with Structured
Knowledge from the LOD Cloud." IMMoA. 2013.
3. Bizer, Christian, Tom Heath, and Tim Berners-Lee. "Linked data-the story so far."
Semantic services, interoperability and web applications: emerging concepts (2009): 205-
4. Shadbolt, Nigel, Tim Berners-Lee, and Wendy Hall. "The semantic web revisited." IEEE
intelligent systems 21.3 (2006): 96-101.
5. Marden, Julia, et al. "Linked open data for cultural heritage: evolution of an information
technology." Proceedings of the 31st ACM international conference on Design of
communication. ACM, 2013.
6. Van Aart, Chris, Bob Wielinga, and Willem Robert Van Hage. "Mobile cultural heritage
guide: location-aware semantic search." International Conference on Knowledge
Engineering and Knowledge Management. Springer, Berlin, Heidelberg, 2010.
7. V. Bettadapura, I. Essa and C. Pantofaru, "Egocentric field-of-view localization using
first-person point-of-view devices.", Applications of Computer Vision (WACV), 2015
IEEE Winter Conference on, IEEE, 2015, p. 626-633
8. K. W. Chen, C. H. Wang, X. Wei, Q. Liang, M. H. Yang, C. S. Chen, and Y. P. Hung,
"To Know Where We Are: Vision-Based Positioning in Outdoor Environments.", arXiv
preprint arXiv:1506.05870, 2015
9. Kalman, Rudolph Emil. "A new approach to linear filtering and prediction problems."
Journal of basic Engineering 82.1 (1960): 35-45.
10. Hegde, Vinod, et al. "Utililising Linked Data for Personalized Recommendation of
POI’s." International AR Standards Meeting, Barcelona, Spain. 2011.
11. Vert, Silviu, and Radu Vasiu. "Integrating linked data in mobile augmented reality
applications." International Conference on Information and Software Technologies.
Springer, Cham, 2014.
12. Reynolds, Vinny, et al. "Exploiting linked open data for mobile augmented reality." W3C
Workshop: Augmented Reality on the Web. Vol. 1. June, 2010.
13. Linking Open Data cloud diagram 2017, by Andrejs Abele, John P. McCrae, Paul
Buitelaar, Anja Jentzsch and Richard Cyganiak. http://lod-cloud.net/
14. Vert, Silviu, and Radu Vasiu. "Relevant aspects for the integration of linked data in
mobile augmented reality applications for tourism." International Conference on
Information and Software Technologies. Springer, Cham, 2014.
15. Khoury, Hiam M., and Vineet R. Kamat. "Evaluation of position tracking technologies
for user localization in indoor construction environments." Automation in Construction
18.4 (2009): 444-457.
16. Ta, Viet-Cuong, et al. "Smartphone-based user location tracking in indoor environment."
Indoor Positioning and Indoor Navigation (IPIN), 2016 International Conference on.
17. Tesoriero, Ricardo, et al. "Using active and passive RFID technology to support indoor
location-aware systems." IEEE Transactions on Consumer Electronics 54.2 (2008).
18. Oosterlinck, Dieter, et al. "Bluetooth tracking of humans in an indoor environment: An
application to shopping mall visits." Applied Geography 78 (2017): 55-65.
19. Michalakis, Konstantinos, John Aliprantis, and George Caridakis. "Intelligent Visual
Interface with the Internet of Things." Proceedings of the 2017 ACM Workshop on
Interacting with Smart Objects. ACM, 2017.
20. Chen, Si, Muyuan Li, and Kui Ren. "The power of indoor crowd: Indoor 3D maps from
the crowd." Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE
Conference on. IEEE, 2014.
21. Haslhofer, Bernhard, and Antoine Isaac. "data.europeana.eu: The Europeana linked open
data pilot." International Conference on Dublin Core and Metadata Applications. 2011.
22. Mikolajczyk, Krystian, and Cordelia Schmid. "Indexing based on scale invariant interest
points." Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International
Conference on. Vol. 1. IEEE, 2001.