STARE: Semantic Augmented Reality Decision Support in Smart Environments
Mengya Zheng*, Xingyu Pan, Nestor Velasco Bermeo, Rosemary J. Thomas, David Coyle, Gregory M.P. O'Hare, Abraham G. Campbell
School of Computer Science, University College Dublin, Ireland
*e-mail: mengya.zheng@ucdconnect.ie; abey.campbell@ucd.ie
Fig. 1. Augmenting Objects with All Relevant IoT Data and Corresponding Brief Suggestions for Instant Explainable Decision Support
Abstract—The Internet of Things (IoT) facilitates real-time decision support within smart environments. Augmented Reality (AR) allows for the ubiquitous visualization of IoT-derived data while simultaneously permitting the cognitive and visual binding of information to the physical object(s) to which it pertains. Essential questions remain about how to efficiently filter and prioritize data, determine relevance, and adjudicate individual information needs in real-time decision-making. To this end, this paper proposes a novel AR decision support framework (STARE) that supports immediate decisions within a smart environment by augmenting the user's focal objects with assemblies of semantically relevant IoT data and corresponding suggestions.
Index Terms—Augmented Reality, smart environment, decision support, semantic annotations, ubiquitous computing
1 INTRODUCTION
The Internet of Things (IoT) provides unprecedented opportunities for the access to and conflation of myriad heterogeneous data to support real-time decision-making within smart environments. However, the growing dataset scales brought by the smart environment revolution expose the limitations of traditional 2D-screen-based smart environment data visualization interfaces: decision-makers incur additional cognitive load when searching for and filtering useful IoT data from long, centralized data lists.
The emergence of Augmented Reality (AR) techniques has introduced new potential decision-support solutions, such as Situated Visualization [4], that spatially scatter these IoT data over a smart environment in proximity to their data sources. However, such scattered visualization may separate relevant data from the user's focus, generating inconsistencies. Redundancies can also accumulate as the decision-maker switches attention from their currently focused context to discover the relevant information among other irrelevant IoT data scattered over the smart environment. To avoid these redundant and inconsistent artifacts, which contribute to information overload and hinder
decision-making, we propose the STARE (SemanTic Augmented REality) decision support framework to augment the user's focal context with ubiquitous and seamless decision support.
Decision support systems (DSS) are commonly defined as computer-based systems used to assist decision-making [2]. Prior data-oriented AR smart environment DSS have demonstrated that they can assist decision-making by providing data retrieval and information analysis [1], but they leave the vital decisions to expert users. However, non-expert users may benefit more from higher-level, model-oriented decision support for non-vital decisions, such as consequence simulation and suggestions [1]. In reviewing the current literature, we found no prior AR smart environment interface that provides such high-level decision support. Therefore, after establishing semantic object-data associations for the focus-augmented IoT data, STARE further encapsulates an ontology-based suggestion model to provide explainable suggestions targeted at non-expert users' focal decision contexts within a dynamically changing smart environment.
2 SYSTEM OVERVIEW
STARE is implemented on a Microsoft HoloLens and encapsulates focus augmentation and a novel explainable suggestion model to shorten the physical and semantic distance between the user's focal object and its relevant decision support data. To achieve the focus augmentation strategy, STARE applies a focus + voice command modality that allows hands-free triggering of AR information over objects within the user's focus. The system consists of five main modules (Fig. 2): an object classifier, a semantic annotator based on a lightweight ontology, a decision support component, a Bluetooth Low Energy (BLE) sensor data scanner, and an AR information visualization module.
This system allows users to freely explore the smart environment by invoking decision-support information for the objects they are currently focusing on. For example, a user who wants to open the window but is unsure about the outdoor air quality can issue a voice command to trigger a decision-support AR annotation for the window, which superimposes indoor and outdoor air quality data over the window and shows a suggestion to keep it closed due to outdoor air pollution (Fig. 1).
Fig. 2. System Architecture Diagram
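To make the trigger flow concrete, the following is a minimal Python sketch of how the five modules could be orchestrated after a voice command. Every name, threshold, and stub implementation in it is a hypothetical placeholder inferred from Fig. 2, not the authors' actual API.

```python
# Hypothetical sketch of STARE's trigger pipeline; all names, thresholds,
# and stubs are placeholders inferred from Fig. 2, not the authors' API.
from dataclasses import dataclass

MATCH_THRESHOLD = 0.7   # assumed matching-percentage threshold (Sect. 2.1)
DEPTH_THRESHOLD = 3.0   # assumed maximum object distance in metres

@dataclass
class Detection:
    label: str
    confidence: float
    depth_m: float
    bbox: tuple   # (x, y, w, h) in image coordinates

def classify(frame) -> list[Detection]:
    """Stub for the Azure Custom Vision call (Sect. 2.1)."""
    return [Detection("window", 0.91, 2.4, (120, 80, 200, 160))]

def annotate(det: Detection) -> dict:
    """Stub for the semantic annotator (Sect. 2.2)."""
    return {"object": det.label}

def suggest(rdf_doc: dict) -> str:
    """Stub for the decision engine (Sect. 2.3)."""
    return f"Suggestion for the {rdf_doc['object']}"

def render(bbox: tuple, advice: str) -> None:
    """Stub for the AR visualization module (Sect. 2.4)."""
    print(f"Overlay at {bbox}: {advice}")

def on_voice_command(frame, object_list: set[str]) -> None:
    """Hands-free trigger: filter focal objects, then annotate and advise."""
    focal = [d for d in classify(frame)
             if d.label in object_list              # decision-involved?
             and d.confidence >= MATCH_THRESHOLD    # clear enough match?
             and d.depth_m <= DEPTH_THRESHOLD]      # close enough?
    for det in focal:
        render(det.bbox, suggest(annotate(det)))

on_voice_command(frame=None, object_list={"window", "fridge"})
```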
2.1 Object Classifier
After detecting the voice command, the web camera captures an image of the current field of view, which is then sent to the Microsoft Azure Custom Vision cloud service (https://azure.microsoft.com/services/cognitive-services/) for object classification. To avoid information overload caused by excessive AR superimposition over all recognized objects, STARE defines an object list for each given smart home context so that only decision-involved objects are retained. Moreover, a matching-percentage threshold and an object depth threshold are defined to filter out unclear objects. For each matching object successfully recognized in the image, the object classifier returns a bounding box, which is then combined with depth data to localize the object in the physical world.
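The localization step can be illustrated with a standard pinhole back-projection of the bounding-box centre and its depth into camera space. The intrinsics below are illustrative values, not calibrated HoloLens parameters, and the function name is our own.

```python
# Minimal sketch of bbox + depth -> 3D position via a pinhole camera model.
# fx, fy, cx, cy are illustrative intrinsics, not HoloLens-calibrated values.

def localize(bbox, depth_m, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Back-project the bbox centre (pixels) and depth (metres) to 3D."""
    x, y, w, h = bbox
    u, v = x + w / 2, y + h / 2      # bounding-box centre in image coords
    X = (u - cx) * depth_m / fx      # camera-space X
    Y = (v - cy) * depth_m / fy      # camera-space Y
    return (X, Y, depth_m)           # position in the camera frame

print(localize((120, 80, 200, 160), depth_m=2.4))
```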
2.2 Lightweight Ontology & Semantic Annotator
The proposed lightweight ontology aims to provide a semantic structure for the data collected from the different sensors and serves as a stepping stone towards the more elaborate reasoning rules of the Decision Support component of STARE. The semantic annotator extends the Semantic Sensor Network (SSN) Ontology to formally express the events captured by the sensors, while extending the SOSA (Sensor, Observation, Sample, and Actuator) Ontology [3] to formally express and record the object recognition and localization events captured by the object classifier. Simultaneously, the STARE lightweight ontology translates the object label into Resource Description Framework (RDF) documents (Fig. 2) so that the decision support component can easily consume them. The STARE lightweight ontology further supports observations registered by the sensors and provides a semantic structure for the sensors by combining the vocabularies of SSN and SOSA.
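As an illustration of how a recognition event could be recorded against the SOSA vocabulary, here is a small rdflib sketch. The ex: namespace and the exact modelling choices are our assumptions, since the paper does not publish its ontology extensions; only the sosa: terms are standard.

```python
# Sketch: record an object-recognition event as a SOSA observation.
# The ex: terms are illustrative assumptions, not STARE's published schema.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/stare#")   # hypothetical namespace

g = Graph()
g.bind("sosa", SOSA)
obs = EX["recognition-42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["hololens-camera"]))
g.add((obs, SOSA.hasFeatureOfInterest, EX["window-1"]))
g.add((obs, SOSA.observedProperty, EX["objectLabel"]))
g.add((obs, SOSA.hasSimpleResult, Literal("window", datatype=XSD.string)))

# The serialized RDF document is what feeds the triplestore (Sect. 2.3).
print(g.serialize(format="turtle"))
```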
2.3 Decision Support Component
The decision support component provides a novel suggestion model
to achieve high-level decision support within a dynamically changing
environment. As Fig. 2 shows, the decision support component incor-
porates a triplestore and a decision engine to generate suggestions and
explanations. The object RDF documents generated by the semantic
annotator are all stored in this triplestore, which serves as the repository
to feed the decision engine. This decision engine comprises a rule store
and a reasoning engine to construct the multi-dimensional semantic
associations between the object and its relevant IoT data. The rule store assigns decision rules to the object RDF, while the reasoning engine makes logical inferences based on the object's decision rules. According to these rules, the reasoning engine semantically annotates the object RDF document with a list of relevant IoT data and their descriptors. This list contains all environmental parameters that may change the object's state (e.g., sunshine and moisture for indoor plants; fridge temperature for the fridge) or that affect usage decisions about the object (e.g., air quality for air purifiers and windows; energy consumption for home appliances). For each semantically annotated IoT data item, the descriptor contains objective properties, such as the sensor/device name and a beacon ID used for BLE data scanning (Fig. 2).
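A plain-Python sketch of the rule-evaluation step is given below. STARE's actual rule store and reasoning engine operate over RDF, so the threshold rule, its value, and all names here are illustrative simplifications.

```python
# Illustrative simplification of the rule store + reasoning engine; STARE
# reasons over RDF, whereas this sketch uses plain Python structures.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    data_key: str                        # relevant IoT parameter
    predicate: Callable[[float], bool]   # condition over the latest reading
    suggestion: str                      # brief advice if the condition holds
    explanation: str                     # decision logic, phrased for the user

RULE_STORE = {
    "window": [
        Rule("outdoor_pm25", lambda v: v > 35.0,  # assumed threshold, µg/m³
             "Keep the window closed.",
             "Outdoor PM2.5 exceeds the healthy threshold."),
    ],
}

def suggest(label: str, readings: dict[str, float]):
    """Return (suggestion, explanation, decisive inputs) for an object."""
    for rule in RULE_STORE.get(label, []):
        value = readings.get(rule.data_key)
        if value is not None and rule.predicate(value):
            return rule.suggestion, rule.explanation, {rule.data_key: value}
    return "No action needed.", "All monitored values are in range.", readings

print(suggest("window", {"outdoor_pm25": 42.0, "indoor_pm25": 8.0}))
```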
2.4 AR Explainable Decision Support Data Visualization
Utilizing the decision rules and IoT data descriptors derived from the decision engine, the AR front-end superimposes suggestions over the focal object to support decisions about it (Fig. 1). However, such high-level suggestions or recommendations without explanation do not earn a decision-maker's trust. Accordingly, STARE explains each brief suggestion with an assembly of the relevant IoT data defined in the object RDF, which serve as the decisive input values that may affect the resulting advice [5]. Among these decisive input values, the key features that determined the decision are highlighted with salient color codes [6] and alerts (Fig. 1) to clearly explain the decision logic behind the suggestions.
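The highlighting of decisive inputs might look like the following sketch, which flags each value that crossed its rule threshold for colour coding and an alert. The colour scheme and thresholds are assumptions, not the authors' specification.

```python
# Sketch: flag decisive inputs for colour coding and alerts (Fig. 1).
# Thresholds and colour levels are assumptions consistent with Sect. 2.3.
def highlight(decisive: dict[str, float], thresholds: dict[str, float]):
    """Mark each decisive input red (drove the advice) or neutral grey."""
    marks = {}
    for key, value in decisive.items():
        limit = thresholds.get(key)
        exceeded = limit is not None and value > limit
        marks[key] = {"value": value,
                      "colour": "red" if exceeded else "grey",
                      "alert": exceeded}
    return marks

print(highlight({"outdoor_pm25": 42.0, "indoor_pm25": 8.0},
                {"outdoor_pm25": 35.0, "indoor_pm25": 35.0}))
```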
To retain data details without causing visual clutter, this system presents all superimposed data in simplified form and allows further investigation in a details-on-demand manner. As Fig. 1 shows, clicking (via gesture interaction) on any IoT data icon visualizes detailed information as a line chart, histogram, or pie chart, depending on the data type.
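The details-on-demand dispatch by data type could be as simple as the mapping below; the three categories mirror the chart types named above, but the category names themselves are our assumptions.

```python
# Sketch: choose a detail chart by data type; category names are assumed.
CHART_BY_TYPE = {
    "time_series": "line chart",    # e.g., temperature history
    "distribution": "histogram",    # e.g., sampled sensor values
    "composition": "pie chart",     # e.g., energy use by appliance
}

def chart_for(data_type: str) -> str:
    return CHART_BY_TYPE.get(data_type, "line chart")  # sensible default

print(chart_for("time_series"))
```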
3 CONCLUSIONS
This paper has presented an innovative AR smart environment decision support framework (STARE) that incorporates a novel suggestion model and a focus augmentation modality to allow immediate, continuous decision support within the user's currently investigated context. Further enhancements to STARE incorporating machine learning will enable the automatic construction of semantic data-object associations. This future work will ensure STARE's extensibility, allowing for the creation of smart environments that deliver truly intuitive interfaces for a future ubiquitous IoT world.
ACKNOWLEDGMENTS
This research forms part of the CONSUS Programme which is funded
under the SFI Strategic Partnerships Programme (16/SPP/3296) and is
co-funded by Origin Enterprises Plc.
REFERENCES
[1] S. Alter. A taxonomy of decision support systems. Sloan Management Review (pre-1986), 19(1):39, 1977.
[2] P. N. Finlay. Introducing Decision Support Systems. Blackwell Pub, 1994.
[3] K. Janowicz, A. Haller, S. J. Cox, D. Le Phuoc, and M. Lefrançois. SOSA: A lightweight ontology for sensors, observations, samples, and actuators. Journal of Web Semantics, 56:1–10, 2019.
[4] B. Marques, B. S. Santos, T. Araújo, N. C. Martins, J. B. Alves, and P. Dias. Situated visualization in the decision process through augmented reality. In 2019 23rd International Conference Information Visualisation (IV), pp. 13–18. IEEE, 2019.
[5] I. Nunes and D. Jannach. A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction, 27(3-5):393–444, 2017. doi: 10.1007/s11257-017-9195-0
[6] M. Zheng and A. G. Campbell. Location-based augmented reality in-situ visualization applied for agricultural fieldwork navigation. In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 93–97. IEEE, 2019.