STARE: Semantic Augmented Reality Decision Support in Smart Environments
Mengya Zheng*, Xingyu Pan, Nestor Velasco Bermeo, Rosemary J. Thomas, David Coyle, Gregory M.P. O'Hare, Abraham G. Campbell†
School of Computer Science, University College Dublin, Ireland
Fig. 1. Augmenting objects with all relevant IoT data and corresponding brief suggestions for instant explainable decision support.
Abstract—The Internet of Things (IoT) facilitates real-time decision support within smart environments. Augmented Reality (AR) allows for the ubiquitous visualization of IoT-derived data while simultaneously permitting the cognitive and visual binding of information to the physical object(s) to which it pertains. Essential questions remain about efficiently filtering, prioritizing, determining relevance, and adjudicating individual information needs in real-time decision-making. To this end, this paper proposes a novel AR decision support framework (STARE) to support immediate decisions within a smart environment by augmenting the user's focal objects with assemblies of semantically relevant IoT data and corresponding suggestions.
Index Terms—Augmented Reality, smart environment, decision support, semantic annotations, ubiquitous computing
1 INTRODUCTION
The Internet of Things (IoT) provides unprecedented opportunities for access to, and conflation of, a myriad of heterogeneous data to support real-time decision-making within smart environments. However, the increasing dataset scales brought by the smart environment revolution reveal the limitations of traditional 2D-screen-based smart environment data visualization interfaces: decision-makers must expend additional cognitive effort searching for and filtering useful IoT data from long, centralized data lists.
The emergence of Augmented Reality (AR) techniques has brought new potential decision-support solutions that spatially scatter these IoT data over a smart environment in proximity to their data sources, such as Situated Visualization. However, such scattered visualization of IoT data may sometimes separate relevant data from the user's focus, thus generating inconsistencies. Possible redundancies can also gradually accumulate when the decision-maker switches attention from their currently focused context to discover relevant information among the other, irrelevant IoT data scattered over the smart environment. To avoid these redundant and inconsistent artifacts, which contribute to information overload and hindered
decision-making, we propose the STARE (SemanTic Augmented REality) decision support framework to augment the user's focal context with ubiquitous and seamless decision support.
Decision support systems (DSS) are commonly defined as computer-based systems used for assisting decision-making. Prior data-oriented AR smart environment DSS demonstrated that they could assist in decision-making by providing data retrieval and information analysis, but left the vital decisions to expert users. However, non-expert users may benefit more from higher-level, model-oriented decision support, such as consequence simulation and suggestions, for non-vital decisions. In reviewing the current literature, we found no prior AR smart environment interfaces that provide such high-level decision support. Therefore, after establishing semantic object-data associations for the focus-augmented IoT data, STARE further encapsulates an ontology-based suggestion model to provide explainable suggestions targeted at non-expert users' focal decision contexts within a dynamically changing smart environment.
2 SYSTEM OVERVIEW
Implemented on a Microsoft HoloLens, STARE encapsulates the focus augmentation and a novel explainable suggestion model to shorten the physical and semantic distance between the user's focal object and its relevant decision support data. To achieve the focus augmentation strategy, STARE applies a focus + voice command modality to allow for the hands-free triggering of AR information over objects within the user's focus. This system consists of five main modules (Fig. 2): an object classifier, a semantic annotator based on a lightweight ontology, a decision support component, a Bluetooth Low Energy (BLE) sensor data scanner, and an AR information visualization module.
This system allows users to freely explore the smart environment by invoking decision-support information for the objects they are currently focusing on. For example, a user who wants to open the window but is unsure about the outdoor air quality can use a voice command to trigger a decision-support AR annotation about the window, which superimposes indoor and outdoor air quality data over the window and shows a suggestion to close it due to outdoor air pollution.
Fig. 2. System Architecture Diagram
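The window scenario above can be expressed as a simple decision rule. The following is a minimal sketch in Python; the AQI threshold and the field names are illustrative assumptions, as the paper does not specify concrete values:

```python
# Sketch of the window decision described above. The AQI threshold (100)
# and field names are illustrative assumptions, not values from the paper.

def window_suggestion(indoor_aqi: float, outdoor_aqi: float,
                      aqi_limit: float = 100.0) -> dict:
    """Return a brief suggestion plus the decisive inputs that explain it."""
    if outdoor_aqi > aqi_limit and outdoor_aqi > indoor_aqi:
        advice = "Close the window: outdoor air is more polluted than indoor air."
    else:
        advice = "Opening the window is fine: outdoor air quality is acceptable."
    return {
        "suggestion": advice,
        # Returning the decisive inputs alongside the advice mirrors the
        # explainable-suggestion design described later in this paper.
        "explanation": {"indoor_aqi": indoor_aqi, "outdoor_aqi": outdoor_aqi},
    }
```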
2.1 Object Classifier
After detecting the voice command, the web camera captures an image of the current field of view, which is then sent to the Microsoft Azure Custom Vision cloud service for object classification. To avoid information overload caused by excessive AR superimposition over all recognized objects, STARE defines an object list for each given smart home context to filter only decision-involved objects. Moreover, a matching-percentage threshold and an object depth threshold are also defined to filter out unclear objects. For each matching object successfully recognized in the image, the object classifier returns a bounding box, which is then combined with depth data to localize the object in the physical world.
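The filtering steps above can be sketched as follows. The field names mirror the Azure Custom Vision prediction schema (tagName, probability, boundingBox), while the context object list, the threshold values, and the depth_of helper are illustrative assumptions:

```python
# Sketch of the prediction filtering described above. The object list and
# threshold values are assumptions; the paper does not publish them.

CONTEXT_OBJECTS = {"window", "fridge", "air purifier", "plant"}  # per-context list
MIN_PROBABILITY = 0.6   # matching-percentage threshold (assumed value)
MAX_DEPTH_M = 3.0       # object depth threshold in metres (assumed value)

def filter_predictions(predictions, depth_of):
    """Keep only decision-involved, confidently matched, nearby objects.

    `predictions` is a list of Custom-Vision-style dicts; `depth_of` is a
    caller-supplied function mapping a bounding box to a depth in metres.
    """
    kept = []
    for p in predictions:
        if p["tagName"] not in CONTEXT_OBJECTS:
            continue                      # not relevant to this decision context
        if p["probability"] < MIN_PROBABILITY:
            continue                      # unclear match
        if depth_of(p["boundingBox"]) > MAX_DEPTH_M:
            continue                      # too far away to be the focal object
        kept.append(p)
    return kept
```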
2.2 Lightweight Ontology & Semantic Annotator
The proposed lightweight ontology aims at providing a semantic structure for the data collected from the different sensors and serves as a stepping stone towards the more elaborate reasoning rules of the Decision Support component of STARE. The semantic annotator extends the Semantic Sensor Network (SSN) Ontology to formally express the events captured by the sensors, while extending the SOSA (Sensor, Observation, Sample, and Actuator) Ontology to formally express and record the object recognition and localization events captured by the object classifier. Simultaneously, the STARE lightweight ontology transfers the object label into Resource Description Framework (RDF) documents (Fig. 2) to ensure that the decision support component can easily consume them. The STARE lightweight ontology further supports observations registered by the sensors and provides a semantic structure of the sensors by combining the vocabularies from SSN and SOSA.
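To make the annotation step concrete, the sketch below serializes a recognition event as a small RDF (Turtle) document using the real SOSA vocabulary; the stare: namespace and its property names are illustrative assumptions, since the STARE ontology terms are not given in the paper:

```python
# Sketch: serialize an object-recognition event as a Turtle document.
# sosa: is the real W3C SOSA namespace; stare: and its properties are
# assumed placeholders for STARE's own lightweight ontology terms.

def object_to_turtle(label: str, confidence: float, position: tuple) -> str:
    """Build an RDF document for one recognized, localized object."""
    local = label.replace(" ", "_")   # keep the IRI local name valid
    x, y, z = position
    return f"""@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix stare: <http://example.org/stare#> .

stare:obs_{local} a sosa:Observation ;
    sosa:hasSimpleResult "{label}" ;
    stare:confidence {confidence} ;
    stare:position "{x} {y} {z}" .
"""
```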
2.3 Decision Support Component
The decision support component provides a novel suggestion model to achieve high-level decision support within a dynamically changing environment. As Fig. 2 shows, the decision support component incorporates a triplestore and a decision engine to generate suggestions and explanations. The object RDF documents generated by the semantic annotator are all stored in this triplestore, which serves as the repository that feeds the decision engine. This decision engine comprises a rule store and a reasoning engine to construct the multi-dimensional semantic associations between the object and its relevant IoT data. The rule store assigns decision rules to the object RDF, while the reasoning engine makes logical inferences based on the object's decision rules. According to the decision rules, the reasoning engine semantically annotates this object RDF document with a list of relevant IoT data and their descriptors. This relevant IoT data list contains all environmental parameters that may change the object's states (e.g., sunshine and moisture for indoor plants; fridge temperature for the fridge) or interfere with usage decisions about the object (e.g., air quality for the air purifier and windows; energy consumption for home appliances). For each semantically annotated IoT data item, the data descriptor contains the objective properties, such as the sensor/device name and a BEACON ID used for BLE data scanning (Fig. 2).
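The rule store and the annotation step it drives can be sketched as a mapping from object labels to IoT data descriptors; the labels, sensor names, and beacon IDs below are illustrative assumptions:

```python
# Sketch of the rule store described above: each rule maps an object label
# to its relevant IoT data descriptors. All names and IDs are assumptions.

RULE_STORE = {
    "window": [{"sensor": "indoor_air_quality", "beacon_id": "B-01"},
               {"sensor": "outdoor_air_quality", "beacon_id": "B-02"}],
    "fridge": [{"sensor": "fridge_temperature", "beacon_id": "B-03"},
               {"sensor": "energy_consumption", "beacon_id": "B-04"}],
    "plant":  [{"sensor": "sunshine", "beacon_id": "B-05"},
               {"sensor": "soil_moisture", "beacon_id": "B-06"}],
}

def annotate_with_relevant_data(object_rdf: dict) -> dict:
    """Attach the relevant IoT data list (with descriptors) to an object record.

    The beacon IDs in each descriptor tell the BLE scanner which sensor
    streams to fetch for the AR front-end.
    """
    label = object_rdf["label"]
    object_rdf["relevant_iot_data"] = RULE_STORE.get(label, [])
    return object_rdf
```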
2.4 AR Explainable Decision Support Data Visualization
Utilizing the decision rules and IoT data descriptors derived from the
decision engine, the AR front-end can superimpose suggestions over
the focal object to support decisions about it (Fig. 1). However, such
high-level suggestions or recommendations without explanation do not
generate trust from a decision-maker. Accordingly, STARE explains the
provided brief suggestions with an assembly of the relevant IoT data defined in the object RDF, which act as the decisive input values that may affect the resulting advice. Among these decisive input values, the key features that determined the decision are then highlighted with color codes and alerts (Fig. 1) to clearly explain the decision logic behind the suggestions.
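The highlighting step above can be sketched as flagging, among all decisive inputs, those that crossed an alert threshold; the input names and thresholds are illustrative assumptions:

```python
# Sketch: build the explanation rows for the AR panel. Each row carries a
# decisive input and whether it is a key feature to highlight with an alert
# color code. Input names and thresholds are assumptions.

def explain(decisive_inputs: dict, alert_thresholds: dict) -> list:
    """Return (name, value, is_key_feature) rows for the explanation panel."""
    return [
        (name, value, value > alert_thresholds.get(name, float("inf")))
        for name, value in decisive_inputs.items()
    ]
```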
To retain data details without causing visual clutter, this system provides a simplification of all superimposed data and allows for further investigation in a details-on-demand manner. As Fig. 1 shows, by clicking (gesture interaction) on any IoT data icon, detailed information is visualized in line charts, histograms, or pie charts, determined by the data type.
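The chart selection just described amounts to a small lookup by data type; the type names and the default choice below are assumptions:

```python
# Sketch of the details-on-demand chart selection: the type of the tapped
# IoT data stream determines its visualization. Type names are assumptions.

CHART_BY_TYPE = {
    "time_series": "line chart",
    "distribution": "histogram",
    "composition": "pie chart",
}

def chart_for(data_type: str) -> str:
    """Pick the detail visualization for one IoT data stream."""
    return CHART_BY_TYPE.get(data_type, "line chart")  # sensible default
```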
3 CONCLUSION
This paper has presented an innovative AR smart environment decision support framework (STARE) which incorporates a novel suggestion model and a focus augmentation modality to allow for immediate, continuous decision support within the user's currently investigated context. Further enhancements to STARE incorporating machine learning will enable the automatic construction of semantic data-object associations. This future work will ensure STARE's extensibility, allowing for the creation of smart environments that deliver truly intuitive interfaces for a future empowered, ubiquitous IoT world.
ACKNOWLEDGMENTS
This research forms part of the CONSUS Programme which is funded under the SFI Strategic Partnerships Programme (16/SPP/3296) and is co-funded by Origin Enterprises Plc.
REFERENCES
[1] S. Alter. A taxonomy of decision support systems. Sloan Management Review (pre-1986), 19(1):39, 1977.
[2] P. N. Finlay. Introducing decision support systems. Blackwell Pub, 1994.
[3] K. Janowicz, A. Haller, S. J. Cox, D. Le Phuoc, and M. Lefrançois. SOSA: A lightweight ontology for sensors, observations, samples, and actuators. Journal of Web Semantics, 56:1–10, 2019.
[4] B. Marques, B. S. Santos, T. Araújo, N. C. Martins, J. B. Alves, and P. Dias. Situated visualization in the decision process through augmented reality. In 2019 23rd International Conference Information Visualisation (IV), pp. 13–18. IEEE, 2019.
[5] I. Nunes and D. Jannach. A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction, 27(3-5):393–444, 2017. doi: 10.1007/s11257-017
[6] M. Zheng and A. G. Campbell. Location-based augmented reality in-situ visualization applied for agricultural fieldwork navigation. In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 93–97. IEEE, 2019.