Augmented Reality-Assisted Healthcare System for
Caregivers in Smart Regions
Joo Chan Kim, Saguna Saguna, Christer Åhlund, Karan Mitra
Department of Computer Science, Electrical and Space Engineering
Luleå University of Technology
Skellefteå, Sweden
Email: {joo.chan.kim, saguna.saguna, christer.ahlund, karan.mitra}@ltu.se
Abstract—The rise in the aging population worldwide is
already negatively impacting healthcare systems due to the lack of
resources. It is envisioned that the development of novel Internet
of Things (IoT)-enabled smart city healthcare systems may not
only alleviate the stress on the current healthcare systems but may
significantly improve the overall quality of life of the elderly. As
more elderly homes are fitted with IoT, and intelligent healthcare
becomes the norm, there is a need to develop innovative aug-
mented reality (AR) based applications and services that make
it easier for caregivers to interact with such systems and assist
the elderly on a daily basis. This paper proposes, develops, and
validates an AR and IoT-enabled healthcare system to be used
by caregivers. The proposed system is based on a smart city
IoT middleware platform and provides a standardized, intuitive
and non-intrusive way to deliver elderly person’s information
to caregivers. We present our prototype, and our experimental
results show the efficiency of our system in IoT object detection
and relevant information retrieval tasks. The average execution
time, including object detection, communication with a server,
and rendering of the results in the application, ranges from
767 ms to 1,283 ms.
Index Terms—Internet of Things, augmented reality, healthcare, human-computer interaction, smart city
I. INTRODUCTION
The Internet of Things (IoT) is a relatively new paradigm
that connects uniquely identifiable objects that encompass
sensors and actuators and networking capabilities to the In-
ternet [1]. The data originating from the IoT is opening the
doors for advanced data-driven services in areas such as smart
healthcare, manufacturing, and transportation [2]–[5].
IoT with artificial intelligence (AI) is envisioned to improve
the lives of elderly living alone at homes by providing them
with daily and essential care [2], [6]. Such AI-based IoT
services are essential in bringing down the extreme load on
healthcare providers and government organizations that are al-
ready stretched to deliver timely and cost-effective services [3].
By 2025, Japan is aiming for community-based integrated
care, with new technologies for early detection and intervention,
pattern analysis to understand disease progression, and
AI-based alerts for caregivers and families [3]; such services
enable the elderly to live longer and independently, with dignity,
in their own homes. In the current COVID-19 pandemic, this
has become all the more essential, since the elderly belong to
the highest risk category and wish to continue living an
independent life.
This work was supported by the Swedish Governmental Agency for
Innovation Systems (diary no. 2017-02807).
For the success of such smart IoT-enabled healthcare sys-
tems, two critical challenges need tackling. First, the large-
scale (e.g., hundreds to thousands of homes in a city) de-
ployment of IoT sensors, actuators, and devices in elderly
apartments is envisioned to provide timely care by helping
caregivers easily gain access to information about the elderly
person’s behavior. However, the large amount of information
processed by the intelligent IoT healthcare system can add
to the caregiver’s load if it is not presented in a clear,
concise, and easily accessible way. There is already a high load
on the caregivers involved with elderly assistance, and such
systems should alleviate that load rather than increase it. With
these new systems in place, caregivers will have to look at
different types of processed information for individual IoT
sensors, actuators, and devices, or for combinations of them,
while providing care in each apartment. Secondly, when such
systems are deployed in elderly homes, it is necessary to allay
the fears of the elderly regarding the different sensors in their
homes. Their fear mainly arises from the fact that they
usually do not understand what each sensor or actuator is
used for, what kind of data it transmits, and its use within
the service. Thus, it is essential to provide such information
to the caregiver and elderly in an easy-to-understand manner
that can alleviate their stress regarding the new technology and
lead to the service’s smooth functioning.
The integration of augmented reality (AR) and IoT is
projected to grow into a USD 7 trillion market by 2027 [7].
Various technology market research firms project significant
growth over the next few years, showing that technology in
both domains is advancing towards new applications and
services, especially in healthcare [8]–[12]. Thus, in this paper,
we integrate AR with an IoT-based healthcare system to give
caregivers easy access to information so that they can provide
efficient care to the elderly, as explained
further in our motivating scenario.
Motivating Scenario: Arne is an elderly person living alone
in his apartment, as shown in Figure 1, and receives caregiv-
ing services provided by the municipality healthcare workers
where he resides. His apartment is fitted with several smart
home sensors, such as door, wall plug, motion, temperature,
and light sensors, and an IoT device such as a smart medicine
Fig. 1: The motivating scenario of elderly care in a smart home (Arne, an elderly man, and Sara, the caregiver, interact with
IoT sensors) and a concept AR design that delivers IoT sensor information on a virtual map corresponding to the real world.
Once a user views an IoT sensor through a camera on a mobile device, information from all IoT sensors installed in the
user’s space is presented in AR with color codes representing levels of wellness. The user can do the same for appliances
such as a fridge, microwave, or coffeepot to view their usage patterns.
pillbox. The data collected from these devices are used to
detect normal and abnormal behavior and notify the caregivers
accordingly. In particular, the IoT data is collected at the
gateway and is sent to a cloud-based AI healthcare service
to analyze the data to detect any anomalies in Arne’s behavior
linked to his wellness status.
Sara, a caregiver, visits Arne four to five times a day to
assess his status and support him with various activities of
daily living. Every time she visits Arne, she needs to assess
Arne’s behavior and wellbeing since her last visit to make
decisions regarding the kind of support Arne needs. Simple
things like how active or inactive Arne has been can indicate
how he feels today. Sara checks activities like toilet visits or
the duration of his stays in the kitchen, where Arne spends most
of his time. During her evening visit, she checks if he watches his
favorite TV shows or not. In the morning, she likes to know
if he slept well or not. Sometimes, Arne can answer these
questions, but at other times it may not be so easy to gauge
what the situation has been since her last visit.
We assert that an AR and IoT-based interaction system with
intelligent AI/machine learning services can assist Sara to get
quick and easy views on Arne’s wellness since her last visit.
Such a system can give her time for conversation with Arne
about other things while doing her job in supporting and caring
for Arne. Such a system can have map-based views, as shown
in Figure 1 for different objects used by Arne. For example,
views of the fridge, microwave, and coffeepot can assist Sara
in understanding which objects Arne used throughout the
day, which places in his apartment he visited, and how
much time he spent in each room. Such an intuitive service
can assist Sara in determining the overall well-being of Arne
to determine the best course of assistance he may need.
Contributions: In this paper, we make the following contri-
butions in the context of smart home-based elderly care: (1)
This paper proposes, develops, and validates an AR and IoT-
based system for assisting caregivers while doing their job.
In particular, we integrate AR with the deployed
IoT platform; our system gathers, stores, and processes
information from IoT devices in elderly homes in a standardized
manner. (2) Further, using AR technology, our novel
system aims to provide the elderly person’s information to
the caregivers in an intuitive and non-intrusive manner using
object detection and AR-map view. Our experimental results
show the efficiency of our system in IoT object detection and
relevant information retrieval tasks.
The data gathering and processing included in this study
is approved by the Regional Ethical Board in Umeå, Sweden
(diary no. 2018-189/31).
This paper is organized as follows: Section II presents the
related work. Section III presents our AR and IoT platform
system for smart in-home care. We then present our prototype
and results in Section IV. Section V presents the discussion.
Finally, Section VI presents the conclusion and future work.
II. RELATED WORK
A. Use cases of AR, IoT, and object detection
Since the emergence of AR, IoT, and object detection,
various research works have been conducted using these tech-
nologies. For example, the authors in [13] established a framework
to construct an AR system using virtual reality (VR). In
the VR design process, the user can place a virtual object
at the corresponding real-world coordinates and design
interactions between the virtual object and real-world objects,
enabling control by the user through AR. In the context
of the design tool, the authors in [14] developed an AR-based
framework that helps to draw relationships between
IoT devices. The user can set interactions between IoT devices
with simple commands, such as drag-and-drop and touch, on a
mobile device; thus, the user can control how the devices work
together without programming each IoT device individually. The authors in [15]
designed an AR application that provides awareness of IoT
devices in a user’s surroundings. The system in [15] used ultra-
wideband-based radio technology to localize IoT devices on
a mobile device and enable controls on detected IoT devices
through AR.
Healthcare is another domain in which researchers are interested in
adopting these technologies, for example for air quality monitoring
[16], stress care [17], [18], fall detection [19], [20], and obstacle
avoidance [21] for adults and the elderly. Moreover,
researchers in [12], [22] attempted to combine the two technologies
in one system to provide enhanced services to the elderly. A
mobile application presenting information about drug compli-
ance through AR was developed in [22]. To provide informa-
tion about a specific drug, object detection on quick response
(QR) codes attached to the physical drug box is used instead of
menus. Authors in [12] tested the combination of AR and IoT
by providing a balance training service for the elderly. While
the balance training runs on a head-mounted display (HMD),
the collected data from IoT devices worn by the elderly
are transferred to an assistant who provides feedback to the
elderly. Their user evaluation result shows that AR with real-
time feedback based on the elderly’s activity is encouraging
and stimulating for attending the training. In contrast, the
usability of the system shows a negative impression depending
on the user’s age.
B. Combination of AR, IoT, and object detection
In healthcare, authors in [8] developed a system used by
both patients with Alzheimer’s disease and caregivers/family.
Various IoT devices are used to record and send daily envi-
ronmental data to a server. When there is an event that needs
the patient’s attention, an AR object is displayed on Google
Glass to signal the event. An image and audio message from
caregivers/family members of the patient can be presented on
Google Glass when a QR code is identified within the patient’s
sight. Their system showed an average response time of 106 ms
for QR code recognition and an average delay of 364 ms to play
audio messages.
Similarly, authors in [9] designed a smart home for the
elderly that uses the following elements: (1) projectors to
overlay a screen on the walls, (2) IoT devices to collect data
and control the front door, and (3) object detection to identify
the location of the resident/drug box, fall incident, and the
face of visitors at the front door. Their smart home used
object detection with high accuracy; however, the
system was designed only for the elderly, so interaction
with a caregiver barely exists. Interestingly, the authors in [10]
addressed this gap by providing a smart
home design tool to the caregiver. In [10], the system used
Microsoft’s HoloLens and its spatial mapping, a unique feature
that enables 3D reconstruction of the real-world in the virtual
space. In the virtual space, the caregiver can tag specific
objects that correspond to the real-world to assign specific
interactions and connect to other tagged objects and IoT
Fig. 2: Framework for iVO service using SSiO IoT platform;
Societal development through Secure IoT and Open data
(SSiO) and Internet of Things for Healthcare (iVO).
devices (e.g., actuator). For example, when the elderly take a
glass from the cabinet, the system turns the cabinet light off.
This interaction enables the caregiver to design personalized
smart assistance based on the habits or life patterns of the
elderly. IoT devices in the smart home track every movement
of the elderly, and those data are used to run the assistance
designed by the caregiver.
Lastly, authors in [11] designed an AR interface for monitor-
ing the air quality status of a particular space. The air quality
data measured through IoT devices in each room are displayed
in AR. The authors proposed a top view on a virtual map of the
space to monitor every room’s status, and the user can inspect
more details on each room by selecting a button in the AR view.
In their system design, QR codes are used to present AR content,
whereas our proposal uses object detection on IoT devices.
We also evaluate system performance in terms of inference
speed, network latency, and execution time, as described in the
following sections.
III. SYSTEM DESIGN AND IMPLEMENTATION
The following section describes the system design and
development process of the prototype implemented on the IoT
platform named Societal development through Secure IoT and
Open data (SSiO) [23].
A. IoT platform
We used the SSiO [23], a smart city platform, to provision
an AR and IoT-enabled healthcare application to the care-
givers, as shown in Figure 2. The SSiO platform is based
on open-source FIWARE [24] and provides data collection,
storage, analysis, visualization, and exchange mechanisms.
As shown in Figure 2, the SSiO platform integrates several
AI-based healthcare services (i.e., Internet of Things for
Healthcare - iVO) and other types of services, such as home
automation, energy management, and air pollution monitoring. It
serves as a tested, efficient, and scalable smart city platform
[25]. The SSiO platform is currently used to gather IoT data
from many homes where the elderly live.
Fig. 3: Screenshots of the prototype test on the mobile device
for (a) microwave, (b) refrigerator, and (c) coffeepot.
Fig. 4: Data request sequence diagram between a user, proto-
type, and SSiO.
For the AR-IoT-based system presented in this paper, the
caregiver (Sara) using the AR application points the camera
to the IoT object. Upon object detection on the device, a
bounding box is shown around the object. Sara can then
interact with the object (via voice or haptics) and retrieve
the relevant data. For example, “how many times did the
elderly (Arne) went into the kitchen today.” The data regarding
the IoT object is retrieved through the SSiO platform using
standardized RESTful APIs. The complete information flow
between the caregiver, AR application, and the SSiO platform
are shown in Figure 4. We now describe in detail the AR
application.
B. Object detection-enabled AR application
1) Framework of the prototype: Figure 5 illustrates the
framework of our prototype. The procedure to get data from
the IoT device to the user’s mobile device is divided into
seven steps: (1) the IoT device updates collected data on SSiO
every 5 minutes; (2) when the caregiver wants to check
IoT device data, he/she points the camera of the mobile device
toward the IoT device; (3) once the prototype recognizes the
IoT device, the caregiver gives input with either a touch or voice
command to get data from SSiO; (4) the prototype
Fig. 5: Framework of the object detection-enabled AR application with IoT platform.
then sends a data request to SSiO about the identified IoT
device; (5) SSiO sends the data back to the prototype;
(6) the prototype renders the AR content using the
received data on the screen at the position where
the object was detected; and (7) finally, the caregiver can navigate
the menus to find more data using their fingers or voice.
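The per-frame logic of steps (2) to (7) can be summarized in a short sketch. The `detect`, `fetch`, and `render` callables are hypothetical stand-ins for the on-device detector, the SSiO data request, and the AR renderer; they are not names from the prototype.

```python
def handle_frame(frame, command, detect, fetch, render):
    """Sketch of steps (2)-(7): detection, data request, and AR rendering.

    detect(frame)  -> list of recognized IoT objects      (steps 2-3)
    fetch(obj)     -> IoT data for the object from SSiO   (steps 4-5)
    render(obj, d) -> AR content at the object's position (steps 6-7)
    """
    objects = detect(frame)
    if not objects or not command:
        return None  # nothing recognized, or no touch/voice input yet
    data = fetch(objects[0])
    return render(objects[0], data)
```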
2) Object detection: We used Google’s TensorFlow [26]
open-source software library to build a customized object
detection model for our scenario. The SSD MobileNet v2
FPNlite pre-trained model on the COCO 2017 dataset [27]
is used to train our custom detection model for two reasons:
(1) the TensorFlow Lite framework, which is designed for
low-latency inference on mobile devices, only accepts
SSD-based models; and (2) it has an average inference speed of
22 milliseconds (ms) and a mean Average Precision (mAP) of 22.2
[28]. Since a high mAP demands substantial computing resources
from a device while sacrificing inference speed, this model
offers an optimal trade-off, providing reasonably fast inference
with acceptable accuracy compared with the other models
available in the TensorFlow library. We trained our
model to recognize three appliances: a coffeepot, a microwave,
and a refrigerator. We initially captured 501 images of the three
appliances from distances of 20 to 35 cm at various angles to
achieve a sufficient level of image diversity for model training.
Every captured image was labeled with labelImg [29] and
pre/post-processed using the Roboflow data management tool [30].
The pre/post-processing increases the number of images by
applying modifications such as rotation, resizing, color
correction, and blur. As a result, the number of training images
was expanded to 1,203, improving the model’s detection performance.
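As an illustration of how the SSD outputs are consumed on the device, the sketch below filters the model's parallel output arrays by a confidence threshold and maps class indices to our three labels. The class-index order and the 0.5 threshold are assumptions for illustration, not the exact values used in our prototype.

```python
LABELS = {0: "coffeepot", 1: "microwave", 2: "refrigerator"}  # assumed index order

def filter_detections(boxes, classes, scores, threshold=0.5):
    """Keep SSD detections whose confidence reaches the threshold.

    boxes:   [ymin, xmin, ymax, xmax] per detection, normalized to [0, 1]
    classes: class index per detection, parallel to boxes
    scores:  confidence per detection, parallel to boxes
    """
    kept = []
    for box, cls, score in zip(boxes, classes, scores):
        if score >= threshold:
            kept.append({"label": LABELS[int(cls)], "score": score, "box": box})
    return kept
```

On the device, the surviving detections drive the bounding box drawn around the recognized appliance.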
3) AR contents: ARCore [31] and Sceneform [32] are
used to render AR content on the screen. When AR content
is created, Sceneform sets the content’s anchor point in
the virtual space at the position corresponding to the real-world
position of the detected object. This anchor point keeps the AR
content alive at specific coordinates while the user moves
their device. Therefore, once the AR content has a unique
position in the virtual space, the user does not risk losing
the tracked object. AR content can be moved away from its
spawned coordinates by dragging it with a finger touch.
Fig. 6: Flowchart of the prototype representing the process
of getting data about the identified object through object
detection.
In this prototype, an AR map is shown below the rendered
object (see Figure 3), as also proposed in our scenario (see
Figure 1), when the user asks to display it.
IV. EXPERIMENT AND RESULTS
A. Prototype test procedure
We used a Samsung Galaxy S8 with 4 GB RAM, 64 GB
storage, and a Samsung Exynos 8895 processor, running the
Android 9 operating system, to test our prototype.
For the evaluation, we ran the prototype on each of the
appliances included in our detection model. Figure 6 illustrates
the process by which the prototype displays data received from
SSiO with a bounding box drawn around the detected object.
The prototype is run 1 m away from the IoT devices for
detection, and the user requests data by issuing a voice command
(e.g., “information”). The experiment on each object is repeated
100 times, and the experiments are conducted over 4G LTE and
5 GHz Wi-Fi networks, respectively. The data from both
experiments are recorded in local storage.
To evaluate the prototype performance, we select three
parameters: (1) inference speed, (2) network latency, and (3)
execution time. The inference process of object detection
is performed on the local device, and its speed is measured in
milliseconds as the interval between the moment a cropped image
from the camera input stream is inserted into the TensorFlow
Lite inference engine and the moment the detection results
are returned. Network latency is the delay between sending a
data request and receiving the IoT device data from SSiO. Finally,
execution time is the overall time consumed to present the
detection results with data from SSiO on the screen once the
system identifies an object.
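Each of the three parameters is a wall-clock interval around one stage of the pipeline; a minimal way to measure such a stage in milliseconds is sketched below. The helper name and the `run_inference` example in the comment are ours, not part of the prototype.

```python
import time

def timed(stage_fn, *args):
    """Run one stage (inference, audio decoding, or a network request)
    and return its result together with the elapsed time in milliseconds."""
    start = time.perf_counter()
    result = stage_fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# e.g., result, inference_ms = timed(run_inference, cropped_image)
```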
B. Performance evaluation
We recorded and calculated the average and standard de-
viation of inference speed, network latency, and execution
time for both network environments (see Figure 7b and 7c).
Since we used a voice command as the input to send a data
request to SSiO, we separately captured one more value to
understand how much delay the voice input adds.
The decoding process is conducted within the mobile
device, and decode time is the interval between the moment the
user ends the speech used for the voice command and the moment the
system completes the decoding of the input audio (see Figure 7a).
The decode time is included in the execution time, like inference
speed and network latency.
1) Inference speed: The average inference speed on all
appliances is less than 370ms under any circumstance. The
coffeepot has a relatively slow inference speed, whereas the
microwave has the fastest inference speed out of the three
appliances. Since the inference process is performed within the
mobile device, the inference time is not affected by network
conditions.
2) Decode time: All appliances have an average decode time
of less than 183 ms under both conditions. The decoding process
is also conducted within the mobile device.
3) Network latency: The network latency is affected by net-
work conditions; thus, the results significantly differ depending
on the network. The average network latency under 5 GHz
Wi-Fi is between 195 ms and 233 ms for the three appliances.
The coffeepot incurred the longest latency to receive
data from SSiO, whereas the microwave had the shortest
latency on average. On the other hand, the average network
latency over 4G is between 452 ms and 504 ms for the three
appliances. With 4G, the microwave again had the shortest
latency, whereas the coffeepot took the longest.
4) Execution time: Overall, the average execution time for
all appliances is less than a second over 5 GHz Wi-Fi. The
prototype displays the result for the microwave within an
average of 767 ms, the fastest of the three. In comparison,
the refrigerator and coffeepot require more time to show
results, and the coffeepot has the longest execution time,
with an average of 927 ms. All results with 4G connectivity
exceed one second. The most time-consuming object is the
coffeepot, with an average of 1,283 ms, whereas the least
time-consuming is the microwave, with an average of 1,038 ms.
V. DISCUSSION
Our system is mainly aimed at caregivers, similar to [10]
and [11], whereas the systems in [8] and [9] were designed for
the elderly. Like us, the system in [11] utilized a smartphone as
the platform device, which has relatively high accessibility for
caregivers compared to the HoloLens used in [10]. In relation
to [11], we used more types of sensors and a more extensive
data representation.
The average inference speed and the decode time of the input
audio are independent of network conditions; however, the
inference speed is affected by detection performance. Object
detection uses features extracted from an object’s surface or
shape to distinguish it from the background and other objects.
Detection performance is less accurate for objects with
fewer features on their surface or shape. The coffeepot used
in our experiment is a perfect example of such a feature-poor
object, since it has a reflective plastic surface in a single black
color. Thus, the inference speed varies depending on the camera’s
perspective on the coffeepot, because the uniqueness of its shape
is the most significant feature for classification in this case.
However, we still obtained correct detection results on all
appliances most of the time. We trained our detection model
Fig. 7: (a) Part of the data fetching process with the use of voice command under successful object detection. Prototype test
results on three appliances through (b) 5GHz Wi-Fi and (c) 4G LTE.
for 20,000 steps with 1,203 images of the three appliances,
resulting in a detector performance of 44.9 mAP. The
performance can be further improved by capturing images from
different perspectives and running more training steps
with more images.
In general, caregivers in our municipality often have only
4G network connectivity available while visiting
elderly homes. Accordingly, the evaluation of network latency
over 4G is crucial. Although the average network
latency over 4G is about two times higher than over 5 GHz Wi-Fi,
it stays below 504 ms, and users receive the result in less
than 1.5 seconds.
The execution time is the sum of the inference speed, decode
time, and network latency. However, several more processes
run in the background, mainly related to rendering.
Therefore, a difference of around 200 ms is identified
between the measured execution time and the sum of the other
three parameters.
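The roughly 200 ms residual can be made explicit as a simple subtraction. The per-stage numbers in the example below are hypothetical values chosen to lie within the reported ranges, not actual measurements.

```python
def rendering_overhead(execution_ms, inference_ms, decode_ms, latency_ms):
    """Time not covered by the three measured stages, attributed mainly
    to background rendering processes."""
    return execution_ms - (inference_ms + decode_ms + latency_ms)

# Hypothetical per-run numbers within the reported ranges:
# rendering_overhead(767, 250, 150, 195) -> 172, i.e. roughly the ~200 ms residual
```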
In our scenario, the prototype must detect physical objects
that are hard to recognize with existing commercial systems.
For example, Vuforia enables highly accurate recognition of
objects with rich surface patterns [33]. Moreover, the object
needs to be scanned with Vuforia’s custom scanner app while
placed on a specially designed marker. These requirements
are obstacles when appliances are too big for scanning and
have too few textures on their surface. Another system,
MediaPipe Objectron [34], has a limited and fixed database of
detectable objects in its detection model. Therefore, it does not
meet the requirements of our scenario, since we need to detect
new objects that are not yet present in the existing databases.
These were the reasons for using Google’s TensorFlow
open-source library.
Feedback from nurses and caregivers: We interviewed
three experts (two female nurses and one male caregiver)
working in the elderly healthcare domain. Due to the COVID-19
pandemic, we have so far demonstrated our system and
the prototype, deployed on a mobile device, to them in order
to gauge the end-users’ perspective. The overall response
of the three experts to the use of AR was positive.
They saw a need for AR to provide an efficient and intuitive
interface that makes the elderly’s daily activities easy to
understand. They also noted that an AR map is helpful for
holistically understanding the daily activities that occurred in
each apartment when visiting.
However, what the experts want to view is what they do not
know about the elderly person’s behavior since their last visit,
rather than raw IoT device data. Since the experts do not stay
with the elderly around the clock, their focus is on information
about the elderly person’s activities during their absence.
Therefore, the experts prefer to see data analyzed with machine
learning algorithms (e.g., fall detection) and the history of the
elderly person’s behavior (e.g., abnormal patterns in TV watching
time, coffee consumption, fridge access, or heating meals in
a microwave). Thus, their
feedback and answers gave us further insight into the design
and layout of such AR applications, and this is what we
attempted to focus on while building the object view and the
AR-map view.
VI. CONCLUSION AND FUTURE WORK
In this paper, we proposed, developed, and evaluated an
AR-based mobile application by integrating AR with an IoT
platform in the elderly healthcare domain. The object detector
built for our AR application is evaluated for detection
performance on a mobile device. The prototype uses the object
detector with the customized model to identify three appliances,
receives IoT data assigned to each appliance from the SSiO IoT
platform, and visualizes the data through AR in both an object
view and a map view. The evaluation showed promising results
for running our prototype on a mobile device over both 5 GHz
Wi-Fi and 4G. The interviews with caregivers showed that such
an application could meet their needs with a less complicated,
easy-to-understand, and intuitive interface. Our results can also
be useful for those who integrate and build AR applications with
IoT platforms in domains other than elderly healthcare.
The detection performance can be further improved by
increasing training steps and images in the model training
process. Although we interviewed only a few experts working in
the elderly healthcare domain, more tests and interviews can
further improve the usability of our system in general. After
the easing of the current COVID-19 pandemic, we
plan to conduct a detailed user evaluation as part of a future
study with larger groups of caregivers and elderly residents,
who are the target users of our system.
ACKNOWLEDGMENT
We would like to thank the team of nurses from “The Dementia
Team, The Geriatric Clinic at Region Västerbotten” in
Skellefteå, Sweden, and the caregiver team from the
“Municipality Home-care Services” of Skellefteå municipality.
REFERENCES
[1] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of Things
(IoT): A vision, architectural elements, and future directions,” Future
Generation Computer Systems, vol. 29, no. 7, pp. 1645–1660, Sep. 2013.
[2] S. Saguna, C. Åhlund, and A. Larsson, “Experiences and Challenges
of Providing IoT-Based Care for Elderly in Real-Life Smart Home
Environments,” in Handbook of Integration of Cloud Computing, Cyber
Physical Systems and Internet of Things, R. Ranjan, K. Mitra,
P. Prakash Jayaraman, L. Wang, and A. Y. Zomaya, Eds. Cham:
Springer International Publishing, 2020, pp. 255–271.
[3] S. Allen et al., “2020 Global Healthcare Outlook: Laying a Foundation
for the Future,” Deloitte Insights, Tech. Rep., 2020. [Online]. Available:
https://www2.deloitte.com/content/dam/Deloitte/cz/Documents/life-sciences-health-care/2020-global-health-care-outlook.pdf
[4] D. K. Baroroh, C.-H. Chu, and L. Wang, “Systematic literature review on
augmented reality in smart manufacturing: Collaboration between human
and computational intelligence,” Journal of Manufacturing Systems,
Nov. 2020.
[5] F. Zantalis, G. Koulouras, S. Karabetsos, and D. Kandris, “A Review
of Machine Learning and IoT in Smart Transportation,” Future Internet,
vol. 11, no. 4, p. 94, Apr. 2019.
[6] IoT-iVO, “Sensors that provide security,” 2020. [Online]. Available:
https://translate.google.com/translate?sl=auto&tl=en&u=https://iot-ivo.se/sensorer-som-skanker-trygghet/
[7] TheRound, “IoT augmented reality to reach US$7 trillion by 2027,”
Dec. 2016. [Online]. Available:
https://www.theround.it/iot-augmented-reality-to-reach-us7-trillion-by-2027/
[8] F. Ghorbani, M. Kia, M. Delrobaei, and Q. Rahman, “Evaluating the
Possibility of Integrating Augmented Reality and Internet of Things
Technologies to Help Patients with Alzheimer’s Disease,” in 2019
26th National and 4th International Iranian Conference on Biomedical
Engineering (ICBME). Tehran, Iran: IEEE, Nov. 2019, pp. 139–144.
[9] Y. J. Park, H. Ro, N. K. Lee, and T.-D. Han, “Deep-cARe: Projection-
Based Home Care Augmented Reality System with Deep Learning for
Elderly,” Applied Sciences, vol. 9, no. 18, p. 3897, Sep. 2019.
[10] C. Haidon, H. Pigot, and S. Giroux, “Joining semantic and augmented
reality to design smart homes for assistance,” Journal of Rehabilitation
and Assistive Technologies Engineering, vol. 7, Jan. 2020.
[11] M. S. H. Sassi and L. C. Fourati, “Architecture for Visualizing Indoor
Air Quality Data with Augmented Reality Based Cognitive Internet
of Things,” in Proceedings of the 34th International Conference on
Advanced Information Networking and Applications (AINA-2020), vol.
1151. Springer, Cham, Mar. 2020, pp. 405–418.
[12] F. Mostajeran, F. Steinicke, O. J. Ariza Nunez, D. Gatsios, and D. Fo-
tiadis, “Augmented Reality for Older Adults: Exploring Acceptability
of Virtual Coaches for Home-based Balance Training in an Aging
Population,” in Proceedings of the 2020 CHI Conference on Human
Factors in Computing Systems. Honolulu HI USA: ACM, Apr. 2020,
pp. 1–12.
[13] B. Soedji, J. Lacoche, and E. Villain, “Creating AR Applications for
the IOT : a New Pipeline,” in 26th ACM Symposium on Virtual Reality
Software and Technology. Virtual Event Canada: ACM, Nov. 2020, pp.
1–2.
[14] V. Heun, J. Hobin, and P. Maes, “Reality editor: programming smarter
objects,” in Proceedings of the 2013 ACM conference on Pervasive and
ubiquitous computing adjunct publication. Zurich Switzerland: ACM,
Sep. 2013, pp. 307–310.
[15] K. Huo, Y. Cao, S. H. Yoon, Z. Xu, G. Chen, and K. Ramani, “Scenariot:
Spatially Mapping Smart Things Within Augmented Reality Scenes,”
in Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems. Montreal QC Canada: ACM, Apr. 2018, pp. 1–13.
[16] H. P. L. d. Medeiros and G. Girao, “An IoT-based Air Quality Moni-
toring Platform,” in 2020 IEEE International Smart Cities Conference
(ISC2). Piscataway, NJ, USA: IEEE, Sep. 2020, pp. 1–6.
[17] L. Rachakonda, S. P. Mohanty, and E. Kougianos, “iFeliz: An Approach
to Control Stress in the Midst of the Global Pandemic and Beyond for
Smart Cities using the IoMT,” in 2020 IEEE International Smart Cities
Conference (ISC2). Piscataway, NJ, USA: IEEE, Sep. 2020, pp. 1–7.
[18] L. Rachakonda, P. Rajkumar, S. P. Mohanty, and E. Kougianos, “iMirror:
A Smart Mirror for Stress Detection in the IoMT Framework for
Advancements in Smart Cities,” in 2020 IEEE International Smart Cities
Conference (ISC2). Piscataway, NJ, USA: IEEE, Sep. 2020, pp. 1–7.
[19] K. Saraubon, K. Anurugsa, and A. Kongsakpaibul, “A Smart System
for Elderly Care using IoT and Mobile Technologies,” in Proceedings
of the 2018 2nd International Conference on Software and e-Business -
ICSEB ’18. Zhuhai, China: ACM Press, 2018, pp. 59–63.
[20] E. L. Chuma, L. L. B. Roger, G. G. de Oliveira, Y. Iano, and D. Pajuelo,
“Internet of Things (IoT) Privacy–Protected, Fall-Detection System for
the Elderly Using the Radar Sensors and Deep Learning,” in 2020 IEEE
International Smart Cities Conference (ISC2). Piscataway, NJ, USA:
IEEE, Sep. 2020, pp. 1–4.
[21] F. Ahmed, M. S. Mahmud, and M. Yeasin, “An Interactive Device for
Ambient Awareness on Sidewalk for Visually Impaired,” in 2018 IEEE
International Smart Cities Conference (ISC2). Kansas City, MO, USA:
IEEE, Sep. 2018, pp. 1–6.
[22] A. Khan and S. Khusro, “Smart Assist: Smartphone-Based Drug Compli-
ance for Elderly People and People with Special Needs,” in Applications
of Intelligent Technologies in Healthcare, F. Khan, M. A. Jan, and
M. Alam, Eds. Cham: Springer International Publishing, 2019, pp.
99–108, series Title: EAI/Springer Innovations in Communication and
Computing.
[23] S. S. Region, “Home,” 2020. [Online]. Available: https://en.ssio.se/
[24] FIWARE, “The Open Source platform for our smart digital future,”
Oct. 2021. [Online]. Available: https://www.fiware.org/
[25] V. Araujo, K. Mitra, S. Saguna, and C. Åhlund, "Performance evaluation of FIWARE: A cloud-based IoT platform for smart cities," Journal of Parallel and Distributed Computing, vol. 132, pp. 250–261, Oct. 2019.
[26] TensorFlow, "TensorFlow Lite," 2020. [Online]. Available: https://www.tensorflow.org/lite
[27] Tensorflow, “TensorFlow 2 Detection Model Zoo,” 2021. [Online].
Available: https://github.com/tensorflow/models
[28] P. Galeone, Hands-on Neural Networks with TensorFlow 2.0: Understand TensorFlow, from Static Graph to Eager Execution, and Design Neural Networks. Packt Publishing Ltd., 2019, OCLC: 1122196555.
[29] darrenl, "labelImg," May 2021. [Online]. Available: https://github.com/tzutalin/labelImg
[30] Roboflow, “Roboflow,” 2020. [Online]. Available: https://roboflow.ai
[31] ARCore, "ARCore," 2020. [Online]. Available: https://developers.google.com/ar
[32] Sceneform, "Sceneform," 2020. [Online]. Available: https://developers.google.com/sceneform/develop
[33] PTC, "Vuforia," 2021. [Online]. Available: https://developer.vuforia.com/
[34] Google, "Objectron," 2020. [Online]. Available: https://google.github.io/mediapipe/solutions/objectron.html