Exploiting IoT Services by Integrating Emotion
Recognition in Web of Objects
Muhammad Aslam Jarwar and Ilyoung Chong
Department of CICE, Hankuk University of Foreign Studies, Yongin-si, Korea
jarwar.aslam@gmail.com, iychong@hufs.ac.kr
Abstract— The Web of Objects (WoO) is an IoT platform that, owing to its modular and plug-and-play nature, offers strong support for connecting heterogeneous data sources to the web. This modularity also eases the adoption of other concepts and technologies, and its ontology- and semantics-based knowledge creation supports context-aware IoT services. One facet of context-aware IoT services is emotion-based IoT services: emotions are an important aspect of human life, and people act on their emotions, so emotion-aware IoT services can assist people with their daily tasks and make their lives easier. Our approach of integrating emotion recognition and creating emotion-aware IoT services based on WoO is a novel step toward affective IoT services in WoO.
Keywords—Internet of Things; Web of Objects; Emotion Recognition; Emotion-Aware IoT Services
I. INTRODUCTION
In our daily lives, while performing various activities, we create and generate large quantities of diverse data. These data contain important information about our activities, lifestyles, behavior in different situations, and emotions. Scientists have categorized emotions by their nature into types such as love, joy, surprise, anger, sadness, and fear; Paul Ekman's six basic classes of emotions (happiness, anger, disgust, fear, sadness, and surprise) are the most popular and remain the focus of researchers. Emotions play an important role in our daily lives, since our actions depend on them, and one study [1] has observed that emotions are also helpful in patient identification. In the Internet of Things (IoT), however, emotions have not been studied deeply. Recognition and fine-grained analysis of emotions and sentiments in an IoT environment can play an important role in delivering better IoT services. If a person's recognized emotions indicate suicidal intentions, we could activate a social care service, inform his colleagues, and recommend leisure activities to divert him from those intentions. By recognizing the emotions of elderly and disabled people, we could activate assistance, care, and welfare IoT services for them. To fully exploit emotions in the IoT, it is necessary to capture them from all the sources through which a human can interact, so the emotion data gathered from these various sources are heterogeneous. To cope with this heterogeneity and to provide knowledge-based emotion-aware IoT services, we need an IoT platform that provides abstractions for the data, processing, and analysis layers, and knowledge-based emotion-aware services on top of them. The Web of Objects [2] is an IoT platform that provides all of these capabilities. It offers ontology- and RDF-based concepts for virtualization and composition by introducing a virtual object (VO) layer, a composite virtual object (CVO) layer, and a knowledge-based service layer [3,4].
The goal of this paper is to integrate emotion recognition into the Web of Objects and to activate the corresponding emotion-aware IoT services. In the remainder of this paper, Section II describes the different sources of emotions and their acquisition and analysis techniques, Section III elaborates the integration of emotion recognition and emotion-aware IoT services in the Web of Objects, Section IV covers the proof of concept, and Section V concludes the paper.
II. EMOTION SOURCES, ACQUISITION AND ANALYSIS
In the IoT environment, emotion data are generated from various sources: data from physical activities are categorized as physiological emotions; data from chatting, SMS, and social media tweets and posts are categorized as textual emotions; and data from voice, video, emoticons, and images are categorized as multimedia emotions. Physiological emotion cues include gestures, heartbeat, and pulse rate, for example walking fast and then suddenly stopping to think, or putting both hands on the head when under tension. The major sources of physiological emotion data are mobile terminals and wearable sensors, and health-related physiological emotions can be detected with an eHealth platform, as in a recent study [5]. Information about textual emotions is obtained through feature extraction from social media text, SMS, and blogs; many open-source APIs and tools are available for emotion detection from text, e.g., Synesketch [6] and IBM Watson [7]. For multimedia emotions, data come from voice and video recording devices and image-capturing cameras, such as those installed in a smart home, while emotions from emoticons used in SMS and posts are obtained by extracting the emoticons from the textual data. EmoVoice [8] and openEAR [9] are open-source toolkits for emotion feature extraction from voice data. For emotions from images, OpenFace [10] and Java Emotion Recognition [11] are open-source libraries that can also be used for emotion recognition in emotion-aware IoT services.
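As an illustration of the textual branch, the following minimal Python sketch classifies text into Ekman's basic emotions using a hand-made keyword lexicon; the lexicon and function names are hypothetical stand-ins for a trained model or a tool such as Synesketch [6].

```python
# Minimal lexicon-based sketch of textual emotion recognition.
# The keyword lexicon is illustrative only; a production system would
# use a trained classifier or an external toolkit.
from collections import Counter

# Hypothetical lexicon mapping keywords to Ekman's six basic emotions.
EMOTION_LEXICON = {
    "happy": "happiness", "glad": "happiness", "love": "happiness",
    "angry": "anger", "furious": "anger",
    "gross": "disgust", "disgusting": "disgust",
    "afraid": "fear", "scared": "fear",
    "sad": "sadness", "miss": "sadness", "lonely": "sadness",
    "wow": "surprise", "unexpected": "surprise",
}

def recognize_text_emotion(text: str) -> str:
    """Return the dominant Ekman emotion found in the text, or 'neutral'."""
    tokens = text.lower().split()
    hits = Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else "neutral"

print(recognize_text_emotion("I am lonely and I miss my family"))  # sadness
```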
Contextual knowledge is an important factor in emotion analysis, so we cannot apply only generic emotion recognition techniques. For example, during a football match between two teams, the happy emotion of the winning team's fans corresponds to a sad one for the other team's fans. For contextual knowledge creation, we therefore also need a knowledge model for fine-grained emotion analysis. WoO supports knowledge creation through its knowledge model, as in [12], where knowledge is created for smart ageing IoT services.
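A minimal sketch of how contextual knowledge could override a generic label follows; the context fields (event, user_team, losing_team) are assumptions invented for this illustration, not part of the WoO knowledge model.

```python
# Hedged sketch: adjusting a generically recognized emotion with
# contextual knowledge, e.g. which football team the user supports.

def contextualize(raw_emotion: str, context: dict) -> str:
    """Reinterpret a generic emotion label using user context."""
    # A cheering crowd reads as 'happiness' generically, but if the
    # user's team lost, the contextual emotion is more likely sadness.
    if (raw_emotion == "happiness"
            and context.get("event") == "football_match"
            and context.get("user_team") == context.get("losing_team")):
        return "sadness"
    return raw_emotion

ctx = {"event": "football_match", "user_team": "A", "losing_team": "A"}
print(contextualize("happiness", ctx))  # sadness
```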
III. ENABLING EMOTION AWARE IOT SERVICE IN WOO
To enable emotion-aware IoT services based on WoO, we integrate emotion recognition and analysis into WoO as shown in Figure 1. The modular and plug-and-play approach of WoO helps us virtualize, categorize, and partition the components integrated for emotion recognition and emotion-aware services. The integration has the following two main parts.
A. Data acquisition and processing components
In Figure 1, the "User Profile/Request" (UPR) component receives the request parameters through an API; these include input parameters such as the user's social media accounts and contextual parameters. The user does not always have to request emotion-aware services explicitly: once he subscribes, an emotion-aware service loop starts tracking the activities of that user profile. Additional information about the user's background and context is also stored by this component in the RWK & ECK (REK) component, because contextual information is important in the emotion recognition process. The REK component contains the contextual and background knowledge about the user, and it grows continuously with the identified categories of the user's emotions and with the emotions aggregated from the PER, MER, and TER components. The training models for the three categories of emotions (physiological, multimedia, and textual) reside in the ETM (Emotion Corpus/Training Model) component, which holds the training models for all three emotion categories and their subcategories. When emotion data arrive at PER, MER, or TER, these components detect the subcategory of the emotion data source, fetch the relevant chunk of the classification and training model from the ETM, and perform emotion recognition on the received data. The user profile database (UPD) contains information about user profiles: when any request is made to the system, the profile of that user is stored in this component, and the activities of the stored profiles are later tracked in order to serve them with emotion-aware services.
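The following sketch illustrates one way the ETM lookup could be organized; the ETM class, the TrainingModel fields, and keying by (category, subcategory) are our assumptions, not an API defined by WoO.

```python
# Hedged sketch of how a recognizer (PER/MER/TER) might fetch the matching
# training-model chunk from the ETM component.
from dataclasses import dataclass

@dataclass
class TrainingModel:
    category: str      # "physiological" | "multimedia" | "textual"
    subcategory: str   # e.g. "heartbeat", "voice", "tweet"
    weights: bytes     # serialized classifier parameters

class ETM:
    """Emotion Corpus/Training Model store, keyed by (category, subcategory)."""
    def __init__(self):
        self._models: dict[tuple[str, str], TrainingModel] = {}

    def register(self, model: TrainingModel) -> None:
        self._models[(model.category, model.subcategory)] = model

    def fetch(self, category: str, subcategory: str) -> TrainingModel:
        return self._models[(category, subcategory)]

etm = ETM()
etm.register(TrainingModel("physiological", "heartbeat", b"..."))
model = etm.fetch("physiological", "heartbeat")  # used by PER for recognition
```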
The request analysis component receives the input parameters from the UPR, analyzes them (credentials, in the case of social media as an emotion source), creates the service execution graph, and sends it to EM-1 for data acquisition and emotion recognition. EM-1 manages the composition and execution of emotion recognition in the form of an RDF graph; this component decides which emotion recognition logic to select and when to execute it. The PER, MER, and TER components contain the actual logic and software components, based on machine learning techniques, to extract features and recognize emotions from the data received through the VSs from the PDS, MDS, and TDS. As shown in the figure, PER receives or fetches data from both the VS of the PDS and the VS of the MDS, because these two are semantically related with respect to physiological and multimedia emotions: physiological emotions can also be recorded through cameras in the form of images and videos. The VSs are virtual sensors; they are needed when the PDS, MDS, or TDS is unavailable or has been producing the same data for some time, in which case a VS can save the energy of the data source and also speed up data acquisition. The PDS, MDS, and TDS are the data sources for physiological, multimedia, and textual emotions, respectively. For example, the physiological data sources may include wearable sensors for body temperature; a position sensor to determine whether a person is sitting, supine, prone, or in left/right lateral recumbent position; an electrocardiogram sensor; and pulse and blood-oxygen sensors. Examples of multimedia data sources are, as discussed before, audio, video, and image recording devices, while typical textual data sources are blogs, Facebook posts, tweets, and Kakao chat logs. E1, E2, and E3 aggregate the emotions received from the PER, MER, and TER components and produce a final aggregated emotion for the emotion-aware service (EAS).
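A simple confidence-weighted vote is one plausible way to aggregate E1, E2, and E3; the paper does not fix an aggregation rule, so the following Python sketch is only illustrative.

```python
# Hedged sketch of service-level aggregation of the per-category emotions
# produced by PER, MER, and TER.
from collections import defaultdict

def aggregate_emotions(recognized: list[tuple[str, float]]) -> str:
    """recognized: (emotion_label, confidence) pairs from PER, MER, TER."""
    scores: dict[str, float] = defaultdict(float)
    for label, confidence in recognized:
        scores[label] += confidence
    return max(scores, key=scores.get)

e1 = ("sadness", 0.7)    # from PER (physiological)
e2 = ("sadness", 0.6)    # from MER (multimedia)
e3 = ("neutral", 0.5)    # from TER (textual)
print(aggregate_emotions([e1, e2, e3]))  # sadness
```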
RWK & ECK: Real World Knowledge & Emotion Contextual Knowledge; EM: Execution Management; PER: Physiological Emotion Recognition; MER: Multimedia Emotion Recognition; TER: Textual Emotion Recognition; VS: Virtual Sensor; PDS: Physiological Data Source; MDS: Multimedia Data Source; TDS: Textual Data Source; SCA: Social Care Service; RWO: Real World Object; VA: Virtual Actuator; E1, E2, E3: Emotion Types
Figure 1. Main Architecture of Emotion Recognition in WoO
B. Emotion aware IoT service components
The EAS receives the aggregated emotion, user profile data, and contextual information for the composition of the emotion-aware service in the form of an RDF graph. The EAS is a single main service that composes many other microservices to achieve the emotion-aware IoT service. For example, if the detected emotion indicates that a person is depressed (sad, fearful, gloomy, and ashamed), the emotion-aware service may comprise microservices that, e.g., inform the social care service, inform his friends, or inform his parents and relatives. Sometimes we need all of these microservices and sometimes only a few of them, depending on the contextual and real-world knowledge; such knowledge may record, for instance, that someone's parents are not alive or that he does not like his office friends. The EAS composes the emotion service from multiple microservices based on the knowledge available in the REK component with respect to the user profile.
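The selection logic might look like the following sketch, where the microservice names and the REK keys (parents_alive, likes_office_friends) are invented for illustration.

```python
# Hedged sketch of EAS-style microservice composition: which microservices
# to invoke depends on the aggregated emotion and on real-world knowledge
# from the REK component.

def compose_service(emotion: str, rek: dict) -> list[str]:
    """Select microservices for the emotion-aware service."""
    services: list[str] = []
    if emotion in {"sadness", "fear", "depression"}:
        services.append("notify_social_care")
        if rek.get("parents_alive", False):
            services.append("notify_parents")
        if rek.get("likes_office_friends", False):
            services.append("notify_friends")
    return services

rek_knowledge = {"parents_alive": True, "likes_office_friends": False}
print(compose_service("sadness", rek_knowledge))
# ['notify_social_care', 'notify_parents']
```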
EM-2 performs management functions like EM-1: it receives the service composition in the form of an RDF graph, which contains the semantic ontology of the different components involved in accomplishing the emotion-aware service. EM-2 then executes the different composite virtual objects (CVOs) as modeled in the service. As shown in the figure, EM-2 chooses the social care CVO, the display pictures CVO, or any other CVO needed to achieve the desired service. The VA (virtual actuator) actually performs the action through an RWO, i.e., plays a movie, or makes a call or sends an SMS to the social care service.
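To make the RDF-graph-driven execution concrete, the sketch below builds a toy service composition graph with the rdflib library; the woo# namespace and its properties are hypothetical, not the actual WoO ontology.

```python
# Hedged sketch of a service execution graph as RDF, using rdflib.
from rdflib import Graph, Namespace, RDF, Literal

WOO = Namespace("http://example.org/woo#")  # hypothetical namespace
g = Graph()

# The emotion-aware service composes a social care CVO, which in turn
# uses a virtual actuator to reach a real-world object.
g.add((WOO.EmotionAwareService, RDF.type, WOO.Service))
g.add((WOO.EmotionAwareService, WOO.composes, WOO.SocialCareCVO))
g.add((WOO.SocialCareCVO, WOO.actuatesThrough, WOO.VirtualActuator1))
g.add((WOO.VirtualActuator1, WOO.controls, Literal("smart_tv")))

# EM-2 would traverse this graph to decide which CVOs to execute.
for s, p, o in g.triples((WOO.EmotionAwareService, WOO.composes, None)):
    print(f"execute CVO: {o}")
```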
IV. USE CASE: A MAN MISSING HIS FAMILY
There are many WoO-based IoT services that become more powerful, contextual, and emotion aware after the integration of emotion recognition. To realize emotion-aware IoT services based on WoO, we present a proof of concept (PoC), as depicted in Figure 2.
Figure 2. Emotion-aware service PoC based on WoO
For the PoC, we have created the persona of Mr. Alice. Mr. Alice is alone at home and missing his family. He has subscribed to the WoO-based emotion-aware IoT services. When he subscribed, a service template was created based on his preferences, together with knowledge about his family, friends, and everything that matters to him. This template is stored in the REK in the form of linked data (RDF) and in the user profile database. His request is then composed and sent to the request analysis component for validation of the request parameters and for orchestration.
EM-1 receives the execution information about the emotion types, sources, VO(s), and CVO(s) in the form of an RDF graph and, according to the given graph, executes the service by choosing the CVO(s). Here PER, MER, and TER are the emotion recognition CVOs for each category of emotion data source. An emotion recognition CVO is a small component containing only the logic for one category of emotions: whenever it receives data from VOs, it detects the type of emotion data source with the help of the VO parameters, fetches the related training model from the ETM and the real-world and emotion contextual knowledge from the REK, and applies machine learning techniques to recognize the emotions. Alice's emotions are thus detected at the CVO level and aggregated at the service level. Suppose, for example, that Alice's aggregated emotion reveals that he is missing his family. The emotion-aware service then activates the corresponding emotion-aware IoT services by combining multiple microservices (e.g., a notification is sent to his family that Alice misses them, family-related movies are played on his TV, and his smartphone reminds him that his daughter/son/wife likes XYZ food, so the service recommends that food), and these services will be executed by CVO(s)/VO(s).
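Putting the illustrative helpers from the previous sections together, the PoC flow for Alice could be exercised roughly as follows; the emotion labels and REK entries are assumptions made up for this walkthrough.

```python
# Hedged end-to-end sketch of the PoC flow, reusing the illustrative
# aggregate_emotions and compose_service helpers defined above.
e1 = ("sadness", 0.8)   # PER: physiological cues
e2 = ("sadness", 0.7)   # MER: facial expression on camera
e3 = ("sadness", 0.9)   # TER: "I miss my family" in a chat log

alice_rek = {"parents_alive": True, "likes_office_friends": True}
emotion = aggregate_emotions([e1, e2, e3])
for microservice in compose_service(emotion, alice_rek):
    print(f"EM-2 executes: {microservice}")
# EM-2 executes: notify_social_care
# EM-2 executes: notify_parents
# EM-2 executes: notify_friends
```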
V. CONCLUSION
In this paper, we categorized the diverse emotion data sources and input devices and then, with the support of VOs, virtualized these data sources. The integration of the various emotion recognition techniques for the various virtual data sources was performed at the CVO level, with a simple, plug-and-play approach to adding and removing machine learning techniques and training models according to the emotion data sources. Because of the multiple emotion recognition categories, aggregation was performed at the service level by adding the user's contextual information, and the emotion-aware service was accomplished by combining multiple microservices. This approach of integrating emotion recognition into emotion-aware IoT services in WoO harmonizes better IoT services with every aspect of human activity. In the future, we plan to implement our approach as SaaS in the cloud.
ACKNOWLEDGMENT
This work was supported by the Korea Institute for Advancement of Technology (KIAT) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea) [N040800001, Development of Web Objects enabled EmoSpaces Service Technology].
REFERENCES
[1] Mano, Leandro Y., et al. "Exploiting IoT technologies for enhancing Health Smart Homes through patient identification and emotion recognition." Computer Communications (2016).
[2] Web of Objects (WoO), ITEA2 Project, Jan. 2012–Dec. 2014. https://itea3.org/project/web-of-objects.html
[3] Kibria, Muhammad Golam, and Ilyoung Chong. "Knowledge-based open Internet of Things service provisioning architecture on beacon-enabled Web of Objects." International Journal of Distributed Sensor Networks 12.9 (2016): 1550147716660896.
[4] Kibria, Muhammad Golam, et al. "A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment." Sensors 15.9 (2015): 24054-24086.
[5] Khan, Ali Mehmood, and Michael Lawo. "Recognizing Emotion from Blood Volume Pulse and Skin Conductance Sensor Using Machine Learning Algorithms." XIV Mediterranean Conference on Medical and Biological Engineering and Computing 2016. Springer International Publishing, 2016.
[6] Krcadinac, Uros, et al. "Synesketch: An open source library for sentence-based emotion recognition." IEEE Transactions on Affective Computing 4.3 (2013): 312-325.
[7] Mostafa, Mohamed, et al. "Incorporating Emotion and Personality-Based Analysis in User-Centered Modelling." arXiv preprint arXiv:1608.03061 (2016).
[8] EmoVoice. https://www.informatik.uni-augsburg.de/en/chairs/hcm/projects/tools/emovoice/, last accessed 2016-10-04.
[9] Eyben, Florian, Martin Wöllmer, and Björn Schuller. "OpenEAR—introducing the Munich open-source emotion and affect recognition toolkit." 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, 2009.
[10] Amos, Brandon, Bartosz Ludwiczuk, and Mahadev Satyanarayanan. "OpenFace: A general-purpose face recognition library with mobile applications." Technical report, CMU-CS-16-118, 2016.
[11] Java Emotion Recognizer. https://github.com/mpillar/java-emotion-recognizer, last accessed 2016-10-04.
[12] Kibria, Muhammad Golam, and Ilyoung Chong. "Knowledge creation model in WoO enabled smart ageing IoT service platform." Ubiquitous and Future Networks (ICUFN), 2016 Eighth International Conference on. IEEE, 2016.