Exploiting IoT Services by Integrating Emotion
Recognition in Web of Objects
Muhammad Aslam Jarwar and Ilyoung Chong
Department of CICE, Hankuk University of Foreign Studies, Yongin-si, Korea
jarwar.aslam@gmail.com, iychong@hufs.ac.kr
Abstract— Web of Objects (WoO) is an IoT platform whose modular, plug & play design makes it well suited to connecting heterogeneous data sources to the web. The same modularity eases the adoption of new concepts and technologies, and its ontology- and semantics-based knowledge creation supports context-aware IoT services. One aspect of context-aware IoT services is emotion-based IoT services: because emotions are an important part of human life and strongly influence people's actions, emotion-aware IoT services can assist users in their daily tasks and make their lives easier. This paper presents an approach that integrates emotion recognition into WoO and builds emotion-aware IoT services on top of it, providing a novel path toward affective IoT services in WoO.
Keywords—Internet of Things; Web of Objects; Emotion
Recognition; Emotion aware IoT Services
I. INTRODUCTION
In our daily life, while performing various activities, we generate large quantities of diverse data. These data carry important information about our activities, lifestyle, behavior in different situations, and emotions. Researchers have categorized emotions by their nature into classes such as love, joy, surprise, anger, sadness, and fear; Paul Ekman's six basic classes of emotions (happiness, anger, disgust, fear, sadness, surprise) are the most popular and remain the focus of research. Emotions play an important role in our daily life, and our actions depend on them; one study [1] has also shown that emotions are helpful in patient identification. In the Internet of Things (IoT), however, emotions have not yet been studied in depth. Recognition and fine-grained analysis of emotions and sentiments in an IoT environment can significantly improve IoT services. For example, if a person's recognized emotions indicate suicidal intentions, we could activate a social care service, inform his colleagues, and recommend leisure activities to divert him from those intentions. By recognizing the emotions of elderly people and people with special needs, we could activate assistance, care, and welfare IoT services for them. To fully exploit emotions in IoT, it is necessary to capture them from all the sources through which a human interacts, so the emotion data coming from these various sources are heterogeneous. To cope with this heterogeneity and to provide knowledge-based emotion-aware IoT services, we need an IoT platform that offers abstractions for the data, processing, and analysis layers and, on top of them, knowledge-based emotion-aware services. Web of Objects [2] is an IoT platform that provides these capabilities: it offers ontology- and RDF-based concepts for virtualization and composition by introducing virtual objects (VOs) at the VO layer, composite virtual objects (CVOs) at the CVO layer, and a knowledge-based service layer [3,4].
The goal of this paper is to integrate emotion recognition into the Web of Objects and to activate corresponding emotion-aware IoT services. In the remainder of this paper, Section II describes the different sources of emotions and their acquisition and analysis techniques, Section III elaborates the integration of emotion recognition and emotion-aware IoT services in the Web of Objects, Section IV presents a proof of concept, and Section V concludes the paper.
II. EMOTION SOURCES, ACQUISITION AND ANALYSIS
In the IoT environment, emotion data are generated from various sources. Data derived from physical activities are categorized as physiological emotions; data from chatting, SMS, and social media tweets and posts are categorized as textual emotions; and data from voice, video, emoticons, and images are categorized as multimedia emotions. Physiological emotion cues include gestures, heartbeat, pulse rate, and patterns such as walking fast and then suddenly stopping to think, or putting both hands on one's head under tension. The major sources of physiological emotion data are mobile terminals and wearable sensors, and health-related physiological emotions can be detected with an eHealth platform as in a recent study [5]. Textual emotion information is obtained through feature extraction from social media text, SMS, and blogs; many open-source APIs and tools are available for emotion detection from text, e.g. Synesketch [6] and IBM Watson [7]. For multimedia emotions, data are received from voice and video recording devices and image-capturing cameras, such as those installed in a smart home, and emotions expressed as emoticons in SMS and posts are obtained by extracting the emoticons from textual data. EmoVoice [8] and openEAR [9] are open-source toolkits for emotion feature extraction from voice data, while OpenFace [10] and Java Emotion Recognizer [11] are open-source libraries that can be used for emotion recognition from images in emotion-aware IoT services.
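As an illustration of how a textual emotion recognizer might be realized, the following sketch trains a simple classifier over labeled sentences and predicts one of Ekman's six basic emotion classes. It is a minimal example using scikit-learn rather than any of the toolkits cited above; the training sentences and labels are placeholder assumptions, not data from this work.

```python
# Minimal sketch of a textual emotion recognizer.
# Assumes scikit-learn is installed; the tiny training set below is
# illustrative only and not part of the original paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus labeled with Ekman's six basic emotions.
sentences = [
    "I am so happy to see my family today",
    "I am furious about the broken promise",
    "That smell makes me sick",
    "I am terrified of being alone at night",
    "I miss my family so much, the house feels empty",
    "I did not expect this gift at all",
]
labels = ["happiness", "anger", "disgust", "fear", "sadness", "surprise"]

# TF-IDF features plus a linear classifier stand in for the training
# model that a corpus/training-model component would supply.
ter_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
ter_model.fit(sentences, labels)

# At runtime, the recognizer would classify text fetched from the
# textual data sources (blogs, posts, chat logs).
print(ter_model.predict(["I am lonely and missing everyone at home"]))
```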
Contextual knowledge is an important factor in emotion analysis, and a purely generic emotion recognition technique is not sufficient. For example, during a football match between two teams, the same event that makes the winning team's fans happy makes the other team's fans sad. For fine-grained emotion analysis we therefore also need a knowledge model for contextual knowledge creation. WoO supports knowledge creation through its knowledge model, as in [12], where knowledge is created for smart ageing IoT services.
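To make the role of contextual knowledge concrete, the following sketch adjusts a generically recognized emotion using a simple context record, following the football example above. The context structure and rule are illustrative assumptions, not the paper's knowledge model.

```python
# Minimal sketch: adjusting a generic emotion label with contextual
# knowledge (illustrative rule, not the WoO knowledge model itself).
from dataclasses import dataclass

@dataclass
class EmotionContext:
    event: str          # e.g. "football_match"
    user_team: str      # team the user supports
    winning_team: str   # outcome observed in the event

def contextualize(generic_emotion: str, ctx: EmotionContext) -> str:
    """Refine a generically recognized emotion using context."""
    if ctx.event == "football_match" and generic_emotion == "happiness":
        # The same scene is happy for one side and sad for the other.
        return "happiness" if ctx.user_team == ctx.winning_team else "sadness"
    return generic_emotion

ctx = EmotionContext(event="football_match",
                     user_team="Team A", winning_team="Team B")
print(contextualize("happiness", ctx))  # -> "sadness"
```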
III. ENABLING EMOTION AWARE IOT SERVICE IN WOO
To enable emotion-aware IoT services based on WoO, we integrate emotion recognition and analysis into WoO as shown in Figure 1. The modular, plug & play approach of WoO helps us to virtualize, categorize, and organize the components integrated for emotion recognition and emotion-aware services. The integration consists of the following two main parts.
A. Data acquisition and processing components
In Figure 1, the "User Profile/Request" (UPR) component receives the request parameters through an API; these input parameters include the user's social media accounts and contextual parameters. The user does not have to explicitly request emotion-aware services each time: once he subscribes, an emotion-aware service loop starts to track the activities of that user profile. Additional information about the user's background and context is also stored by this component in the RWK & ECK (REK) component, because contextual information is important in the emotion recognition process. The REK component contains the contextual and background knowledge about the user and keeps growing continuously with the identified emotion categories and the aggregated emotions coming from the PER, MER, and TER components. The training models for the three categories of emotions (physiological, multimedia, and textual) reside in the ETM (Emotion Corpus/Training Model) component, which holds the training models of all three emotion categories and their subcategories. When emotion data arrive at PER, MER, or TER, the component detects the subcategory of the emotion data source, fetches the corresponding portion of the classification and training model from the ETM, and performs emotion recognition on the received data. The User Profile Database (UPD) stores the user profiles: whenever a request is made to the system, the requesting user's profile is stored in this component, and the activities of the stored profiles are later tracked in order to serve emotion-aware services to them. A sketch of how a recognition component might fetch its model from the ETM is given below.
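The following sketch illustrates one possible way the ETM could serve per-category training models to the recognition components; the class and method names here are illustrative assumptions, not the paper's interfaces.

```python
# Minimal sketch of the ETM (Emotion Corpus/Training Model) component
# serving models to the recognition components (PER, MER, TER).
# Registry keys and model objects are illustrative assumptions.
from typing import Any, Dict, Tuple

class EmotionTrainingModels:
    """Holds training models indexed by (category, subcategory)."""

    def __init__(self) -> None:
        self._models: Dict[Tuple[str, str], Any] = {}

    def register(self, category: str, subcategory: str, model: Any) -> None:
        self._models[(category, subcategory)] = model

    def fetch(self, category: str, subcategory: str) -> Any:
        # A recognition component fetches only the chunk it needs.
        return self._models[(category, subcategory)]

class TextualEmotionRecognizer:
    """TER-like component: picks the model for the detected subcategory."""

    def __init__(self, etm: EmotionTrainingModels) -> None:
        self._etm = etm

    def recognize(self, text: str, subcategory: str) -> str:
        model = self._etm.fetch("textual", subcategory)  # e.g. "tweet"
        return model.predict([text])[0]

# Usage, reusing the classifier sketched earlier (ter_model):
# etm = EmotionTrainingModels()
# etm.register("textual", "tweet", ter_model)
# print(TextualEmotionRecognizer(etm).recognize("I miss my family", "tweet"))
```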
The Request Analysis component receives the input parameters from the UPR, analyzes them (credentials in the case of social media as an emotion source), creates the service execution graph, and sends it to EM-1 for data acquisition and emotion recognition. EM-1 manages the composition and execution of emotion recognition in the form of an RDF graph; it decides which emotion recognition logic to select and when to execute it. The PER, MER, and TER components contain the actual logic and software components, based on machine learning techniques, for extracting features and recognizing emotions from the data received through virtual sensors (VSs) from the PDS, MDS, and TDS. As shown in the figure, PER fetches data from the VS of the PDS and the VS of the MDS, because these two have a semantic relationship with respect to physiological and multimedia emotions: physiological emotions can also be recorded by cameras in the form of images and videos. The VSs are virtual sensors; they are needed when the PDS, MDS, or TDS is not available or keeps reporting the same data for some time, so the VS can save the energy of the data source and also speed up data acquisition. The PDS, MDS, and TDS are the data sources for physiological, multimedia, and textual emotions, respectively. For example, the physiological data sources may include wearable sensors measuring body temperature; position sensors that determine whether a person is sitting, supine, prone, or in a left/right lateral recumbent position; an electrocardiogram sensor; and pulse and blood-oxygen sensors. Examples of multimedia data sources, as discussed above, are audio, video, and image recording devices, while typical textual data sources are blogs, Facebook posts, tweets, and Kakao chat logs. E1, E2, and E3 aggregate the emotions received from the PER, MER, and TER components and produce a final aggregated emotion for the emotion-aware service (EAS).
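The role of a virtual sensor, as described above, is to shield the physical data source and avoid repeated reads when the data have not changed. The sketch below shows one plausible caching wrapper; the refresh interval and the data-source interface are illustrative assumptions.

```python
# Minimal sketch of a virtual sensor (VS): caches the last reading from a
# physical data source so repeated requests do not wake the device.
# The data-source callable and refresh interval are assumptions.
import time
from typing import Any, Callable, Optional

class VirtualSensor:
    def __init__(self, read_source: Callable[[], Any],
                 refresh_interval_s: float = 30.0) -> None:
        self._read_source = read_source          # e.g. a wearable sensor API
        self._refresh_interval_s = refresh_interval_s
        self._last_value: Optional[Any] = None
        self._last_read_at: float = 0.0

    def read(self) -> Any:
        now = time.monotonic()
        # Only touch the physical data source when the cache is stale;
        # this saves the source's energy and speeds up acquisition.
        if self._last_value is None or \
           now - self._last_read_at > self._refresh_interval_s:
            self._last_value = self._read_source()
            self._last_read_at = now
        return self._last_value

# Usage: wrap a (hypothetical) pulse-rate reader from the PDS.
vs_pulse = VirtualSensor(read_source=lambda: 72, refresh_interval_s=60)
print(vs_pulse.read())
```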
RWK & ECK: Real World Knowledge & Emotion Contextual Knowledge
EM: Execution Management
PER: Physiological Emotion Recognition
MER: Multimedia Emotion Recognition
VS: Virtual Sensor; PDS: Physiological Data Source
MDS: Multimedia Data Source; TDS: Textual Data Source
SCA: Social Care Service; RWO: Real World Object
VA: Virtual Actuator; E1, E2, E3: Emotion Types
Figure 1. Main architecture of emotion recognition in WoO
B. Emotion aware IoT service components
The EAS receives the aggregated emotion, user profile data, and contextual information and composes the emotion-aware service in the form of an RDF graph. The EAS is the single main service that composes many other microservices to realize the emotion-aware IoT service. For example, if the detected emotions indicate that a person is depressed (sad, fearful, gloomy, ashamed), the emotion-aware service may include microservices such as informing the social care service, informing his friends, or informing his parents and relatives. Sometimes all of these microservices are needed and sometimes only a few of them, depending on the contextual and real-world knowledge; this knowledge may record, for instance, that someone's parents are not alive or that he does not get along with his office friends. The EAS composes the emotion-aware service from multiple microservices based on the knowledge available in the REK component for the given user profile, as sketched below.
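The sketch below illustrates one way the EAS might select microservices conditionally on contextual knowledge; the microservice names and knowledge flags are illustrative assumptions, not the paper's service catalogue.

```python
# Minimal sketch of the EAS composing microservices from contextual
# knowledge. Microservice names and knowledge keys are assumptions.
from typing import Dict, List

def compose_emotion_aware_service(emotion: str,
                                  context: Dict[str, bool]) -> List[str]:
    """Return the microservices to execute for the aggregated emotion."""
    microservices: List[str] = []
    if emotion in {"sadness", "fear", "depression"}:
        # Social care is always part of a depression-related composition.
        microservices.append("notify_social_care_service")
        if context.get("parents_alive", False):
            microservices.append("notify_parents_and_relatives")
        if context.get("likes_office_friends", False):
            microservices.append("notify_office_friends")
        microservices.append("recommend_leisure_activities")
    return microservices

# Contextual knowledge as it might be retrieved from the REK component.
rek_knowledge = {"parents_alive": False, "likes_office_friends": True}
print(compose_emotion_aware_service("sadness", rek_knowledge))
# -> ['notify_social_care_service', 'notify_office_friends',
#     'recommend_leisure_activities']
```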
EM-2 performs a management function similar to EM-1: it receives the service composition as an RDF graph containing the semantic ontology of the different components involved in accomplishing the emotion-aware service, and it executes the composite virtual objects (CVOs) as specified in the service. As shown in the figure, EM-2 chooses the social care CVO, the display pictures CVO, or any other CVO needed to achieve the desired service. The VA (virtual actuator) then performs the actual action through an RWO, e.g. playing a movie, making a call, or sending an SMS to the social care service.
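Since both EM-1 and EM-2 operate on RDF graphs, a small example of querying such a graph for the CVOs to execute may help. The namespace, triples, and property names below are illustrative assumptions expressed with the rdflib library, not the ontology used in the paper.

```python
# Minimal sketch: an execution manager reading a service-composition RDF
# graph and selecting the CVOs to execute. Namespace, properties, and
# CVO names are illustrative assumptions.
from rdflib import Graph, Namespace, Literal

WOO = Namespace("http://example.org/woo#")  # hypothetical ontology namespace

g = Graph()
service = WOO["EmotionAwareService"]
# The composition links the service to the CVOs it needs.
g.add((service, WOO.composes, WOO["SocialCareCVO"]))
g.add((service, WOO.composes, WOO["DisplayPicturesCVO"]))
g.add((WOO["SocialCareCVO"], WOO.actuatesVia, Literal("SMS to social care")))

# EM-2 queries the graph for the CVOs specified in the service.
query = """
    SELECT ?cvo WHERE {
        ?service <http://example.org/woo#composes> ?cvo .
    }
"""
for row in g.query(query):
    print("Execute CVO:", row.cvo)
```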
IV. USE CASE: A MAN MISSING HIS FAMILY
Many WoO-based IoT services become more powerful, contextual, and emotion aware after the integration of emotion recognition. To realize emotion-aware IoT services based on WoO, we present a proof of concept (POC) as depicted in Figure 2.
Figure 2. Emotion aware service POC based on WoO
For the POC, we created the persona of Mr. Alice. Mr. Alice is alone at home and missing his family, and he has subscribed to the WoO-based emotion-aware IoT services. At subscription time, his service template is created based on his preferences, together with knowledge about his family, friends, and everything else that matters to him. This template is stored in the REK as linked data (RDF) and in the user profile database. His request is then composed and sent to the Request Analysis component for validation of the request parameters and orchestration.
EM-1 receives the execution information about emotion types, sources, VO(s), and CVO(s) in the form of an RDF graph and, according to this graph, executes the service by choosing the appropriate CVO(s). Here PER, MER, and TER are the emotion recognition CVOs for the respective categories of emotion data sources. An emotion recognition CVO is a small component that contains only the recognition logic for its emotion category: whenever it receives data from VOs, it detects the type of emotion data source from the VO parameters, fetches the related training model from the ETM and the real-world and emotion contextual knowledge from the REK, and applies machine learning techniques to recognize the emotions. Alice's emotions are therefore detected at the CVO level and aggregated at the service level. Suppose, for example, that Alice's aggregated emotion indicates that he is missing his family. The emotion-aware service then activates the corresponding emotion-aware IoT services by combining multiple microservices, which are executed by CVO(s)/VO(s): a notification is sent to his family that Alice is missing them, family-related movies are played on his TV, and his smartphone reminds him which foods his daughter, son, or wife like and recommends preparing them.
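To illustrate aggregation at the service level, the following sketch combines the per-category emotions (E1, E2, E3 from PER, MER, and TER) into one final emotion by simple confidence-weighted voting; the weights and the voting rule are illustrative assumptions rather than the paper's aggregation method.

```python
# Minimal sketch: aggregating per-category emotions (E1, E2, E3) into a
# final emotion at the service level. Confidences are assumptions.
from collections import Counter
from typing import Dict, Tuple

def aggregate_emotions(category_emotions: Dict[str, Tuple[str, float]]) -> str:
    """category_emotions maps a category (physiological, multimedia,
    textual) to a (label, confidence) pair produced by its CVO."""
    scores: Counter = Counter()
    for _category, (label, confidence) in category_emotions.items():
        scores[label] += confidence
    # The label with the highest total confidence wins.
    return scores.most_common(1)[0][0]

# E1/E2/E3 as they might arrive from PER, MER, and TER for Mr. Alice.
final_emotion = aggregate_emotions({
    "physiological": ("sadness", 0.6),
    "multimedia": ("sadness", 0.7),
    "textual": ("happiness", 0.5),
})
print(final_emotion)  # -> "sadness"
```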
V. CONCLUSION
In this paper, we categorized the diverse emotion data sources and input devices and virtualized these data sources with the support of VOs. The integration of the various emotion recognition techniques for the virtual data sources was performed at the CVO level, using a simple plug & play approach for adding and removing machine learning techniques and training models according to the emotion data sources. Because there are multiple emotion recognition categories, aggregation was performed at the service level, enriched with the user's contextual information, and the emotion-aware service was accomplished by combining multiple microservices. This integration of emotion recognition into emotion-aware IoT services in WoO enables better IoT services in every aspect of human activity. In the future, we plan to implement our approach as SaaS in the cloud.
ACKNOWLEDGMENT
This work was supported by Korea Institute for Advancement of Technology (KIAT) funded by Ministry of Trade, Industry and Energy (MOTIE, Korea) [N040800001, Development of Web Objects enabled EmoSpaces Service Technology].
REFERENCES
[1] Mano, Leandro Y., et al. "Exploiting IoT technologies for enhancing Health Smart Homes through patient identification and emotion recognition." Computer Communications (2016).
[2] Web-of-Objects (WoO), ITEA2 Project, Jan 2012–Dec 2014. https://itea3.org/project/web-of-objects.html
[3] Kibria, Muhammad Golam, and Ilyoung Chong. "Knowledge-based open Internet of Things service provisioning architecture on beacon-enabled Web of Objects." International Journal of Distributed Sensor Networks 12.9 (2016): 1550147716660896.
[4] Kibria, Muhammad Golam, et al. "A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment." Sensors 15.9 (2015): 24054-24086.
[5] Khan, Ali Mehmood, and Michael Lawo. "Recognizing Emotion from Blood Volume Pulse and Skin Conductance Sensor Using Machine Learning Algorithms." XIV Mediterranean Conference on Medical and Biological Engineering and Computing 2016. Springer International Publishing, 2016.
[6] Krcadinac, Uros, et al. "Synesketch: An open source library for sentence-based emotion recognition." IEEE Transactions on Affective Computing 4.3 (2013): 312-325.
[7] Mostafa, Mohamed, et al. "Incorporating Emotion and Personality-Based Analysis in User-Centered Modelling." arXiv preprint arXiv:1608.03061 (2016).
[8] EmoVoice. https://www.informatik.uni-augsburg.de/en/chairs/hcm/projects/tools/emovoice/, last accessed 2016-10-04.
[9] Eyben, Florian, Martin Wöllmer, and Björn Schuller. "OpenEAR—introducing the Munich open-source emotion and affect recognition toolkit." 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, 2009.
[10] Amos, Brandon, Bartosz Ludwiczuk, and Mahadev Satyanarayanan. "OpenFace: A general-purpose face recognition library with mobile applications." Technical report, CMU-CS-16-118, 2016.
[11] Java Emotion Recognizer. https://github.com/mpillar/java-emotion-recognizer, last accessed 2016-10-04.
[12] Kibria, Muhammad Golam, and Ilyoung Chong. "Knowledge creation model in WoO enabled smart ageing IoT service platform." Ubiquitous and Future Networks (ICUFN), 2016 Eighth International Conference on. IEEE, 2016.
... "Emotion is the name used to comprehend all that is understood by feelings, states of feeling, pleasures, pains, passions, sentiments, and affections" [59]. The Web of Objects (WoO) [60] collects data on users' emotions-including physiological feelings (e.g., different kinds of gestures), textual feelings (e.g., Facebook posts, tweets, and messages), and social media messages (e.g., voices, images, and videos)-to detect emotions. After these data elements are analyzed, users' emotions are detected, and relevant services are activated. ...
... In this regard, Virtual Tutor [51] and Pen-Pen [53] present standalone applications. IAMHAPPY [55] and WoO [60] have designed knowledge bases to identify and improve users' moods, respectively, through knowledge engineering. Other studies have designed systems with more than one tier. ...
... For instance, Smart Home 2 [54] benefits from the above sensors to control the plants' environment. Finally, WoO [60], SAHHc [78], Smoodsically [62], IoT-BRB [69], Social Sensing [68], Postpartum Depression (PPD) [67], and SA-IoTBigSys [81] utilize specialized medical sensors such as body temperature and heart rate sensors to collect physiological information. It should be noted that almost none of the used sensors are specific to psychology, but are general, environmental, or medical sensors. ...
Article
Full-text available
The Internet of things (IoT) continues to “smartify” human life while influencing areas such as industry, education, economy, business, medicine, and psychology. The introduction of the IoT in psychology has resulted in various intelligent systems that aim to help people—particularly those with special needs, such as the elderly, disabled, and children. This paper proposes a framework to investigate the role and impact of the IoT in psychology from two perspectives: (1) the goals of using the IoT in this area, and (2) the computational technologies used towards this purpose. To this end, existing studies are reviewed from these viewpoints. The results show that the goals of using the IoT can be identified as morale improvement, diagnosis, and monitoring. Moreover, the main technical contributions of the related papers are system design, data mining, or hardware invention and signal processing. Subsequently, unique features of state-of-the-art research in this area are discussed, including the type and diversity of sensors, crowdsourcing, context awareness, fog and cloud platforms, and inference. Our concluding remarks indicate that this area is in its infancy and, consequently, the next steps of this research are discussed.
... • Advancements Prescient Support Optimization: The primary objective is to create progressed prescient support frameworks that use IoT sensors, AI calculations, and chronicled information to figure gear disappointments and optimize support plans. This will include the creation of interesting machine learning models that can precisely foresee the remaining valuable life of resources, empowering office directors to mediate proactively and decrease expensive downtime [9]. ...
... The discoveries from this inquire about can advise the advancement of more viable procedures for the broad selection of IoT-based offices administration arrangements. Investigating the broader scene of keen fabricating, [9] offers a comprehensive audit of past inquire about, current headways, and future bearings in this field. The ponder digs into the integration of different Industry 4.0 innovations, counting IoT, AI, and ML, and their applications in upgrading generation processes, asset administration, and supply chain optimization. ...
Article
Full-text available
The shift in urban foundation and administrations inside keen cities requests a comprehensive, innovative, and user-centric strategy that continues coordinating cutting-edge advances. This paper presents a spearheading approach that leverages the control of the Web of Things (IoT), Counterfeit Insights (AI), and Machine Learning (ML) to revolutionize the way shrewd city offices are overseen. At the centre of this strategy is the improvement of a centralized, cloud-based Offices Administration Stage (FMP) that serves as the spine for keen city operations. The FMP acts as an integrated hub, drawing together information from numerous IoT sensors, building automation systems, and other urban infrastructure. The policy also encourages user-centred design in light of improving occupant comfort, productivity, and well-being. Backed by IoT sensors and AI-driven building computerization frameworks, robust natural condition control and vitality utilization streamlining of the FMP can assist in conveying a predominant client involvement inside savvy city office.
... Journalists and media have strong influence on government policies and they effect the mindset of the public, which also impact the election results. Journalists use social media services [Zubiaga et al., 2013] [Jarwar et al., 2017] together the news about the major events. Mostly news of major events comes from microblogging and social media services, e.g. ...
... In this field Paul Ekman's six basic classes of emotions (happiness, anger, disgust, fear, sadness, surprise) [Ekman and Friesen, 1971] are most popular and remain the focus of researchers. The emotions are also important to provide IoT based recommended services [Jarwar and Chong, 2017]. ...
Thesis
Full-text available
Social media has revolutionized human communication and styles of interaction. Due to its easiness and effective medium, people share and exchange information, carry out discussion on various events, and express their opinions. For effective policy making and understanding the response of a community on different events, we need to monitor and analyze the social media. In social media, there are some users who are more influential, for example, a famous politician may have more influence than a common person. These influential users belong to specific communities. The main object of this research is to know the sentiments of a specific community on various events. For detecting the event based sentiments of a community we propose a generic framework. Our framework identifies the users of a specific community on twitter. After identifying the users of a community, we fetch their tweets and identify tweets belonging to specific events. The event based tweets are pre-processed. Pre-processed tweets are then analyzed for detecting sentiments of a community for specific events. Qualitative and quantitative evaluation confirms the effectiveness and usefulness of our proposed framework.
... Journalists and media have strong influence on government policies and they effect the mindset of the public, which also impact the election results. Journalists use social media services [Zubiaga et al., 2013] [Jarwar et al., 2017] together the news about the major events. Mostly news of major events comes from microblogging and social media services, e.g. ...
... In this field Paul Ekman's six basic classes of emotions (happiness, anger, disgust, fear, sadness, surprise) [Ekman and Friesen, 1971] are most popular and remain the focus of researchers. The emotions are also important to provide IoT based recommended services [Jarwar and Chong, 2017]. ...
Preprint
Full-text available
Social media has revolutionized human communication and styles of interaction. Due to its easiness and effective medium, people share and exchange information, carry out discussion on various events, and express their opinions. For effective policy making and understanding the response of a community on different events, we need to monitor and analyze the social media. In social media, there are some users who are more influential, for example, a famous politician may have more influence than a common person. These influential users belong to specific communities. The main object of this research is to know the sentiments of a specific community on various events. For detecting the event based sentiments of a community we propose a generic framework. Our framework identifies the users of a specific community on twitter. After identifying the users of a community, we fetch their tweets and identify tweets belonging to specific events. The event based tweets are pre-processed. Pre-processed tweets are then analyzed for detecting sentiments of a community for specific events. Qualitative and quantitative evaluation confirms the effectiveness and usefulness of our proposed framework.
... AER systems can contribute to assess a candidate's suitability for a job and measure important traits like dependability and cognitive abilities. In particular, embedded AER systems enabled through IoT can provide fine-grained analysis of emotions and sentiments [44], which can be used in various ways for monitoring and evaluations. In the military and other defence-related departments, AER systems are partially used to track how sets of people or countries 'feel' about a government or other entities [10]. ...
Preprint
Full-text available
Automated emotion recognition (AER) technology can detect humans' emotional states in real-time using facial expressions, voice attributes, text, body movements, and neurological signals and has a broad range of applications across many sectors. It helps businesses get a much deeper understanding of their customers, enables monitoring of individuals' moods in healthcare, education, or the automotive industry, and enables identification of violence and threat in forensics, to name a few. However, AER technology also risks using artificial intelligence (AI) to interpret sensitive human emotions. It can be used for economic and political power and against individual rights. Human emotions are highly personal, and users have justifiable concerns about privacy invasion, emotional manipulation, and bias. In this paper, we present the promises and perils of AER applications. We discuss the ethical challenges related to the data and AER systems and highlight the prescriptions for prosocial perspectives for future AER applications. We hope this work will help AI researchers and developers design prosocial AER applications.
... Now, the services of social IoT are exploited in emotion-recognition as these emotions relate to the social activities of humans in their daily life. Hence, the integration of social IoT services will make life easier with several social care facilities for people [9]. The proposed FERS is useful for developing IoT-based smart devices and appliances. ...
Article
Full-text available
This work proposes a facial expression recognition system for a diversified field of applications. The purpose of the proposed system is to predict the type of expressions in a human face region. The implementation of the proposed method is fragmented into three components. In the first component, from the given input image, a tree-structured part model has been applied that predicts some landmark points on the input image to detect facial regions. The detected face region was normalized to its fixed size and then down-sampled to its varying sizes such that the advantages, due to the effect of multi-resolution images, can be introduced. Then, some convolutional neural network (CNN) architectures were proposed in the second component to analyze the texture patterns in the facial regions. To enhance the proposed CNN model’s performance, some advanced techniques, such data augmentation, progressive image resizing, transfer-learning, and fine-tuning of the parameters, were employed in the third component to extract more distinctive and discriminant features for the proposed facial expression recognition system. The performance of the proposed system, due to different CNN models, is fused to achieve better performance than the existing state-of-the-art methods and for this reason, extensive experimentation has been carried out using the Karolinska-directed emotional faces (KDEF), GENKI-4k, Cohn-Kanade (CK+), and Static Facial Expressions in the Wild (SFEW) benchmark databases. The performance has been compared with some existing methods concerning these databases, which shows that the proposed facial expression recognition system outperforms other competing methods.
... One object needs first to choose the nature of the connection and the trustworthiness that it will build with others in order to subsequently interact according to a social relational paradigm. Hence, we introduce a hierarchy of Socially Aware, Geographical and Service Quality parameters that will allow the virtualized services to choose an appropriate type of relation according to these parameters [32]. ...
Article
Full-text available
In the context of Internet of Things (IoT), the cooperation and synergy between varied and disparate communicating objects is strained by trustworthiness, confidentiality and interoperability concerns. These restrictions can limit the development of IoT-based applications especially considering the emergent boost in the number of communicating objects and their growing itinerant nature in a collective service context. A new perspective arises with the paradigm of Social Internet of Things (SIoT), that relies on the implementation of semi-independent communicating objects with cooperation assessed by social relations and social feed-back. In this article, we present the development and expansion of the IoT concept towards SIoT in the context of the interactions between tourist services as communicating objects. As a proof-of-concept we propose a composition of services as virtualized social objects and the interaction between them, by taking into consideration the balance, trustworthiness, cooperation and synergy of services. Furthermore we present a solution to integrate also accessibility in SIoT services. The presented concept is presented using a demonstrator build for tourist services.
... To expose the services capability with data quality, microservices style architecture has significance support. In such a dynamic IoT environment and energy management environment, where the applications deal with a variety of data sources from heterogeneous objects and to enhance data quality for the quality of services, microservices based design and implementation architecture is a suitable way [11]- [13]. ...
Preprint
Full-text available
During the production, distribution, and consumption of energy, a large quantity of data is generated. For efficiently using of energy resources other supplementary data such as building information, weather, and environmental data etc. are also collected and used. All these energy data and relevant data is published as linked data in order to enhance the reusability of data and maximization of energy management services capability. However, the quality of this linked data is questionable because of wear and tears of sensors, unreliable communication channels, and highly diversification of data sources. The provision of high-quality energy management services requires high quality linked data, which reduces billing cost and improve the quality of the living environment. Assessment and improvement methodologies for the quality of data along with linked data needs to process very diverse data from highly diverse data sources. Microservices based data-driven architecture has great significance to processes highly diverse linked data with modularity, scalability, and reliability. This paper proposed microservices based architecture along with domain data and metadata ontologies to enhance and assess energy-related linked data quality.
Chapter
Present research is focused on consideration of emotional data. In order to deal with such objective, EEG technique has been applied. Electroencephalography (EEG) is a technique used to measure and record the electrical activity that takes place on the scalp. This recorded activity has been shown to be indicative of the underlying activity occurring in the superficial layer of the brain. Numerous studies have been conducted to investigate the quality of the electroencephalogram (EEG) signal. The major objective of this experiment was to examine the ability to recognize the emotional states shown by individuals. Despite the previous time-consuming nature and lack of accuracy in prior study, a viable solution has been discovered. It can be concluded that the system has a very limited capacity for both adaptation and expansion. Developing a thorough methodology for assessing electroencephalogram (EEG) data is of paramount significance.
Poster
Full-text available
An outcome of linear regression analysis suggested that the following Big Five dimensions: Openness, Extraversion, Conscientiousness, and Neuroticism have the highest correlation with the social emotion tones: Joy, Sadness, and Disgust. The dataset used to build this model is based on a number of users (N=391), eight inputs (Openness, Extraversion, Conscientiousness, Neuroticism, Joy, Sadness, Anger, and Disgust) and the class/output variable as the server status (where No: System Failure and Yes: System Idle). The total number of the instances for the testing set is 57; the output of the model shows a 75.44% corrected predicted instances and 24.56% incorrectly classified instances (kappa statistic: 0.5295; mean absolute error: 0.3432; RMS error: 0.4246). References Society is witnessing a shape rise in ubiquitous demands of computer systems and applications and with it associated complexity regarding expectations of ever-more intuitive interfaces. Due to this, there has been an increase in interest in research in modeling, understanding and predicting user behavior demands has become a priority across a number of domains. This work is concerned with the relationship between digital footprint and behavior and personality [2, 3]. A wide range of pervasive and often publicly available datasets encompassing digital footprints, such as social media activity, can be used to infer personality [1, 4] and development of robust models capable of describing individuals and societies [5].
Conference Paper
Full-text available
Understanding complex user behaviour under various conditions, scenarios and journeys can be fundamental to the improvement of the user-experience for a given system. Predictive models of user reactions, responses -- and in particular, emotions -- can aid in the design of more intuitive and usable systems. Building on this theme, the preliminary research presented in this paper correlates events and interactions in an online social network against user behaviour, focusing on personality traits. Emotional context and tone is analysed and modelled based on varying types of sentiments that users express in their language using the IBM Watson Developer Cloud tools. The data collected in this study thus provides further evidence towards supporting the hypothesis that analysing and modelling emotions, sentiments and personality traits provides valuable insight into improving the user experience of complex social computer systems.
Article
Full-text available
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.
Conference Paper
Full-text available
Various open-source toolkits exist for speech recognition and speech processing. These toolkits have brought a great benefit to the research community, i.e. speeding up research. Yet, no such freely available toolkit exists for automatic affect recognition from speech. We herein introduce a novel open-source affect and emotion recognition engine, which integrates all necessary components in one highly efficient software package. The components include audio recording and audio file reading, state-of-the-art paralinguistic feature extraction and plugable classification modules. In this paper we introduce the engine and extensive baseline results. Pre-trained models for four affect recognition tasks are included in the openEAR distribution. The engine is tailored for multi-threaded, incremental on-line processing of live input in real-time, however it can also be used for batch processing of databases.
Article
Web of Objects uses a semantic ontology to create and provide knowledge-based Internet of Things services in terms of virtual objects and composite virtual objects. The objects are created through virtualization with the use of semantic ontology to form the virtual objects, where multiple virtual objects are combined to form the composite virtual objects to offer the services. Beacon-enabled Web of Objects is the extension of the existing web that allows the beacon to broadcast the uniform resource identifier of the real-world object to the nearer mobile devices. Based on the user selection and request, the service is handled and offered by the Web of Objects platform; thus, an architecture on the beacon-enabled Web of Objects has been proposed in this article. To offer the knowledge-based services, a knowledge creation model has been presented. To realize the knowledge-based service features on the proposed architecture, a use case scenario has been presented and a conceptual semantic ontology model has been designed. Finally, to assess the features, a prototype has been implemented and demonstrated on the use case scenario.
Conference Paper
Intelligent service provisioning for smart ageing requires not only the current status of smart objects, but also the knowledge of the service and its surrounding environment. Hence, context information is detected by processing the stream of data and the situation is recognized by reasoning them to project the prediction of the service environment as well as to update the knowledge. Knowledge based IoT service provisioning can be achieved in terms of virtual objects and composite virtual objects in the Web-of-Objects platform. Web-of-Objects allows the semantic ontology to virtualize the objects, which can express the knowledge through the interrelation of the virtual objects. This paper proposes a knowledge creation model in Web-of-Objects platform that helps in learning the status of the objects to update the knowledge through machine learning technique, which has been considered in service request analysis for user-centric service provisioning. To realize the intelligent service features, a use case scenario has been studied and a prototype has been implemented.
Article
Online human textual interaction often carries important emotional meanings inaccessible to computers. We propose an approach to textual emotion recognition in the context of computer-mediated communication. The proposed recognition approach works at the sentence level and uses the standard Ekman emotion classification. It is grounded in a refined keyword-spotting method that employs: a WordNet-based word lexicon, a lexicon of emoticons, common abbreviations and colloquialisms, and a set of heuristic rules. The approach is implemented through the Synesketch software system. Synesketch is published as a free, open source software library. Several Synesketch-based applications presented in the paper, such as the the emotional visual chat, stress the practical value of the approach. Finally, the evaluation of the proposed emotion recognition algorithm shows high accuracy and promising results for future research and applications.
Recognizing Emotion from Blood Volume Pulse and Skin Conductance Sensor Using Machine Learning Algorithms
  • Ali Khan
  • Michael Mehmood
  • Lawo
Khan, Ali Mehmood, and Michael Lawo. "Recognizing Emotion from Blood Volume Pulse and Skin Conductance Sensor Using Machine Learning Algorithms." XIV Mediterranean Conference on Medical and Biological Engineering and Computing 2016. Springer International Publishing, 2016
OpenFace: A general-purpose face recognition library with mobile applications
  • Brandon Amos
  • Bartosz Ludwiczuk
  • Mahadev Satyanarayanan
Amos, Brandon, Bartosz Ludwiczuk, and Mahadev Satyanarayanan. OpenFace: A general-purpose face recognition library with mobile applications. Technical report, CMU-CS-16-118, 2016.