Context-aware Human Activity Recognition and
Decision Making
Asad Masood Khattak, La The Vinh, Dang Viet Hung, Phan Tran Ho Truc, Le Xuan Hung, D. Guan, Zeeshan Pervez,
Manhyung Han, Sungyoung Lee, Young-Koo Lee
Dept. of Computer Engineering, Kyung Hee University, Korea,
{asad.masood, vinhlt, dangviethung, pthtruc, lxhung, donghai, zeeshan, smiley, sylee},
Abstract: Ubiquitous Life Care (u-Life care) has become increasingly attractive to computer science researchers due to the demand for high-quality, low-cost care services anytime and anywhere. Many works exploit sensor networks to monitor patients' health status, movements, and real-time daily life activities in order to provide care services. Context information combined with real-time daily life activities can enable better services, service suggestions, and changes in system behavior for better healthcare. Our proposed Secured Wireless Sensor Network - integrated Cloud Computing for ubiquitous Life Care (SC3) monitors human health as well as activities. In this paper we focus on the Human Activity Recognition Engine (HARE) framework architecture, the backbone of SC3, and discuss it in detail. Camera-based and sensor-based activity recognition engines are discussed in detail, along with the manipulation of recognized activities using the Context-aware Activity Manipulation Engine (CAME) and the Intelligent Life Style Provider (i-LiSP). Preliminary results of CAME showed robust and accurate responses to medical emergencies. We have deployed five different activity recognition engines on the Cloud to identify different sets of activities of Alzheimer's disease patients.
As the standard of living rises, people are more interested in their health and desire a healthy life. Due to the aging of the population and the rising cost of workforce and high-quality treatment, the cost of life care and healthcare systems is increasing worldwide.
worldwide. According to OECD1
Cloud Computing can provide a powerful, flexible, and
cost-effective infrastructure for life care services that can
fulfill the vision of “ubiquitous life care” that is providing life
care to people anywhere at any time with increasing coverage
(Organization of Economic
Cooperation and Development) Health data 2008, total health
spending accounted for 15.3% of GDP in the United States in
2006. Korea was 6.4% of GDP to health in 2006. The United
States also ranks far ahead of other OECD countries in terms
of total health spending per capita, with spending of 6,714
USD (adjusted for purchasing power parity (PPP)), more than
twice the OECD average of 2,824 USD in 2006. For Korea it
was 1480 USD.
and quality. Because of its elasticity, scalability, pay-as-you-go
model [1], Cloud Computing can potentially provide huge cost
savings, flexible, high-throughput, and ease of use for life care
services. For example, with life care providers looking at
automating processes for lower cost and higher gains, Cloud
Computing can act as an ideal platform. For this reason we
have developed Secured Wireless Sensor Network (WSN) -
integrated Cloud Computing for u-Life Care (SC3) [9] that
provide all the above discussed facilities.
Our focus in this paper is on the Human Activity Recognition Engine (HARE) component of the SC3 architecture, highlighted in Figure 1. HARE can help enhance capabilities and provides tremendous value for smarter service provisioning. HARE provides an efficient model for managing real-time data from various sensors, efficient detection of human activities, and better manipulation of detected activities using ontologies. System accuracy is the most important issue in healthcare systems. Existing systems are based on simple condition-action rules [19] and either do not use context information or, in some cases, use imperfect context information [7], making system results unpredictable. Their focus is more on environment sensors than on real-time person activity.
Due to space limitations, we provide the details of each HARE component and the preliminary results achieved with the help of the Context-aware Activity Manipulation Engine (CAME). Experimental results of the proposed HARE framework showed robust responses for healthcare services in emergency situations. As a proof of concept, in the initial phase HARE is deployed on a Cloud server for the life care of an Alzheimer's disease patient using five different activity recognition modules. The demonstration was very successful for a set of 14 different real-time activities that Alzheimer's disease patients commonly perform.
This paper is arranged as follows: Section II provides an overview of the overall SC3 architecture. Section III is a detailed description of the proposed HARE architecture and its subcomponents. Section IV comprises the implementation and results details. Finally, we conclude our findings in Section V and discuss future directions and applications of HARE.
The system architecture for SC3, proposed in [9], is shown in Figure 1. In this architecture, WSNs are deployed in home environments for collecting data. This sensed data is either human health data and/or data to be used for the detection of human activities for care services. To detect human activities, we propose several novel approaches: embodied-sensor based activity recognition [18], video-based activity recognition, wearable sensor-based activity recognition, location tracking, and ontology-based intelligent activity logging and manipulation [13]. The sensors are either attached to a person or to the walls in the home environment. The video-based approach works on images collected from a camera, extracting the background to obtain the moving object and inferring activities such as walking, sitting, standing, falling down, bending, jacking, jumping, running, siding, skipping, one-hand waving, both-hands waving, and exercising. Location tracking helps in properly locating the subject's current position. On top of these, an ontology engine is implemented to deduce high-level activities and make decisions according to the situation, based on the user profile.
Sensed data is transferred to the Cloud using sensor data dissemination and WSN-Cloud integration mechanisms [6]. To access medical data on the Cloud, the user must be authenticated and granted access permission. Image-based authentication and activity-based access control are proposed to enhance the security and flexibility of user access [12], [8].
To enable Independent Clouds Collaboration (ICC), we proposed a dynamic collaboration procedure [6].
Numerous u-life care services can access Clouds to provide
better and low cost care for end-users such as secure u-119
service, secure u-Hospital, secure u-Life care research, and
secure u-Clinic.
In SC3, we mostly focus on the WSN, WSN-Cloud integration, activity recognition, authentication and access control for Cloud data, and a sample care service for patients with different diseases in a home environment. We have implemented SC3 for Alzheimer's disease, as discussed in Section IV. First, human activity data is captured from sensors and videos and then transmitted to the Cloud Gateway. After data filtering, the Gateway transmits the data to the Cloud via a TCP/IP socket. In the Cloud, the raw data is used to deduce user activity and location information, such as that the patient is walking, eating, or staying in the kitchen. Activity and location information is forwarded to the ontology for representation, inference of higher-level activities, and situation analysis. Decisions are also made based on the situation to respond to some context; for example, if the patient is reading a book then the TV should be turned off.
To access patient data, doctors and/or nurses are first authenticated based on their access permissions. Some of the main services of SC3 are: (1) Safety monitoring services for home users: SC3's WSN can monitor a home user's movement and location by using various sensors. The sensory data is then disseminated to the Cloud; from that, through SC3's life care services such as the emergency service, caregivers can monitor and respond immediately in case of emergency situations. (2) Information sharing services: With SC3, patient information and data can be accessed globally, and resources can be shared by a group of hospitals rather than each hospital having a separate IT infrastructure. This can help in the early identification and tracking of disease outbreaks, environment-related health problems, and other issues. (3) Emergency-connection services: SC3 can be deployed to monitor home environments in real time, including gas, fire, and robbery. Through SC3, an alarm system connected to users, u-119, and the police department can give an emergency alert in case any emergency situation occurs. (4) Remote monitoring: Users can monitor their home and their family's health anywhere, any time, with any kind of device connected over the Internet, such as a cell phone, PDA, laptop, or computer.
Figure 1, The system model of SC3
The core of SC3 is the Human Activity Recognition Engine (HARE), shown in Figure 2. HARE is composed of various sub-components: Location Tracking, to track human location; the Activity Recognizer (including embedded, wearable, 2D-camera, and 3D-camera based activity recognition), to recognize human activities; the Schema Mapping and XML Transformer, to transform activity output into a machine-understandable and flexible OWL format; and the Context-Aware Activity Manipulation Engine, to infer high-level activities and make decisions based on the subject's performed activity and profile information.
In addition, a number of supporting components are integrated to make HARE work properly. The AR Fusion and Collaborator enables collaboration among the different activity recognition engines, which is necessary to increase the accuracy of activity recognition. For example, if the wearable sensor-based AR detects that a person is taking medicine with 70% confidence, and the 2D camera-based AR also detects the person taking medicine with 80% confidence, the collaborator can confirm that he is taking medicine. The HARE Repository is the backbone of HARE: it stores the raw data collected by sensors and cameras, the real-time activities recognized by the activity recognition engines, the activity history, and activities in machine-understandable format (OWL) for inferring high-level activities. We have also successfully developed a Mobile Activity Sensor Logger (MASoL) (see Figure 1). MASoL serves in the infrastructure layer under HARE to collect and monitor human and environment information.
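The paper does not specify the fusion rule used by the AR Fusion and Collaborator. One simple scheme consistent with the medicine-taking example above, sketched here as an assumption rather than the actual SC3 implementation, treats each engine's confidence as independent evidence for the same activity label:

```python
def fuse_confidences(confidences):
    """Combine independent per-engine confidences for the same activity label.

    Treats each engine's confidence as the probability that the activity
    occurred and assumes the engines err independently, so the probability
    that every engine missed a real activity is the product of (1 - c_i).
    """
    p_all_miss = 1.0
    for c in confidences:
        p_all_miss *= (1.0 - c)
    return 1.0 - p_all_miss

# Wearable-sensor AR reports "taking medicine" with 0.7 confidence and
# 2D-camera AR reports the same label with 0.8 confidence.
fused = fuse_confidences([0.7, 0.8])  # 1 - 0.3 * 0.2 = 0.94
```

Under this independence assumption, two moderately confident engines agreeing on the same label yield a fused confidence higher than either alone, which matches the collaborator's role described above.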
In this section we briefly discuss the main components of HARE, namely: Video-Based Activity Recognition, Sensor-Based Activity Recognition, the Intelligent Life-care Service Provider (i-LiSP), and the Context-aware Activity Manipulation Engine (CAME). Activities are detected by the video-based and sensor-based AR engines and then given to i-LiSP and CAME for further manipulation and decision making.
A. Video Based Activity Recognition
The accuracy of the video-based AR depends significantly
on the accuracy of human body segmentation. In the field of
image segmentation [15], active contour (AC) model has
attracted much attention. Recently, Chan and Vese (CV) proposed in [3] a novel form of AC based on the Mumford and Shah functional for segmentation and the level set framework. The CV AC model utilizes the difference between the regions inside and outside the curve, making it one of the most robust and thus widely used techniques for image segmentation, with energy function

F(C) = \int_{in(C)} |I(x) - \mu_{in}|^2 dx + \int_{out(C)} |I(x) - \mu_{out}|^2 dx,

where I(x), with x ranging over the image plane, is a certain image feature such as intensity, color, or texture, and \mu_{in} and \mu_{out} are respectively the mean values of the image feature inside (in(C)) and outside (out(C)) the curve C, which represents the boundary between the two separate segments.
Considering image segmentation as a clustering problem, we can see that this model forms two segments (clusters). However, the global minimum of the above energy functional does not always guarantee desirable results, especially when a segment is highly inhomogeneous, e.g., a human body, as can be seen in Figure 3(b). This is due to the fact that the CV AC minimizes the dissimilarity within each segment but does not account for the distance between different segments. Our methodology is to incorporate the Bhattacharyya distance [2] into the CV energy functional such that not only are the differences within each region minimized but the distance between the two regions is maximized as well. The proposed energy functional is
E(C) = \alpha F(C) + (1 - \alpha) B(C),

where

B(C) \equiv B = \int \sqrt{p_{in}(z) \, p_{out}(z)} \, dz

is the Bhattacharyya coefficient, with

p_{in}(z) = \int \delta(z - I(x)) H(\phi(x)) dx / \int H(\phi(x)) dx,
p_{out}(z) = \int \delta(z - I(x)) (1 - H(\phi(x))) dx / \int (1 - H(\phi(x))) dx,

\phi the level set function, and H(\cdot) and \delta(\cdot) respectively the Heaviside and Dirac functions. Note that the Bhattacharyya distance is defined by -\log B(C), so the maximization of this distance is equivalent to the minimization of B(C). Note also that, to be comparable to the F(C) term, in our implementation B(C) is multiplied by the area of the image, because its value always lies within the interval [0, 1] whereas F(C) is calculated based on an integral over the image plane. In general, we can regularize the solution by constraining the length of the curve and the area of the region inside it. Therefore, the energy functional is

E(C) = \mu \cdot Length(C) + \nu \cdot Area(in(C)) + \alpha F(C) + (1 - \alpha) B(C),

where \mu, \nu, and \alpha are constants.
The intuition behind the proposed energy functional is that we seek a curve which 1) is regular (the first two terms) and 2) partitions the image into two regions such that the differences within each region are minimized (the F(C) term) and the distance between the two regions is maximized (the B(C) term). The level set implementation for the overall energy functional can be derived by standard variational calculus, with A_{in} and A_{out} the areas inside and outside the curve. As a result, the proposed model can overcome the CV AC's limitation in segmenting inhomogeneous objects, as shown in Figure 3(c).
Figure 2, Framework architecture of Human Activity Recognition Engine (HARE)
Figure 3, Sample segmentation of inhomogeneous body-shape object
using active contours. (a) Initial contour, (b) result of CV AC [3], and (c)
result of our approach.
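The Bhattacharyya coefficient driving the region-separation term can be illustrated on discrete histograms. The following sketch uses made-up normalized intensity histograms for the regions inside and outside the contour; it is an illustration of the coefficient, not the segmentation implementation:

```python
import math

def bhattacharyya_coefficient(p_in, p_out):
    """B = sum over feature values z of sqrt(p_in(z) * p_out(z)).

    B equals 1 for identical distributions and 0 for disjoint ones, so
    minimizing B pushes the two regions' feature distributions apart.
    """
    return sum(math.sqrt(a * b) for a, b in zip(p_in, p_out))

# Hypothetical normalized intensity histograms of the regions
# inside and outside the contour (well separated).
inside = [0.7, 0.2, 0.1, 0.0]
outside = [0.0, 0.1, 0.2, 0.7]
b = bhattacharyya_coefficient(inside, outside)  # small value: regions differ
```

A small B here corresponds to a well-placed contour: the inside and outside feature distributions barely overlap.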
After obtaining a set of body silhouettes segmented from a sequence of images, we apply ICA (Independent Component Analysis) [10, 11] to extract the motion features of that sequence. The extracted features are then symbolized using vector quantization algorithms such as K-means clustering [14]; the symbol sequences generate a codebook of vectors for training and recognition. The overall architecture of the proposed framework is shown in Figure 4, where the parameters denote the number of testing shape images and the number of trained HMMs.
Figure 4, Architecture of the proposed approach for motion feature
extraction and recognition.
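The vector-quantization step above, clustering motion feature vectors with K-means and then mapping each vector to the index of its nearest codeword, can be sketched in pure Python. The 2-D feature data and codebook size are illustrative assumptions, not the paper's actual ICA features:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's K-means: returns a codebook of k centroid vectors."""
    rnd = random.Random(seed)
    codebook = rnd.sample(vectors, k)  # initialize with k distinct data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda j: dist2(v, codebook[j]))
            clusters[i].append(v)
        for j, members in enumerate(clusters):
            if members:  # recompute centroid as the mean of its members
                codebook[j] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return codebook

def symbolize(vectors, codebook):
    """Map each feature vector to the index of its nearest codeword."""
    return [min(range(len(codebook)), key=lambda j: dist2(v, codebook[j]))
            for v in vectors]

# Hypothetical 2-D motion features from two distinct movement patterns.
features = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.1),
            (0.9, 1.0), (1.0, 0.8), (0.8, 0.9)]
codebook = kmeans(features, k=2)
symbols = symbolize(features, codebook)
```

The resulting symbol sequence is what gets fed to the HMMs for training and recognition; a real deployment would use a much larger codebook over ICA feature vectors.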
B. Sensor Based Activity Recognition
Based on existing work [16], we developed our own recognition model, called "semi-Markov Conditional Random Fields (semiCRF)" [18]; furthermore, we propose a novel algorithm which reduces the complexity of training and inference by more than 10 times in comparison with the original work. In our model, we assume that X and Y are the input signal and label sequence, respectively. We optimize the model parameters so that P(Y|X) is maximized, where in a CRF P(Y|X) is calculated by

P(Y|X) = \psi(Y, X) / Z_X, with \psi(Y, X) = \exp(W^T F(Y, X)),

where F is a vector of feature functions (which are often delta functions), W is the vector of model parameters, and \psi is called the potential function. Z_X (the normalization factor) is computed using the forward/backward algorithm.
However, a conventional CRF is limited by the Markov assumption, which cannot model the duration of an activity or long-range transitions between activities. To overcome this, we introduce a semi-Markov model by defining a new state as s_i = (y_i, b_i, e_i), where s_i is the i-th state and y_i, b_i, e_i are, in that order, the label, begin time, and end time of the state. For example, given an input label sequence Y = (1,1,2,2,2,3,4,4), the semi-Markov state sequence is (1,1,2), (2,3,5), (3,6,6), (4,7,8). Note that in AR we consider states with the expected label. With these definitions, the potential function is rewritten in terms of a weighted transition potential function, where w_T(y',y) is the weight of the transition from y' to y.
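The label-to-segment conversion described above can be written down directly. This sketch reproduces the worked example, using 1-based, inclusive begin/end times:

```python
def to_semi_markov_states(labels):
    """Convert a per-time-step label sequence into semi-Markov states
    s_i = (y_i, b_i, e_i): label, begin time, end time (1-based, inclusive)."""
    states = []
    start = 0
    for t in range(1, len(labels) + 1):
        # close the current segment at the end of input or on a label change
        if t == len(labels) or labels[t] != labels[start]:
            states.append((labels[start], start + 1, t))
            start = t
    return states

# The example from the text: Y = (1,1,2,2,2,3,4,4)
states = to_semi_markov_states([1, 1, 2, 2, 2, 3, 4, 4])
# → [(1, 1, 2), (2, 3, 5), (3, 6, 6), (4, 7, 8)]
```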
Making use of semi-Markov conditional random fields, we propose an algorithm for computing the gradients of the target function by extending [16]. It reduces the complexity of computing each gradient from O(TN^2D) to O(TN(N+D)), where T, N, and D are the length of the input sequence, the number of label values, and the maximum duration of a label, respectively.
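The claimed speed-up can be checked with a back-of-the-envelope calculation. The values of T, N, and D below are illustrative assumptions (one sample per second over a day, 14 activity labels as in the demonstration, activities lasting up to five minutes), not the experimental settings:

```python
def gradient_cost_original(T, N, D):
    # Cost per gradient in the original semi-CRF formulation: O(T * N^2 * D)
    return T * N * N * D

def gradient_cost_proposed(T, N, D):
    # Cost per gradient with the proposed algorithm: O(T * N * (N + D))
    return T * N * (N + D)

# Hypothetical setting: 1 Hz sampling over a day (T), 14 activity labels (N),
# activities lasting up to 5 minutes (D = 300 time steps).
T, N, D = 86_400, 14, 300
speedup = gradient_cost_original(T, N, D) / gradient_cost_proposed(T, N, D)
# speedup = N * D / (N + D) ≈ 13.4
```

The ratio depends only on N and D, and exceeds 10 whenever activity durations are long relative to the number of labels, which is the regime long-term activity data occupies.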
For the experiments, we used a publicly available dataset of long-term activities, containing 7 days of continuous data measured by two triaxial accelerometers [18], and we compare our results with those of the original work.
C. Intelligent Life-care Service Provider
The Intelligent Life-care Service Provider (i-LiSP) module is responsible for providing intelligent services to users by analyzing their context information. A service could be an act of help, assistance, or recommendation. Various kinds of services are considered in i-LiSP, such as entertainment, medication, and sport services. The context information used in i-LiSP is obtained from various sources, mainly: activity information from low-level sensors, activity information from the Human Activity Recognition Engine, and high-level context information from the CAME engine (discussed later).
i-LiSP is designed to provide intelligent services/recommendations to users. The services can be divided into two types: (1) Service by long-term observations (SLO): this service is provided after the i-LiSP module analyzes the long-term history data of users. For example, by analyzing one week of a user's data, we can create a model of the user's toileting times per day (e.g., estimating the probability density function of the user's toilet times); medical doctors can then give recommendations to the user by analyzing the generated model. (2) Service by current observations (SCO): this service is the immediate response/recommendation provided by i-LiSP by analyzing the current context information. For example, suppose the current context information of the user is that at 9 AM he is watching TV. Based on the knowledge (stored information) of i-LiSP, at this time he should be doing exercise, so the system will remind him to stop watching TV and do exercise.
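An SCO response like the TV example amounts to comparing an observed activity against stored knowledge. The schedule and rule encoding below are illustrative assumptions, not i-LiSP's actual knowledge representation:

```python
# Hypothetical stored knowledge: the activity the user should be doing per hour.
SCHEDULE = {9: "exercise", 13: "lunch", 21: "sleep"}

def sco_recommendation(hour, observed_activity):
    """Service by Current Observations: compare the observed activity
    against the stored schedule and return an immediate recommendation,
    or None when the observation matches expectations."""
    expected = SCHEDULE.get(hour)
    if expected and observed_activity != expected:
        return f"Please stop {observed_activity} and do {expected}."
    return None

# At 9 AM the user is watching TV, but the schedule says exercise.
msg = sco_recommendation(9, "watching TV")
# → "Please stop watching TV and do exercise."
```

An SLO service would instead fit a model (e.g., a density estimate of toileting times) over days or weeks of such observations before any recommendation is made.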
i-LiSP has three sub-modules that work using different techniques: (1) Topic Model Based Service Provider (TMSP): TMSP adopts the topic model as its reasoning algorithm to provide SLO. Topic models were originally proposed to summarize (find concise restatements of) spoken/text documents to improve the navigation quality of speech/text collections; here, TMSP uses a topic model to summarize the history information of users. (2) Rough Set Based Service Provider (RSSP): RSSP adopts rough set theory as its reasoning algorithm to provide SCO. Rough set theory is used here for the following reasons: it works directly on the data and does not require any other prior knowledge about the data (such as probability distributions), and it can automatically filter out irrelevant and redundant information. The knowledge of RSSP is represented by rules that are understandable by humans, so experts can flexibly manipulate the knowledge by adding or removing rules. (3) Bayesian Network Based Service Provider (BNSP): BNSP uses a Bayesian network as its reasoning algorithm and can provide both SLO and SCO by varying the models it generates. A Bayesian network is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph; for example, a Bayesian network could represent the probabilistic relationships between a user's diseases and symptoms.
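The disease-symptom example can be made concrete with a two-node network and inference by Bayes' rule. All probabilities below are made up for illustration; BNSP's actual models and structure are not specified here:

```python
def posterior_disease_given_symptom(p_disease, p_sym_given_d, p_sym_given_not_d):
    """Exact inference on a minimal disease -> symptom network:
    P(D | S) = P(S | D) P(D) / P(S), with P(S) obtained by
    summing over both values of the disease variable."""
    p_sym = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)
    return p_sym_given_d * p_disease / p_sym

# Illustrative numbers: 1% prior on the disease; the symptom appears in
# 90% of patients with the disease and 5% of those without it.
post = posterior_disease_given_symptom(0.01, 0.9, 0.05)  # ≈ 0.154
```

Even a highly indicative symptom yields a modest posterior when the prior is low, which is exactly the kind of quantitative reasoning a rule-only system cannot express.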
D. Context-aware Activity Manipulation Engine
An ontology is formally defined as an explicit and formal specification of a shared conceptualization [5, 17]. An ontology defines formal semantics for information, allowing the information to be processed by computer system agents. The use of ontology in activity recognition is a new area of research and helps in better understanding an activity in a given context. In [19] the authors focused only on the location and time information of an activity and used the Event-Condition-Action (ECA) method to respond to a particular activity. In our approach, we use not only location and time information but also information about the subject's profile and the environment.
Ontology helps in properly extracting the higher-level activity from a series of activities. For example, a fall-down activity detected with a low-level sensor alone will always generate an emergency alarm. On the other hand, using an ontology that holds context information about the fall-down activity, such as location information, time information, the subject's profile, and other linked low-level activities, the system can easily identify whether it is an emergency situation or a jumping competition.
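The fall-down disambiguation above amounts to conditioning the alarm on linked context. A minimal sketch, with hypothetical context field names rather than the actual ontology schema:

```python
def interpret_fall(context):
    """Decide whether a detected 'fall down' is an emergency, using context
    linked through the ontology (location and preceding activities).
    The field names are illustrative, not the real ontology properties."""
    if (context.get("location") == "sports ground"
            and "jumping" in context.get("recent_activities", [])):
        return "jumping competition"
    return "emergency"

# The same low-level event, interpreted under two different contexts.
alone_at_home = {"location": "bathroom", "recent_activities": ["walking"]}
at_competition = {"location": "sports ground",
                  "recent_activities": ["running", "jumping"]}
```

With sensor data alone both cases trigger the alarm; with the linked context only the first one does.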
All the activities are stored and managed in the domain ontology. For manipulation of the information (activities in ontological format) we used SPARQL and Pellet 3.4, and for decision making we use Description Logic (DL) rules. Table 1 presents the DL rules that we used in our system implementation; the list is not an exhaustive list of all the rules used in the project.
HARE is designed in such a flexible manner that its clients can easily communicate with it from small handheld devices such as sensors, PDAs, or cell phones. Various entertainment applications, such as online games and console games, can also make use of HARE.
Table 1, Customized rules for making decisions
Activity(a1) ⊓ hasContents(taking medicine) ⊓ hasNextActivity(a2) ⊓ ∃Activity(a2) ⊓ hasContents(eating) → Activity.Create(a1) ⊓ Activity.Create(a2) ⊓ reminder(take medicine)
Activity(a1) ⊓ hasContents(reading) ⊓ hasNextActivity(a2) ⊓ ∃Activity(a2) ⊓ hasContents(TV On) → Activity.Create(a1) ⊓ Activity.Create(a2) ⊓ turnOff(TV)
Activity(a1) ⊓ hasContents(unknown exercise) ⊓ hasNextActivity(null) → Activity.Create(a1) ⊓ reminder(movements are wrong)
Activity(a1) ⊓ hasContents(entering kitchen) ⊓ ∃Activity(a2) ⊓ hasContents(entering bedroom) → Activity.Create(a1) ⊔ Activity.Create(a2) ⊓
One of the main target services of u-Life care is to enable people to live independently longer through the early detection and prevention of chronic diseases and disabilities. Computer vision, emplaced wireless sensor networks (WSNs), and body networks are emerging technologies that promise to significantly enhance medical care for seniors living at home or in assisted living facilities. With these technologies, we can collect video, physiological, and environmental data, identify individuals' activities of daily living (ADL), and act for improved daily medical care as well as real-time response to medical emergencies.
To achieve this, accurately identifying individuals' ADL, so-called activity recognition (AR), which can be based on both video and sensor (e.g., accelerometer, gyroscope, physiological) data, is of vital importance. However, it is a significant challenge; for instance, video-based AR can be complex due to abrupt object motion, noise in images, the non-rigid nature of the human body, partial and full object occlusions, scene illumination changes, and real-time processing requirements [4]. In this paper we discuss the overall results of our HARE system. The activities are detected using the camera-based and sensor-based activity recognition engines. The detected activities are then forwarded to the Context-aware Activity Manipulation Engine (CAME) for inferring higher-level activities and decision making.
The activities recognized with the help of the different sensors (i.e., body, location, motion, and video sensors) are low-level activities, and by themselves they cannot be used for certain types of analysis and decision making. With the help of the ontology, where we use the context information and link all the related activities in a chain, and with the help of customized rules, we obtain higher-level activities that are more usable for decision making. For instance, a series of low-level activities such as bending, sitting, jumping, and walking will, with the use of the ontology, result in a higher-level activity such as exercising. To implement CAME, Jena2, Protégé, Protégé-OWL, ARQ, and Pellet 3.4 are used. The outcome of CAME is partially dependent on the results of the AR modules. Figure 5 shows the OWL representation (using N3 notation) of an "Entering Kitchen" activity in the Activity Repository.
activityOnto:Activity_Instance_345
    a activityOnto:Activity ;
    activityOnto:hasID 345 ;
    activityOnto:hasName "Entering Kitchen" ;
    activityOnto:hasType "Motion" ;
    activityOnto:isA activityOnto:Room_Instance_Class ;
    activityOnto:performedAtTime "2009:06:14:14:00:13" ;
    activityOnto:performedBy activityOnto:Person_Instance_345 .
Figure 5, N3 representation of activity
We tested CAME using 12 different experiments with an increasing number of activities, where all activities are real-time activities detected by the sensors discussed above. In Figure 6, the y-axis is the precision and recall (%) of the match-making process, while the x-axis represents the experiment number. The graph in Figure 6 shows that precision and recall decrease with an increasing number of activities; however, with an increasing number of experiments, both precision and recall smooth out, with averages of 0.759 and 0.636, respectively.
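Precision and recall for the match-making process follow the standard definitions. The counts in this sketch are hypothetical, for a single experiment, and are not taken from Figure 6:

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Hypothetical counts: of 11 higher-level activities inferred by match
# making, 8 were correct; 4 true higher-level activities were missed.
p, r = precision_recall(true_pos=8, false_pos=3, false_neg=4)
# p = 8/11 ≈ 0.73, r = 8/12 ≈ 0.67
```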
We use two-phase filtering for decision making, as using only the results of match making is not sufficient in healthcare systems. In the second phase we use the description logic rules (see Table 1), compiled with the help of expert knowledge (doctors), to filter the appropriate information out of the match-making results. The output of the second-phase filter is then used for decision making or suggestions about the current situation.
Figure 6, Precision and Recall of CAME for match making against number of performed experiments.
The framework architecture of the Human Activity Recognition Engine (HARE) has been presented for detecting the real-time daily life activities of a person. By making use of ontologies to model the domain and expert knowledge (including activity, location, time, and environment information), better service provisioning and intelligent healthcare facilities have been achieved. A detailed discussion of HARE and its subcomponents, with experimental results, has been provided. The support of HARE for doctors, caregivers, clinics, and pharmacies is elaborated using the capabilities of Cloud computing. From the experimental results, it is observed that the HARE components worked well in combination for a particular domain. We look forward to implementing HARE for Alzheimer's, Parkinson's, and stroke patients in their normal daily lives.
This research was supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2009-(C1090-0902-0002)). This work was also supported by the IT R&D program of MKE/KEIT [2009-S-033-01, Development of SaaS Platform for S/W Service of Small and Medium sized
[1] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility", Future Generation Computer Systems, June 2009.
[2] E. Choi and C. Lee, "Feature extraction based on the Bhattacharyya distance", Pattern Recognition, vol. 36, no. 8, pp. 1703-1709, August 2003.
[3] T. Chan and L. Vese, "Active contours without edges", IEEE Trans. Image Processing, vol. 10, pp. 266-277, 2001.
[4] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2247-2253, 2007.
[5] T. Gruber, "A Translation Approach to Portable Ontology Specifications", Knowledge Acquisition, pp. 199-220, 1993.
[6] M. Hassan, E. Huh, and B. Song, "A framework of sensor-cloud integration: opportunities and challenges", 3rd International Conference on Ubiquitous Information Management and Communication, Korea, January 2009.
[7] K. Henricksen and J. Indulska, "Modelling and Using Imperfect Context Information", Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops, Washington DC, March 14-17, 2004.
[8] L. Hung, S. Lee, Y. Lee, and H. Lee, "Activity-based Access Control Model to Hospital Information", 13th IEEE Int. Conf. on Embedded and Real-Time Computing Systems and Applications, pp. 488-496, Korea, 2007.
[9] L. X. Hung, P. T. H. Truc, L. T. Vinh, A. M. Khattak, et al., "Secured WSN-integrated Cloud Computing for u-Life Care", 7th IEEE Consumer Communications and Networking Conference (CCNC), USA, 2010.
[10] A. Hyvärinen, "New approximations of differential entropy for independent component analysis and projection pursuit", Advances in Neural Information Processing Systems, vol. 10, pp. 273-279, 1998.
[11] A. Hyvärinen, J. Karhunen, and E. Oja, "Independent Component Analysis", New York: Wiley, 2001.
[12] H. Jameel, R. Shaikh, H. Lee, and S. Lee, "Human Identification through Image Evaluation Using Secret Predicates", Topics in Cryptology - CT-RSA, pp. 67-84, 2007.
[13] A. M. Khattak, K. Latif, S. Y. Lee, et al., "Change Tracer: Tracking Changes in Web Ontologies", 21st IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Newark, USA, November 2009.
[14] L. Kaufman and P. J. Rousseeuw, "Finding Groups in Data: An Introduction to Cluster Analysis", Wiley Series in Probability and Statistics, John Wiley and Sons, New York, November 1990.
[15] M. Kass et al., "Snakes: active contour models", International Journal of Computer Vision, vol. 1, no. 4, pp. 321-331, 1988.
[16] S. Sarawagi and W. Cohen, "Semi-markov conditional random fields for information extraction", Advances in Neural Information Processing Systems, 2004.
[17] M. Singh and M. Huhns, "Service Oriented Computing: Semantics, Processes, Agents", John Wiley & Sons, West Sussex, UK, 2005.
[18] L. Vinh, L. Hung, and S. Lee, "Semi Markov Conditional Random Fields for Accelerometer Based Activity Recognition", The International Journal of Artificial Intelligence, Neural Networks, and Complex Problem-Solving Technologies, in press.
[19] F. Wang and K. J. Turner, "An Ontology-Based Actuator Discovery and Invocation Framework in Home Care Systems", 7th International Conference on Smart Homes and Health Telematics, LNCS 5597, pp. 66-73, Springer, Berlin, June 2009.
... We can monitor user for activities such as walk, stairs or cycling, still position and in vehicle (reader is referred to [3] for details on this aspect.) Although not implemented in our system, additional activities like eating, exercising, sleeping, and watching television can be carried out using previously developed approaches [22][23][24][25] ; however, this discussion is beyond the scope of this article. Moreover, the users are able to enter information about their diet and medication into the application manually. ...
Full-text available
Recently, a large number of mobile wellness ap- plications have emerged for assisting users in self-monitoring of daily food intake and physical activities. While such applications are in abundance, many research surveys have found that the users soon give them up after giving some initial try. This article presents our application for healthcare self-management that monitors users' activities but -unlike the existing applications- it focuses on keeping the users engaged for self-management. The distinguishing feature of our application is that it uses persuasive mechanisms to help users adopt healthy behavior. For this purpose, users' various activities are monitored and then they are persuaded using different persuasion strategies that are adaptive and are according to their behavior. For each user, a behavior model is created that is based on Fogg's behavior model but, in addition, it also holds within it user preferences, and user health profile. The behavior model is then used to create a persuasion profile of the user that allows us to propose personalized suggestions targeted to overcome his lacking behavior. We also describe a case study that describes the actual application.
... Aggregating context information with real-time daily life activities can help provide better services, service suggestions, and changes in system behavior for better healthcare [11]. ...
... After diagnosis and treatment from the hospitals, the doctors want to monitor and analyze the real-time health information of specific patients for further healthcare and medical decision making, even emergency alarm [15]. Generally, a patient may be treated by multiple doctors, even in different regions. ...
Conference Paper
Full-text available
Mobile cloud computing is a promising technology for pervasive healthcare, which guarantees real-time health monitoring and electronic medical records sharing in different environments. In this paper, we present a hierarchical mobile cloud computing framework with three layers for pervasive healthcare. The scalable and hierarchical mobile cloud framework can be used to disperse the global storage and management load. We also study the social characteristics among patients and divide the patients into different social groups for privacy protection. A secure electronic medical records sharing scheme and a real-time health information transmission scheme are proposed. The security analysis shows that our schemes not only provide secure communication but also protect privacy of the patients.
In this work we present a detailed conception of weather monitoring system which displays weather, cloud and air purity also, we represent it by using graphs and bar graph. In our web application a user can get the weather information upto 7 days. Here we used an Application programming interface (API). An application programming interface, or API, enables companies to open up their applications’ data and functionality to external third-party developers, business partners, and internal departments within their companies. In this framework the climate parameters estimations taken are temperature, moistness, wind course, and wind speed. In this proposed work we will monitor the live weather’s parameter of entire world. With the help of this proposed system, we measure the weather condition of whichever city entered in search bar. After getting results from API(Open weather map), it is observed that our proposed model achieves better results in comparison with the standard weather parameters
Smartphones are a promising platform for continuous monitoring of human behavior. However, the ability to capture people's behavioral patterns in-the-wild is a challenge, as the user's behavior and physical activities can vary, given the variability of settings and environments. Modeling and understanding of human activity in-the-wild must not overlook a user's behavioral context, which is just as crucial as recognizing the range of physical activities. The work in this paper presents a novel framework for context-aware human activity recognition by incorporating human behavioral contexts with physical activities. The proposed framework utilizes a series of machine learning classifiers to validate the efficiency of the proposed method.
Automatic reshaping of human bodies is a computer vision and graphics technique with many applications. It manipulates various shape attributes of the visual appearance of a person without any manual editing. Keeping coherent reshaping results across many video frames is more challenging. In this paper, we present a novel pipeline to reshape the human body using noisy depth data from multiple RGB-D sensors. Compared with a single view, the data from multiple RGB-D sensors provide more constraints and lead to more consistent results. However, there exist a number of challenges in estimating the pose and shape of human in RGB-D data due to self-occlusion and motion complexity. To cope with the time-varying articulated human shape, we propose a new approach that combines a Gaussian Mixture Model (GMM) based fitting approach with a morphable model learned from range scans. Without any user input, this approach can automatically account for the variations in pose and shape, and enable different types of reshaping by changing body attributes such as height, weight or other physical features. Experimental results are provided to demonstrate the effectiveness of our system in manipulation of human body shapes.
Full-text available
The Semantic Web emerged with the vision of eased integration of heterogeneous, distributed data on the Web. The approach fundamentally relies on the linkage between and reuse of previously published vocabularies to facilitate semantic interoperability. In recent years, the Semantic Web has been perceived as a potential enabling technology to overcome interoperability issues in the Internet of Things (IoT), especially for service discovery and composition. Despite the importance of making vocabulary terms discoverable and selecting the most suitable ones in forthcoming IoT applications, no state-of-the-art survey of tools achieving such recommendation tasks exists to date. This survey covers this gap by specifying an extensive evaluation framework and assessing linked vocabulary recommendation tools. Furthermore, we discuss challenges and opportunities of vocabulary recommendation and related tools in the context of emerging IoT ecosystems. Overall, 40 recommendation tools for linked vocabularies were evaluated, both empirically and experimentally. Some of the key findings include that (i) many tools neglect to thoroughly address both the curation of a vocabulary collection and effective selection mechanisms, (ii) modern information retrieval techniques are underrepresented, and (iii) the reviewed tools that emerged from Semantic Web use cases are not yet sufficiently extended to fit today’s IoT projects.
In a highly competitive environment, surgery is forced to continuously improve the outcome and, simultaneously to reduce costs. These contradicting aims can only be reached by the combined use of cyber-physical systems . Digitalization of surgery may be denominated as “surgery 4.0 ”. This process will be primarily focussed on the surgical operation room which is the “profit centre” of any surgical unit. The aim is to transform it into a “collaborative environment”. Based upon a multitude of continuous real-time data, a support system should be capable to interpret the actual situation (context sensivity) and to predict the next steps required. In addition to the necessary medical and organizational structured knowledge which has to be provided before, the system should be able to learn from repeated procedures. Thus, it should offer active assistance to the surgical team to use the technical environment adequately, to smoothen the workflow, to avoid mistakes, and to improve the safety level. To reach this goal, some preconditions have still to be met: Comprehensive systems integration, the development of surgical and patient models, and a perfect communication not only between the devices and instruments but also with the human user. Making this vision mature for regular clinical care is challenging but first promising approaches have already been developed.
Conference Paper
Full-text available
Knowledge constantly grows in scientific discourse and is revised over time by domain experts. The body of knowledge will get structured and refined as the Communities of Practice concerned with the field of knowledge develop a deeper understanding of issues. The knowledge model, as a result evolves to a new state to accommodate the new knowledge. Keeping trail of these changes in semantically rich and formally sound mechanism, has pragmatic advantages for providing the undo and redo facility and to recover to a previous state of the knowledge body (i.e. ontology). In this research, we have developed and tested comprehensive methodological framework for Change Tracer. The ontology changes are captured and then stored in Change History Log (CHL) in conformance to Change History Ontology (CHO). The CHL is later used for reverting ontology to a previous consistent state and visualization of change effects on ontology. The system is compared with ChangesTab of Protégé, a comprehensive evaluation of the accuracy of roll- back and roll-forward algorithm has been conducted over Documentation ontology. The system is also tested over a standard dataset of OMV and high accuracy results are observed for both roll-back and roll-forward algorithms.
We derive a first-order approximation of the density of maximum entropy for a continuous 1-D random variable, given a number of simple constraints. This results in a density expansion which is somewhat similar to the classical polynomial density expansions by Gram-Charlier and Edgeworth. Using this approximation of density, an approximation of 1-D differential entropy is derived. The approximation of entropy is both more exact and more robust against outliers than the classical approximation based on the polynomial density expansions, without being computationally more expensive. The approximation has applications, for example, in independent component analysis and projection pursuit.
To support the sharing and reuse of formally represented knowledge among AI systems, it is useful to define the common vocabulary in which shared knowledge is represented. A specification of a representational vocabulary for a shared domain of discourse—definitions of classes, relations, functions, and other objects—is called an ontology. This paper describes a mechanism for defining ontologies that are portable over representation systems. Definitions written in a standard format for predicate calculus are translated by a system called Ontolingua into specialized representations, including frame-based systems as well as relational languages. This allows researchers to share and reuse ontologies, while retaining the computational benefits of specialized implementations.We discuss how the translation approach to portability addresses several technical problems. One problem is how to accommodate the stylistic and organizational differences among representations while preserving declarative content. Another is how to translate from a very expressive language into restricted languages, remaining system-independent while preserving the computational efficiency of implemented systems. We describe how these problems are addressed by basing Ontolingua itself on an ontology of domain-independent, representational idioms.
Conference Paper
This paper presents a Secured Wireless Sensor Network-integrated Cloud computing for u-Life Care (SC3). SC3 monitors human health, activities, and shares information among doctors, care-givers, clinics, and pharmacies in the Cloud, so that users can have better care with low cost. SC3 incorporates various technologies with novel ideas including; sensor networks, Cloud computing security, and activities recognition.
With the significant advances in Information and Communications Technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like all other four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, of which the latest one is known as Cloud computing. Hence, in this paper, we define Cloud computing and provide the architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs). We also provide insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation. In addition, we reveal our early thoughts on interconnecting Clouds for dynamically creating global Cloud exchanges and markets. Then, we present some representative Cloud platforms, especially those developed in industries, along with our current work towards realizing market-oriented resource allocation of Clouds as realized in Aneka enterprise Cloud technology. Furthermore, we highlight the difference between High Performance Computing (HPC) workload and Internet-based services workload. We also describe a meta-negotiation infrastructure to establish global Cloud exchanges and markets, and illustrate a case study of harnessing ‘Storage Clouds’ for high performance content delivery. Finally, we conclude with the need for convergence of competing IT paradigms to deliver our 21st century vision.
In this paper, we present a feature extraction method by utilizing an error estimation equation based on the Bhattacharyya distance. We propose to use classification errors in the transformed feature space, which are estimated using the error estimation equation, as a criterion for feature extraction. The construction of linear transformation for feature extraction is conducted using an iterative gradient descent algorithm, so that the estimated classification error is minimized. Due to the ability to predict error, it is possible to determine the minimum number of features required for classification. Experimental results show that the proposed feature extraction method compares favorably with conventional methods.