Context-aware Human Activity Recognition and
Decision Making
Asad Masood Khattak, La The Vinh, Dang Viet Hung, Phan Tran Ho Truc, Le Xuan Hung, D. Guan, Zeeshan Pervez,
Manhyung Han, Sungyoung Lee, Young-Koo Lee
Dept. of Computer Engineering, Kyung Hee University, Korea,
{asad.masood, vinhlt, dangviethung, pthtruc, lxhung, donghai, zeeshan, smiley, sylee}@oslab.ac.kr, yklee@khu.ac.kr
Abstract— Ubiquitous Life Care (u-Life care) has become increasingly attractive to computer science researchers due to the demand for high-quality, low-cost care services available anytime and anywhere. Many works exploit sensor networks to monitor patients' health status, movements, and real-time daily life activities in order to provide care services. Context information combined with real-time daily life activities can enable better services, service suggestions, and changes in system behavior for better healthcare. Our proposed Secured Wireless Sensor Network - integrated Cloud Computing for ubiquitous Life Care (SC3) monitors human health as well as activities. In this paper we focus on the Human Activity Recognition Engine (HARE) framework architecture, the backbone of SC3, and discuss it in detail. Camera-based and sensor-based activity recognition engines are described, along with the manipulation of recognized activities using the Context-aware Activity Manipulation Engine (CAME) and the Intelligent Life Style Provider (i-LiSP). Preliminary results of CAME show a robust and accurate response to medical emergencies. We have deployed five different activity recognition engines on the Cloud to identify different sets of activities of Alzheimer's disease patients.
I. INTRODUCTION
As the standard of living rises, people are more interested in their health and desire a healthy life. Due to the aging of the population, the rising cost of the workforce, and high-quality treatment, the cost of life care and healthcare systems is increasing worldwide. According to OECD¹ (Organisation for Economic Co-operation and Development) Health Data 2008, total health spending accounted for 15.3% of GDP in the United States in 2006, while Korea devoted 6.4% of its GDP to health in the same year. The United States also ranks far ahead of other OECD countries in terms of total health spending per capita, with spending of 6,714 USD (adjusted for purchasing power parity (PPP)), more than twice the OECD average of 2,824 USD in 2006. For Korea it was 1,480 USD.

¹ http://www.oecd.org/statsportal/0,3352,en_2825_293564_1_1_1_1_1,00.html

Cloud Computing can provide a powerful, flexible, and cost-effective infrastructure for life care services that can fulfill the vision of "ubiquitous life care", that is, providing life care to people anywhere, at any time, with increasing coverage and quality. Because of its elasticity, scalability, and pay-as-you-go
model [1], Cloud Computing can potentially provide huge cost savings, flexibility, high throughput, and ease of use for life care services. For example, with life care providers looking to automate processes for lower cost and higher gains, Cloud Computing can act as an ideal platform. For this reason we have developed Secured Wireless Sensor Network (WSN) - integrated Cloud Computing for u-Life Care (SC3) [9], which provides all the facilities discussed above.
Our focus in this paper is on the Human Activity Recognition Engine (HARE) component of the SC3 architecture, highlighted in Figure 1. HARE can help enhance capabilities and provides tremendous value for smarter service provisioning. HARE provides an efficient model for managing real-time data from various sensors, efficient detection of human activities, and better manipulation of detected activities using ontologies. Accuracy is the most important issue in healthcare systems. Existing systems are based on simple condition-action pairs [19], do not use context information, or in some cases use imperfect context information [7], making system results unpredictable. Their focus is more on environment sensors than on real-time person activity.
Due to space limitations, we provide the details of each HARE component and the preliminary results achieved with the help of the Context-aware Activity Manipulation Engine (CAME). Experimental results of the proposed HARE framework showed a robust response for healthcare services in emergency situations. As a proof of concept, in the initial phase HARE is deployed on a Cloud server for the better life care of an Alzheimer's disease patient, using five different activity recognition modules. The demonstration² was very successful for a set of 14 different real-time activities that Alzheimer's disease patients commonly perform.

This paper is arranged as follows: Section II provides an overview of the overall SC3 architecture. Section III is a detailed description of the proposed HARE architecture and its subcomponents. Section IV comprises the implementation and results details. Finally, we conclude our findings in Section V and discuss future directions and applications of HARE.

² http://www.youtube.com/watch?v=FfRpsjD3brg
II. SC3 OVERVIEW
The system architecture of SC3, proposed in [9], is shown in Figure 1. In this architecture, WSNs are deployed in home environments for collecting data. The sensed data is either human health data and/or data used for detecting human activities for care services. To detect human activities, we propose several novel approaches: embodied-sensor based activity recognition [18], video-based activity recognition, wearable sensor-based activity recognition, location tracking, and ontology-based intelligent activity logging and manipulation [13]. The sensors are attached either to a person or to the walls of the home environment. The video-based approach works on images collected from a camera, extracting the background to obtain the moving object and inferring activities such as walking, sitting, standing, falling down, bending, jacking, jumping, running, siding, skipping, one-hand waving, both-hands waving, and exercising. Location tracking helps in properly locating the subject's current position. On top of these, an ontology engine is implemented to deduce high-level activities and make decisions according to the situation, based on user profile information.
Sensed data is transferred to the Cloud by using sensor data dissemination and WSN-Cloud integration mechanisms [6]. To access medical data on the Cloud, the user must be authenticated and granted access permission. Image-based authentication and activity-based access control are proposed to enhance the security and flexibility of user access [8, 12].

For Independent Cloud Collaboration (ICC), we propose a dynamic collaboration procedure [6]. Numerous u-Life care services can access Clouds to provide better and lower-cost care for end-users, such as secure u-119 service, secure u-Hospital, secure u-Life care research, and secure u-Clinic.
In SC3, we mostly focus on WSN, WSN-Cloud integration, activity recognition, authentication and access control to Cloud data, and a sample care service for patients with different diseases in a home environment. We have implemented SC3 for Alzheimer's disease, as discussed in Section IV. First, human activity data is captured from sensors and videos and then transmitted to the Cloud Gateway. After data filtering, the gateway transmits the data to the Cloud via a TCP/IP socket. In the Cloud, the raw data is used to deduce user activity and location information, such as that the patient is walking, eating, or staying in the kitchen. Activity and location information are forwarded to the ontology for representation, inference of higher-level activities, and situation analysis. Decisions are also made based on the situation to respond to some context; for example, if the patient is reading a book then the TV should be turned off.
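As an illustration of this gateway step, the following is a minimal sketch (our own, with an assumed endpoint and JSON framing, not the SC3 implementation) of filtering sensed samples and forwarding them to the Cloud over a TCP/IP socket:

# Hypothetical sketch of the Cloud Gateway upload step: filter sensed
# samples, then push them to the Cloud over a TCP/IP socket. The host,
# port, and JSON message framing are assumptions for illustration.
import json
import socket

CLOUD_HOST, CLOUD_PORT = "cloud.example.org", 9000   # assumed endpoint

def send_to_cloud(samples):
    # Simple filtering: drop samples the sensors flagged as invalid.
    filtered = [s for s in samples if s.get("valid", True)]
    payload = (json.dumps(filtered) + "\n").encode("utf-8")
    with socket.create_connection((CLOUD_HOST, CLOUD_PORT)) as sock:
        sock.sendall(payload)

send_to_cloud([{"sensor": "accel", "value": [0.1, 0.9, 9.8], "valid": True}])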
To access patient data, doctors and/or nurses are first authenticated based on their access permissions. Some of the main services of SC3 are: (1) Safety monitoring services for home users: SC3's WSN can monitor a home user's movement and location by using various sensors. The sensory data is then disseminated to the Clouds.
Figure 1, The system model of SC3
Building on this, SC3's life care services, such as the emergency service, allow caregivers to monitor users and respond immediately in emergency situations. (2) Information sharing services: with SC3, patient information and data can be accessed globally, and resources can be shared by a group of hospitals rather than each hospital having a separate IT infrastructure. This can help in the early identification and tracking of disease outbreaks, environment-related health problems, and other issues. (3) Emergency-connection services: SC3 can be deployed to monitor home environments in real time for hazards such as gas leaks, fire, and robbery. Through SC3, an alarm system connected to users, u-119, and the police department can issue an emergency alert whenever an emergency situation occurs. (4) Home monitoring services: users can monitor their home and their family's health anywhere, at any time, with any kind of Internet-connected device, such as a cell phone, PDA, laptop, or computer.
III. HARE ARCHITECTURE
The core of SC3 is the Human Activity Recognition Engine (HARE), shown in Figure 2. HARE is composed of several sub-components. Location Tracking tracks human location. The Activity Recognizer (including embedded, wearable, 2D camera, and 3D camera based activity recognition) recognizes human activities. The Schema Mapping and XML Transformer transforms the activity output into a machine-understandable and flexible OWL format. The Context-Aware Activity Manipulation Engine infers high-level activities and makes decisions based on the activities the subject performs and profile information.

In addition, a number of supporting components are integrated to make HARE work properly. The AR Fusion and Collaborator enables collaboration among the different activity recognition engines, which is necessary to increase the accuracy of activity recognition. For example, if the wearable sensor-based AR detects that a person is taking medicine with 70% confidence, and the 2D camera-based AR also detects that the person is taking medicine with 80% confidence, the HARE collaborator can confirm that he is taking medicine. The HARE Repository is the backbone of HARE: it stores the raw data collected by sensors and cameras, the real-time activities recognized by the activity recognition engines, the activity history, and activities in a machine-understandable format (OWL) used to infer high-level activities. We have also successfully developed a Mobile Activity Sensor Logger (MASoL) (see Figure 1). MASoL serves in the infrastructure layer under HARE to collect and monitor human and environment information.
In this section we briefly discuss the main components of HARE, namely: Video-Based Activity Recognition, Sensor-Based Activity Recognition, the Intelligent Life-care Service Provider (i-LiSP), and the Context-aware Activity Manipulation Engine (CAME). Activities are detected by the video-based and sensor-based AR engines and then passed to i-LiSP and CAME for further manipulation and decision making.
A. Video Based Activity Recognition
The accuracy of video-based AR depends significantly on the accuracy of human body segmentation. In the field of image segmentation [15], the active contour (AC) model has attracted much attention. Recently, Chan and Vese (CV) proposed in [3] a novel form of AC based on the Mumford-Shah functional for segmentation and the level set framework. The CV AC model utilizes the difference between the regions inside and outside of the curve, making it one of the most robust and thus widely used techniques for image segmentation, with energy functional

$$F(C) = \int_{in(C)} \left( I(\mathbf{x}) - c_{in} \right)^2 d\mathbf{x} + \int_{out(C)} \left( I(\mathbf{x}) - c_{out} \right)^2 d\mathbf{x}$$

where $\mathbf{x} \in \Omega \subset \mathbb{R}^2$ (the image plane), $I: \Omega \to Z$ is a certain image feature such as intensity, color, or texture, and $c_{in}$ and $c_{out}$ are respectively the mean values of the image feature inside ($in(C)$) and outside ($out(C)$) the curve $C$, which represents the boundary between two separate segments.
Figure 2, Framework architecture of Human Activity Recognition Engine (HARE)

Considering image segmentation as a clustering problem, we can see that this model forms two segments (clusters). However, the global minimum of the above energy functional does not always guarantee the desired result, especially when a segment is highly inhomogeneous, e.g., the human body, as can be seen in Figure 3(b). This is because the CV AC minimizes the dissimilarity within each segment but does not account for the distance between different segments. Our methodology is to incorporate the Bhattacharyya distance [2] into the CV energy functional such that not only are the differences within each region minimized, but the distance between the two regions is maximized as well. The proposed energy functional is

$$E_0(C) = \beta F(C) + (1 - \beta) B(C)$$

where $\beta \in [0,1]$ and $B(C)$ is the Bhattacharyya coefficient

$$B(C) \equiv B = \int_{Z} \sqrt{p_{in}(z)\, p_{out}(z)}\, dz$$

with

$$p_{in}(z) = \frac{\int_{\Omega} \delta(z - I(\mathbf{x}))\, H(-\phi(\mathbf{x}))\, d\mathbf{x}}{\int_{\Omega} H(-\phi(\mathbf{x}))\, d\mathbf{x}}, \qquad p_{out}(z) = \frac{\int_{\Omega} \delta(z - I(\mathbf{x}))\, H(\phi(\mathbf{x}))\, d\mathbf{x}}{\int_{\Omega} H(\phi(\mathbf{x}))\, d\mathbf{x}}$$

where $\phi: \Omega \to \mathbb{R}$ is the level set function, and $H(\cdot)$ and $\delta(\cdot) = H'(\cdot)$ are respectively the Heaviside and Dirac functions. Note that the Bhattacharyya distance is defined by $-\log B(C)$, so maximizing this distance is equivalent to minimizing $B(C)$. Note also that, to be comparable to the $F(C)$ term, in our implementation $B(C)$ is multiplied by the area of the image, because its value always lies within the interval $[0,1]$ whereas $F(C)$ is calculated as an integral over the image plane. In general, we can regularize the solution by constraining the length of the curve and the area of the region inside it. Therefore, the overall energy functional is

$$E(C) = \gamma \cdot \mathrm{Length}(C) + \eta \cdot \mathrm{Area}(in(C)) + E_0(C)$$

where $\gamma \geq 0$ and $\eta \geq 0$ are constants.
The intuition behind the proposed energy functional is that we seek a curve which 1) is regular (the first two terms) and 2) partitions the image into two regions such that the differences within each region are minimized (i.e., the $F(C)$ term) and the distance between the two regions is maximized (i.e., the $B(C)$ term). The level set implementation of the overall energy functional can be derived accordingly, with $A_{in}$ and $A_{out}$, the areas inside and outside the curve $C$, appearing in the resulting evolution equation. As a result, the proposed model can overcome the CV AC's limitation in segmenting inhomogeneous objects, as shown in Figure 3(c).
Figure 3, Sample segmentation of inhomogeneous body-shape object
using active contours. (a) Initial contour, (b) result of CV AC [3], and (c)
result of our approach.
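To make the energy concrete, the following is a minimal discrete sketch (our own, under assumed discretization choices, not the paper's implementation) that evaluates $\beta F(C) + (1-\beta) B(C)$ for a candidate binary segmentation mask:

# Hypothetical discrete evaluation of the proposed energy for a binary
# segmentation mask: F(C) penalizes within-region dissimilarity, B(C)
# (the Bhattacharyya coefficient) penalizes between-region similarity.
import numpy as np

def energy(image, inside, beta=0.5, bins=32):
    """image: 2-D intensity array; inside: boolean mask of in(C)."""
    I_in, I_out = image[inside], image[~inside]
    # F(C): sum of squared deviations from each region's mean intensity.
    F = np.sum((I_in - I_in.mean()) ** 2) + np.sum((I_out - I_out.mean()) ** 2)
    # B(C): overlap of the two normalized intensity histograms, in [0, 1].
    rng = (image.min(), image.max())
    p_in, _ = np.histogram(I_in, bins=bins, range=rng, density=True)
    p_out, _ = np.histogram(I_out, bins=bins, range=rng, density=True)
    bin_w = (rng[1] - rng[0]) / bins
    B = np.sum(np.sqrt(p_in * p_out)) * bin_w
    # Scale B(C) by the image area so the two terms are comparable,
    # as described in the text above.
    return beta * F + (1 - beta) * B * image.size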
After obtaining a set of body silhouettes segmented from a sequence of images, we apply ICA (Independent Component Analysis) [10, 11] to extract the motion features of that sequence. The extracted features are then symbolized using a vector quantization algorithm such as K-means clustering [14], which generates a codebook of vectors; the resulting symbol sequences are used for training and recognition. The overall architecture of the proposed framework is shown in Figure 4, where $T$ represents the number of testing shape images, $N$ the number of trained HMMs, and $L$ the likelihoods.
Figure 4, Architecture of the proposed approach for motion feature
extraction and recognition.
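To make the Figure 4 pipeline concrete, the following is a minimal sketch (not the authors' code) of the silhouette-to-activity flow using off-the-shelf components: FastICA for motion features, K-means for symbolization, and one discrete HMM per activity class scored by likelihood. The names, shapes, and library choices (scikit-learn, hmmlearn) are our assumptions.

# Hypothetical sketch of the Figure 4 pipeline: ICA features -> K-means
# symbols -> per-activity HMM likelihood scoring.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from hmmlearn import hmm

N_SYMBOLS = 32          # codebook size for vector quantization (assumed)
N_COMPONENTS = 10       # number of independent components (assumed)

def extract_symbols(silhouettes, ica, kmeans):
    """Flatten binary silhouettes, project onto ICA motion features,
    and quantize each frame to a codebook symbol."""
    frames = silhouettes.reshape(len(silhouettes), -1).astype(float)
    return kmeans.predict(ica.transform(frames))   # (T,) symbol sequence

def train(train_sequences, labels):
    """Fit ICA + codebook on all frames, then one HMM per activity."""
    all_frames = np.vstack([s.reshape(len(s), -1)
                            for s in train_sequences]).astype(float)
    ica = FastICA(n_components=N_COMPONENTS).fit(all_frames)
    kmeans = KMeans(n_clusters=N_SYMBOLS).fit(ica.transform(all_frames))
    models = {}
    for activity in set(labels):
        seqs = [extract_symbols(s, ica, kmeans)
                for s, y in zip(train_sequences, labels) if y == activity]
        X = np.concatenate(seqs).reshape(-1, 1)
        m = hmm.CategoricalHMM(n_components=5, n_iter=50)
        m.fit(X, [len(s) for s in seqs])            # Baum-Welch training
        models[activity] = m
    return ica, kmeans, models

def recognize(silhouettes, ica, kmeans, models):
    """Pick the activity whose HMM gives the highest log-likelihood."""
    symbols = extract_symbols(silhouettes, ica, kmeans).reshape(-1, 1)
    return max(models, key=lambda a: models[a].score(symbols))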
B. Sensor Based Activity Recognition
Based on existing work [16], we developed our own recognition model, called "semi-Markov Conditional Random Fields (semiCRF)" [18]; furthermore, we propose a novel algorithm that reduces the complexity of training and inference by more than 10 times compared with the original work. In our model, we assume that $X$ and $Y$ are the input signal and the label sequence, respectively. We optimize the model parameters so that $P(Y|X)$ is maximized, where in a CRF $P(Y|X)$ is calculated by

$$P(Y|X) = \frac{1}{Z_X} \prod_{i} \psi_i, \qquad \psi_i = \exp\left( W^T F(y_{i-1}, y_i, X) \right)$$

where $F$ is a vector of feature functions (which are often delta functions), $W^T$ is a vector of model parameters, and $\psi$ is called the potential function. $Z_X$, the normalization factor, is computed using the forward/backward algorithm. However, a conventional CRF is limited by the Markov assumption, which cannot model the duration of an activity or long-range transitions between activities. To overcome this, we introduce a semi-Markov model by defining a new state $s_i = (y_i, b_i, e_i)$, where $s_i$ is the $i$-th state and $y_i$, $b_i$, $e_i$ are, in that order, the label, begin time, and end time of the state. For example, given an input label sequence $Y = (1,1,2,2,2,3,4,4)$, the semi-Markov state sequence is $(1,1,2), (2,3,5), (3,6,6), (4,7,8)$. Note that in AR we consider states with the expected label. With these definitions, the potential function is rewritten over segments as

$$\psi_i = \exp\left( W^T F(s_i, X) + w_T(y_{i-1}, y_i) \right)$$

where $\exp(w_T(y', y))$ acts as a weighted transition potential function and $w_T(y', y)$ is the weight of the transition from $y'$ to $y$.
Making use of semi-Markov conditional random fields, we propose an algorithm for computing the gradients of the target function by extending [16]. It reduces the complexity of computing each gradient from $O(TN^2D)$ to $O(TN(N+D))$, where $T$, $N$, and $D$ are the length of the input sequence, the number of label values, and the maximum duration of a label, respectively. For our experiments, we used the dataset of long-term activities available at http://www.mis.informatik.tu-darmstadt.de/data and compared our results with the original work. The dataset contains 7 days of continuous data measured by two triaxial accelerometers [18].
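To illustrate the state definition $s_i = (y_i, b_i, e_i)$ used above, here is a small sketch (our own, not the authors' code) that converts a per-sample label sequence into semi-Markov segments:

def to_semi_markov_states(labels):
    """Convert a per-sample label sequence into semi-Markov states
    (label, begin, end), using 1-indexed inclusive time bounds as in
    the example above."""
    states = []
    begin = 1
    for t in range(1, len(labels) + 1):
        # Close the current segment when the label changes or at the end.
        if t == len(labels) or labels[t] != labels[t - 1]:
            states.append((labels[begin - 1], begin, t))
            begin = t + 1
    return states

# Reproduces the paper's example:
# to_semi_markov_states([1,1,2,2,2,3,4,4]) -> [(1,1,2),(2,3,5),(3,6,6),(4,7,8)]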
C. Intelligent Life-care Service Provider
The Intelligent Life-care Service Provider (i-LiSP) module is responsible for providing intelligent services to users by analyzing their context information. A service can be an act of help, assistance, or recommendation. Various kinds of services are considered in i-LiSP, such as entertainment, medication, and sport services. The context information used in i-LiSP is obtained from various sources, mainly: activity information from low-level sensors, activity information from the Human Activity Recognition Engine, and high-level context information from the CAME engine (discussed later).
i-LiSP is designed to provide intelligent services/recommendations to users. The services can be divided into two types. (1) Service by long-term observations (SLO): this service is provided after the i-LiSP module analyzes the long-term history data of users. For example, by analyzing one week of a user's data, we can create a model of the user's toileting times per day (e.g., estimating the probability density function of the user's toileting times). Medical doctors can then give recommendations to the user by analyzing the generated model. (2) Service by current observations (SCO): this service is an immediate response/recommendation provided by i-LiSP after analyzing current context information. For example, suppose the user's current context information is: at 9 AM, he is watching TV. Based on i-LiSP's knowledge (stored information), at this time he should be exercising. Therefore, the system will remind him to stop watching TV and do exercise.
i-LiSP has three sub-modules that work using different techniques. (1) Topic Model Based Service Provider (TMSP): TMSP adopts the topic model as its reasoning algorithm to provide SLO. Topic models were originally proposed to summarize (find concise restatements of) spoken/text documents to improve the navigation quality of speech/text collections. Here, TMSP uses a topic model to summarize users' history information. (2) Rough Set Based Service Provider (RSSP): RSSP adopts rough set theory as its reasoning algorithm to provide SCO. Rough set theory is used here for the following reasons: it works directly on the data and does not require any other prior knowledge about the data (such as probability distributions), and it can automatically filter out irrelevant and redundant information. The knowledge of RSSP is represented by rules that are understandable by humans, so experts can flexibly manipulate the knowledge by adding or removing rules. (3) Bayesian Network Based Service Provider (BNSP): BNSP uses a Bayesian network as its reasoning algorithm. It can provide both SLO and SCO by varying the models it generates. A Bayesian network is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph. For example, a Bayesian network could represent the probabilistic relationships between a user's diseases and symptoms.
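As a toy illustration of the kind of reasoning BNSP performs (our own sketch with made-up probabilities, not the authors' model), the two-node network Disease -> Symptom below answers a query by Bayes' rule:

# Hypothetical two-node Bayesian network: Disease -> Symptom.
# All probability values are invented for illustration only.
P_DISEASE = 0.05                      # prior P(disease)
P_SYMPTOM_GIVEN = {True: 0.80,        # P(symptom | disease)
                   False: 0.10}       # P(symptom | no disease)

def posterior_disease(symptom_observed):
    """P(disease | symptom observation) via Bayes' rule."""
    num = 0.0
    den = 0.0
    for disease, prior in ((True, P_DISEASE), (False, 1 - P_DISEASE)):
        p_sym = P_SYMPTOM_GIVEN[disease]
        likelihood = p_sym if symptom_observed else 1 - p_sym
        joint = prior * likelihood
        den += joint
        if disease:
            num = joint
    return num / den

# Observing the symptom raises P(disease) from 0.05 to about 0.30.
print(posterior_disease(True))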
D. Context-aware Activity Manipulation Engine
An ontology is formally defined as an explicit and formal specification of a shared conceptualization [5, 17]. An ontology defines formal semantics for information, allowing the information to be processed by computer system agents. The use of ontologies in activity recognition is a new area of research and helps in better understanding an activity in a given context. In [19], the authors focused only on the location and time information of an activity and used the Event-Condition-Action (ECA) method to respond to a particular activity. In our approach, we use not only location and time information but also information about the subject's profile and the environment.

An ontology helps in properly extracting the higher-level activity from a set of activities in a series. For example, a fall-down activity detected with a low-level sensor alone will always raise an alarm for an emergency situation. On the other hand, an ontology with context information about the fall-down activity, such as location, time, the subject's profile, and other linked low-level activities, can easily determine whether it is an emergency situation or a jumping competition.
All activities are stored and managed in the domain ontology. For manipulation of information (activities in ontological format) we use SPARQL and Pellet 3.4, and for decision making we use Description Logic (DL) rules. Table 1 presents some of the DL rules used in our system implementation; the list is not exhaustive.

HARE is designed in such a flexible manner that clients can easily communicate with it from small handheld devices such as sensors, PDAs, or cell phones. Various entertainment applications, such as online games and console games, can also make use of HARE.
Table 1, Customized rules for making decisions

Rule1: ∃Activity(a1) ⊓ ¬hasContents(taking medicine) ⊓ hasNextActivity(a2) ⊓ ∃Activity(a2) ⊓ hasContents(eating) → Activity.Create(a1) ⊓ Activity.Create(a2) ⊓ reminder(take medicine)

Rule2: ∃Activity(a1) ⊓ hasContents(reading) ⊓ hasNextActivity(a2) ⊓ ∃Activity(a2) ⊓ hasContents(TV On) → Activity.Create(a1) ⊓ Activity.Create(a2) ⊓ turnOff(TV)

Rule3: ∃Activity(a1) ⊓ hasContents(unknown exercise) ⊓ hasNextActivity(null) → Activity.Create(a1) ⊓ reminder(movements are wrong)

Rule4: ∃Activity(a1) ⊓ hasContents(entering kitchen) ⊔ ∃Activity(a2) ⊓ hasContents(entering bedroom) → Activity.Create(a1) ⊔ Activity.Create(a2) ⊓ turnOn(lights)
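To show how such a rule fires at runtime, here is a small hypothetical sketch (our own, not the system's rule engine) of Rule2 applied to a recognized activity pair; the Activity class and the actuator call are our assumptions:

# Hypothetical rule-firing sketch for Rule2 of Table 1: if a "reading"
# activity is followed by "TV On", create both activities and turn off
# the TV.
from dataclasses import dataclass

@dataclass
class Activity:
    contents: str

def turn_off(device):
    print(f"actuator: turning off {device}")     # placeholder actuator

def apply_rule2(a1, a2, repository):
    """Rule2: Activity(a1) AND hasContents(reading) AND hasNextActivity(a2)
    AND hasContents(TV On) -> create both activities, turnOff(TV)."""
    if a1.contents == "reading" and a2.contents == "TV On":
        repository.extend([a1, a2])              # Activity.Create(a1), (a2)
        turn_off("TV")

repo = []
apply_rule2(Activity("reading"), Activity("TV On"), repo)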
IV. IMPLEMENTATION AND RESULTS
One of the main target services of u-Life care is to enable people to live independently longer through the early detection and prevention of chronic diseases and disabilities. Computer vision, emplaced wireless sensor networks (WSN), and body networks are emerging technologies that promise to significantly enhance medical care for seniors living at home or in assisted living facilities. With these technologies, we can collect video, physiological, and environmental data, identify individuals' activities of daily living (ADL), and act for improved daily medical care as well as real-time response to medical emergencies.

To achieve this, accurately identifying individuals' ADL, so-called activity recognition (AR), which can be based on both video and sensor (e.g., accelerometer, gyroscope, physiological) data, is of vital importance. However, it is a significant challenge; for instance, video-based AR can be complex due to abrupt object motion, noise in images, the non-rigid nature of the human body, partial and full object occlusions, scene illumination changes, and real-time processing requirements [4]. In this paper we discuss the overall results of our HARE system. Activities are detected using the camera-based and sensor-based activity recognition engines. The detected activities are then forwarded to the Context-aware Activity Manipulation Engine (CAME) for inferring higher-level activities and decision making.
The activities recognized with the help of different sensors (i.e., body, location, motion, and video sensors) are low-level activities, and by themselves they are not suitable for certain types of analysis and decision making. With the help of an ontology, where we use context information and link all the related activities in a chain, customized rules then yield the higher-level activities that are more usable for decision making. For instance, a series of low-level activities, e.g., bending, sitting, jumping, and walking, will with the use of the ontology result in a higher-level activity, e.g., exercising. To implement CAME, Jena2, Protégé, Protégé-OWL, ARQ, and Pellet 3.4 are used. The outcome of CAME is partially dependent on the results of the AR modules. Figure 5 shows the OWL representation (using N3 notation) of the "Entering Kitchen" activity in the Activity Repository.
activityOnto:Activity_Instance_20090614140013345
a activityOnto:Activity ;
activityOnto:hasConsequentAction
activityOnto:Action_Instance_145413546;
activityOnto:hasID 345;
activityOnto:hasName "Entering Kitchen";
activityOnto:hasType "Motion";
activityOnto:isA activityOnto:Room_Instance_Class;
activityOnto:performedAtTime 2009:06:14:14:00:13;
activityOnto:performedBy activityOnto:Person_Instance_345.
Figure 5, N3 representation of activity
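For illustration, the following is a minimal sketch (ours, not the project's code) of querying such activity instances with SPARQL via rdflib; the namespace URI and repository file name are assumptions, while the property names follow Figure 5:

# Hypothetical sketch: load the activity repository and query it with
# SPARQL, as CAME does. The namespace URI is an assumption; the
# property names follow Figure 5.
from rdflib import Graph

g = Graph()
g.parse("activity_repository.n3", format="n3")   # assumed file name

# Find every activity performed by a given person, with name and type.
query = """
PREFIX activityOnto: <http://oslab.khu.ac.kr/activityOnto#>
SELECT ?activity ?name ?type
WHERE {
    ?activity a activityOnto:Activity ;
              activityOnto:hasName ?name ;
              activityOnto:hasType ?type ;
              activityOnto:performedBy activityOnto:Person_Instance_345 .
}
"""
for row in g.query(query):
    print(row.activity, row.name, row.type)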
We tested CAME in 12 different experiments with an increasing number of activities, where all activities are real-time activities detected by the sensors discussed above. In Figure 6, the y-axis is the precision and recall (%) of the match-making process, while the x-axis represents the experiment number. The graph in Figure 6 shows that precision and recall decrease as the number of activities increases; however, with an increasing number of experiments, both precision and recall smooth out to averages of 0.759 and 0.636, respectively.
We use two-phase filtering for decision making, as using only the results of match making is not sufficient in healthcare systems. In the second phase we use the description logic rules (see Table 1), compiled with the help of expert knowledge (doctors), to filter the appropriate information out of the match-making results. The output of the second-phase filter is then used for decision making or suggestions about the current situation.
Figure 6, Precision and Recall of CAME for match making against
number of performed experiments.
V. CONCLUSIONS AND FUTURE WORK
The framework architecture of the Human Activity Recognition Engine (HARE) has been presented for detecting the real-time daily life activities of a person. By using ontologies to model the domain and expert knowledge (including activity, location, time, and environment information), better service provisioning and intelligent healthcare facilities have been achieved. A detailed discussion of HARE and its subcomponents, along with experimental results, has been provided. The support HARE offers to doctors, caregivers, clinics, and pharmacies is elaborated using the capabilities of Cloud computing. From the experimental results, it is observed that the HARE components worked well in combination for a particular domain. We look forward to implementing HARE for Alzheimer's, Parkinson's, and stroke patients in their normal daily lives.
ACKNOWLEDGMENT
This research was supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2009-(C1090-0902-0002)). This work was also supported by the IT R&D program of MKE/KEIT [2009-S-033-01, Development of SaaS Platform for S/W Service of Small and Medium sized Enterprises].
REFERENCES
[1] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility”. Future Generation Computer Systems, June 2009.
[2] E. Choi, C. Lee, “Feature extraction based on the Bhattacharyya
distance”, Pattern Recognition, Volume 36, Issue 8, Pages 1703-1709,
August 2003.
[3] T. Chan and L. Vese, “Active contours without edges”, IEEE Trans. Image Proc., 10, pp. 266-277, 2001.
[4] Gorelick L, Blank M, Shechtman E, Irani M, Basri R, “Actions as space-
time shapes”. IEEE Trans Patt Anal Mach Intell 29(12): 2247–2253,
2007.
[5] T. Gruber, "A Translation Approach to Portable Ontology
Specifications", Knowledge Acquisition , pp 199-220, 1993.
[6] M. Hassan, E. Huh, B. Song. “A framework of sensor-cloud integration
opportunities and challenges”, 3rd International Conference on
Ubiquitous information Management and Communication, Korea,
January 2009.
[7] K. Henricksen, and J. Indulska, “Modelling and Using Imperfect
Context Information”. In Proceedings of the Second IEEE Annual
Conference on Pervasive Computing and Communications
Workshops,Washington DC, March 14 - 17, 2004.
[8] L. Hung, S. Lee, Y. Lee, H. Lee, “Activity-based Access Control Model
to Hospital Information,” IEEE 13th Int. Conf. Embedded and Real-
Time Computing Systems and Applications, pp. 488-496, Korea, 2007.
[9] L. X. Hung, P. T. H. Truc, L. T. Vinh, A. M. Khattak, et al., “Secured WSN-integrated Cloud Computing for u-Life Care”, 7th IEEE Consumer Communications and Networking Conference (CCNC), USA, 2010.
[10] A. Hyvärinen, “New approximations of differential entropy for independent component analysis and projection pursuit”, Adv. Neural Inform. Process. Syst., 10, pp. 273-279, 1998.
[11] A. Hyvärinen, J. Karhunen, and E. Oja, “Independent Component Analysis”, New York: Wiley, 2001.
[12] H. Jameel, R. Shaikh, H. Lee and S. Lee, “Human Identification through
Image Evaluation Using Secret Predicates”, Topics in Cryptology - CT-
RSA, 67-84, 2007.
[13] A. M. Khattak, K. Latif, S. Y. Lee, et al., “Change Tracer: Tracking Changes in Web Ontologies”, 21st IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Newark, USA, November 2009.
[14] L. Kaufman and P. J. Rousseeuw, “Finding Groups in Data: An Introduction to Cluster Analysis,” Wiley Series in Probability and Statistics, John Wiley and Sons, New York, November 1990.
[15] M. Kass et al., “Snakes: active contour models”, Int. Journal on Computer Vision, 1(4), pp. 321-331, 1988.
[16] S. Sarawagi and W. Cohen. “Semi-markov conditional random fields for
information extraction”. Advances in Neural Information Processing
Systems, 2004.
[17] M. Singh, and M. Huhns, “Service Oriented Computing: Semantics,
Processes, Agents”, John Wiley & Sons, West Sussex, UK, 2005.
[18] L. Vinh, L. Hung, S. Lee. “Semi Markov Conditional Random Fields for
Accelerometer Based Activity Recognition”, The International Journal
of Artificial Intelligence, Neural Networks, and Complex Problem-
Solving Technologies, (In Press).
[19] F. Wang and K. J. Turner. “An Ontology-Based Actuator Discovery and
Invocation Framework in Home Care Systems,” 7th International
Conference on Smart Homes and Health Telematics, pp. 66-73, LNCS
5597, Springer, Berlin, June 2009.