S-CRETA: Smart Classroom Real-Time
Assistance
Koutraki Maria, Efthymiou Vasilis, and Antoniou Grigoris
Abstract. In this paper we present our work on a real-time, context-aware system for the smart classroom domain, which aims to assist its users after recognizing any ongoing activity. We exploit the advantages of ontologies to model the context, and we also introduce a method for extracting information from an ontology and using it in a machine learning dataset. This method enables real-time reasoning for high-level activity recognition. We give an overview of our system, as well as a typical usage scenario indicating how the system would react in that situation. An experimental evaluation of our system on a real, publicly available lecture is also presented.
Keywords: AmI, Smart Classroom, Activity Recognition, Context modeling.
1 Introduction
In a typical classroom, a lot of time and effort is sometimes spent on technical issues, such as lighting, projector set-up, photocopy distribution, etc. This time could be turned into “teaching time” if all these issues were handled automatically.
There are many Smart Classroom systems that try to change the behavior of an environment in order to improve the conditions of a class. One of them is [1], which focuses on making real-time context decisions in a smart classroom based on information collected from environment sensors, policies and rules. Another context-aware system is [2], which supports ubiquitous computing in a school classroom.
In this paper, we present a system that assists instructors and students in a smart classroom, so that they avoid spending time on such minor issues, stay focused on the teaching process, and have more study material at their disposal. To accomplish this, and unlike other similar systems, we have taken advantage of the benefits that ontologies and machine learning offer.
Koutraki Maria · Efthymiou Vasilis · Antoniou Grigoris
Foundation for Research and Technology - Hellas (FORTH), Heraklion
e-mail: {kutraki,vefthym,antoniou}@ics.forth.gr
Ontologies play a pivotal role not only for the Semantic Web, but also in pervasive computing and next-generation mobile communication systems. They provide formalizations to project real-life entities onto machine-understandable data constructs [3]. In our system, machine learning algorithms use these data constructs to derive a higher-level representation of the context.
Following modern needs, we aimed to build a system robust and fast enough to run reliably under real-time conditions. However, we did not want to sacrifice accuracy in favor of speed or vice versa. We believe that our approach achieves accuracy and execution times comparable to, or even better than, those of state-of-the-art systems.
2 Motivating Usage Scenarios
Scenarios in our work express the activities that can take place in a classroom, e.g. a lecture or an exam. In order to identify each of these activities, a series of simpler events must be detected. Once an activity has been identified, we undertake to assist it.
A typical scenario is “Student Presentation”. A student is giving a lecture or paper presentation in the Smart Classroom. In this scenario, several students and teachers may be present in the classroom. The classroom’s calendar is checked for a lecture at this time, and the personal calendars of all these people are checked for participation in this presentation, to confirm that everyone is in the right classroom. The lights and the projector are turned on. A student stands near the presentation display, while the teachers and the other students are seated. After all these minor events, the “Student Presentation” activity is identified. Once the activity has been identified, we assist the presentation by turning off the lights and sending the presentation file to the students’ and teachers’ devices.
3 Architecture for Building AmI Systems
A simple description of a complete cycle of our system, as depicted in Figure 1, is the following (a code sketch of this loop is given after the list):
Fig. 1 System Architecture
1. Data from sensors and from the services of the AmI Sandbox [4] are stored in an ontology. These services provide functionality such as localization and speech recognition. In our scenario, we assume that the localization service provides data about the students’ and the teacher’s location, which are stored in our ontology, along with data from the light sensors and from the RFID sensors used for people identification.
2. SWRL rules are used for a first level of reasoning, to create Simple Events. In “Student Presentation”, some simple events are: a student stands near the presentation display, a teacher sits.
3. The Simple Events that occurred within a timeframe are passed to the Activity Recognition system.
4. The Activity Recognition system loads the cases and finds the current activity.
5. The result is written to the case base as a new case and also passed to the Classroom Assistant system.
6. Depending on the current activity, the Classroom Assistant changes the context. In our scenario, it turns off the lights and sends the presentation file to the students’ and teachers’ devices.
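To make the cycle concrete, the following Java sketch wires the six steps together as a simple sense-reason-act loop. The interface and type names used here (ContextOntology, ActivityRecognizer, ClassroomAssistant, SensorReading, SimpleEvent) are hypothetical placeholders introduced for this illustration; the sketch shows the control flow only, not the system’s actual code.

import java.util.List;

// Hypothetical interfaces standing in for the components of Figure 1.
interface ContextOntology {
    void storeSensorData(List<SensorReading> readings);      // step 1
    List<SimpleEvent> applySwrlRules();                       // step 2
}

interface ActivityRecognizer {
    // Steps 3-5: build an unsolved case from the events of one time window,
    // classify it, and store the solved case in the case base.
    String recognize(List<SimpleEvent> windowEvents);
}

interface ClassroomAssistant {
    void adaptContext(String activity);                       // step 6
}

record SensorReading(String sensor, String value, long timestamp) {}
record SimpleEvent(String name, boolean activated, long timestamp) {}

class ReasoningCycle {
    private final ContextOntology ontology;
    private final ActivityRecognizer recognizer;
    private final ClassroomAssistant assistant;

    ReasoningCycle(ContextOntology o, ActivityRecognizer r, ClassroomAssistant a) {
        this.ontology = o;
        this.recognizer = r;
        this.assistant = a;
    }

    // Runs one complete cycle over the sensor readings of the current time window.
    void runOnce(List<SensorReading> readings) {
        ontology.storeSensorData(readings);                    // 1. sensors -> ontology
        List<SimpleEvent> events = ontology.applySwrlRules();  // 2. SWRL -> Simple Events
        String activity = recognizer.recognize(events);        // 3-5. activity recognition
        if (activity != null) {
            assistant.adaptContext(activity);                  // 6. assist the activity
        }
    }
}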
3.1 Modeling Context
In this section, we propose a context ontology model for an intelligent university classroom that responds to students’ and teachers’ needs. Our context ontology is divided into two hierarchical parts: an upper-level ontology and low-level ontologies.
The Upper-Level Ontology, or Core Ontology, captures general features of all pervasive computing domains. It is designed so that it can be reused in the modeling of different smart space environments, such as smart homes and smart meeting spaces. The Core Ontology’s context model is structured around a set of abstract entities such as Computational Entity, Person, Activity, Location, Simple Event and Environment (Figure 2). All these entities are widely used, except for the Simple Event entity, which aims to capture knowledge obtained from reasoning on sensor data, e.g. ‘the projector’s status is “on”’ or ‘the teacher is in front of the smart board’.
The Low-Level or Domain-Specific Ontologies are based on the upper-level ontology and are specialized to the domain. In our case, the domain is an intelligent classroom on a university campus. Among the domain-specific ontologies are the Person and Location ontologies. All ontologies are expressed in OWL.
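As a concrete illustration of this two-level structure, the following sketch builds a tiny fragment of such a model with the OWL API. The IRIs, the selection of classes and the use of the OWL API are our own assumptions for the example, not the actual S-CRETA ontology.

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class CoreOntologySketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = manager.getOWLDataFactory();

        // Hypothetical namespaces for the core and the domain-specific ontologies.
        String core = "http://example.org/core#";
        String dom = "http://example.org/classroom#";

        OWLOntology ontology = manager.createOntology(IRI.create(core));

        // Upper-level (core) entities shared by all smart-space domains.
        OWLClass person = df.getOWLClass(IRI.create(core + "Person"));
        OWLClass location = df.getOWLClass(IRI.create(core + "Location"));
        OWLClass activity = df.getOWLClass(IRI.create(core + "Activity"));

        // Domain-specific entities for the university classroom,
        // defined as subclasses of the core entities.
        OWLClass teacher = df.getOWLClass(IRI.create(dom + "Teacher"));
        OWLClass classroom = df.getOWLClass(IRI.create(dom + "SmartClassroom"));
        OWLClass lecture = df.getOWLClass(IRI.create(dom + "Lecture"));

        manager.addAxiom(ontology, df.getOWLSubClassOfAxiom(teacher, person));
        manager.addAxiom(ontology, df.getOWLSubClassOfAxiom(classroom, location));
        manager.addAxiom(ontology, df.getOWLSubClassOfAxiom(lecture, activity));
    }
}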
3.2 Reasoning: SWRL
In our implementation, we transform our scenarios for the intelligent classroom into rules. This rule-based approach is implemented using SWRL (the Semantic Web Rule Language). The first step is to capture data from sensors and services in the ontology (e.g. the status of devices). After that, SWRL rules are applied to these data and their results are stored in the Simple Event class. Examples of rules are shown below.
Rule 1: Person:Teacher(?t) ^ Device:SmartBoard(?b) ^ Core:hasRelativeLocation(?t,?b) ^
Core:inFrontOf(?t,?b) ^ Core:SimpleEvent(Teacher in front of the Board)
→ Core:isActivated(Teacher in front of the Board, “true”)

Rule 2: Location:SmartClassroom(?c) ^ Environment(SmartClassroomEnv) ^
Core:hasEnvironment(?c,SmartClassroomEnv) ^ Core:noiseLevel(SmartClassroomEnv,?noise) ^
swrlb:greaterThan(?noise,80) ^ Core:SimpleEvent(High Level Noise)
→ Core:isActivated(High Level Noise, “true”)
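To make the intent of such rules concrete, the following plain-Java sketch mirrors the logic of Rule 2. It only illustrates what the rule expresses; it is not how SWRL is executed in the system, and the Environment class and the event map are hypothetical stand-ins for the corresponding ontology elements.

import java.util.HashMap;
import java.util.Map;

public class NoiseRuleSketch {
    // Hypothetical stand-in for the classroom's Environment individual.
    static class Environment {
        double noiseLevel;                        // Core:noiseLevel
        Environment(double noiseLevel) { this.noiseLevel = noiseLevel; }
    }

    // Activation flags of Simple Events (Core:isActivated).
    static final Map<String, Boolean> simpleEvents = new HashMap<>();

    // Mirrors Rule 2: a noise level above 80 activates the "High Level Noise" event.
    static void applyNoiseRule(Environment env) {
        if (env.noiseLevel > 80) {                // swrlb:greaterThan(?noise, 80)
            simpleEvents.put("High Level Noise", true);
        }
    }

    public static void main(String[] args) {
        applyNoiseRule(new Environment(85.0));
        System.out.println(simpleEvents);         // prints {High Level Noise=true}
    }
}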
Fig. 2 Core Ontology
3.3 Activity Recognition
For this part of our system, we used (an adaptation of) an activity recognition project based on Case-Based Reasoning [5] that we are currently developing in parallel, described briefly in Section 4. After the SWRL rules fire, the resulting Simple Events that occurred within a time frame are sent to the Activity Recognition system, forming an unsolved case. One activity (or none) is then recognized using Bayesian Networks (BNs), and the solved case is added to the case base.
Based on a given set of activities that are to be recognized, a case base is initially created and classified manually. It is essential for this dataset to be a product of real observations and not just random cases. For the implementation of the BNs, we use WEKA [6] with default parameters, so our dataset is stored in the ARFF format.
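The WEKA side of this step can be sketched as follows, assuming the case base has been exported to a file named cases.arff whose last attribute is the activity; the file name and the attribute layout are assumptions made for this example.

import weka.classifiers.bayes.BayesNet;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ActivityClassifierSketch {
    public static void main(String[] args) throws Exception {
        // Load the manually classified case base (ARFF format).
        Instances cases = new DataSource("cases.arff").getDataSet();
        cases.setClassIndex(cases.numAttributes() - 1);    // activity = last attribute

        // Train a Bayesian network with WEKA's default parameters.
        BayesNet bn = new BayesNet();
        bn.buildClassifier(cases);

        // Classify an unsolved case (here simply the last stored case, for illustration).
        Instance unsolved = cases.lastInstance();
        double predicted = bn.classifyInstance(unsolved);
        System.out.println("Recognized activity: "
                + cases.classAttribute().value((int) predicted));
    }
}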
Several signal segmentation approaches are described in detail in [7]. The one we chose is non-overlapping sliding windows, since this implementation is simpler and faster, and activities occurring at the edge of a time window are rare. Especially in a smart classroom, activities usually last longer than in other domains and their number is significantly lower. The length of the time window (10 seconds) was chosen based on the nature of the activities and on experimental results.
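The windowing itself is straightforward: the sketch below groups time-stamped simple events into consecutive, non-overlapping 10-second windows. The SimpleEvent record and the millisecond timestamps are assumptions made for the illustration, and the input is expected to be ordered by time.

import java.util.ArrayList;
import java.util.List;

public class WindowingSketch {
    record SimpleEvent(String name, long timestampMillis) {}

    static final long WINDOW_MILLIS = 10_000;      // 10-second, non-overlapping windows

    // Splits a time-ordered event stream into consecutive 10-second windows.
    static List<List<SimpleEvent>> segment(List<SimpleEvent> events) {
        List<List<SimpleEvent>> windows = new ArrayList<>();
        if (events.isEmpty()) return windows;

        long windowStart = events.get(0).timestampMillis();
        List<SimpleEvent> current = new ArrayList<>();
        for (SimpleEvent e : events) {
            // Close windows until the event falls inside the current one.
            while (e.timestampMillis() >= windowStart + WINDOW_MILLIS) {
                windows.add(current);
                current = new ArrayList<>();
                windowStart += WINDOW_MILLIS;
            }
            current.add(e);
        }
        windows.add(current);                      // the last (possibly partial) window
        return windows;
    }
}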
4 Activity Recognition in the Presence of Ontologies
Since ontologies offer ways of expressing concepts and relationships between
them, we found it interesting to exploit such expressiveness and assist machine
learning.
The most popular Case-Based Reasoning (CBR) tool that supports ontologies is jCOLIBRI [8, 9]. As most CBR tools do, jCOLIBRI uses the k-Nearest-Neighbours (kNN) algorithm to classify a new case, so it bears the problems of kNN. Another promising tool, CREEK [10], recently renamed AmICREEK, is not publicly available, although some case-evaluation studies have been published [11]. The most recent and most similar tool that we found is SituRes [12], a case-based approach to recognizing user activity in a smart-home environment. SituRes is based on one of the publicly available datasets that use PlaceLab [13].
When it comes to real-time activity recognition, there is a need for accuracy as well as speed. Our approach aims to take advantage of the rich expressiveness that ontologies can offer and to provide solid answers, using machine learning algorithms such as Support Vector Machines (SVMs) or BNs. The key factor that led to the design of a new system using machine learning is the lack of speed observed in existing systems that rely on algorithms such as kNN. Moreover, the robustness and accuracy of BNs led to a system faster and more accurate than the other systems we are aware of. Coping with sensors usually means coping with missing data, and SVMs, our initial choice, fall short in this respect. BNs are similar in terms of accuracy and outperform SVMs when data are missing, as shown in [14].
The input of the complete system is an ontology with instances, from which the user has to define the terms (classes) that describe the attributes, the term that describes the solution and the term under which the cases are stored. For example, consider a simple ontology where Case, Activity, Location, Time, Winter, Summer, Autumn, Spring, Indoors, Outdoors, FirstFloor and SecondFloor are classes. As expected, Winter, Summer, Autumn and Spring are subclasses of Time; Indoors and Outdoors are subclasses of Location; and FirstFloor and SecondFloor are subclasses of Indoors. In this ontology there should be some Case instances like the following:
<Case rdf:ID="Case74">
<has-Activity rdf:resource="#Cooking"/>
<has-Time rdf:resource="#July"/>
<has-Location rdf:resource="#Kitchen"/>
</Case>
where July is an instance of Summer and Kitchen is an instance of FirstFloor. The user should define that the terms Time and Location describe the attributes of the problem, that the term Activity describes the solution, and that each case is stored as an instance of the term Case. Our suggested solution to the problem of capturing the hierarchy information of an ontology and storing it as attributes is to keep the whole path of each instance as Boolean values. For example, Case74 above would be stored as:
Cooking, July,0,0,0,1,Kitchen,1,0,1,0
The Boolean values following Kitchen mean that it belongs to FirstFloor and Indoors, and not to SecondFloor or Outdoors. In other words, after an attribute value we store one Boolean value for each subclass in the attribute’s hierarchy, indicating whether or not that subclass lies on the path that leads to the instance value.
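A minimal sketch of this encoding step is given below, restricted to the Location attribute of Case74. The hard-coded hierarchy view and the helper structures are our own illustration of the idea, not the system’s actual code; in the real system this information would be read from the ontology, and the order of the Boolean flags follows the order in which the subclasses are enumerated there.

import java.util.List;
import java.util.Map;
import java.util.Set;

public class PathEncodingSketch {
    // Hypothetical, hand-built view of the example hierarchy:
    // attribute class -> ordered list of (sub)classes to emit one Boolean flag for.
    static final Map<String, List<String>> FLAG_CLASSES = Map.of(
            "Location", List.of("Indoors", "Outdoors", "FirstFloor", "SecondFloor"));

    // Instance value -> classes on the path from the attribute class to that value.
    static final Map<String, Set<String>> PATH = Map.of(
            "Kitchen", Set.of("Indoors", "FirstFloor"));

    // Emits the attribute value followed by one Boolean flag per listed (sub)class.
    static String encode(String attribute, String value) {
        StringBuilder row = new StringBuilder(value);
        for (String cls : FLAG_CLASSES.get(attribute)) {
            row.append(',').append(PATH.get(value).contains(cls) ? 1 : 0);
        }
        return row.toString();
    }

    public static void main(String[] args) {
        // Prints "Kitchen,1,0,1,0": Kitchen is in Indoors and FirstFloor,
        // not in Outdoors or SecondFloor (cf. the stored form of Case74 above).
        System.out.println(encode("Location", "Kitchen"));
    }
}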
In the first version of our activity recognition system, we used ontologies as described above and the results were satisfactory. To facilitate the evaluation process, we developed a simpler but quicker version that does not use ontologies. In this version, the events occurring within a timeframe are sent as plain text (rather than within an ontology) to the activity recognition system and added as a case to the case base. The accuracy of the simpler version was close to that of the first version, but its time performance was better, as expected.
5 Experimental Evaluation
In the absence of a real dataset for a smart classroom, we decided to create one in order to evaluate our system. The precision of our system depends mostly on the precision of the activity recognition, since everything else is rule-based. We therefore present here the evaluation results for a dataset that we built from our observations of a publicly available video of a lecture1. Our observations, which act as sensor data, include the position of the lecturer, the lighting, the persons who speak, etc. In this video, four activities are observed: lecture with slides, lecture with whiteboard, question and conversation. A 10-fold cross-validation on this dataset is summarized in Tables 1 and 2. Each row of Table 2 shows how instances of that activity were actually classified; ideally, all non-diagonal positions would contain zeros. It thus appears that no “conversation” was classified correctly. Further evaluation experiments of the activity recognition system will be presented in later work.
Total Number of Instances: 326
Correctly Classified Instances: 313 (96.0123 %)
Incorrectly Classified Instances: 13 (3.9877 %)
Table 1 Detailed accuracy by class

TP Rate   FP Rate   Precision   Recall   F-Measure   ROC Area   Class
0.995     0.077     0.959       0.995    0.977       0.994      lecture_slides
0.955     0.008     0.977       0.955    0.966       0.994      lecture_wb
0.955     0.007     0.913       0.955    0.933       0.997      question
0         0         0           0        0           0.816      conversation
1 http://videolectures.net/mlss08au_hutter_isml/. Part 2
Table 2 Confusion matrix (rows: actual activity; columns: classified as)

lecture_slides   lecture_wb   question   conversation   actual activity
208              1            0          0              lecture_slides
4                84           0          0              lecture_wb
1                0            21         0              question
4                1            2          0              conversation
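Results such as those in Tables 1 and 2 can be produced with WEKA's standard 10-fold cross-validation output. A sketch of how this evaluation can be run programmatically is given below; the ARFF file name and the random seed are assumptions made for the example.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.BayesNet;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationSketch {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("lecture_cases.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);      // activity = class attribute

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new BayesNet(), data, 10, new Random(1));

        System.out.println(eval.toSummaryString());        // correctly classified instances
        System.out.println(eval.toClassDetailsString());   // per-class metrics (cf. Table 1)
        System.out.println(eval.toMatrixString());         // confusion matrix (cf. Table 2)
    }
}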
The time performance of our system is the sum of the time taken by the SWRL rules and the time taken by the activity recognition system. As stated in §3.3, we have set a typical reasoning cycle (case) to 10 seconds. This means that a typical activity recognition dataset would need around 43,000 cases, which correspond to 5 (working) days of data, namely a week. We reproduced the same 326 cases acquired from the video to create a 43,000-case dataset, just to simulate the time performance of our system, ignoring accuracy. The average time of the activity recognition system over 100 executions on this dataset is 0.757401 seconds, on a rather outdated machine. Similarly, the average time of the SWRL rules over 10 executions is 0.686215 seconds.
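For reference, the 43,000-case figure follows from the 10-second window length, under the assumption of round-the-clock logging over the five days: 5 days × 24 h × 3,600 s ÷ 10 s per case = 43,200 ≈ 43,000 cases.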
6 Conclusions – Future Work
In this paper, we introduced the use of a real-time AmI system in a smart classroom. We presented how an ontology can be used to model the context of a smart environment and how we can take advantage of this modeling to assist activity recognition.
With some simple scenarios, we illustrated how such a system could assist its users, and we also provided some experimental results on its performance. Although the first results are promising, we still have some work to do in order to test our system in a real smart classroom environment.
Apart from that, we have already started to work on the Ambient Assisted Living domain, and particularly on the assistance of the elderly. We also plan to extend our Activity Recognition system, in order to grasp more of the information that an ontology can offer, to reduce the size of the case base efficiently, and finally to verify which activities were recognized correctly.
Acknowledgments. We would like to thank Ioannis Hryssakis and Dimitra Zografistou for
their support and advice in the preparation of this paper, as well as their eagerness to help
whenever needed.
References
[1] O’Driscoll, C., Mohan, M., Mtenzi, F., Wu, B.: Deploying a Context Aware Smart Classroom. In: International Technology and Education Conference, INTED, Valencia (2008)
[2] Leonidis, A., Margetis, G., Antona, M., Stephanidis, C.: ClassMATE: Enabling Ambient Intelligence in the Classroom. World Academy of Science, Engineering and Technology 66, 594–598 (2010)
[3] Krummenacher, R., Strang, T.: Ontology-based Context Modeling. In: Proceedings of the Third Workshop on Context-Aware Proactive Systems, CAPS 2007 (2007)
[4] Grammenos, D., Zabulis, X., Argyros, A., Stefanidis, C.: FORTH-ICS Internal RTD Programme Ambient Intelligence and Smart Environments. In: Proceedings of the 3rd European Conference on Ambient Intelligence (AMI 2009) (2009)
[5] Aamodt, A., Plaza, E.: Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications 7(1), 39–59 (1994)
[6] Witten, I.H., Frank, E., Hall, M.A.: Data Mining: Practical Machine Learning Tools and Techniques, 3rd edn. Morgan Kaufmann (2011)
[7] Tapia, E.M.: Using Machine Learning for Real-time Activity Recognition and Estimation of Energy Expenditure. Dissertation, Massachusetts Institute of Technology (2008)
[8] Recio-Garcia, J.A.: jCOLIBRI: A multi-level platform for building and generating CBR systems. Dissertation, Universidad Complutense de Madrid (2008)
[9] Recio-Garcia, J.A., Diaz-Agudo, B., Gonzalez-Calero, P., Sanchez-Ruiz-Granados, A.: Ontology based CBR with jCOLIBRI. In: Applications and Innovations in Intelligent Systems XV (2007)
[10] Aamodt, A.: A knowledge-intensive, integrated approach to problem solving and sustained learning. Dissertation, University of Trondheim, Norwegian Institute of Technology, Department of Computer Science, University Microfilms PUB 92-08460 (1991)
[11] Kofod-Petersen, A., Aamodt, A.: Case-Based Reasoning for Situation-Aware Ambient Intelligence: A Hospital Ward Evaluation Study. In: McGinty, L., Wilson, D.C. (eds.) ICCBR 2009. LNCS, vol. 5650, pp. 450–464. Springer, Heidelberg (2009)
[12] Knox, S., Coyle, L., Dobson, S.: Using ontologies in case-based activity recognition. In: Proceedings of FLAIRS 2010, pp. 336–341. AAAI Press (2010)
[13] Intille, S.S., Larson, K., Beaudin, J.S., Nawyn, J., Tapia, E.M., Kaushik, P.: A Living Laboratory for the Design and Evaluation of Ubiquitous Computing Technologies. In: Proceedings of CHI Extended Abstracts, pp. 1941–1944 (2005)
[14] Jayasurya, K., Fung, G., Yu, S., Dehing-Oberije, C., De Ruysscher, D., Hope, A., De Neve, W., Lievens, Y., Lambin, P., Dekker, A.L.A.J.: Comparison of Bayesian network and support vector machine models for two-year survival prediction in lung cancer patients treated with radiotherapy. Med. Phys. 37, 1401–1407 (2010)