Int. J. Mobile Learning and Organisation, Vol. 3, No. 4, 2009 337
Copyright © 2009 Inderscience Enterprises Ltd.
LORAMS: linking physical objects and videos for
capturing and sharing learning experiences towards
ubiquitous learning
Hiroaki Ogata*, Yoshiki Matsuka,
Moushir M. El-Bishouty and
Yoneo Yano
Department of Information Science and Intelligent Systems,
Faculty of Engineering,
The University of Tokushima,
2-1, Minamijosanjima,
Tokushima 770 8506, Japan
Fax: +81 88 656 7498
E-mail: ogata@is.tokushima-u.ac.jp
E-mail: matsuka@is.tokushima-u.ac.jp
E-mail: mbishouty@is.tokushima-u.ac.jp
E-mail: yano@is.tokushima-u.ac.jp
*Corresponding author
Abstract: This paper proposes a personal learning assistant called LORAMS
(Link of RFID and Movies System), which supports learners in sharing and
reusing learning experiences by linking videos to the physical objects around
them. The videos capture not only classroom experiments but also everyday
experiences, so they can be shared with other people. LORAMS infers the
learner's context from the surrounding objects and searches for shared videos
that match that context; we believe such videos are useful for learning many
kinds of subjects. In our evaluation experiments, some participants recorded
videos and linked them to objects, while others used LORAMS to learn and then
attempted a task. The results show that learners performed the task better
with the assistance of LORAMS than without it.
Keywords: computer-supported ubiquitous learning; m-Learning; mobile
learning; pervasive learning; RFID tag; SMIL; u-Learning; video.
Reference to this paper should be made as follows: Ogata, H., Matsuka, Y.,
El-Bishouty, M.M. and Yano, Y. (2009) ‘LORAMS: linking physical objects
and videos for capturing and sharing learning experiences towards ubiquitous
learning’, Int. J. Mobile Learning and Organisation, Vol. 3, No. 4, pp.337–350.
Biographical notes: Hiroaki Ogata is an Associate Professor in the Faculty of
Engineering, Tokushima University, Japan. He received his BE, ME and PhD
from Tokushima University in 1992, 1994 and 1998, respectively. He was a
Visiting Researcher at L3D, the University of Colorado at Boulder, USA, from
2001 to 2003. His current interests include computer-supported ubiquitous
learning and computer-supported collaborative learning. He received the best
paper awards from JSiSE in 1998, from WebNet in 1999, from ICALT in 2006,
from MULE in 2007 and from CollabTech in 2008.
Yoshiki Matsuka is a Postgraduate Student in the Department of Information
Science and Intelligent Systems, Tokushima University. His research interests
are in mobile learning, ubiquitous computing and multimedia broadcasting.
Moushir M. El-Bishouty received his BSc and MSc in Computer Science from
Alexandria University. He is a Research Assistant at the Informatics Research
Institute, Mubarak City for Science and Technology, Egypt, and also holds
a scholarship to pursue his PhD at the Department of Information Science and
Intelligent Systems, the University of Tokushima, Japan. His research
interests include ubiquitous computing environments, awareness and
personalisation.
Yoneo Yano is currently a Full Professor and also the Dean at the Faculty of
Engineering, Tokushima University. He received his BE, ME and PhD in
Communication Engineering from Osaka University in 1969, 1971 and 1974,
respectively. He was a Visiting Research Associate at the Computer-Based
Education Research Lab, University of Illinois, USA.
1 Introduction
Ubiquitous computing (Abowd and Mynatt, 2000) will help organise and mediate social
interactions wherever and whenever these situations might occur (Lyytinen and Yoo,
2002). Its evolution has recently been accelerated by improved wireless
telecommunications capabilities, open networks, continued increases in computing
power, improved battery technology and the emergence of flexible software architectures
(Sakamura and Koshizuka, 2005). With these technologies, ubiquitous learning
(u-Learning) is defined as an everyday learning environment that is supported by mobile
and embedded computers and wireless networks in our everyday life.
Related terms include mobile learning (m-Learning) and pervasive learning
(p-Learning). m-Learning increases the learners' capability to physically
move their own learning environment using lightweight devices such as
personal digital assistants (PDAs) and cellular phones (Barak et al., 2007;
Davidrajuh, 2007; Denk et al., 2007; Eschenbrenner and Nah, 2007; Lipovszki and
Molnar, 2007; Singh and Bakar, 2007; Yin et al., 2007). Since mobile devices
work alone, without any communication with computers embedded in the learner's
surrounding environment, they cannot provide information suited to the
learner's context. In
p-Learning, computers can obtain information about the context of the learning from the
learning environment where small devices such as sensors, pads, badges, radio frequency
identification (RFID) tags and so on, are embedded and communicate mutually.
However, the availability and the usefulness of p-Learning are limited and highly
localised. Finally, u-Learning has integrated m-Learning with p-Learning. While
m-Learning uses just mobile devices without any communication with physical
environmental objects, u-Learning supports learners through the communication among
mobile devices and embedded devices (Ogata and Yano, 2004).
One of the important ubiquitous computing technologies is the RFID tag, a
rewritable IC memory with a non-contact communication facility. This cheap,
tiny RFID tag will make it possible to tag almost everything, replace the bar
code, help computers become aware of their surrounding objects by themselves
and detect the user's context (Borriello, 2005). The features of the RFID tag
are as follows:
1 Non-line-of-sight reading: unlike a bar code, an RFID tag does not require
line-of-sight reading. In addition, the reading range of an RFID reader is
longer than that of a bar code scanner.
2 Multiple tag reading: unlike a bar code reader, an RFID unit can read
multiple tags at the same time. This makes it possible to count many objects
in a second, a feature applied in supply-chain management.
3 Data rewritability: an RFID tag has a memory chip that can be rewritten
using an RFID unit; in contrast, the data in a bar code cannot be changed.
4 High durability: tags withstand vibration, contamination (dust and dirt)
and abrasion (wear), so they can be used permanently.
5 Ease of maintenance: there are two types of RFID tags. Passive tags do not
use a battery; their power comes from the reader unit, so they can be used
indefinitely. Active tags contain batteries and have a longer range than
passive ones. There are also some issues with RFID tags. First, read
performance degrades when a tag is put on a metallic surface (for example,
on cans). Second, RFID standards, such as radio frequency bands and
identification schemes, are still fragmented. We use passive RFID tags based
on the ISO/IEC 15693 standard and the ubiquitous ID scheme, in which a
distinct uID is already assigned to every tag. Such a tag currently costs
about 1 US dollar, but we expect it to become very cheap, less than 1 cent,
and to replace bar codes on most products. For example, Wal-Mart is
encouraging its suppliers to put RFID tags on their products.
We assume that in the near future almost all products will carry RFID tags,
enabling us to learn about any object at any time and in any place by scanning
its tag.
The fundamental issues related to u-Learning are
1 How to capture and share learning experiences that happen at anytime and anyplace.
2 How to retrieve and reuse them for learning.
As for the first issue, video recording with handheld devices will allow us to capture
learning experiences. Also, consumer-generated media services like YouTube
[http://www.youtube.com/] help to share those videos. The second issue will be solved
by identifying objects in a video with RFID so that the system can recommend the videos
in situations similar to the situation where the learner has a problem.
This paper proposes LORAMS (Linking of RFID and Movie System) for u-Learning.
There are two kinds of users in this system. One is a provider who records his/her
experience in videos. The other is a user who has some problems and retrieves the videos.
In this system, a user uses his/her own PDA with an RFID tag reader and a
digital camera, links real objects to the corresponding objects in a video,
and shares the video with other learners. Scanning the RFID tags around the
learner bridges real objects and their information into the virtual world,
like the tangible user interfaces (Ishii and Ullmer, 1997) studied in the
human–computer interaction field. LORAMS detects the objects around the user
through RFID tags and provides the user with the right information in that
context.
As for related works, there are two kinds of educational applications using RFID tags.
The first type is the applications that can identify the objects on a table and support
face-to-face collaboration. For example, Envisionment and Discovery Collaboratory
(Arias et al., 1999) and Caretta (Sugimoto et al., 2004) consist of a sensing
board and objects with RFID tags, such as houses and schools. Detecting the
objects on the table enables the systems to run simulations such as urban
planning. Also, the TANGO (Tag Added learNinG Objects) system supports
learning vocabulary (Ogata and Yano, 2004). The
idea of this system is to stick RFID tags on real objects instead of sticky labels, annotate
them (e.g. questions and answers) and share them among others. The tags bridge
authentic objects and their information into the virtual world.
The second type covers applications that detect the learner's location using
RFID tags, which allows the system to track the learner's position and send
the right messages to the learner. eXspot (Hsi and Fait, 2005) is an example
of this type of application,
which is designed for museum educators; it can capture the user’s experiences at a
museum for later reflection. This system consists of a small RFID reader for mounting on
museum exhibits, and RFID tag for each visitor. While using RFID, a visitor can
bookmark the exhibit she/he is visiting, and then the system records the visitor’s
conceptual pathway. After visiting the museum, the visitor can review additional science
articles, explore online exhibits and download hands-on kits at home via a personalised
web page.
In this way, RFID is very useful for identifying objects precisely. In
u-Learning, LORAMS allows a user to find the video segments that include the
scanned physical objects around the user. Life-logging technology using
videos, photos and sensor
networks is being developed due to the low cost and high capacity of hard disk drives
(Gemmell et al., 2006; Sellen et al., 2007). The objective of our research is to examine
the effectiveness of the life-log with video and RFID in the context of u-Learning.
2 LORAMS
2.1 Features
The characteristics of LORAMS are as follows:
1 Learner’s experience is recorded in a video and linked to RFID tags of real objects.
The video can be shared with other learners.
2 Learners can find suitable videos by scanning RFID tags and/or entering keywords
of physical objects around them.
3 Search results are listed based on ratings by the learners and by the system.
There are three phases for LORAMS:
a video recording phase
b video search phase
c video replay phase.
The video recording phase requires a PDA, an RFID tag reader, a video camera
and wireless internet access. First, the user starts recording a video at the
beginning of the task.
Before using objects, the user scans RFID tags and the system automatically sends the
data and its time stamp to the server. After completing the task, the user uploads the
video file to the server, and the server automatically generates a SMIL
(Synchronized Multimedia Integration Language) file to link the video and
the RFID tags.
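To illustrate this server-side linking step, the sketch below builds a minimal SMIL document from the uploaded video URL and the time-stamped tag scans. This is a hedged Python sketch, not the paper's implementation (the actual server is written in PHP); the function name and the list-of-tuples layout are our assumptions, while `clip-begin`/`clip-end` with `npt=` values are standard SMIL clipping attributes.

```python
def generate_smil(video_url, scans):
    """Build a minimal SMIL document linking RFID scan intervals
    to clipped segments of one uploaded video.

    scans: list of (tag_id, start_s, end_s) tuples taken from the
    time stamps sent when the user pushed the 'start'/'end' buttons.
    """
    clips = []
    for tag_id, start_s, end_s in scans:
        # Each scanned object becomes one clipped region of the video,
        # so a player can jump straight to the part showing that object.
        clips.append(
            f'      <video src="{video_url}" title="{tag_id}" '
            f'clip-begin="npt={start_s}s" clip-end="npt={end_s}s"/>'
        )
    return (
        "<smil>\n  <body>\n    <seq>\n"
        + "\n".join(clips)
        + "\n    </seq>\n  </body>\n</smil>"
    )
```

For example, `generate_smil("rtsp://server/task1.rm", [("4F303038", 12, 47)])` (a hypothetical URL, with the hard disk tag ID from Table 1) yields a document whose single clip covers seconds 12–47, the interval during which that object was in use.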
On the other hand, the video search and replay phases require a PDA, an RFID
tag reader and RealPlayer. The user scans the RFID tags around him/her and
enters keywords for the objects; the system then sends these to the server and
shows a list of the videos that match the objects and keywords. Moreover, the
system extracts the part of each video that matches these objects, and the
video is replayed using RealPlayer.
2.2 User interface
In the recording phase, the user sets up the RFID reader information, such as
the port number and code type, and enters the experiment name and user name.
When the user starts using an object, she/he pushes the 'start' button and
scans the object's RFID tag; when the user finishes working with the object,
she/he pushes the 'end' button and scans the tag again. The RFIDs and the time
stamps of the scans are sent to the server by pushing the 'send' button. As
shown in the right of Figure 1, the RFIDs are linked to the video.
First, the user scans RFIDs and enters keywords in (A). Then, the system displays the
result in (B). The user can select one of the videos, and by pushing the replay button (C),
RealPlayer automatically appears and plays the selected video. The objects in the video
are listed below the movie area in (D).
Figure 1 The interface of the recording phase (left) and video time line (right). (see online
version for colours)
2.3 System configuration
We have developed LORAMS, which works on a Fujitsu Pocket Loox v70 with
Windows Mobile 2003 2nd Edition, RFID tag reader/writer (OMRON V720S-HMF01),
and WiFi (IEEE 802.11b) access. The RFID tag reader/writer is attached to the
CF (compact flash) card slot of the PDA, as shown in Figure 2. The tag unit
can read and write RFID tags within a 5 cm range, and it works with the
wireless LAN at the same time. The LORAMS program has been implemented using
Embedded Visual C++ 4.0 and PHP 5.0 (Figure 3).
Figure 2 The interface of the video search (left) and RealPlayer for video replay (right).
(see online version for colours)
Figure 3 System configuration (see online version for colours)
The server application consists of the following modules:

•  Database entry: stores each RFID reading and its time stamp into the database.
•  Database: the system uses a MySQL server as its database.
•  Database search: matches videos against keywords and RFID tags.
•  SMIL generation: after finding the segments that contain the keywords and
   RFID tags, this module generates a SMIL file for each segment.
•  Video streaming: uses the Real Helix Server to deliver streaming video.
   With this module, the system does not deliver a whole video to the client
   but only the part that contains the targeted objects.
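The matching logic of the database search module might look like the following sketch. This is a hedged illustration in Python rather than the PHP used in the real server; the record layout (an `id` plus an `objects` map of tag ID to object name) is our assumption about how the database rows could be represented.

```python
def search_videos(videos, scanned_tags, keywords):
    """Return the IDs of videos containing at least one queried object.

    videos: list of records, each with an 'id' and an 'objects' map of
    RFID tag ID -> object name (as they might be stored in MySQL).
    A video matches if any scanned tag or keyword hits its objects.
    """
    hits = []
    for video in videos:
        # Exact match on scanned RFID tag IDs.
        tag_hit = any(tag in video["objects"] for tag in scanned_tags)
        # Case-insensitive substring match on the object names.
        keyword_hit = any(
            kw.lower() in name.lower()
            for kw in keywords
            for name in video["objects"].values()
        )
        if tag_hit or keyword_hit:
            hits.append(video["id"])
    return hits
```

For instance, scanning the hard disk tag `4F303038` from Table 1 would return only the videos whose linked objects include that tag, while the keyword "vga" would match videos containing the VGA card.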
2.4 Ranking method
Ubiquitous computing environment enables people to learn at anytime and anyplace. The
challenge in an information-rich world is not only to make information available to
people at anytime, at anyplace, and in any form, but specifically to say the right thing at
the right time in the right way (Fischer, 2001). To this end, much research
has been done on information filtering and recommender systems (Resnick and
Varian, 1997). The LORAMS system combines collaborative filtering based on the
users' ratings (x1 and x2) with content-based filtering based on the objects
in the video (x3, x4 and x5). It ranks the search results with the following
equation in order to recommend the targeted videos:
    l = Σ_{i=1}^{5} w_i x_i

where x1 is the self-rating given by the provider, with 0 ≤ x1 ≤ 1; x2 is the
average of the other users' (learners') ratings, with 0 ≤ x2 ≤ 1; x3 is the
number of the target objects in the video divided by the number of the target
objects given by the user; x4 is the period during which at least one of the
target objects is shown in the video divided by the length of the video; x5 is
the period during which all target objects are shown in the video at the same
time divided by the length of the video; and wi is the weight defined by the
system administrator, with Σ wi = 100.
A target object is an object that matches the keywords and/or RFID tags given
by the user. The search results are ordered according to the value of the
above equation.
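The weighted sum above can be sketched directly; this is a minimal Python illustration, with the function name and input layout as our assumptions.

```python
def rank_score(x, w):
    """Compute l = sum_{i=1..5} w_i * x_i.

    x: the five factors, where x1 (provider self-rating) and x2
    (average peer rating) lie in [0, 1], and x3, x4, x5 are the
    object-coverage ratios described above.
    w: administrator-defined weights summing to 100.
    """
    assert len(x) == len(w) == 5
    assert abs(sum(w) - 100) < 1e-9, "weights must sum to 100"
    return sum(wi * xi for wi, xi in zip(w, x))
```

With the weights used in the experiment (w1..w5 = 10, 40, 20, 10, 20; see Section 3.1), a video rated 0.8 by its provider and 0.6 by peers, containing all target objects (x3 = 1.0) for half its length (x4 = 0.5) and all of them simultaneously for a quarter of it (x5 = 0.25), scores 10·0.8 + 40·0.6 + 20·1.0 + 10·0.5 + 20·0.25 = 62.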
3 Experimentation
We conducted an evaluation to measure how well LORAMS can support u-Learning.
The tasks were to install devices into personal computers, as shown in
Figure 4.
Figure 4 Scene of the experimentation (see online version for colours)
3.1 Experimentation design
Twenty students from the Department of Computer Science at the University of
Tokushima took part in this experiment. Although they had already learnt the
theory of computer architecture in the classroom, they had never been taught
how to assemble a computer in practice. Each of them was given 30 min to
complete one of the following tasks:
Task 1: connect a 40 GB hard disk drive as the master device and a CD-ROM
drive as the slave device using one IDE cable.
Task 2: connect a 30.7 GB hard disk drive as a master device and a CD-ROM
drive as a master device.
Task 3: install a 32 MB AGP VGA card and 2 × 128 MB RAM.
Task 4: install a 16 MB AGP VGA card and 1 × 256 MB RAM.
In a laboratory of the computer science department, the students are replaced
every two or three years. Sometimes the computers in the lab break down and
some parts need to be replaced; the computers also have to be upgraded with
additional memory and/or hard disks for their continued use. It is difficult
for the students to accomplish such tasks because they lack experience. In
this real situation, LORAMS is very useful for archiving all past actions and
for retrieving and reusing them.
The goal of learning through this task is that the learner can eventually
assemble a PC alone, without any help from others, even if he/she initially
had to watch an expert's video or ask a peer helper. Therefore, after one
month, we asked the subjects to repeat the tasks they had achieved with
LORAMS, and we observed how well they could assemble a PC without it.
Before starting the task, the devices and the use of the PDA and RFID tag
reader were explained to the subjects. Each device was attached to a different
RFID tag, as shown in Table 1. Based on a pre-questionnaire about their
PC-assembly experience, the students were classified as expert or inexpert:
five students had enough experience to complete the above tasks, and the
remaining 15 had little experience. The students were then divided into the
following three groups.
1 Production group: consists of 11 students: five experts and six inexperts.
2 Learner group: consists of six inexpert students.
3 Video rating group: consists of three inexpert students.
The experiment was based on the following evaluation methods:
1 Evaluation method of video production group: while doing the tasks, they recorded
videos, shared and rated them using the system.
2 Evaluation method of learners group: each learner from the learner group was asked
to do the following:
a Use the system and complete one task after watching the recommended video.
b Do the same task again after 1 month without using system, in order to verify
whether they have learnt while using the system.
3 Evaluation method of the video rating group: the students of this group were
asked to rate all the videos with a number that provided the value x2.
Table 1 Objects and IDs used in the experiment
Object ID Object name
4F303032 SDDR RAM Hunix 256 MB
4F303034 SDDR RAM SEC 128 MB
4F303033 DDR RAM NANAYA 256 MB
4F303036 VGA SST MPF 39V512 – (16 MB)
4F303038 Hard Disk Drive Maxtor IDE 40 GB
4F303039 Hard Disk Drive IBM IDE 30.7 GB
4F303130 CD ROM Drive LG
4F303133 IDE Data Cable double
After three months, six students from the learner group were asked to use the
system, enter some keywords describing the objects of the task that each of
them had done before, watch the first three videos as ranked by the system and
rerate them according to their own opinions. The aim of this step was to
evaluate the system's ranking method. The weight parameters were defined as
follows.
1 Weight parameters of the subjective evaluation: w1 = 10, w2 = 40.
w2 was set to a high value to increase the weight and importance of the
learners' ratings.
2 Weight parameters of the objective evaluation: w3 = 20, w4 = 10, w5 = 20.
w3 and w5 were set to high values to increase the weight and importance of
videos that contain almost all the task objects.
3.2 Result
After the experiment, all students filled in a questionnaire, answering each
question with a rating from 1 (the worst) to 5 (the best). The results are
shown in Table 2, which lists the average (Avg.) and standard deviation (SD)
of the learners' answers.
3.2.1 Evaluation of the video production
First, all students of the production group executed the tasks and produced
videos using the PDA and RFID reader. In this phase, the students could use a
web search engine such as Google to look for information, but were not allowed
to use LORAMS. As a result, all five expert students and two of the six
inexpert students completed the task successfully. After that, the students of
the learner group executed their tasks using LORAMS, and five of the six
completed the task. One month later, the learner group was asked to do the
same tasks again without any help from the internet, LORAMS or other people,
and all six students completed the task successfully.
According to Q1 and Q2, the recording phase was accepted, although some
subjects mentioned that they sometimes forgot to scan tags and found doing so
tedious. On the other hand, there were also affirmative opinions: some
students commented that linking real objects to videos is very easy and
requires no typing. For novice users in particular, it was difficult to tell
apart similar types of parts, for instance, IDE and SCSI hard disks or DIMM
and SIMM memory, so scanning the RFID tags of objects was very useful for
finding appropriate videos without any text input.
Other subjects commented that the linking process needs no special
video-editing skills and is good training for the producers. Nevertheless,
the user interface should be improved to reduce the number of button
operations and to display which objects have been read so far.
Table 2  Results of questionnaires

No.   Questionnaire                                                         Avg.  SD
Q1    Is it so easy for you to read RFID tags, record a video and           3.4   0.85
      complete the task at the same time?
Q2    Is it so easy for you to make a link between RFID and movie?          3.5   1.71
Q3    Do you think that the recorded videos are very useful for the         4.3   0.67
      beginners to complete the task?
Q4    Do you think that the retrieved video is effective for learning?      4.5   1.21
Q7    Is it easy to find the suitable movie using this system?              4.0   1.10
Q8    Overall, is it easy for you to use this system?                       3.7   0.52
Q9    Overall, do you think this system is useful for learning?             4.5   0.30
Q10   Do you want to use this system again?                                 4.3   0.67
There are many related works on adding annotations and keywords to video so
that a system can provide only the video that the user requires (Davis, 1993;
Smith and Lugeon, 2000; Nagao et al., 2001). However, these production methods
require a lot of human effort and time. A system has therefore been proposed
in which viewers add their own annotations to the video contents, decreasing
the producer's load; collective intelligence is added to the contents through
these annotations (Yamamoto and Nagao, 2005). In contrast to those systems,
LORAMS does not need manual annotation. Accordingly, most of the subjects
stated that it was very easy to link physical objects and video.
3.2.2 Evaluation of the learning process
The success rate (SR) and execution time (ET) for the learner group are shown
in Figure 5. SR is calculated by the following equation:

    SR = (number of successful tasks / number of tasks that the subjects tried) × 100

The more frequently the subjects accomplished tasks successfully, the higher
the SR. ET is the average time from the beginning of a task to its end.
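The SR formula can be written as a one-line sketch. The counts below are our reading of the numbers reported in Section 3.2.1 (two of six inexpert students succeeding without LORAMS, five of six with it), which reproduce the 33.3% and 83.3% figures.

```python
def success_rate(successes, attempts):
    """SR = (number of successful tasks / number of tasks tried) * 100."""
    return 100.0 * successes / attempts
```

For example, `success_rate(2, 6)` gives about 33.3 and `success_rate(5, 6)` gives about 83.3, matching the values discussed below for the 'Web' and 'LORAMS' settings.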
In Figure 5, the solid line (diamonds) shows SR (%) and the dotted line
(squares) shows ET (seconds) in three different settings:
1 'Web': the subjects did a task using web pages only, without LORAMS.
2 'LORAMS': the subjects tried a task using LORAMS.
3 'After 1 month': the subjects did a task without using web pages or LORAMS.
The SR increased from 33.3% to 83.3% when LORAMS was used, and the ET
decreased. This means that LORAMS helped the students learn how to install
computer devices. After one month, the SR reached 100% and the ET decreased
further, meaning that all the subjects could successfully assemble a PC
without any help from web pages or LORAMS. This indicates that the students
learnt and improved their skills while using LORAMS.
Figure 5 Success rate and execution time in each setting (see online version
for colours)
Note: Success rate (solid line) and execution time (dotted line).
From Q2 (Table 2), it is clear that the system can efficiently retrieve the
matching videos and extract the part that includes the real object. Several
students commented that they could directly retrieve just the part they wanted
to watch and did not have to waste time searching for it or watching
unnecessary video.
These results suggest that watching real experiences in videos is more useful
than reading related documents. They also confirm that a learner could watch
only the important part of a video that matched his/her need, and that the
learners gained new experience and knowledge while using the system.
3.2.3 Evaluation of the ranking method
As shown in Table 3, the videos ranked first by the system received an average
rating of 1.3 from the video rating group; however, the average ratings of the
second- and third-ranked videos were in the opposite order. One of the main
reasons for this wrong order may be the similarity between many computer
components. Therefore, the ranking method needs further enhancement.
Table 3 Video ranking
Student 1st 2nd 3rd
A 2 3 1
B 1 3 2
C 2 3 1
D 1 3 2
E 1 2 3
F 1 3 2
Average 1.3 2.8 1.8
4 Conclusion
This paper described a u-Learning environment called LORAMS, which supports
learners in sharing and reusing learning experiences by linking videos to
objects in the surrounding environment. Using RFID tags, LORAMS identifies the
physical objects in each segment of a recorded video, making it easy to
retrieve the segments that include the objects surrounding the user.
The evaluation results are as follows:
1 From the comments of the subjects, it is understood that they could easily
register videos with links between the videos and the objects in them, and
that they could find the video segments they wanted.
2 The subjects could acquire skills for computer assembling using LORAMS.
Therefore, the system was helpful for learning.
3 The order of the search results was almost appropriate for the user. However, more
improvement of the ranking method is necessary to increase the accuracy.
In this evaluation, the number of subjects was small, and a deeper analysis of
usability is necessary. Therefore, we will conduct further evaluations and
data analysis with a larger number of subjects.
In future work, we will improve the user interface and the ranking method
based on the students' comments. We will also apply LORAMS to other domains
(e.g. cooking, and second-language learning for people living in a foreign
country). We think that LORAMS is useful not only for everyday learning but
also for training in specific domains, such as checking the oil, battery and
tyres of a car; surgery; and chemical bioreactor experiments. We believe
LORAMS can be applied to many domains by those who need different kinds of
skills in their everyday life. As for system improvements, we will expand the
recommendation (ranking) function in order to provide the right video to the
right user in the right context. We will also examine Adobe Flash Player
instead of RealPlayer and enable LORAMS to work on PDAs and mobile phones. In
addition, we will use a very small headset video camera and microphone to
allow video recording at any time and in any place. Finally, although the
ubiquitous computing society is not yet realised, we believe we should start
designing learning environments for that future society now.
References
Abowd, G.D. and Mynatt, E.D. (2000) ‘Charting past, present, and future research in ubiquitous
computing’, ACM Transaction on Computer-Human Interaction, Vol. 7, No. 1, pp.29–58.
Arias, E., Eden, H., Fischer, G., Gorman, A. and Scharff, E. (1999) ‘Beyond access: informed
participation and empowerment’, Proceedings of the Conference on Computer Supported
Collaborative Learning (CSCL ‘99), Stanford, CA, USA, 11–12 December.
Barak, M., Harward, J. and Lerman, S. (2007) ‘Studio-based learning via wireless notebooks: a
case of a Java programming course’, Int. J. Mobile Learning and Organisation, Vol. 1, No. 1,
pp.15–29.
Borriello, G. (2005) ‘RFID: tagging the world’, Communications of the ACM, Vol. 48, No. 9,
pp.34–37.
Davidrajuh, R. (2007) ‘Array-based logic for realising inference engine in mobile applications’, Int.
J. Mobile Learning and Organisation, Vol. 1, No. 1, pp.41–57.
Davis, M. (1993) ‘An iconic visual language for video annotation’, Proceedings of IEEE
Symposium on Visual Language, Bergen, Norway, 24–27 August.
Denk, M., Weber, M. and Belfin, R. (2007) ‘Mobile learning – challenges and potentials’, Int. J.
Mobile Learning and Organisation, Vol. 1, No. 2, pp.122–139.
Eschenbrenner, B. and Nah, F.F. (2007) ‘Mobile technology in education: uses and benefits’, Int. J.
Mobile Learning and Organisation, Vol. 1, No. 2, pp.159–183.
Fischer, G. (2001) ‘User modeling in human-computer interaction’, Journal of User Modeling and
User-Adapted Interaction (UMUAI), Vol. 11, No. 1/2, pp.65–86.
Gemmell, J., Bell G. and Lueder, R. (2006) ‘MyLifeBits: a personal database for everything’,
Communications of the ACM, Vol. 49, No. 1, pp.88–95.
Hsi, S. and Fait, H. (2005) ‘RFID enhances museum visitors’ experiences at the Exploratorium’,
Communications of the ACM, Vol. 48, No. 9, pp.60–65.
Ishii, H. and Ullmer, B. (1997) ‘Tangible bits: towards seamless interfaces between people, bits and
atoms’, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Atlanta, Georgia, USA, 22–27 March, pp.234-241.
Lipovszki, G. and Molnar, I. (2007) ‘Mobile learning for mechanical engineers’, Int. J. Mobile
Learning and Organisation, Vol. 1, No. 3, pp.239–256.
Lyytinen, K. and Yoo, Y. (2002) ‘Issues and challenges in ubiquitous computing’, Communications
of ACM, Vol. 45, No. 12, pp.63–65.
Nagao, K., Shirai, Y. and Squire, K. (2001) ‘Semantic annotation and transcoding: making web
content more accessible’, IEEE MultiMedia, Vol. 8, No. 2, pp.69–81.
Ogata, H. and Yano, Y. (2004) ‘Context-aware support for computer-supported ubiquitous
learning’, Proceedings of IEEE Wireless and Mobile Technologies in Education
(WMTE2004), Taipei, Taiwan, 23–25 March.
Resnick, P. and Varian, H.R. (1997) ‘Recommender systems’, Communications of the ACM,
Vol. 40, No.3, pp.56–58.
Sakamura, K. and Koshizuka, N. (2005) ‘Ubiquitous computing technologies for ubiquitous
learning’, Proceedings of the International Workshop on Wireless and Mobile Technologies in
Education (WMTE2005), Tokushima, Japan, 28–30 November.
Sellen, A., Fogg, A., Hodges, S. and Wood, K. (2007) ‘Do life-logging technologies support
memory for the past? An experimental study using SenseCam’, Proceedings of International
Conference on Human Factors in Computing System (CHI ’07), San Jose, CA, USA, 28
April–3 May.
Singh, D. and Bakar, Z.A. (2007) ‘Wireless implementation of a mobile learning environment
based on students’ expectations’, Int. J. Mobile Learning and Organisation, Vol. 1, No. 2,
pp.198–215.
Smith, J.R. and Lugeon, B. (2000) ‘Visual annotation tool for multimedia content description’,
Proceedings of SPIE Photonics East, Internet Multimedia Management Systems, Boston,
MA, USA, 6 November.
Sugimoto, M., Hosoi, K. and Hashizume, H. (2004) ‘Caretta: a system for supporting face-to-face
collaboration by integrating personal and shared spaces’, Proceedings of International
Conference on Human Factors in Computing System (CHI2004), Vienna, Austria, 24–29
April.
Yamamoto, D. and Nagao, K. (2005) ‘Web-based video annotation and its applications’, Journal of
the Japanese Society for Artificial Intelligence, Vol. 20, No. 1, pp.67–75.
Yin, C., Ogata, H. and Yano, Y. (2007) ‘Participatory simulation framework to support learning
computer science’, Int. J. Mobile Learning and Organisation, Vol. 1, No. 3, pp.288–304.