Linking Everyday Presentations through Context Information
Alessandra Alaniz Macedo
Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto
Universidade de São Paulo, Brazil
Renato Bulcão Neto, José Antonio Camacho-Guerrero, Carlos Henrique O. Jardim,
Renan G. Cattelan, Valter R. Inácio Jr., Maria da Graça C. Pimentel
Instituto de Ciências Matemáticas e de Computação
Universidade de São Paulo, Brazil
A relevant issue in linking services based on information retrieval techniques is how to define scopes that delimit homogeneous information so as to obtain good results. An interesting way to achieve this is to provide such
scope delimitation by means of context information. This
process of linking can be tailored according to contextual
constraints explicitly provided by users. We propose linking
services enhanced with context information captured from
everyday presentations. We present the LinkDigger Context
Service, which creates hyperlinks following information ob-
tained from users. As a result, different hypertexts can be
defined upon the same information.
Ubiquitous computing (Ubicomp) proposes the seamless
integration of hardware and software into a physical envi-
ronment so as to aid humans in their everyday activities,
without changing the way they usually perform those activities. Ubicomp investigates the construction of capture and access applications: information in everyday experiences is captured and made available as hyperdocuments. Capture and access applications have been built for meeting and classroom environments.
Another ubicomp theme is context-aware computing, which focuses on the ability of a computational entity to customize its behavior based on contextual information obtained explicitly, from the users themselves, or implicitly, from instrumented environments. Context is information that describes something about the conditions where either a user is or an application executes. Classic dimensions for context are who, where, when and what.
Building on the automation provided by ubicomp, Abowd et al. suggest computational support before, during and after a live presentation. This support can include linking services focused on presenting complementary information, since the information obtained from different live sessions is usually related. This kind of service allows a lecturer to automatically augment his or her presentation. Pimentel et al. suggested services to allow linking related documents before and after a live presentation takes place. Macedo et al. observed that those services should also be available during live sessions.
The literature has remarked that the more homogeneous the repositories to be processed, the better the query results. A problem with linking services based on information retrieval techniques is how to define scopes that delimit homogeneous information. We propose the use of context information provided by users to relate homogeneous information captured from live sessions.
We illustrate our proposal with the LinkDigger Context Service, which exploits context information obtained from a Context Web Service in all phases of a capture and access process. Our implementation was carried out by extending previous results. By using the LinkDigger Context Service, users are able to relate captured everyday presentations according to specific lecturers (who), subjects (what), dates (when) and places (where). This allows different links to be created over the same captured information and the same organization, simply by using context information as a filter.
After presenting previous work, we describe the LinkDigger Context Service and its use with capture and access applications. We then present usage scenarios, contributions and comments on future work.
Abstractions for Capture and Access. A large number of capture and access applications present recurrent functionalities. Typical examples occur in the educational domain: most applications implement software to capture audio as well as user interaction with an electronic whiteboard and Web browsers.
Since these functionalities may appear in any combina-
tion and suggest a component-based approach, we devel-
oped xINCA (eXtended INfrastructure for Capture and Ac-
cess Applications), which provides software compo-
nents that capture and access user-interactions with elec-
tronic whiteboards and PDAs, text generated in chat ses-
sions, URLs visited by Web browsing as well as streams
of audio and video recorded during live sessions. By com-
bining the required components, a functional capture and
access application can be easily and rapidly prototyped.
xINCA software components intercommunicate according to INCA's publish/subscribe communication model. They register at runtime in a Registry entity that coordinates the exchange of messages based on topics of interest bound to session identifiers. A session represents a period of interaction between components and has a unique identifier as long as it lasts. Once sharing the same session identifier and functionality type, components from a given application are able to communicate and exchange information with any other component from any other capture and access application. Sessions are managed by applications according to the logic used to manage collaboration among users. A centralized service is responsible for assigning session IDs, for instance a primary key from a relational database. Applications running unrelated sessions (where users do not need to exchange information) receive different session IDs. Similarly, applications whose users share related content (context) are assigned the same session ID.
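As an illustration, the session-scoped message routing just described can be sketched as follows. This is a minimal sketch in Python; the class and method names are ours, not part of the actual INCA implementation:

```python
from collections import defaultdict

class Registry:
    """Coordinates message exchange based on topics of interest
    bound to session identifiers, as in the INCA model."""
    def __init__(self):
        # (session_id, topic) -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, session_id, topic, callback):
        self._subscribers[(session_id, topic)].append(callback)

    def publish(self, session_id, topic, message):
        # Only components sharing the same session ID and topic receive it.
        for callback in self._subscribers[(session_id, topic)]:
            callback(message)

# Two whiteboard components: one in session-1, one in an unrelated session.
registry = Registry()
received_a, received_b = [], []
registry.subscribe("session-1", "whiteboard", received_a.append)
registry.subscribe("session-2", "whiteboard", received_b.append)
registry.publish("session-1", "whiteboard", "stroke-data")
# received_a == ["stroke-data"]; received_b == [] (unrelated session)
```

The sketch captures the key property stated above: components exchange information only when they share both a session identifier and a functionality type (topic).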
Context Kernel Web Service. A Web service is a software component identified by a URI and delivered in whole or in part via the Internet through its interface. Context Kernel (CK) is a Web service that allows applications to handle context information based on the classic dimensions who, where, when, what and how. CK classifies those dimensions as: (i) primitive, those handled independently of other dimensions; (ii) derivative, those obtained by relating other dimensions, primitive or derivative. The definition of these two kinds of dimensions facilitates combinations of derivative dimensions.
A primitive dimension is represented as a premise defined by a tuple containing type, value and an optional qualifier. A derivative dimension is defined by means of a rule that contains at least one premise and one inference. The definition of premise and inference rules facilitates the representation of derivative information. Any dimension can be primitive or derivative, depending strictly on the application requirements. Therefore, the applications themselves are responsible for specifying which kinds of data and rules are particularly relevant to them. The following XML excerpt illustrates the vocabulary defined by CK: a set of premises related to the who dimension gives details about a user (login, name, and email) and defines, by means of the what dimension, that the user is a member of the group XYZ. XML is used to represent the dimensions manipulated by CK because of the interoperability this language provides.
<premise dimension="who" type="login" value="jd"/>
<premise dimension="who" type="name" value="John Doe"/>
<premise dimension="who" type="mbox" value="jd@.."/>
<inference dimension="what" type="group" value="XYZ"/>
The CK API offers five categories of services: registry, status, storage, retrieval and event notification. The core of the CK API includes services for storing and retrieving context information with respect to the value of premises and inferences, following the XML vocabulary shown above. Applications may also retrieve context information by querying premises, specifying the number of answers, or combining premises using boolean operators. Note that CK relies on context-aware applications to guarantee the validity of the data and rules being stored. Moreover, the relevance of each piece of context information or rule is a prerogative of the applications themselves.
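As an illustration of combining premises with boolean operators, the following sketch builds a retrieval query in the CK XML vocabulary. The helper function is ours, and the exact wire format of the CK API is an assumption based only on the excerpts shown in this paper:

```python
import xml.etree.ElementTree as ET

def boolean_query(operator, premises, inference):
    """Build a CK-style retrieval query: premises combined with a
    boolean operator, plus the inference dimension to be retrieved.
    (Element and attribute names follow the excerpts in the text;
    the actual CK request format may differ.)"""
    root = ET.Element("boolean", {"type": operator})
    for dimension, type_, value in premises:
        ET.SubElement(root, "premise",
                      {"dimension": dimension, "type": type_, "value": value})
    dimension, type_ = inference
    ET.SubElement(root, "inference", {"dimension": dimension, "type": type_})
    return ET.tostring(root, encoding="unicode")

# All URLs of content authored by John Doe in Fall1999:
xml_query = boolean_query(
    "AND",
    [("who", "author", "John Doe"), ("when", "date", "Fall1999")],
    ("where", "url"))
```

Such a query string would then be sent to the CK retrieval service, which returns the matching inference values.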
We have used the CK Web service infrastructure to integrate representative CSCW applications via context information, such as the WebMemex recommender system. This integration was possible because CK represents context in a generic, though not complete, way. We present further details about the use of CK in the next sections.
LinkDigger Service. Macedo et al. have developed services that automatically identify links among homogeneous Web repositories using lexical matching and Latent Semantic Indexing, and integrating an open linkbase to store the computed links. The infrastructure was redesigned for reuse in a linking service called LinkDigger, which also allows user feedback, and was used to build the WebMemex recommender service. To define the relationships between Web documents, LinkDigger executes its underlying processing periodically: we do not process continuously given the size of the matrices representing terms and documents. This means that updated information is made available only periodically by LinkDigger. The underlying processing of LinkDigger is as follows:
Indexing. Initially all documents extracted from the Web are indexed, i.e., significant words (excluding stop words) are extracted from the documents (we use the mnoGoSearch search engine).
Compute Weight. Since many words are extracted, it is important that, at the time of indexing, the words be given an appropriate weight in terms of the number of times they appear in a given document relative to the number of times they appear in the whole repository. We use a term-weighting scheme.
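One representative term-weighting scheme of this kind is tf-idf, which weights a term by its frequency in a document against the number of documents that contain it. The following minimal sketch illustrates the idea; the actual scheme used by LinkDigger may differ:

```python
import math
from collections import Counter

def tfidf(documents):
    """For each document (a list of words), weight each term by its
    in-document frequency times the log-inverse of the number of
    documents in the repository containing it (tf-idf)."""
    n = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))  # count each term once per document
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({term: tf[term] * math.log(n / doc_freq[term])
                        for term in tf})
    return weights

docs = [["ubicomp", "capture", "access"],
        ["capture", "linking"],
        ["linking", "context"]]
w = tfidf(docs)
# "ubicomp" appears in only one of the three documents, so it receives
# a higher weight than "capture", which appears in two.
```

Terms concentrated in few documents thus dominate the term-by-document matrix built in the next step.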
Generate the Terms by Documents Matrix. The index
resulting from the previous step is used to generate a term
by document matrix called matrix X.
Compute SVD. The matrix X is decomposed into the product of three component matrices T, S and D using Singular Value Decomposition (SVD), which is part of LSI theory. Following the decomposition by SVD, the k most important dimensions (those with the highest values in the singular matrix S) are selected, since the aim is to reduce the dimension of the working space. The amount of dimensionality reduction, i.e., the choice of k, is critical and is an open issue in the literature. Ideally, k should be large enough to fit the real structure in the data, but small enough that noise, sampling errors and unimportant details are not modeled. We have generally set k to 200, as suggested in the literature.
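The decomposition and truncation can be sketched as follows, using a small random matrix and k = 5 purely for illustration:

```python
import numpy as np

# A toy term-by-document matrix X (50 terms x 20 documents).
rng = np.random.default_rng(0)
X = rng.random((50, 20))

# Decompose X into T, S (singular values) and D.
T, s, D = np.linalg.svd(X, full_matrices=False)

# Keep only the k most important dimensions (the largest singular
# values); real repositories would use k around 200.
k = 5
T_k, s_k, D_k = T[:, :k], s[:k], D[:k, :]

# Rank-k approximation of X in the reduced working space.
X_k = T_k @ np.diag(s_k) @ D_k
```

Because singular values are returned in decreasing order, truncating to the first k columns keeps exactly the dimensions with the highest values in S.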
Define the Semantic Matrix. A semantic matrix is generated by computing the inner product between each pair of columns of the reduced matrix. This process manipulates the SVD component matrices received from the previous step to generate a semantic matrix.
Compute Similarities. Given the semantic matrix generated in the previous step, relationships between documents are identified by considering the cells that have the highest similarity values. A similarity threshold is chosen to filter the links created, generating a relevance semantic matrix which is used to identify semantic links between documents. The links generated are stored in an open linkbase. The general approach of creating links automatically over Web repositories demands the capability to edit documents in those repositories so as to embed link specifications. Such a writing permission is an obstacle when a system aims at automatically creating links within any repository. One attractive way of supporting hypertext links without changing the original document is to use open hypermedia concepts. In open hypermedia systems, links are managed and stored in special databases called linkbases. This approach brings advantages such as link maintenance, reuse and flexibility to the documents.
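The semantic-matrix and similarity-thresholding steps above can be sketched as follows; the normalization to cosine similarity is an assumption of ours for illustration:

```python
import numpy as np

# Reduced-space document representations from the SVD step
# (columns = documents); random values purely for illustration.
rng = np.random.default_rng(1)
doc_vectors = rng.random((5, 8))  # 5 reduced dimensions x 8 documents

# Semantic matrix: inner products between every pair of document
# columns, normalized here so entries are cosine similarities.
norms = np.linalg.norm(doc_vectors, axis=0)
normalized = doc_vectors / norms
semantic = normalized.T @ normalized  # 8 x 8 semantic matrix

# Relevance semantic matrix: keep only cells above a similarity
# threshold; each surviving off-diagonal cell is a candidate link
# to be stored in the linkbase.
threshold = 0.9
links = [(i, j) for i in range(semantic.shape[0])
         for j in range(i + 1, semantic.shape[1])
         if semantic[i, j] > threshold]
```

Raising or lowering the threshold directly trades link recall against link precision, which is why the threshold is treated as a tunable parameter of the service.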
Generate Lexical Links. This processing module is used by applications interested in lexical links. Given the input information, a simple matching algorithm is performed.
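A simple lexical matching of this kind can be sketched as shared-term overlap; this is our own illustration, and the cited algorithm may differ:

```python
def lexical_links(query_terms, documents, min_shared=2):
    """Link the query to every document that shares at least
    min_shared terms with it (a simple lexical matching)."""
    query = set(query_terms)
    return [name for name, terms in documents.items()
            if len(query & set(terms)) >= min_shared]

docs = {"session-A": ["ubicomp", "capture", "whiteboard"],
        "session-B": ["capture", "context", "linking"],
        "session-C": ["databases", "sql"]}
links = lexical_links(["capture", "context", "audio"], docs)
# links == ["session-B"]  (only session-B shares two or more terms)
```

Unlike the LSI pipeline above, this matching requires no matrix decomposition, which is why it can be offered as a cheaper alternative to applications that only need lexical links.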
Next, we propose the LinkDigger Context Service, defined to relate information extracted from all phases of a capture and access application in a system built with INCA and xINCA, exploiting context information provided by the Context Kernel Web Service. LinkDigger used to relate documents according to their words alone, without considering a filter such as context information.
3 Linking Live Presentations Tailored by Context
We illustrate how a linking service can make use of con-
text information before, during and after live presentations.
The pre-production phase is supported by AutorE, which allows the preparation of sessions to be captured. For instance, it allows a user to set up capture sessions and to associate them with metadata and prepared slides. The live capture phase is supported by iClass, which records material presented during a lecture, such as strokes and slides from an electronic whiteboard, audio, video and Web pages. In the access phase, AutorE allows a user to extend captured information by including links and textual annotations. All information from those three phases is stored in an XML database.
Figure 1 illustrates the LinkDigger Context Service, composed of two basic services: (i) a linking generator, which creates links between information from all phases of a capture and access application and stores those links in the WLS linkbase; and (ii) a context information manager, which collects, manages and stores premises and inferences in the CK database. Those premises and inferences are generated considering context information, metadata and captured information from a capture and access process. LinkDigger consults the context information manager to obtain information, in order to filter it and compose different matrices according to context information. When someone asks for hyperlinks considering context information, the context information manager requests from LinkDigger the relationships from the specific matrices concerned with that context information.
Linking by context in the pre-production phase. The pre-production phase is related to the session setup. Different applications deal with this functionality, and thus manage the preparation of material to be exploited in a live session. The LinkDigger Context Service architecture allows the management of such a high-level abstraction by means of the AutorE authoring system. AutorE provides capture and access systems with modules to support the authoring of multimedia information in terms of preparation, reuse, extension and reference. For our proposal, the presented architecture only needs to interact with the preparation module. In Figure 2, the top Web interface is a typical one for the preparation of a capture session.
Figure 1. LinkDigger Context Service architecture with a capture and access application.
By using the preparation module of AutorE, a user (e.g. a lecturer) can set up a new capture session by reusing material from previous sessions or by adding new material in the form of metadata or prepared slides. In general, at the time of the creation of the course, the lecturer should explicitly provide metadata about the course (e.g. title, keywords and subject) and some context information, such as the name of the classroom (WHERE) and the corresponding subject (WHAT). The lecturer implicitly provides his or her identification (WHO) by logging into the system, as well as the term (WHEN) of the year in which the course is offered. All context information gathered is used to prepare a session and is stored by the context information manager (implemented as the Context Kernel Web Service) in a database (CK DB in Fig. 1). When users are not interested in defining hyperlinks between captured information, they simply do not check the checkboxes which provide these functionalities. These checkboxes are depicted in the bottom area of the top Web interface of Figure 2.
As the preparation module requests context information, the corresponding text (e.g. automatically extracted from slides or metadata forms) is used by LinkDigger to query the existing repository (Fig. 2(a)). Before querying
the repository, the user can check the “context information
checkbox” in order to explicitly provide or change dimen-
sion(s) of context information (Fig. 2(e)) not initially in-
formed by the user. Those dimensions are used to group
documents (homogeneous collections of documents) con-
sidered during the linking process (Fig. 2(f)).
Besides the context information, the query vector sent
to the LinkDigger (Fig. 2(a)) can be extended by users
(Fig. 2(b)). Once the query vector is generated, it is sent to
the LinkDigger (Fig. 2(c)). This vector is processed by the
linking service, which identifies documents semantically-
or lexically-related to the vector. The results from the link-
ing processing are sent to the interface (Fig. 2(d)), which
allows the user to select references that should be used in
the presentation being prepared.
The Linking Generator (implemented as the LinkDigger Service) receives context information and queries, relates information considering both inputs, and sends links to the preparation module. This means that, when preparing a session, users may be automatically presented with suggestions of related material presented in other sessions. Whether exploiting context information or not, the result of the linking process is a list of slides that correspond to other sessions, grouped by context information. The suggestions are presented in the lower part of the original AutorE interface shown in Figure 2.
It is important to observe that any other authoring appli-
cation can be used in the pre-production phase, since it only
needs to send a query vector to the LinkDigger API.
Linking by context in the live capture phase. Given
the requirements for ubicomp applications, we exploit voice
and handwriting recognition as well as text extraction.
The live capture phase depicted in Figure 1 shows a live
session in progress with support of the iClass system: the
prepared slides are presented on an electronic whiteboard
via a Java applet (Fig. 1(b1)); a lecturer may write on top of
the slides and may wear a microphone to capture an audio
stream for the whole session (Fig. 1(b2)) — all functionali-
ties being provided by xINCA capture components (white-
board, video, audio, chat, weblog, etc.).
Figure 2. Linking processing in the pre-production phase: (a) a query vector is created with text
extracted from slides and other session metadata, (b) the query vector can be expanded by keywords
provided by users, (c) the query vector is sent to the linking service, (d) the resulting links are shown
as recommendations, (e) the context information checkbox is selected to allow the use of context
information and (f) the context information selected is sent to LinkDigger.
Figure 3. Linking processing in the live capture phase: (a) context information can be provided in the capture phase; it is used to form a query vector along with text automatically extracted from prepared slides, and the activation of the query is done via a voice command; (b) the context information provided is sent to LinkDigger, and the resulting recommendations are shown in a pop-up window.
The linking service identifies relationships between information manipulated in the live presentations and that previously stored in the XML database from previous sessions. During the live capture session, after activating the creation of links using voice, users can provide context information in order to filter the documents to be related to the captured strokes, which are converted to textual information (Fig. 3(a)). The context information provided is sent to the LinkDigger service (Fig. 3(b)). In order to recognize on-line handwritten characters, we developed the jInk API according to the method proposed by Chan & Yeung. Using voice recognition,1 users can activate the identification of links among the information being captured.
To define links during the live phase, the capture application (iClass) first composes a query vector with information from the capture components, aggregated (or not) with context information. Then iClass sends the vector to the linking service. Texts from the slides presented during the session are used to transparently build the query vector. However, considering such a ubicomp environment, it is important that information from the captured audio stream may also be provided as an interaction alternative to build the query vector. Also, context information can be aggregated to the query vector when a user issues a specific voice command to activate the composition of a query vector with information extracted from the capture components (this composition can also be activated manually via a button on the user interface). Once the command is issued, a pop-up window presenting dimensions of context information is shown to the lecturer, allowing him or her to select dimensions that filter the other captured sessions to be related to the live session in progress.
The query (with context information or not) is sent to the
LinkDigger Context Service. As a result of the processing,
links are defined on-the-fly and presented as recommenda-
tions in a small pop-up window, as shown in Figure 3.
After presenting the links, LinkDigger provides users
with a button in the interface so as to indicate whether or
not a given link should be added as an annotation to the
document automatically generated for the session.
Linking by context in the access phase. At the end of
a session, XML information created by iClass correspond-
ing to a session is used to generate several alternatives of
hyperdocuments for users to review the session. A Web
interface (Fig. 4, window on top) presents a hierarchical
structure of years and terms that gives access to a list of
corresponding sessions that can be reviewed in several for-
mats (Fig. 4, window on the bottom); the sessions can be associated with lectures from courses or project meetings, for instance. The presentation formats supported in the current version are HTML, XHTML, SMIL and an applet that plays back the session by synchronously animating the corresponding strokes (on top of their images) with the audio stream. While interacting with all those interfaces, users can obtain automatically issued recommendations, or can make explicit queries that are used in a query vector submitted to LinkDigger.
1 We exploit IBM ViaVoice™.
Figure 4(a) illustrates that users are able to provide keywords to be used as a query vector. Besides providing keywords, users are able to inform dimensions of context information to be used to filter captured information stored in the iClass XML database (Fig. 4(b)). The dimensions chosen are added to the query vector (Fig. 4(c)). Once this vector has been defined, a request is made to the linking service (Fig. 4(d)). The vector is processed by LinkDigger, which verifies whether there is associated context information in order to filter the whole iClass repository. The results of this process are sent to the presentation interface (Fig. 4(e)). We also intend to use information from the session being reviewed, besides keywords, to relate it to other captured sessions.
By using the LinkDigger Context Service, we aim to allow people to interrelate captured everyday presentations using context information according to their needs. That way, different hypertext networks can be tailored upon the same set of captured information. In the next section, we detail how the LinkDigger Context Service creates links based on context information in the access phase of a capture and access application.
4 Prototype Implementation
LinkDigger and iClass manipulate context information
via the Context Kernel (CK) by, first, registering themselves
to receive their CK identifiers. iClass publishes context in-
formation on the CK database and LinkDigger reads that
information. iClass defines and stores CK rules. LinkDig-
ger retrieves information stored by iClass. The scenarios
presented next are rules stored on the CK by iClass.
In Scenario 1, iClass applies a rule such as "inserting information in the CK DB" to store the following premises in the CK database. The URL "http://...." refers to a Ubiquitous Computing lecture; the where dimension is represented in the element URL.
<premise dimension = "who" type = "author"
value = "John Doe"/>
<premise dimension = "what" type = "lecture"
value = "Ubiquitous Computing"/>
<premise dimension = "when" type = "date"
value = "Fall1999"/>
<inference dimension = "where" type = "url"
value = "http://...."/>
Figure 4. Linking processing in the access phase: (a) keywords provided by users are used as a query vector, (b) context information is provided to filter captured information, (c) the context information is added to the query vector, (d) the query vector is sent to the linking service, and (e) links defined between the query vector and the captured material are presented in the access interface.
LinkDigger will be able to retrieve that context using an
inference as presented in Inference 1. When receiving that
inference, CK checks the existence of some who inference
where type is “author” and value is equal to the name of the
author given as input parameter (e.g. “John Doe”).
<inference dimension="who" type="author" value="John Doe"/>
In a different scenario, LinkDigger may consult the CK
in order to compute links between all URLs relative to
content that has been authored by John Doe. Scenario 2
presents the rule that could be used in such situation.
<premise dimension = "who" type = "author"
value = "John Doe"/>
<inference dimension = "where" type = "url" />
Finally, to present all links between all URLs with con-
tent authored by John Doe in Fall1999, the rule presented
in Scenario 3 could be stored by iClass and retrieved by
LinkDigger from a specific inference.
<boolean type = "AND">
<premise dimension = "who" type = "author"
value = "John Doe"/>
<premise dimension = "when" type = "date"
value = "Fall1999"/>
<inference dimension = "where" type = "url" />
</boolean>
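As an illustration, evaluating a rule such as Scenario 3 against stored context records could proceed as follows. The flat storage model shown is our own simplification for illustration, not CK's actual implementation:

```python
# Toy context records, each mapping dimensions to values.
records = [
    {"who": "John Doe", "when": "Fall1999", "where": "http://a"},
    {"who": "John Doe", "when": "Spring2000", "where": "http://b"},
    {"who": "Jane Roe", "when": "Fall1999", "where": "http://c"},
]

def evaluate_and(premises, inferred_dimension, records):
    """Return the inferred dimension of every record that matches
    ALL premises (the boolean AND combination of Scenario 3)."""
    return [r[inferred_dimension] for r in records
            if all(r.get(dim) == value for dim, value in premises.items())]

# All URLs of content authored by John Doe in Fall1999:
urls = evaluate_and({"who": "John Doe", "when": "Fall1999"}, "where", records)
# urls == ["http://a"]
```

LinkDigger would then restrict its linking computation to the documents behind the returned URLs, yielding the filtered links shown in Figure 5(b).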
Figure 5 presents links generated by LinkDigger in two
situations: (a) the links generated between captured ses-
sions considering the keywords provided by the user and
(b) the links filtered according to context information. The second situation facilitates the use of the generated information when users have specific goals (e.g. when a user just wants to see lectures given by John Doe in Fall 1999). A user with this goal would suffer cognitive overhead looking for those specific links in the interface of Figure 5(a). Context information filters related information; it does not adapt the hyperlink structure as in adaptive hypermedia.
The LinkDigger Context Service has not yet been evaluated in terms of quality measures such as precision and recall2. However, our previous LinkDigger was evaluated considering these measures in a study that investigated how different weighting schemes, filtering methods and similarity thresholds can improve the quality of the links created.
2 Precision and recall are, respectively, the fraction of retrieved documents which are relevant and the fraction of relevant documents effectively retrieved.
Figure 5. (a) The links generated by the LinkDigger Service without considering context information. (b) The links generated by the LinkDigger Context Service considering as anchor the following context information: who (John Doe) and when (Fall 1999).
5 Related Work
Infrastructures for Context-Awareness. Research on context-aware computing has pointed out two challenges for building context-aware applications: (i) the support for several levels of heterogeneity, a basic requirement for ubicomp, and (ii) the apportioning of responsibilities between applications and infrastructures. There are efforts geared towards the construction of services dedicated to capturing, storing and processing context information. The Context Toolkit provides applications with capture, storage, conversion, aggregation, access and distribution of context information. GaiaOS is a middleware in charge of context management, binding, mobility and adaptability. The Aura framework addresses adaptation according to users' mobility and needs. We have used the Context Kernel to address the challenges mentioned in a simpler, but efficient, way. The Context Kernel makes applications context-aware by interchanging context information by means of the standard protocols of the Internet infrastructure. Thus applications can not only store and retrieve context information, but also exchange it through the Web.
Infrastructures for Link Authoring. From the information retrieval point of view, previous work on link authoring presented methods to minimize ambiguities or problems associated with synonymy and polysemy. Other studies proposed methods to generate links considering user interactions in portable computers. Structural analysis methods have also been exploited in the creation of links, such as the frequency of relationships between citations of documents, structural analysis in different contexts, and the combination of textual and structural analysis. Open hypermedia concepts and agents have also been used to define hyperlinks, where the main parameters are the user needs, the context defined in previous searches, and the semantic similarities calculated between Web pages visited by users. HyCon is a step further in context-aware mobile hypermedia, where user-authored annotations, links, and guided tours associate locations with maps and Web pages. The LinkDigger service identifies semantically related documents by means of the LSI method, user feedback, and the statistical distribution of data. The structure of documents is also exploited, but in a limited way, since the documents processed so far have similar structures for contents and metadata (homogeneous repositories).
From the capture and access and the context-awareness points of view, location tracking systems can be considered instances of automatic identification of relationships between captured data. Those relationships have usually been presented as hyperdocuments, for instance, to track people's routes so as to identify and locate them, and to record actions to support short-term memory. Some authors have also investigated prototypes that combine principles from augmented reality and hypermedia to support organizing and managing digital and physical material in terms of spatial relationships. The NoteLook system has an image matching technique to define multimedia links in paper-like interfaces. In Tivoli, gesture recognition interrelates information during a live meeting. The integration of material produced before meeting sessions is carried out by allowing users to import text, images and prepared hypermedia documents. In the post-production phase, the Tivoli system allows users to import captured content into a collaborative hypermedia system. In our work, the LinkDigger and Context Kernel services were integrated so as to create links from all phases of a capture and access application.
The iRoom project addresses a goal similar to ours in the domain of military meetings. Its foundation is the MIT Metaglue multi-agent software, which allows relating information that can be used to describe the situation of an entity considered relevant. The most likely use of context information to define relationships is in searching for information. For instance, an application integrated with the iRoom may automatically gather information related to the subject discussed by members of a meeting, with no explicit intention to start a search. Context information seems to be a reasonable solution to allow the environment to identify which activities are taking place in the meeting room, and which information should be retrieved.
To the best of our knowledge, the related work above does not exploit context information to create links over repositories of captured information. Although the literature reports important advances in the aforementioned realms, the integration of those results toward automatically identifying relationships has not been reported to the degree we propose in this paper.
6 Final Remarks
Considering information registered in capture and access
applications, linking services can be used to relate informa-
tion produced before, during and after captured sessions.
Moreover, context-aware services can be exploited to allow different hypertexts to be defined upon the same set of captured information.
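To make this idea concrete, the hypothetical sketch below restricts TF-IDF similarity linking to a context-defined scope, so that the same repository yields different link sets under different contexts. The names, thresholds, and smoothed IDF formula are ours for illustration, not the actual LinkDigger Context Service API:

```python
# Hypothetical sketch: context-scoped similarity linking over a
# captured-session repository. Names and thresholds are ours,
# not the actual LinkDigger Context Service interface.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of token lists.
    Uses a smoothed IDF, log(1 + N/df), so that terms shared by
    every document in a tiny collection keep nonzero weight."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(1 + n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def link(repo, context, threshold=0.1):
    """Link pairs of documents that share the given context value
    and whose TF-IDF cosine similarity exceeds the threshold."""
    scoped = [d for d in repo if d["context"] == context]
    vecs = tfidf_vectors([d["tokens"] for d in scoped])
    pairs = []
    for i in range(len(scoped)):
        for j in range(i + 1, len(scoped)):
            if cosine(vecs[i], vecs[j]) > threshold:
                pairs.append((scoped[i]["id"], scoped[j]["id"]))
    return pairs

repo = [
    {"id": "slides", "context": "lecture", "tokens": ["ubicomp", "capture", "access"]},
    {"id": "notes",  "context": "lecture", "tokens": ["capture", "access", "links"]},
    {"id": "chat",   "context": "meeting", "tokens": ["capture", "agenda"]},
]
# Scoping by context yields different hypertexts over the same repository:
print(link(repo, "lecture"))  # → [('slides', 'notes')]
print(link(repo, "meeting"))  # only one document in scope: []
```

Changing the context value changes which documents fall inside the scope, and therefore which links exist, without touching the repository itself.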
In this paper, we discussed how we have designed a soft-
ware infrastructure that, making use of previous efforts, al-
lows the automatic generation of links to be influenced by
context-sensitive information. This work shows that it is both possible and useful to exploit context for link creation in ubicomp systems. The novelty of our contribution is the
use of automatically extracted context information to define
relationships in capture and access applications.
The LinkDigger Context Service is a preliminary proto-
type. More experiments with real users are needed to evaluate the effectiveness of the proposed approach, for example, the reuse of information. As future work, we in-
tend to apply our service in different lecture settings – tradi-
tional presentations, laboratory sessions, and collaborative
sessions – so as to investigate demands for context infor-
mation. The iClass system is being reformulated to manipulate metadata according to e-learning standards (e.g. LOM). In the long term, we plan for LinkDigger to take those metadata into account in order to ensure the interoperability of our approach with Learning Object repositories.
Acknowledgments
The authors are supported by the Brazilian funding agencies FAPEMA (n.03/345), FAPESP (03/13930-4 and 04/12477-7) and CNPq.
References
 M. Weiser, “The computer for the 21st century,” Scientific
American, vol. 265, pp. 94–104, September 1991.
G. Abowd, E. Mynatt, and T. Rodden, "The human experience," IEEE Pervasive Computing, vol. 1, no. 1, pp. 48–57, 2002.
 M. G. C. Pimentel, G. D. Abowd, and Y. Ishiguro, “Link-
ing by interacting: a paradigm for authoring hypertext,” in
Proc. ACM Conference on Hypertext, pp. 39–48, 2000.
 P. Chiu, J. Boreczky, A. Girgensohn, and D. Kimber,
“LiteMinutes: an Internet-Based System for Multimedia
Meeting Minutes,” in Proceedings of the 2001 International
World Wide Web Conference, pp. 140–149, 2001.
 H. Richter, G. D. Abowd, W. Geyer, L. Fuchs, S. Daijavad,
and S. Poltrock, “Integrating meeting capture within a col-
laborative team environment,” in Proc. of the International
Conference on Ubiquitous Computing, pp. 123–138, 2001.
G. D. Abowd, "Classroom 2000: an experience with the instrumentation of a living educational environment," IBM Systems Journal, vol. 38, pp. 508–530, 1999.
 A. K. Dey, G. D. Abowd, and D. Salber, “A context-based
infrastructure for smart environments,” in Proceedings of the
International Workshop on Managing Interactions in Smart
Environments, pp. 114–128, 1999.
 M. G. C. Pimentel, Y. Ishiguro, B. Kerimbaev, G. D. Abowd,
and M. Guzdial, “Supporting long-term educational activ-
ities through dynamic Web interfaces,” Interacting With
Computers Journal, vol. 13, pp. 353–374, 2001.
 A. A. Macedo, J. A. Camacho-Guerrero, R. G. Cattelan,
V. R. Inacio Jr, and M. G. C. Pimentel, “Interaction alter-
natives for linking everyday presentations,” in Proc. ACM
Conference on Hypertext, pp. 112–113, 2004.
G. Salton, "Another look at automatic text-retrieval systems,"
Comm. of the ACM, vol. 29, pp. 648–656, July 1986.
A. A. Macedo, M. G. C. Pimentel, and J. A. Camacho-Guerrero, "An infrastructure for open latent semantic linking," in Proc. ACM Conference on Hypertext, (College Park, Maryland, USA), pp. 107–116, 2002.
 A. A. Macedo, K. N. Truong, J. A. Camacho-Guerrero,
and M. G. C. Pimentel, “Automatically sharing Web expe-
riences through a hyperdocument recommender system,” in
Proc. ACM Conference on Hypertext, pp. 48–56, 2003.
 R. Cattelan, L. Baldochi, and M. Pimentel, “Processing and
storage middleware support for capture and access applica-
tions,” in Proceedings of the 2003 ACM/IFIP/USENIX Inter-
national Middleware Conference, p. 315, 2003.
K. N. Truong and G. D. Abowd, "INCA: a software infrastructure to facilitate the construction and evolution of ubiquitous capture & access applications," in Proc. Pervasive 2004: The Second International Conference on Pervasive Computing, (Austria), pp. 140–157, April 2004.
C. R. E. Arruda Jr, R. F. Bulcão Neto, and M. G. C. Pimentel, "Open context-aware storage as a web service," in
Proceedings of the International Workshop on Middleware
for Pervasive and Ad-Hoc Computing – ACM/IFIP/USENIX
International Middleware Conference, pp. 81–87, 2003.
R. F. Bulcão Neto, C. O. Jardim, J. A. Camacho-Guerrero, and M. G. C. Pimentel, "A web service approach for providing context information to CSCW applications," in Pro-
ceedings of the WebMedia & LA-Web 2004 Joint Conference
Brazilian Symposium on Multimedia and the Latin American
Web Congress, pp. 46–53, IEEE Computer Society, 2004.
 A. A. Macedo, M. G. C. Pimentel, and J. A. C. Guerrero,
“Latent semantic linking over homogeneous repositories,” in
Proceedings of the ACM Symposium on Document Engineer-
ing, (USA), pp. 144–151, 2001.
 G. W. Furnas, S. Deerwester, S. T. Dumais, T. K. Landauer,
R. A. Harshman, L. A. Streeter, and K. E. Lochbaum, “Infor-
mation retrieval using a singular value decomposition model
of latent semantic structure,” in Proceedings of Conference
on Research and Development in Information Retrieval (SI-
GIR), (Grenoble, France), pp. 465–480, ACM Press, 1988.
 J. A. Camacho-Guerrero, A. A. Macedo, and M. G. C. Pi-
mentel, “A look at some issues during textual linking of ho-
mogeneous Web repositories,” in Proc. ACM Document En-
gineering Symposium, (USA), pp. 74–83, October 2004.
engine software." Internet (visited on 28/07/2005), 2005.
 G. Salton and C. Buckley, “Term-weighting approaches in
automatic text retrieval,” Information Processing and Man-
agement, vol. 24, no. 5, pp. 513–523, 1988.
 S. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas,
and R. A. Harshman, “Indexing by latent semantic analysis,”
Journal of the American Society for Information Science, vol. 41, no. 6,
pp. 391–407, 1990.
M. Pimentel, D. Sante, R. Bulcão Neto, C. Izeki, and
R. Fortes, “Preparing and extending capture-based docu-
ments,” in Proc. of the Int. Information and Telecommuni-
cation Technologies Symposium, (Brazil), pp. 1–8, 2003.
R. F. Bulcão Neto, C. A. Izeki, M. G. C. Pimentel, R. P. M.
Pontin, and K. N. Truong, “An open linking service support-
ing the authoring of Web documents,” in Proc. ACM Docu-
ment Engineering Symposium, (USA), pp. 66–73, 2002.
 K. F. Chan and D. Y. Yeung, “A simple yet robust structural
approach for recognizing on-line handwritten alphanumeric
characters,” in Proceedings of the International Workshop on
Frontiers in Handwriting Recognition, pp. 229–238, 1998.
C. K. Hess, M. Román, and R. H. Campbell, "Building ap-
plications for ubiquitous computing environments,” in Pro-
ceedings of the International Conference on Pervasive Com-
puting, pp. 16–29, 2002.
 D. Garlan et al., “Project Aura: toward distraction-free per-
vasive computing,” IEEE Pervasive Computing, vol. 1, no. 2,
pp. 22–31, 2002.
G. Golovchinsky, "What the query told the link: the integration of hypertext and information retrieval," in Proc. ACM
Conference on Hypertext, (UK), pp. 67–74, 1997.
 M. Price, G. Golovchinsky, and B. Schilit, “Linking by ink-
ing: trailblazing in a paper-like hypertext,” in Proc. ACM
Conference on Hypertext, (USA), pp. 30–39, 1998.
 H. Small, “Co-citation in the scientific literature: A new
measure of the relationship between two documents,” Jour-
nal of the American Society for Information Science, vol. 24,
pp. 265–269, February 1973.
 M. R. Henzinger, “Hyperlink analysis for the Web,” IEEE
Internet Computing, vol. 5, pp. 45–50, January 2001.
 P. Calado, B. Ribeiro-Neto, N. Ziviani, E. Moura, and
I. Silva, “Local versus global link information in the Web,”
ACM Trans. on Information Systems, vol. 21, no. 1, pp. 42–63, 2003.
 K. Sugiyama, K. Hatano, M. Yoshikawa, and S. Uemura,
“Refinement of TF-IDF schemes for Web pages using their
hyperlinked neighboring pages,” in Proc. ACM Conference
on Hypertext, (Nottingham, UK), pp. 198–207, 2003.
 S. R. El-Beltagy, W. Hall, D. DeRoure, and L. Carr, “Linking
in context," in Proc. ACM Conference on Hypertext, (Århus,
Denmark), pp. 151–160, ACM Press, August 2001.
F. A. Hansen, N. O. Bouvin, B. G. Christensen, K. Grønbæk, T. B. Pedersen, and J. Gagach, "Integrating the web
and the world: Contextual trails on the move,” in Proc. ACM
Conference on Hypertext, (USA), pp. 98–107, 2004.
 J. Trevor, D. Hilbert, D. Billsus, J. Vaughan, and Q. Tran,
“Contextual contact retrieval,” in Proc. of the Int. Confer-
ence on Intelligent User Interfaces, (Portugal), pp. 337–339, 2004.
K. Grønbæk, J. F. Kristensen, P. Ørbæk, and M. A. Eriksen,
““Physical hypermedia”: organising collections of mixed
physical and digital material," in Proc. ACM Conference on
Hypertext, (Nottingham, UK), pp. 10–19, November 2003.
 P. Chiu, J. Foote, A. Girgensohn, and J. Boreczky, “Auto-
matically linking multimedia meeting documents by image
matching,” in Proc. ACM Conference on Hypertext, (San An-
tonio, TX, USA), pp. 244–245, ACM Press, 2000.
 J. M. Haake, C. M. Neuwirth, and N. A. Streitz, “Coexis-
tence and transformation of informal and formal structures:
Requirements for more flexible hypermedia systems," in Proceedings of the Eu-
ropean Conference on Hypertext, pp. 1–12, 1994.
 J. Scholz, M. Grigg, P. Prekop, and M. Burnett, “Develop-
ment of the software infrastructure for a ubiquitous com-
puting environment: the DSTO iRoom," in Proceedings of the Australasian Information Security Workshop Conference on ACSW Frontiers 2003, pp. 169–176, Australian Computer Society, Inc., 2003.