EnvB: An Environment-based Mobile Browser
for the Web of Things
Pierrick Thébault 1, 2, Mathieu Boussard 1,
Monique Lu 1, Cédric Mivielle 1
1 Alcatel-Lucent Bell Labs France
Route de Villejust
91620 Nozay, France
{firstname.lastname}@alcatel-lucent.com
Simon Richir 2
2 Arts et Métiers Paristech, LAMPA
2, Bd du Ronceray
49000 Angers, France
pi.laval@ensam.fr
ABSTRACT
The growing number of tagged or Web-enabled objects
today opens up the possibility of designing object-based
applications and services. In this paper, we discuss the
concept of an “object browser” for the Web of Things and
present an Environment-based mobile Browser (EnvB) that
facilitates interaction with the resources (objects, services
and people) of a physical place.
Author Keywords
Environment-based browser, object browser, web of things,
internet of things, smart objects, smart places, user
interface, mobile application.
ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI):
Miscellaneous.
INTRODUCTION
By tagging everyday things with visual markers (e.g.
linear or matrix barcodes) or radio-frequency labels
(e.g. passive RFID tags), but also by embedding
connectivity chipsets and limited computational capabilities
(e.g. tiny web servers) into all sorts of appliances,
researchers have tried to bridge the gap between the physical
and digital worlds. While a lot of research is currently
conducted to create large-scale networks of sensors and
actuators, initiatives aiming to create small applications or
services on top of real-world objects (RWO) have been
launched in the Web of Things community [6,10].
Most of these initiatives take advantage of a mobile device
as a way to interact with objects whose user interfaces were
not designed for extended capabilities or advanced
personalization. As mobile phones already enable the
identification of digitally enriched objects, thanks to the
ongoing integration of cameras and Near Field
Communication (NFC) modules, the concept of a mobile
“object browser” facilitating the representation of, and
interaction with, the digital counterpart of an object (e.g.
annotations, linked resources, web services, etc.) is
currently being explored.
In this paper, we discuss related work on mobile
browsers for the Web of Things and highlight the issues
that led us to change our approach to interaction with smart
objects. After presenting the overall concepts underlying
the creation of a mobile Environment-based Browser,
which allows users to interact with the available resources
(objects, services, people) of a physical place, we describe
the user interface and the implementation of our
prototype. We conclude by illustrating the added value of
such a tool with short use case examples.
RELATED WORK
Although the Ubiquitous Computing community pointed out
the use of mobile phones as input devices for various resources
(e.g. situated displays, vending machines or home
appliances) [3] and compared mobile interaction techniques
[9] a few years ago, the concept of a browser for the Web of
Things is relatively new. Inspired by Kindberg’s work on
“Web presence” [7], most recent projects make use
of visual markers or NFC technology to shape in-situation
interactions with smart objects. In this section, we
describe this related work and discuss its approach.
“Object browser” examples
With “BIT” [8], Roduner explored the possibility of
retrieving information and services directly from tagged
things or objects using a single runtime environment.
Services digitally enriching RWO (e.g. one that offers to
change a coffee machine’s water hardness settings) are
delivered to the user through a mobile application, allowing
for the design of a unified user experience. Based on the
concept of “physical mashups” [5], Guinard proposed a
mobile application enabling the creation of applications
mixing RWO and services (e.g. one that automatically turns
the heating off when the user is away from home). Users
are given the opportunity to easily create new object
behaviors through a mobile wizard-based editor and export
them to an execution framework. In previous work [4], we
created a mobile application allowing users to interact with
the virtual representation of an object’s functions and status
(e.g. turning a lamp on or off), which we refer to as a
“virtual object” (VO). VOs also allow users to instantiate
composite applications on their Web-enabled RWO (e.g.
one that makes a lamp blink in a specific color and a
television display a personalized message when the user
receives a phone call).

Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise,
or republish, to post on servers or to redistribute to lists, requires prior
specific permission and/or a fee.
CHI 2009, April 4–9, 2009, Boston, MA, USA.
Copyright 2009 ACM 978-1-60558-246-7/08/04…$5.00
Issues with this approach
By relying on touching or pointing interaction techniques,
these examples require users to look for tagged objects in
their environment and to stand in front of them every time
they want to access their associated services or create new
behaviors. While this approach has benefits [9], it does not
allow users to conveniently deal with smart objects on a daily
basis or from a remote location. Aggregating all virtual
representations of objects (i.e. VOs) into a gateway allowed
us to provide the user with new in-situation and off-
situation object browsing experiences, but ways to filter
objects according to a user’s perspective are still needed. We
argue that people might want to interact with only a limited
number of the objects that are physically present in a place,
and to group them in a way that supports their activities (e.g.
one might want a quick and clear view of the objects one
is monitoring or often interacting with). We also assume
that users would be interested in interacting with the
applications built on top of their RWO without having to
touch or point at a tagged object first. As object-based
applications are most likely to rely on users’ presence and
RWO availability, we suggest considering objects as
components of a larger ecosystem, where people and services
also play an important role. For these reasons, we propose to
shift from an object-based to an environment-based browser,
presenting users with services based upon the resources of
physical places.
DESIGNING AN ENVIRONMENT-BASED BROWSER
In this section, we first present the overall concepts
underlying “EnvB”, an Environment-based Browser for
mobile platforms that provides users with a unified
representation of the available resources of a physical place.
We then detail the user interface that has been designed.
Concepts
We argue that physical places (e.g. a shop, a house, a
museum, a subway station) should be considered the
primary entry point for browsing and interacting with the
objects, services and people that are present in or linked to
a specific environment. In our vision, place owners are
free to choose which resources (e.g. object-based
applications, virtual objects, traditional web services, user-
generated content, etc.) should be made visible or pushed to
users. EnvB is a personal application specifically designed
for mobility that mirrors the smart capabilities of physical
places. It assists users in their urban trips by giving them
the opportunity to seamlessly connect from one place to
another and interact with the resources they are interested
in. EnvB is based on the concepts described in the
following.
Place representation. In our vision, a digital space
aggregating resources is embedded in each physical place.
We chose to design these digital representations of places
as dedicated portals based on a common template that can
be slightly personalized or branded by the place owner. Place
portals differ from traditional websites in the level of
contextual interaction they offer. While all portals can be
browsed in-situation and off-situation, some resources
might, for instance, not be accessible from a remote
location. Resources can also be filtered or re-organized
according to users’ preferences, saved for each place.
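The owner-controlled visibility of portal resources described above can be sketched in a few lines of Java. This is a minimal illustration; the class and method names are hypothetical and not part of the EnvB implementation.

```java
import java.util.*;

// Hypothetical sketch of a place portal whose owner decides
// which resources are visible to visiting users.
public class PlacePortal {
    private final String placeName;
    private final Map<String, Boolean> visibility = new LinkedHashMap<>();

    public PlacePortal(String placeName) { this.placeName = placeName; }

    // The place owner registers a resource and marks it visible or hidden.
    public void registerResource(String resourceId, boolean visible) {
        visibility.put(resourceId, visible);
    }

    // Users browsing the portal only see resources the owner exposed.
    public List<String> visibleResources() {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : visibility.entrySet())
            if (e.getValue()) out.add(e.getKey());
        return out;
    }

    public static void main(String[] args) {
        PlacePortal museum = new PlacePortal("museum");
        museum.registerResource("booking-widget", true);
        museum.registerResource("staff-chatroom", false);
        System.out.println(museum.visibleResources()); // [booking-widget]
    }
}
```

Per-place user preferences (filtering or re-ordering) would then apply a second, user-side filter on top of this owner-side one.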
Resources representation. VOs, object-based applications
and web services belonging to a physical place are
displayed on its portal as graphical widgets. These widgets
provide users with a comprehensive representation of
resources’ capabilities and support direct manipulation.
While VOs consist of single-view widgets that can be
easily restyled, object-based applications or traditional
services (e.g. a booking module for a museum or theater)
can be based on a more complex layout. To support re-
configuration phases or multi-step processes, multi-view
widgets (ideally designed according to common guidelines)
should also be supported.
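The distinction between single-view and multi-view widgets can be sketched as follows; names are hypothetical and the sketch only illustrates the stepping behavior, not rendering.

```java
import java.util.*;

// Hypothetical sketch: single-view widgets render one view, while
// multi-view widgets step through several views (e.g. a booking flow).
public class Widgets {
    public interface Widget { String currentView(); }

    public static class SingleViewWidget implements Widget {
        private final String view;
        public SingleViewWidget(String view) { this.view = view; }
        public String currentView() { return view; }
    }

    public static class MultiViewWidget implements Widget {
        private final List<String> views;
        private int step = 0;
        public MultiViewWidget(List<String> views) { this.views = views; }
        public String currentView() { return views.get(step); }
        // Advance to the next step of a multi-step process.
        public void next() { if (step < views.size() - 1) step++; }
    }

    public static void main(String[] args) {
        SingleViewWidget lamp = new SingleViewWidget("lamp-toggle");
        MultiViewWidget booking =
            new MultiViewWidget(Arrays.asList("pick-date", "pick-seats", "confirm"));
        booking.next();
        System.out.println(lamp.currentView() + " / " + booking.currentView());
        // prints "lamp-toggle / pick-seats"
    }
}
```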
Place parameters. Instrumenting a physical place with a
small cellular base station (e.g. a Femtocell [1]) allows us to
enrich the experience of place portals. The identification of
users through their International Mobile Subscriber Identity
(IMSI) makes possible the creation of communication
features (e.g. an in-place chatroom or information wall) and
can be used to deal with access-rights issues. Some object-
based applications or VOs can therefore be restricted to
people that are physically present in the place or part of the
place owner’s social graph. Based on the activity and
presence of users, it is also possible to propose a
representation of the ambience that helps people decide
whether or not to visit a place.

Figure 1. Places and layers can be accessed through a list
view (left) or a map view (right).
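The access rules above (physical presence detected by the in-place base station, or membership of the owner's social graph) could be checked as follows, assuming IMSIs are available as strings. All names are ours, not EnvB's API.

```java
import java.util.*;

// Hypothetical sketch of the access rules: a restricted resource is
// reachable by users physically present in the place (their IMSI is
// seen by the local base station) or by members of the place owner's
// social graph.
public class AccessPolicy {
    private final Set<String> presentImsis = new HashSet<>();
    private final Set<String> ownerSocialGraph = new HashSet<>();

    // Called when the in-place cellular base station detects a user.
    public void markPresent(String imsi) { presentImsis.add(imsi); }

    // Called when the place owner adds a contact to their social graph.
    public void addFriend(String imsi) { ownerSocialGraph.add(imsi); }

    public boolean canAccessRestricted(String imsi) {
        return presentImsis.contains(imsi) || ownerSocialGraph.contains(imsi);
    }

    public static void main(String[] args) {
        AccessPolicy policy = new AccessPolicy();
        policy.markPresent("208011234567890");
        policy.addFriend("208019876543210");
        System.out.println(policy.canAccessRestricted("208011234567890")); // true
        System.out.println(policy.canAccessRestricted("208015555555555")); // false
    }
}
```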
Resources layering. While we promote resource browsing
through physical places, we do not want to constrain people
to constantly jump from one place portal to another. We
therefore propose to let users create another type of portal,
called a layer, in which they can aggregate resources
belonging to different places. Inspired by the
augmented-reality browser Layar [2], this concept can be
seen as a way to bookmark widgets and store them for a
more practical use (e.g. a layer allowing family members to
monitor or interact with certain objects of different houses)
or to filter physical places according to a certain theme
(e.g. a layer dedicated to pollution measurement).
User interface design
A first version of the EnvB user interface has been designed
for high-resolution touch-screen mobile phones. It provides
users with two browsing modes and with scrollable place and
layer portals, described in the following.
List view. Lists of places, layers and bookmarks are
presented to users when they start the application (Figure
1). Users can filter the results according to several
parameters (e.g. category of place, distance, presence,
ambience), search for a specific place or layer, or teleport to
another location.
Map view. Users can switch from the list view to an
“explorer” view showing places and layers on a Google
Map mashup (Figure 1). This visualization mode gives an
overview of a place or layer and allows users to quickly
slide from one result to another.
Portal view. After selecting an item in the list or the
explorer, users enter a portal whose color depends on its
type (i.e. red for places and green for layers). On both place
and layer portals, widgets are presented as cards that can be
scrolled, reconfigured or bookmarked (Figure 2). A bottom
menu bar triggering the display of a popup window allows
users to access the presence, ambience and activity
representations of the physical place. It also gives
information about the composition of the portal (e.g. layers
that use the resources of the place, and vice versa). By
touching the top-right corner icon, users can finally
bookmark the place/layer portal and personalize the type of
widgets they want presented.
PROTOTYPE IMPLEMENTATION
A prototype of the EnvB mobile application following these
concepts has been implemented using the Android platform.
In this section, we describe the mobile client modules and
server-side components that are part of the overall
architecture of our system (Figure 3).
Mobile client
We chose to use Android native APIs for the logic-related
parts of the prototype in order to ensure responsive and
seamless access for end users. We also took advantage of
the “WebView” mechanism for rendering VOs and
object-based applications. This allows us to dynamically
plug new resources provided by third-party developers into
a place portal. The mobile client includes the following
modules:
Place_Agent. This module interacts with the “place/layer
resolver” to retrieve the list of accessible portals.
VO_Agent. This module communicates with “place
enablers” to access and control resources.
Eventing. This module listens to the event channel of each
resource and reports any resource-related event.
WebViewRendering. This module loads resource-related
HTML data and renders it in a WebView.
The mobile client has been implemented in Java using
Android SDK 2.1. Tests were conducted on Samsung
Galaxy S and HTC Desire smartphones.
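As a rough, Android-independent illustration of the Eventing module's role, a per-resource publish/subscribe channel might be sketched as follows (all names are hypothetical, not the prototype's actual code):

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical sketch of the Eventing module: listeners subscribe to a
// resource's event channel and are notified of resource-related events
// (e.g. a lamp being switched off).
public class Eventing {
    private final Map<String, List<Consumer<String>>> channels = new HashMap<>();

    // Register a listener on a given resource's channel.
    public void subscribe(String resourceId, Consumer<String> listener) {
        channels.computeIfAbsent(resourceId, k -> new ArrayList<>()).add(listener);
    }

    // Report an event to every listener of that resource.
    public void publish(String resourceId, String event) {
        for (Consumer<String> l :
                channels.getOrDefault(resourceId, Collections.emptyList()))
            l.accept(event);
    }

    public static void main(String[] args) {
        Eventing bus = new Eventing();
        List<String> received = new ArrayList<>();
        bus.subscribe("lamp-1", received::add);
        bus.publish("lamp-1", "switched-off");
        System.out.println(received); // [switched-off]
    }
}
```

In the prototype, such events would typically be forwarded to the WebViewRendering module so that widgets reflect the resource's current state.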
Server-side components
The mobile client interacts with dedicated server-side
components related to place/layer retrieval and
management. These components include:
Place/layer resolver. This module provides the client with
a list of relevant places or layers depending on the user’s
location and profile. The location is determined through
modular mechanisms (e.g. GPS coordinates provided by the
mobile client, or micro-cell presence provided by the
Telco infrastructure).
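A minimal sketch of the resolver's location-based ranking, assuming GPS coordinates and ignoring the user profile and Telco presence mechanisms; the names and the planar distance approximation are ours, not the paper's:

```java
import java.util.*;

// Hypothetical sketch of the place/layer resolver: given the user's
// position, return the nearest registered places.
public class PlaceResolver {
    public static class Place {
        final String name; final double lat, lon;
        public Place(String name, double lat, double lon) {
            this.name = name; this.lat = lat; this.lon = lon;
        }
    }

    private final List<Place> places = new ArrayList<>();
    public void register(Place p) { places.add(p); }

    // Squared planar distance: a crude approximation, but enough to
    // rank nearby places.
    private static double dist2(Place p, double lat, double lon) {
        double dLat = p.lat - lat, dLon = p.lon - lon;
        return dLat * dLat + dLon * dLon;
    }

    public List<String> resolve(double lat, double lon, int max) {
        List<Place> sorted = new ArrayList<>(places);
        sorted.sort(Comparator.comparingDouble((Place p) -> dist2(p, lat, lon)));
        List<String> out = new ArrayList<>();
        for (Place p : sorted.subList(0, Math.min(max, sorted.size())))
            out.add(p.name);
        return out;
    }

    public static void main(String[] args) {
        PlaceResolver resolver = new PlaceResolver();
        resolver.register(new Place("museum", 48.86, 2.34));
        resolver.register(new Place("shop", 48.90, 2.40));
        System.out.println(resolver.resolve(48.85, 2.33, 1)); // nearest place
    }
}
```

A production resolver would also filter by the user's profile and fall back to micro-cell presence when no GPS fix is available.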
Place enabler. This software component aggregates the
information resources of a physical place. It includes a VO
Gateway [4] and is implemented using the OSGi
framework. This allows for a high level of modularity (e.g.
virtual objects are embodied as OSGi bundles, which
simplifies their provisioning) and provides utility functions
such as persistency, access control and eventing.

Figure 2. Place and layer portals present resources as
widgets (left) and provide parameters like presence (right).
Layers. Layers are collections of resources that can be
hosted in different places. They are embodied by an XML
document served by a specific instance of the place enabler.
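Since layers are served as XML documents, a layer such as a family or neighborhood layer might look like the following. The element names, attributes and URLs are purely illustrative; the paper does not specify the schema.

```xml
<layer name="neighborhood" owner="peter">
  <!-- Each entry points back to the place enabler hosting the resource -->
  <resource place="http://house-a.example/enabler" widget="energy-meter"/>
  <resource place="http://house-b.example/enabler" widget="energy-meter"/>
  <resource place="http://community.example/enabler" widget="leaderboard"/>
</layer>
```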
USE CASE EXAMPLES
The following examples illustrate the added value of our
Environment-based browser for end-users:
- In homes, guests have the opportunity to seamlessly
interact with existing object-based applications displayed
on the place portal. E.g. Natasha temporarily changes
Bob’s “metro warning” widget settings to be sure she will
not miss the last train.
- In restaurants or bars, people are free to interact with
public displays or request to change the sound level of
audio devices. E.g. Paul requests the playback of his
favorite video clip through the “video jukebox” widget.
- In trains, travelers can get in touch with each other, offer
to share their personal Internet connection, or interact with
services. E.g. Mike uses the “Taxi sharing” widget and the
in-place chatroom to plan his trip to the airport.
- In the city, users can connect to a remote place to check
the number of people, the ambience and get information.
E.g. Elsa checks whether the product she wants to buy is
available before going to a crowded RFID-enabled shop.
- Anywhere, users can browse and interact with resources
of different places through layers. E.g. Peter connects to his
“neighborhood” layer portal to check the “energy savers
leaderboard” widget and post a personal ad.
CONCLUSION
In this paper, we presented an overview of initiatives
leveraging Web of Things mobile browsers to facilitate
end-user interactions with tagged or Web-enabled objects
and highlighted the limits of their object-based approach.
We argued that using touching and pointing interaction
techniques to access digitally augmented objects can be
cumbersome for users because of their repetitiveness. As
object-based applications or services are most likely to rely
on several objects and on users’ presence to operate, we
proposed to shift from an object-based browser to an
environment-based approach in which physical places are
considered a better entry point. After presenting the
overall concepts underlying our vision, we described the
current status of EnvB, a mobile application for
browsing and interacting with all the resources (objects,
services, people) of a physical place. In future work, we
will explore the types of services that can be delivered by
physical places and investigate the user acceptance of our
concepts and prototype through several user research tracks.
REFERENCES
1. Femto Forum. http://www.femtoforum.org/femto/.
2. Layar Reality Browser. http://www.layar.com/.
3. Ballagas, R., Borchers, J., Rohs, M., and Sheridan, J.G.
The smart phone: a ubiquitous input device. IEEE
Pervasive Computing 5, 1 (2006), 70–77.
4. Boussard, M. and Thébault, P. Navigating the Web of
Things: Visualizing and Interacting with Web-Enabled
Objects. Leveraging Applications of Formal Methods,
Verification, and Validation, (2010), 390–398.
5. Guinard, D. Mashing up Your Web-Enabled Home.
Adj. Proc. of the International Conference on Web
Engineering (ICWE 2010), Vienna, Austria, (2010).
6. Guinard, D. and Trifa, V. Towards the web of things:
Web mashups for embedded devices. Workshop on
Mashups, Enterprise Mashups and Lightweight
Composition on the Web (MEM 2009), in Proc. WWW
2009, Madrid, Spain, (2009).
7. Kindberg, T., Barton, J., Morgan, J., et al. People,
places, things: web presence for the real world. Mob.
Netw. Appl. 7, (2002), 365–376.
8. Roduner, C. BIT – A Browser for the Internet of Things.
Proc. of the CIoT Workshop at the Eighth International
Conference on Pervasive Computing (Pervasive 2010),
(2010), 4–12.
9. Rukzio, E., Leichtenstern, K., Callaghan, V., Holleis,
P., Schmidt, A., and Chin, J. An experimental
comparison of physical mobile interaction techniques:
Touching, pointing and scanning. UbiComp 2006:
Ubiquitous Computing, (2006), 87–104.
10. Wilde, E. Putting things to REST. School of
Information, UC Berkeley, Tech. Rep. UCB iSchool
Report 15, (2007).
Figure 3. Overall prototype architecture.