Navigating the Web of Things: visualizing and
interacting with Web-enabled objects
Mathieu Boussard¹, Pierrick Thébault¹,²

¹ Alcatel-Lucent Bell Labs France, Route de Villejust, 91620 Nozay, France
{mathieu.boussard, pierrick.thebault}@alcatel-lucent.com
² Arts et Métiers Paristech, LAMPA, 2 Bd du Ronceray, 49000 Angers, France
pi.laval@ensam.fr
Abstract. The Web of Things vision aims to turn real-world objects into
resources of the Web. The creation of accessible and addressable Virtual
Objects, i.e. services that expose the status and capabilities of connected
real-world objects using REST APIs, allows for new machine-to-machine
interactions to be designed but also for a new user experience to be shaped.
Indeed, the change brought about by the connectivity of objects and their ability
to share information raises design issues that have to be considered by
manufacturers and service providers alike. In this paper, we present an approach
to the Web of Things based on both technical and user-centered points of view.
We argue that new user interfaces have to be designed to allow users to
visualize and interact with virtual objects and environments that host them. We
illustrate our vision with an interaction mockup that allows the user to navigate
the Web of Things through three devices (a smart phone, a web tablet and a
desktop web browser).
Keywords: Pervasive computing, Internet of Things, Web of Things, user
experience, interaction design, user interfaces, ambient intelligence, browser.
1 Introduction
When the term ‘Ubiquitous computing’ was coined in the early 90’s by Mark Weiser
of Xerox PARC [1], it was a vision of information services processed and
consumed outside the personal computer. Years have passed and substantial
research has been conducted in this field, addressing subsets of the bigger
problem, but it is only in the past few years that the conditions to actually
embody this vision in people’s everyday life have seemingly been met.
A first phenomenon is the advent of networking as a basic (hardware) feature: it
has become cheap enough in recent years to embed networking capabilities in
every object, starting with powerful mobile computing devices but extending to
everyday objects with limited computation or interface capabilities. In parallel,
substantial research was conducted under the umbrella of the “Internet of
Things” on how to actually network these objects over a common networking
technology: IP. Finally, the World Wide Web has established itself as the main
platform for delivering services to end users, thanks to its simple and open
foundations that foster service creation by virtually any of its users, in all
aspects of people’s lives.
As a result, it has now become possible to embody the ubiquitous computing
paradigm using a Web of Things approach, where web technology exposes
real-world objects as resources on the Web. As a consequence, users can interact
with the objects that surround them through their virtual representations. In
this paper we discuss why user experience should be considered crucial in the
design and use of connected objects.
In Section 2, we discuss the overall concepts underlying the Web of Things and
related work. We then present in Section 3 general considerations regarding user
experience in environments where objects are exposed as Web resources, before
presenting early results in Section 4. We finally conclude and present future
opportunities in Section 5.
2 An approach to the Web of Things
Just as the term “Internet of Things” covers different aspects ranging from object
identification to wireless sensor networking, the expression Web of Things can be
used to describe many different topics. These range from embedding a Web server in
constrained resource devices to exposing real-world (connected) objects using Web
techniques and composing them. In the following, after presenting related work, we
focus on aspects that have a direct impact on user experience in a Web of Things
enabled environment.
2.1 Related work
Approaches applying web principles to ubiquitous environments date back to the
early 2000s, when the Web gained popularity as a delivery platform. Kindberg et
al. [2] discussed the concept of “Web presence” in the form of a web page for
people, places and things. A number of solutions based on web-related standards
and methods were proposed, from distributed computing middleware like JXTA [3]
to Web Services technologies, for example the device-oriented flavor of the
protocol suite, DPWS [4]. Using such solutions requires specific skills from the
service developer to create an application that takes full advantage of them,
and potentially specific software to be installed.
The term “Web of Things” was coined recently as an application-oriented reaction
to the network-oriented Internet of Things research, also feeding on the Web 2.0
concept of the Web as a (service) platform [5]. Initiatives like SenseWeb and
Pachube have focused on a data-oriented exposure of real-world objects, where a
centralized Web service provides a repository for sensor readings and exposes an
API for third-party mashups ([6], [7]). Initiatives providing service-oriented
exposure of the objects themselves seem more interesting to us, as they are
likely to provide greater scalability and richer interaction possibilities. Such
approaches are described in [8] and [9], where the advantages of using RESTful
principles to expose real-world connected objects to Web applications are
discussed.
Although a great deal has been achieved, most of this work has focused on
machine-to-machine (M2M) aspects, and very little on representing to users the
possibilities offered by Web-exposed objects. Most work relies on HTML pages to
represent and interact with networked objects. Although, as stated in [2], the
browser is the de facto interface to the Web and has been broadly accepted by
end users, recent developments in HTML5, Rich Internet Applications and mobile
interfaces have shown that there is still much room for improvement
presentation-wise.
2.2 Virtual objects
In our approach, we focus on exposing connected real-world objects (RWO) as
resources accessible using Web technology; we call such avatars of physical
objects Virtual Objects (VO). Virtual Objects are responsible for exposing the
RWO’s state and functionalities on the Web; they should be addressable (via a
URI), accessible (presenting a REST interface) and provide a description of
their nature, status and capabilities, both to other services (for M2M purposes)
and to users through a visual representation.
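To make this concrete, the contract a VO fulfils can be sketched as follows. This is a minimal illustration in Python; the class, the lamp URI and the "on_off" capability are assumptions made for the example, not an API defined in this paper.

```python
# Minimal sketch of a Virtual Object (VO): addressable via a URI,
# accessible through REST-style operations, and self-describing.
# All names (VirtualObject, the lamp URI, "on_off") are illustrative.

class VirtualObject:
    """Avatar of a real-world object (RWO) on the Web."""

    def __init__(self, uri, nature, capabilities):
        self.uri = uri                      # addressable: the VO's URI
        self._nature = nature
        self._capabilities = capabilities
        self._state = {}

    def describe(self):
        # GET <uri> -> description of nature, status and capabilities,
        # consumable by other services (M2M) or rendered to users.
        return {"uri": self.uri, "nature": self._nature,
                "capabilities": self._capabilities,
                "state": dict(self._state)}

    def get_state(self):
        # GET <uri>/state -> current, user-relevant status of the RWO
        return dict(self._state)

    def put_state(self, **changes):
        # PUT <uri>/state -> actuate the RWO (e.g. turn the lamp on)
        self._state.update(changes)
        return self.get_state()

# A lamp exposed as a VO:
lamp = VirtualObject("http://gateway.example/vo/lamp-42",
                     nature="lamp", capabilities=["on_off"])
lamp.put_state(on=True)
```

In a real deployment these operations would be served over HTTP by whichever host runs the VO (the device itself, a gateway, or a cloud platform, as discussed next).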
Depending on the nature of the RWO, different realizations of the VO and related
hosting schemes have to be considered. If the RWO is Web-enabled, its VO can be
embedded in the device itself. In another scheme, a gateway could host a
collection of VOs based on network topology (e.g. all objects connected to a
network access point) or geographical considerations (e.g. all devices in a
building or room). This solution is particularly interesting for objects that
are not Web- or even IP-enabled, comparable to the work described in [8]. Other
deployment schemes can be envisioned, such as hosting in a cloud provider’s
infrastructure. It is worth noting that deployment schemes are not bound by
technical choices alone: one could imagine different business models where the
manufacturer of an object hosts the related VO on behalf of its customers, or
where such a hosting service is provided by the user’s Internet Service
Provider, etc.
As Virtual Objects are the finest grain of interaction users encounter in a Web
of Things experience, it is important that they provide a representation to
users in a way that is best suited to the usage situation. This representation
should give the user the possibility to interact with the services exposed by
objects (e.g. turn a lamp on/off) and to monitor user-relevant object state (the
lamp is on).
2.3 Composing objects of an environment and using them in applications
As pointed out in [3], "while the Web is a global-scale information system for
which physical location is transparent, we often want to access resources that
are conveniently near our physical location". It is therefore reasonable to
think that although searching for virtual objects representing remote real-world
objects will have its utility, visualizing objects in-situation (i.e. those
present in the current environment of the user) is crucial. For this, it is
necessary to model the concept of environment, as a grouping of physically
co-located objects, but also of potential compositions between these objects. In
our approach we map the concept of environment onto a virtual object gateway,
which hosts the VOs of the objects connected to the same network access point
and which are, as a consequence, located in the same premises.
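The environment-to-gateway mapping can be sketched as follows; the gateway class and URI layout are hypothetical illustrations of the scheme, not part of the described implementation.

```python
# Sketch of an environment gateway hosting the VOs of all objects behind
# one network access point. Names and URIs are illustrative assumptions.

class EnvironmentGateway:
    def __init__(self, base_uri):
        self.base_uri = base_uri
        self._vos = {}

    def register(self, name, description):
        """Attach a VO to this environment; it becomes addressable
        under the gateway's URI space, so membership in the gateway
        implies physical co-location in the same premises."""
        uri = f"{self.base_uri}/vo/{name}"
        self._vos[name] = dict(description, uri=uri)
        return uri

    def list_vos(self):
        # GET <base_uri>/vo -> all objects in this environment,
        # the basis for an in-situation view of the surroundings.
        return [vo["uri"] for vo in self._vos.values()]

home = EnvironmentGateway("http://home-gw.example")
home.register("lamp", {"nature": "lamp"})
home.register("webcam", {"nature": "camera"})
```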
As objects get exposed as services on the Web, they enable new types of
composition or mashups. Guinard et al. [8] propose a first classification using
the terms virtual-physical mashups for applications that present a web user
interface to objects and physical-physical mashups for M2M compositions. While
the latter are directly constrained by the nature and capabilities of the
composed objects (e.g. the composition of a webcam with a display is directly
bound to the nature of the two objects considered), virtual-physical mashups can
cover a much broader range of application logic, limited only by the service
developer’s imagination or needs. It is important to note that from a user
perspective both approaches should be made visible, as they participate in the
ambient intelligence of an environment.
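A physical-physical mashup of this kind can be sketched as a directed link whose direction follows the input/output relationship of the two objects. The webcam and display stubs below are hypothetical stand-ins for REST calls to the corresponding VOs.

```python
# Sketch of a physical-physical (M2M) mashup: a directed link pushing
# the output of one object into the input of another. The stubs below
# stand in for GET/PUT requests to the two VOs; all names are assumed.

def make_link(source_read, sink_write):
    """Return a callable that forwards the source's output to the
    sink's input. The link direction matters: it is constrained by
    the nature of the composed objects (a webcam produces frames,
    a display consumes them, not the reverse)."""
    def forward():
        sink_write(source_read())
    return forward

# Hypothetical stand-ins for the webcam and display VOs:
frames = ["frame-1", "frame-2"]
shown = []
webcam_read = lambda: frames.pop(0)   # GET .../webcam/frame
display_write = shown.append          # PUT .../display/content

link = make_link(webcam_read, display_write)
link()
```

A virtual-physical mashup would instead place arbitrary application logic between the read and the write, which is why it can cover a much broader range of behavior.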
3 Towards new user experience
As the exposure of real-world objects on the Web allows the design of
machine-to-machine interactions, objects can be considered as actuators putting
specific physical or digital mechanisms in action without any immediate user
input. This leads to the creation of an ambient intelligence that is able to
control object behaviors. Alternative interfaces designed for this new means of
controlling objects allow a new experience to be shaped by manufacturers,
service providers or end users in order to assist people in their daily life.
From a user-centered perspective, we discuss, in the following sections, the
issues related to users’ perception of objects’ connectivity, the representation
of interactions between objects, and the creation of object groups.
3.1 Distinguishing Web-enabled objects from non-connected objects
Users are likely to experience automation through multiple objects working
together to relieve them of the effort of turning their devices or appliances on
and off and configuring them. The communication of two or more objects over the
Web is initiated without explicit permission from users and remains invisible to
them. As communication chipsets are often hidden inside objects, users will in
most cases be unable to tell connected and non-connected devices apart. Norman
[10] tells us that a user’s observation of feedback depends upon the information
conveyed by the physical device itself. We argue that new graphic
representations are needed to make the Internet connection and VOs more tangible
and perceptible to users. Many different solutions can be proposed, ranging from
a simple marker to a complete redesign of objects’ user interfaces. Just as
Arnall [11] introduced a graphic language for touch-based interactions, we need
to explore the visual link between information and real-world objects.
Regardless of whether the Internet capability of objects is made visible, users
are likely to be surprised by the automation of their surroundings, especially
if they have not shaped the objects’ behaviors themselves. Woods tells us that
situations where “automated systems act in some way outside the expectations of
their human supervisor” [12] are caused by poor feedback to users about the
activities of automated systems. “Automation surprises” and difficulties in
anticipating objects’ activities can to some extent be avoided by integrating
dynamic visual feedback, for example a LED that indicates a possible automation
to users. We argue that a means of controlling the Internet connection status is
needed in a transitional phase where users may perceive objects’ intelligence as
a form of manipulation. It is crucial that objects’ automation does not
jeopardize the collective use of products or constitute a barrier to the
acceptance of the Web of Things vision.
3.2 Understanding objects’ behaviors
The new exchange capabilities of objects allow behaviors to be designed
according to the descriptions of their features. Virtual links between two
objects can be drawn by users and considered as a representation of
interoperable functions. From a user perspective, it seems crucial to be able to
grasp the complexity of this network. It is therefore necessary to specify the
link direction, related to the input/output relationship of devices, to help
people understand how objects interact together. If such associations can, to
some extent, be displayed on devices equipped with screens (e.g. phone,
television, radio, etc.), it is impossible to do so with traditional appliances
(e.g. lamp, heater, shutter, etc.). We argue that consistent and flexible
visualization modes are needed to provide a broad and quick understanding of
objects’ behaviors. If it is difficult for users to comprehend how several
objects spread across different rooms or locations work together, it is even
harder for them to perceive the unification of several objects by a web
application hosted in the cloud. Such applications make complex behavior
modeling possible and allow the user to script customized events triggered by an
ambient intelligence. The application can therefore operate the functions of
objects even if their input/output characteristics cannot be combined directly.
This leads to the creation of composite applications using multiple objects,
which have to be presented to the user in a comprehensible way.
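Such user-scripted, cloud-hosted behavior can be sketched as a small rule registry; the condition and action names (presence, lux, lamp) are illustrative assumptions, not elements of the described system.

```python
# Sketch of a composite application scripting a customized event over
# several VOs. Unlike a direct physical-physical link, the rule can
# combine objects whose input/output characteristics do not match.
# All sensor/actuator names below are hypothetical.

rules = []

def when(condition, action):
    """Register a user-scripted rule: run `action` whenever
    `condition` holds on the aggregated environment state."""
    rules.append((condition, action))

def tick(state):
    # Called by the ambient intelligence on each state update.
    for condition, action in rules:
        if condition(state):
            action(state)

log = []
# "If someone is present and it is dark, turn the lamp on."
when(lambda s: s.get("presence") and s.get("lux", 100) < 20,
     lambda s: log.append("lamp:on"))

tick({"presence": True, "lux": 5})
```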
3.3 Grouping objects
The capacity to control objects from a remote location or system raises issues
related to privacy and ownership. To avoid misuse, it is necessary to ensure
that people have the appropriate rights to access virtual objects and modify
their behaviors. It is, however, extremely difficult for the system to determine
what can and cannot be done with objects. Insofar as an object can be given by
one user to another, lent to somebody else or sold second-hand, users would be
required to constantly update the access rights of their objects in dedicated
applications. We argue that it is possible to identify users through the devices
used to browse virtual objects, and to build a social network of “things” on top
of existing platforms.
Sterling [13] tells us that the next generation of objects, called “spimes”,
will be tracked through space and time throughout their lifetime. The
environment of use is therefore another parameter to be considered, even if GPS
chipsets do not provide measures relevant to locating an object indoors [14].
Virtual objects can, however, be attached to a geographically described
environment. As objects are most likely to be wirelessly connected to
residential set-top boxes, the latter, or a dedicated access point, can be
considered as a geographical marker and provide a global understanding of the
environment. This also allows the creation of abstract environments bridging
multiple physical places (e.g. a campus of several buildings in different
neighborhoods) that users are likely to visit in a specific context.
4 Illustration of the Web of Things experience
In order to illustrate our Web of Things vision, we designed an interaction
mockup that allows users to interact with connected objects (including a lamp, a
webcam, a computer screen and a fixed phone) in a realistic, everyday context.
We argue that a new user experience can be shaped by offering interactions
through terminals that make up for the lack of dedicated user interfaces on
objects. This approach led us to offer users three interfaces designed for a
smart phone, a touch tablet and a desktop web browser. Each of these
applications is best suited to specific tasks and contexts (in-situation or
off-situation), as described in the following sections and summarized in
Table 1.
Table 1. Characteristics of the three designed interfaces.
                Smart phone            Touch tablet             Web browser
Browse from     One object             An environment           All objects, all
                                                                environments
Actions         Quick visualization    Visualization,           Advanced visualization,
                and authoring          authoring, application   authoring, application
                                       download                 download and coding
Visualization   Augmented reality      Graph, lists             Graph, lists, maps,
                                                                search results
Interactions    Short interactions     Short and long           Long interactions
                                       interactions
Posture         Standing               Relaxed                  Desk
Use             Personal               Semi-personal (with      Shared (with
                                       authentication)          authentication)
Fig. 1. Snapshots of the three access modalities: mobile, tablet and desktop browser.
4.1 Browsing virtual objects in-situation
As most objects are generally used in-situation, users are likely to access
virtual objects while being close to the related real-world object. To provide a
quick overview of objects’ properties and active links, we rely on mobile phones
and web tablets. Both terminals can be used to deliver a specific experience
based on observed user practices.
Mobile. The Internet capabilities of smart phones and the integration of camera,
GPS or RFID readers allow service providers to design new interactions in public
spaces [15]. Users have learnt that barcodes and matrix codes encode information
or links that can be scanned using specific applications in order to access
content more easily. In the same way that augmented reality technologies are
used to superimpose digital representations on reality [16], we created a mobile
phone application that allows the user to “reveal” the capabilities and behavior
of Web-enabled objects (see Figure 1). This is achieved by pointing the camera
directly at objects equipped with visual markers. These markers bridge the
virtual and physical objects and can be considered as pointers to the
corresponding Virtual Object. The aim is not only to provide a quick
understanding of an object’s behavior, but also interactions with the Virtual
Object. We argue that users are likely to want to deactivate a link or quickly
set up a new one, and need quick access modes and immediate feedback. However,
the screen sizes of mobile phones restrict the possibilities and require us to
design short interactions that can be handled while a user is standing in front
of an object.
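The marker-to-VO bridge behind this application can be sketched as a simple resolution step; the registry contents, marker identifiers and URIs below are assumptions made for illustration.

```python
# Sketch of resolving a scanned visual marker to its Virtual Object,
# as in the mobile application described above. The registry, marker
# identifiers and URIs are hypothetical.

MARKER_REGISTRY = {
    "marker:af31": "http://home-gw.example/vo/lamp",
    "marker:b772": "http://home-gw.example/vo/webcam",
}

def resolve_marker(marker_id):
    """Map a decoded marker to the URI of the corresponding VO; the
    phone would then GET this URI to overlay the object's state and
    capabilities on the camera view."""
    return MARKER_REGISTRY.get(marker_id)
```

An unknown marker resolves to nothing, which the application would surface as "no Virtual Object found for this object".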
Tablet. Web or media tablets aim to fill the gap between the desktop computer
and the mobile phone. While the latter is also used as an Internet access
terminal in homes, touch tablets offer more natural interactions with the web
and bigger screens [17, 18]. Whereas the mobile phone contextualizes the
information of an object through the mechanism described above, we argue that
tablets are more appropriate for giving an overview of an environment. McClard
and Somers [19] tell us that such devices allow multitasking and a relaxed
posture of use, and play a social role in the family. We therefore designed an
application that can be considered a “radar”, providing an abstract
representation of the user’s environment and its Web-enabled objects. It allows
users to browse the Virtual Objects corresponding to objects available in the
immediate space, through multiple modalities (e.g. a graph with icons, lists, a
search engine, bookmarks), and to explore distant known environments (e.g.
grandma’s house, the office, etc.). We argue that users are likely to want to
administer their ecosystem, create new links and applications, and browse
existing applications with the tablet. As the device is meant to be shared among
different users, authentication is required to activate the authoring modes.
4.2 Browsing virtual objects off-situation
While the mobile and tablet interfaces provide a new means of visualization and
control over Web-enabled objects, an online aggregator specifically designed for
desktop and laptop computers is needed for longer and more complex interactions
with Virtual Objects. Personal computers are widely used to organize, maintain
or retrieve information and can be considered an essential asset for the
management of other devices (e.g. a PC is needed to transfer files from and to
an MP3 player, camera or smart phone). We argue that users are likely to want to
browse all their virtual objects through a web dashboard providing multiple
access modalities (e.g. graphs, lists, maps, galleries) and a robust search
engine. As mentioned earlier, there is an opportunity to build a new social
platform allowing users to navigate through objects, environments and people.
Whereas websites like ThingD [20], ThingLink [21], Talesofthings [22] and
Stickybits [23] offer to bookmark and comment on a catalog of objects, we
propose to rely on existing social graphs (e.g. Twitter, Facebook, FlickR) to
add a social dimension to objects. The desktop application can also offer
authoring tools to create object-based applications and mashups. Finally,
specific solutions can be designed by manufacturers or building administrators
to monitor and control physical assets.
5 Conclusion
In this paper we have discussed how exposing real-world objects as resources on
the Web makes it possible to realize the long-standing pervasive computing
paradigm. In our approach to the Web of Things, we use the concept of the
Virtual Object to represent a real-world object as a service to other Web
entities, and to enable compositions that participate in ambient intelligence.
We argue that to ensure the success of such a vision, the user experience should
be carefully designed, in order to make users aware of the possibilities and
behaviors in the environments they visit. To illustrate these considerations, we
developed three user interfaces allowing users to interact with the connected
objects of an environment, using mobile interfaces for in-situation browsing.
Further work includes actual user testing of both these interfaces and the
underlying principles. By mixing social and location-based approaches, we argue
that a suitable user experience can be shaped to offer users a new means of
navigating the Web of Things - keeping in mind that too much participation
required from the user, or overly complex representations of their personal
ecosystems, will lead to poor adoption of the vision.
References
1. Weiser, M.: The computer for the 21st century. Scientific American. 272, 78–89
(1995).
2. Kindberg, T., Barton, J., Morgan, J., Becker, G., Caswell, D., Debaty, P., Gopal,
G., Frid, M., Krishnan, V., Morris, H., others: People, places, things: Web
presence for the real world. Mobile Networks and Applications. 7, 365–376
(2002).
3. Traversat, B., Abdelaziz, M., Doolin, D., Duigou, M., Hugly, J.C., Pouyoul, E.:
Project JXTA-C: Enabling a web of things. System Sciences, 2003. Proceedings
of the 36th Annual Hawaii International Conference on. p. 9 (2003).
4. Chan, S., Conti, D., Kaler, C., Kuehnel, T., Regnier, A., Roe, B., Sather, D.,
Schlimmer, J., Sekine, H., Thelin, J., others: Devices profile for web services.
Microsoft Developers Network Library, May. (2005).
5. What Is Web 2.0, http://oreilly.com/web2/archive/what-is-web-20.html.
6. Kansal, A., Nath, S., Liu, J., Zhao, F.: Senseweb: An infrastructure for shared
sensing. IEEE multimedia. 14, 8 (2007).
7. pachube :: connecting environments, patching the planet,
http://www.pachube.com/.
8. Guinard, D., Trifa, V.: Towards the web of things: Web mashups for embedded
devices. Workshop on Mashups, Enterprise Mashups and Lightweight
Composition on the Web (MEM 2009), in proceedings of WWW (International
World Wide Web Conferences), Madrid, Spain. (2009).
9. Wilde, E.: Putting things to REST. School of Information, UC Berkeley, Tech.
Rep. UCB iSchool Report. 15, (2007).
10. Norman, D.A.: The design of everyday things. Basic Books New York (2002).
11. Arnall, T.: A graphic language for touch-based interactions. Proceedings of
Mobile Interaction with the Real World MIRW. (2006).
12. Woods, D.D.: Decomposing automation: Apparent simplicity, real complexity.
Automation and human performance: Theory and applications. 3–17 (1996).
13. Sterling, B., Wild, L.: Shaping things. MIT Press (2005).
14. Chen, G., Kotz, D.: A survey of context-aware mobile computing research.
Citeseer (2000).
15. O’Reilly, T., Battelle, J.: Web squared: Web 2.0 five years on. Proc. of the 6th
Annual Web. 2,
16. Azuma, R.T., others: A survey of augmented reality. Presence-Teleoperators and
Virtual Environments. 6, 355–385 (1997).
17. Saffer, D.: Designing Gestural Interfaces. O'Reilly Media, Inc. (2008).
18. Valli, A.: Natural Interaction White Paper.
19. McClard, A., Somers, P.: Unleashed: Web tablet integration into the home.
Proceedings of the SIGCHI conference on Human factors in computing systems.
p. 1–8 (2000).
20. thingd, http://www.thingd.com/.
21. Thinglink, http://www.thinglink.org/weSwitch.
22. Tales of Things, http://www.talesofthings.com/.
23. stickybits - tag your world™, http://www.stickybits.com/.