Engineering Intuitive and Self-Explanatory Smart Products
Erwin Aitenbichler, Fernando Lyardet, Gerhard Austaller,
Jussi Kangasharju, Max Mühlhäuser
Telecooperation Group, Department of Computer Science
Darmstadt University of Technology, Germany
{erwin,fernando,gerhard,jussi,max}@tk.informatik.tu-darmstadt.de
ABSTRACT
One of the main challenges in ubiquitous computing is mak-
ing users interact with computing appliances in an easy and
natural manner. In this paper we discuss how to turn or-
dinary devices into Smart Products that are more intuitive
to use and are self-explanatory. We present a general archi-
tecture and a distributed runtime environment for building
such Smart Products and discuss a number of user inter-
action issues. As an example, we describe our smart coffee
machine and its validation through systematic user testing.
Categories and Subject Descriptors
C.3.h [Computer Systems]: Ubiquitous Computing; D.2.11
[Software Engineering]: Architectures; H.5.2 [Information
Systems]: User Interfaces
General Terms
Smart Products
Keywords
Smart Products, Ubiquitous Computing, User-centered de-
sign
1. INTRODUCTION
The vision of ubiquitous computing, as stated by Mark
Weiser, is that computers become so commonplace and so interwoven with our environment that they practically disappear and become invisible [15]. One of the main challenges in building ubiquitous computing applications is how the user can interact with the invisible computer. The inter-
action should be easy and natural, yet allow for a sufficiently
rich set of actions to be taken.
A further challenge arises when the systems should be
usable by end users, that is, people who are not specialists
in interaction research. Such people, on the other hand,
expect easy and natural interaction and are not willing to
take additional steps to perform seemingly simple actions.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
SAC’07 March 11-15, 2007, Seoul, Korea
Copyright 2007 ACM 1-59593-480-4 /07/0003 ...$5.00.
In this paper, we explore the challenge of building useful
and usable everyday applications, destined to be usable by
everyone. We present a platform that supports building
such smart products and as a practical application of our
research, we describe a smart coffee machine.
2. RELATED WORK
Research in smart appliances and environments has em-
phasized two major issues lately: i) activity detection, mainly
used as context for system adaptation and user interac-
tion [7]; ii) generic sentient computing infrastructures that
collect and provide information [2]. Many previously re-
ported systems and infrastructures are based on the instru-
mentation of consumer-oriented locations such as offices [3],
homes [4], and bathrooms [5]; they aim at detecting the ac-
tivities of occupants [4, 10] and the interplay between users
and mobile devices [12].
Several projects have explored the computerization of ev-
eryday objects, investigating the granularity of information
and the type of information that can be obtained. One
approach is the pervasive embedding of computing capabil-
ities as in Things That Think [14], the MediaCup [9], or the
InfoDoors [13], where simple objects gather and transmit
sensor data; the distributed infrastructure derives location
and other context information from these data for use by
applications. Another aspect that differentiates Smart Objects is the embedded knowledge they carry. This property
has also been explored in both business and consumer sce-
narios. A business example is asset tracking [8]: enhanced
sensing and perception provides for, e.g., autonomous monitoring of the physical integrity of goods or detection of hazardous physical proximity [6]. Consumer-side examples include hard drives that check their own health and predict when they will fail, personal training assistants that adjust the pace according to the user's pulse, and smartphones that can be voice controlled and offer expandable PDA features. Another interesting example is the ABS system commonly available in cars, which integrates data from various sensors to help the driver apply the brakes more effectively. The ABS combines its knowledge about driving conditions with the user's pedal feedback, adjusting the actual braking behavior to avoid skids. The user never has to learn about the intricate relationships between the sensors and their subsystems to take advantage of this advanced functionality: just press a pedal.
Mr. Java, a project at MIT [11], has the closest rela-
tion to the coffee machine example reported here, at least
at first sight: both investigate household appliances with
connectivity and enhanced information processing capabili-
ties, and both integrate user identification and behavior cus-
tomization. The following three characteristics distinguish
our project from Mr. Java and the other references cited
above:
1. In contrast to the usual design-evaluate-publish approach, we iterated over many design-evaluation cycles, thereby leveraging the adaptivity of our smart products platform, which we consider an outstanding feature. We concluded that any particular social environment needs careful customization in order to reach user satisfaction.
2. In addition to supporting simple event-action rules, we emphasize complex tasks and procedures. We consider it a major benefit if users can be guided through such procedures instead of having to read detested manuals. This benefit also helps vendors leverage feature-rich appliances; in the past, users often did not invest the necessary effort to explore advanced features.
3. From our experiences, we distilled a general architec-
tural approach for turning an everyday item into a ubiqui-
tous appliance, in such a manner that average, non-technical
users find the device easy and natural to use. With these three distinct characteristics, the present paper clearly differentiates itself from the "yet another ubiquitous appliance" category.
3. SMART PRODUCTS ARCHITECTURE
Smart Products are real-world objects, devices or software
services bundled with knowledge about themselves, others,
and their embedding. This knowledge is separated into layers according to the level of abstraction it addresses: device capabilities, functionality, integrity, user services, and connectivity. In Figure 1 we present a conceptual reference architecture showing this separation of concerns, which allows the integration of different vendors providing their own technology. Such a scenario is particularly critical at the device level, since changes to embedded systems must be kept to a minimum to keep their cost viable. Adopting a SOA approach allows devices to be extended in their functionality and user-adaptation capabilities with minimal embedded infrastructure requirements. Extensions to a Smart Product may involve external hardware, software, or both.
Figure 1: Smart Product Conceptual Architecture
The first layer is the Smart Product Device Layer. In
embedded systems, this is where the runtime operating soft-
ware or firmware lies. The processing power available at this
level operates the actuators, sensors, I/O, and the user in-
terface (typically LCD displays, status LEDs and buttons).
The knowledge embedded in this layer defines a set of valid
events, states, and a set of Event-Condition-Action (ECA)
rules that govern the transitions between states. These rules
determine the functionality of the device, implement its smart behavior, and ensure the operating conditions required to preserve hardware integrity and safe operation.
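As an illustration, the device-layer knowledge can be thought of as a rule table evaluated against the current device state. The following sketch is purely hypothetical: events, state fields, and the example rule are invented for illustration and are not the authors' firmware.

```python
# Hypothetical sketch of device-layer Event-Condition-Action (ECA)
# rules; events, state fields, and rules are invented for illustration.

class Device:
    def __init__(self):
        self.state = {"water_empty": False, "brewing": False}
        self.rules = []  # list of (event, condition, action) triples

    def add_rule(self, event, condition, action):
        self.rules.append((event, condition, action))

    def dispatch(self, event):
        # Fire every rule whose event matches and whose condition holds.
        for ev, cond, act in self.rules:
            if ev == event and cond(self.state):
                act(self.state)

dev = Device()
# Safety rule: only start brewing if the water tank is not empty.
dev.add_rule("brew_button",
             lambda s: not s["water_empty"],
             lambda s: s.update(brewing=True))
dev.dispatch("brew_button")
print(dev.state["brewing"])  # True
```

Because the condition guards every transition, safety constraints such as "never brew with an empty tank" stay in the device layer, independent of any higher-level service.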
The second layer is the Smart Product Layer, which consists of four main parts. First, the Controller, which sometimes resides within the embedded device, coordinates processes bridging physical-world behavior and software-based functionality. Second, the Embodiment externalizes the knowledge of what a device consists of, what its parts are, and how they interrelate. This knowledge also specifies the processes controlling the functionality: for instance, the process of making a coffee or, more complex, de-scaling a coffee machine. Third, the Embedding bridges the physical world with software, enabling self-explanatory functionality. The Embedding, in a sense, decorates the processes described by the Embodiment for adaptation, using the information provided by the fourth constituent part: the User Model. The User Model provides information regarding the user who operates the device. Using other information sources and sensors from the infrastructure, the User Model module can recognize a person and retrieve related data such as preferences. When no external contextual information about the user is available, this module gathers the user's input and compares it to user-expertise patterns to better match the user's level to the device's functionality and assistance. The Embedding gathers the information from the User Model and determines the actual steps and explanations required for a user to perform a task.
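To make the Embedding's adaptation concrete, the sketch below varies the level of explanation detail according to an assumed user-expertise attribute. All names, strings, and the expertise scheme are hypothetical illustrations of the architecture, not the implemented system.

```python
# Hypothetical sketch: an Embedding selects step explanations based on
# the User Model's (assumed) expertise level. All content is invented.

EXPLANATIONS = {
    "remove_tank": {
        "novice": "Lift the lid at the back and pull the water tank "
                  "straight up to remove it.",
        "expert": "Remove water tank.",
    },
}

def explain(step, user_model):
    # Default to full detail when nothing is known about the user.
    level = user_model.get("expertise", "novice")
    return EXPLANATIONS[step][level]

print(explain("remove_tank", {"expertise": "expert"}))  # Remove water tank.
```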
Finally, other services can further extend a Smart Product's functionality, for instance through subscriptions to the Smart Product's manufacturer or to third-party providers.
4. RUNTIME ENVIRONMENT
We will now describe the software architecture of our
smart products runtime system (Figure 2).
Figure 2: Software Architecture
The common basis for our service-oriented architecture is our own ubiquitous computing middleware, MundoCore [1]. MundoCore has a microkernel design, supports dynamic reconfigu-
ration, and provides a common set of APIs for different pro-
gramming languages (Java, C++, Python) on a wide range
of different devices. The middleware implements a peer-to-
peer publish/subscribe-system and an Object Request Bro-
ker which allows us to easily decouple services and spread
them on several different devices transparently. Many of the
services are of a general nature and they have also been used
in other projects. For example, the RFID reader service is a
Java-based service that interfaces with a reader device con-
nected to a computer’s USB port and emits an event each
time an RFID tag is moved close to the reader. The Speech
Engine service is written in C++ using Microsoft’s Speech
SDK and AT&T NaturalVoices. The Browser Service allows an Internet browser to be remote-controlled by means of remote method calls (RMCs).
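The decoupling that channel-based publish/subscribe provides can be illustrated with a toy message bus. This is not MundoCore's actual API; all names here are hypothetical.

```python
# Toy channel-based publish/subscribe bus illustrating how services are
# decoupled: publisher and subscriber only share a channel name.
# This is NOT MundoCore's real API; all names are hypothetical.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)  # channel -> list of handlers

    def subscribe(self, channel, handler):
        self.subs[channel].append(handler)

    def publish(self, channel, message):
        for handler in self.subs[channel]:
            handler(message)

bus = Bus()
log = []
# An RFID-reader service would publish tag events; a coffee-machine
# service subscribes without knowing who the publisher is.
bus.subscribe("rfid.cup", lambda tag: log.append(f"brew for {tag}"))
bus.publish("rfid.cup", "cup-42")
print(log[0])  # brew for cup-42
```

Moving a subscriber to another machine only requires routing the channel over the network; neither endpoint changes.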
Finally, our smart coffee machine example presented here
requires some application-specific services. The Coffee Ma-
chine Service provides the hardware abstraction for the cof-
fee machine. It allows the machine to be controlled by means of RMCs and generates event notifications when the status
of the machine changes. The high-level application logic is
implemented as a workflow description. This allows us to
design the user interactions in a model-driven way. The use
of workflows and how the workflow engine is integrated with
other services in the system is described in Section 7.
5. THE SMART COFFEE MACHINE
The Saeco coffee maker is a normal off-the-shelf coffee
machine. When used out of the box, the user can choose
between three kinds of coffee, namely espresso, small coffee,
and large coffee. There is a coffee bean container and a
water tank attached to the machine. If either is empty, a small display on the machine prompts the user to refill it. The display also prompts the user to empty the coffee grounds container if it is full.
Our modifications allow us to control all the buttons re-
motely and determine the state of the machine (Figure 3).
With this state information and additional RFID readers, we
can detect user actions and automatically start processes or
proceed in a workflow.
Figure 3: Hardware components added to the coffee
machine
The hardware of the enhanced system consists of the cof-
fee machine, two RFID readers to identify cups, one reader
for reading the digital keys, and a PC running the control
software. Figure 4 shows the hardware architecture and the
individual components. The roles of the individual compo-
nents are as follows.
Figure 4: Hardware architecture
Coffee Machine: Because the machine does not come
with a data interface, we modified the front panel circuit
board and attached our own microcontroller to it. This al-
lows us to detect keypresses, simulate keypresses, check if
the water tank is empty and read the pump control signal.
The latter indicates that the machine is actually dispensing
fluid and gives a very accurate measure of how much fluid has
run through the pump; one pulse on this signal corresponds
to 0.4 milliliters. The machine communicates with the rest
of the system via a Bluetooth module.
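Given the 0.4 ml-per-pulse figure above, the dispensed volume follows directly from the pulse count. The 375-pulse cup below is a made-up example, not a measurement from the paper.

```python
# Dispensed volume from pump pulses: the paper states one pulse
# corresponds to 0.4 ml. The example pulse count is hypothetical.

ML_PER_PULSE = 0.4

def dispensed_ml(pulse_count: int) -> float:
    return pulse_count * ML_PER_PULSE

# A hypothetical 150 ml large coffee would take 375 pulses.
print(dispensed_ml(375))  # 150.0
```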
RFID reader at Coffee Machine: The antenna of this
reader is attached to the bottom of the coffee machine’s cup
holder surface. It is positioned such that the reader is able
to identify cups as soon as they are placed below the coffee
dispensing unit. The RFID tags are glued to the bottom of
the cups. This allows the system to start brewing coffee as
soon as a cup is put down.
RFID reader at Dishwasher: The second RFID reader
is placed next to the dishwasher. Users can swipe cups over
this reader before they put them into the dishwasher.
Key reader: Our reader for the digital keys consists of
a SimonsVoss Smart Relais, a microcontroller, and a USB
interface. Like the electronic locks in our computer science
building, the relay can be triggered with the digital keys,
which are given to all employees and students. Every user
owns a key, and there is a one-to-one relationship between
users and keys, which makes these keys highly suitable for
identification purposes. In addition, the keys can already
serve as simple interaction devices. Because users are re-
quired to activate them explicitly by pressing a button, the
reception of a key ID can be directly used to trigger actions.
PC: The PC hosts most of the services and is hidden in
a cupboard. It gives users voice feedback via the speakers.
Users can view their favorite webpages on the monitor. The
monitor, keyboard, and mouse are optional and not required
to use the core functions of the system.
6. INTERACTION DESIGN
In the interaction design of a smart product, we distin-
guish between simple and complex interactions and support
them in two different ways.
Simple interactions are one-step functions that would normally be triggered with a button. These interactions should be natural to users, even when they use a product for the first time. The product should behave as the
user intuitively expects. Such interactions are implemented
by monitoring the user’s actions with suitable sensors and
typically do not involve graphical or voice user interfaces.
An example for a simple interaction is that the user puts
her coffee mug under the coffee dispenser and automatically
gets her favorite coffee.
Complex interactions refer to multi-step procedures that
require the user to have product-specific knowledge. Such
interactions involve graphical or voice-based user interfaces
and the system guides the user during these interactions.
An example is de-scaling the coffee machine, which requires
the user to perform several manual steps.
As many interactions as possible should be designed as
simple interactions. To identify actions that are natural
to users and to verify that designed interactions are intu-
itive, user studies must be performed at multiple stages in
the product design. An important goal of our smart cof-
fee machine project was to evaluate how average users (i.e.,
non-technical people) could interact with ubiquitous appli-
cations and appliances in a natural and intuitive way. The
implementation has gone through several phases and in each
phase, we performed a user study in order to evaluate what needed to be changed and how the users felt about using the machine. In the following, we present our ex-
periences to illustrate the lessons learned from our user tests.
Since the initial versions developed over nine months ago, the machine has been in daily use in our group of 21 people. Five people do not drink coffee at all and were excluded from the test, and the five authors participated only in the first user test; therefore, 11 users served as the main test subjects.
6.1 Initial Implementation
In the initial design, we marked coffee cups with RFID tags, and when a user put a coffee cup under the dispenser, she would automatically get coffee. We have cups in many
different sizes, ranging from small espresso cups to large
coffee mugs. The cup database stores the RFID tag IDs
together with the type of coffee (espresso, small, or large)
and the cup size in milliliters.
After interviewing the users to get their subjective im-
pressions, we found out that users preferred the smart ma-
chine, because it requires no user attention to trigger the
machine or to control the amount of coffee they want. This
“un-attended nature” of the machine was cited as a benefit
by all users.
Since the coffee machine is controlled by a service, the service can also log statistical data. In particular, the service knows how many coffee beans are left and how long they will last. We programmed a notification service to send
emails to all the people using the kitchen when beans were
close to running out.
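The bean-level notification logic described above can be sketched as follows. The grams-per-cup estimate, the threshold, and the mailing-list address are all hypothetical; the paper does not give these details.

```python
# Sketch of the low-beans e-mail notification. Threshold, consumption
# estimate, and address are hypothetical assumptions for illustration.

GRAMS_PER_CUP = 8          # assumed average grounds per coffee
LOW_THRESHOLD_CUPS = 10    # notify when fewer than ~10 cups remain

def cups_remaining(grams_left: float) -> int:
    return int(grams_left // GRAMS_PER_CUP)

def check_beans(grams_left, send_mail):
    # Only bother the kitchen mailing list when beans are nearly out.
    if cups_remaining(grams_left) < LOW_THRESHOLD_CUPS:
        send_mail("kitchen@example.org",
                  f"Beans low: about {cups_remaining(grams_left)} cups left")

sent = []
check_beans(60, lambda to, msg: sent.append(msg))
print(sent[0])  # Beans low: about 7 cups left
```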
6.2 Error Conditions
In case of a problem such as running out of water, in addi-
tion to the standard error messages on the machine’s display,
the user also receives audio instructions on how to resolve
the problem (e.g., “User X, please refill water.”). After the
user has fixed the problem, the machine automatically re-
sumes the requested operation. All feedback from the sys-
tem is also over voice, delivered through a Text-to-Speech
engine.
With audio feedback, users changed their behavior when
handling error situations with the machine. The most com-
mon error situations are a full coffee grounds container and
the machine running out of water. We observed that when audio feedback was absent, the users would complain that the system was “broken”, instead of looking for the problem themselves on the display as they did before. The machine behaved exactly as before (i.e., it showed the error message), but users no longer read the error messages from the machine's display. We concluded that this was due, first, to the expected consistency in the machine's behavior and, second, to the fact that although listening is slower than reading, it requires less effort, which could make it a more appealing modality for consumers.
6.3 Billing
Next, we wanted to add automatic billing to the system.
This requires identifying users by some means. The first
implementation was based on everyday observations on how
people handle the coffee machine and also by asking them
how they would like to operate it. Because most users have
their favorite coffee cup and mostly drink one kind of coffee,
our conclusion was that associating a cup to a user is an
administrative task that has to be done rarely and therefore
does not have to be very convenient.
Based on this hypothesis, the initial implementation had
the following characteristics: Changing settings and associ-
ating cups to a user had to be done with a GUI application
running on a nearby computer. In addition, cups got leases.
This means that after 24 hours of not using the cup, the association was deleted and the cup was freed. In other words, a user would typically have to associate a cup once a week.
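The 24-hour lease can be sketched as a small registry that frees a cup when its association has not been used within the lease period. Class and method names are hypothetical; only the 24-hour figure is from the text.

```python
# Sketch of the cup-lease mechanism: an association expires after 24
# hours without use. Names are hypothetical; times are Unix seconds.

LEASE_SECONDS = 24 * 3600

class CupRegistry:
    def __init__(self):
        self.assoc = {}  # tag_id -> (user, last_used)

    def associate(self, tag_id, user, now):
        self.assoc[tag_id] = (user, now)

    def lookup(self, tag_id, now):
        entry = self.assoc.get(tag_id)
        if entry is None:
            return None
        user, last_used = entry
        if now - last_used > LEASE_SECONDS:
            del self.assoc[tag_id]        # lease expired: free the cup
            return None
        self.assoc[tag_id] = (user, now)  # touch the lease on each use
        return user

reg = CupRegistry()
reg.associate("cup-7", "alice", now=0)
print(reg.lookup("cup-7", now=3600))       # alice (lease still valid)
print(reg.lookup("cup-7", now=30 * 3600))  # None (lease expired)
```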
Once a cup is associated, the user automatically gets her
preferred coffee when she puts the associated cup under the
coffee dispenser. In addition, the system automatically takes
care of accounting and the user can have her favorite web site
pop up on a nearby computer screen.
However, as we observed the initial users, we noticed that
some of our assumptions were wrong, and that users even
started to change their behavior in order to overcome some
limitations and problems.
Firstly, it turned out that human beings are not creatures
of habit but creatures of laziness. Instead of washing their
cups and reusing them, they put them into the dishwasher
and use clean cups. This conflicted with the assumption that
associating cups once a week is acceptable. Associations had
to be done much more often and users found it unacceptable
to use a GUI application. Although the GUI was enhanced
several times, it was simply not accepted.
Secondly, the introduction of a “date of expiry” for cups
was not convincing to the users. They wanted to be able
to explicitly de-associate the cup and get feedback that the
cup was free.
We addressed the concerns raised in the first test by mak-
ing the following modifications: To improve the association
process, we introduced a “one-click association”. Here, we
use our digital door keys to associate cups to users. Because
users were used to using the door key several times a day, there
was no need to teach the users how to use the digital key.
They just had to be told to press the key when asked by the
coffee machine, as if they were opening a door.
The explicit de-association was handled by mounting a
second RFID reader near the dishwasher. When people put
their cups in the dishwasher, they can swipe the cup over
the reader to break the association. More importantly, they
get voice feedback that the association has been broken.
With this implementation, almost all system interactions
could be done with tagged cups, the digital key and Text-
to-Speech instead of using the GUI-based application.
6.4 User Tests
After the modifications, we performed two formal user
tests, conducted by observing users using the coffee machine
as well as free-form interviews. The overall outcome was ex-
tremely encouraging. Most users preferred to use the auto-
mated machine in spite of the procedure being slightly more
complicated and the time to get coffee slightly longer. (Note
that it takes 15–25 seconds to get the coffee, depending on
the type of coffee, so an additional delay on the order of a
second or two is usually not significant.)
The second set of tests focused on the other aspects of the
coffee machine operation such as voice feedback, the associa-
tion, selection of an alternate coffee, and the subjective user
perception.

Figure 5: De-scaling process modeled in XPDL using XPEd

The use of voice to provide feedback was highly regarded as beneficial. However, many users (2/3) found
it difficult to understand the voice output in some cases.
We believe the reasons might be as follows: When a user
hears a particular message for the first time, she is probably
surprised by it and does not fully understand the message,
hence leaving the impression that the messages are hard to
understand. Sometimes the voice feedback comes when the
machine is busy performing the requested action, and the
machine is typically quite loud, hence the voice feedback is
hard to understand. To alleviate this problem we imple-
mented “subtitles”. All voice output is displayed as text on
the computer screen as well.
Another issue that arose was that users started to wonder
how to make a different coffee than the default one without
changing the default user settings. This situation happens
when a user wants, for example, an espresso instead of the
usual large coffee. Again, being able to change the setting
with only a GUI was not acceptable. To cater for people
wanting different coffees at different times, we implemented
an “override button”. Fortunately, there was an extra but-
ton on the coffee machine which did not have any function
and we used it as the override button. When a user presses
this button, she can choose the kind of coffee she wants in-
stead of getting the coffee programmed for the cup.
6.5 Results
First and foremost, our experience underlines the impor-
tance of user-centered design. Ubiquitous applications often
enter new and unexplored domains, hence it is hard or im-
possible to know beforehand how a system can best be used.
Not many features from our initial design survived through
the two user tests, even though the design seemed sound at
that time. In particular, using the digital key as user iden-
tification was a positive experience. The lesson here would
be that if users need to perform additional actions, they are
more readily accepted if they are based on other items which
they use in everyday life, even if the uses are different.
Second, only certain modalities may be mixed in a system
with a multi-modal user interface. In our case, this was
shown by users not checking the display of the machine. We
solved this by making all the output from the system come over audio. However, the users did not have problems
with different modalities on the input and output channels.
“Tactile” input and auditory output was not a problem to
the users.
Third, users do not always trust automation. Even though a lease mechanism was implemented (and
thoroughly debugged!), users insisted on having an explicit
possibility to de-associate cups.
7. COMPLEX INTERACTIONS
In the following, we describe how our extensions to the
machine can be used to help guide the user through com-
plex procedures, in our case the cleaning (or de-scaling) of
the machine. The machine needs to be cleaned regularly,
since limestone builds up in the machine and it has to be
removed. The process is relatively cumbersome and com-
plicated, since it involves first filling the machine with the
cleaning liquid, running the liquid through the machine us-
ing a certain mechanism that includes pushing the right but-
tons and opening and closing the water tube at the right
time, waiting a relatively long time, and then finally flush-
ing the machine before it is ready to be used again. Most people in our group do not know how to perform this process.
Our goal in implementing this task support is to show how
our modifications can help users in complex tasks. We chose cleaning as the task because it must be performed occasionally; although the process is described in the manual, most people never read the manual, and even those who do must use it as a support every time they have to clean the machine. The cleaning process is also very
suitable for being supported, since parts of it can be done
automatically (controlled by the system) and parts of the
process require manual user interaction. Furthermore, there
are long waiting periods in the process. Our implementation
shows how the system supports the user by letting the user
know what to do next at any step. Any steps that can be performed automatically are done by the system, and only when user intervention is required does the machine ask the user for help. Also, when a particular step of the process
will take a long time, the machine lets the user know this, so
that the user does not need to wait by the machine. We have
implemented a notification system which alerts the user via
an instant message when the long step has been completed.
Internally, we describe this process as a workflow descrip-
tion (Figure 5). We use the XML Process Definition Lan-
guage (XPDL) as data format, the JPEd graphical editor
to edit workflows, and the OpenEmcee Microflow Engine to
execute workflows. We have written a small Perl script to
translate XPDL descriptions into OpenEmcee’s proprietary
XML format.
In the described system, all communication between services is based on MundoCore and uses channel-based pub-
lish/subscribe. We defined a generic activity and transition
class for the workflow engine that interfaces with this pub-
lish/subscribe system.
Following our reference architecture, the workflow descrip-
tions become part of the Embodiment Module and are ex-
ecuted by the Controller. In this particular example, the
Embedding Module performs only a simple adaptation of
the process by stating the user’s name provided by the User
Model.
7.1 Activities
XPDL permits assigning an arbitrary number of extended attributes to activities and transitions in the workflow. The
attributes of an activity are used to describe the action that
should take place. An action can be a message send oper-
ation or a remote method call. Method calls build on the
dynamic invocation interface of MundoCore, which allows any method of any remote service to be called. For example, to
output text via the Text-to-Speech engine, the following at-
tributes are used:
channel = "tts"
interface = "org.mundo.speech.synthesis.ITextToSpeech"
method = "speak"
p0 = "Please remove water tank"
The channel property specifies the name of the channel to
which the invocation request should be sent. The TTS ser-
vice is subscribed to the channel tts either locally or some-
where in the network. The message distribution is handled
by MundoCore and is fully transparent to the application.
(It should be noted that channels are not necessarily global
- MundoCore has zone and group concepts to limit the scope
of channels.)
7.2 Transitions
State transitions can be triggered by arbitrary Mundo-
Core events. For example, if the water tank becomes empty,
the Coffee Machine service publishes a notification of type
org.mundo.service.saeco.WaterEvent with the content empty =
true to the channel saeco.event.
MundoCore supports notification filtering based on XQuery
expressions. We use this mechanism to describe transition
conditions in the workflow. To execute a transition as soon
as the water tank is empty, the following extended attributes
are specified for the transition:
channel = "saeco.event"
filter = "for $o in $msg where
$o[class='org.mundo.service.saeco.WaterEvent'] and
$o/empty=true()"
The transition is executed as soon as the first notification
matches this filter expression.
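The transition semantics above can be sketched with a plain predicate in place of the XQuery filter: a transition fires on the first matching notification on its channel. All names besides the event class and channel from the text are hypothetical.

```python
# Sketch of event-driven workflow transitions: the first notification
# matching a transition's filter triggers it. A Python predicate
# stands in for the XQuery filter; state names are hypothetical.

class Transition:
    def __init__(self, channel, predicate, target):
        self.channel, self.predicate, self.target = channel, predicate, target

def step(state, transitions, channel, event):
    for t in transitions:
        if t.channel == channel and t.predicate(event):
            return t.target  # first matching notification fires
    return state             # no match: stay in the current state

transitions = [
    Transition("saeco.event",
               lambda e: e.get("class") ==
                         "org.mundo.service.saeco.WaterEvent"
                         and e.get("empty") is True,
               "ask_user_to_refill"),
]
new_state = step("waiting", transitions, "saeco.event",
                 {"class": "org.mundo.service.saeco.WaterEvent",
                  "empty": True})
print(new_state)  # ask_user_to_refill
```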
8. CONCLUSION
In this paper we have presented a general architecture and
distributed runtime environment for building Smart Prod-
ucts. Turning a device into a Smart Product only requires
minimal physical extensions to the device itself - basically
a simple communication interface, such as Bluetooth, to re-
trieve and set the device’s events and state. Through this
communication interface, the surrounding ubicomp environ-
ment is able to integrate the device, improving its function-
ality and usability.
Using a coffee machine as an example, we have shown how
to extend it into a Smart Product with additional function-
ality and adaptation capabilities using the proposed archi-
tecture. A major concern during this work has been the
impact on the people using the device. The different user
tests carried out provided valuable feedback and insights to
keep interaction simple and natural. The tests also pointed out the need for system support that guides users through complex procedures. This functionality has also been developed and integrated into our reference architecture.
9. REFERENCES
[1] E. Aitenbichler. System Support for Ubiquitous
Computing. Shaker, 2006.
[2] E. Aarts. Ambient Intelligence: A Multimedia
Perspective. IEEE Multimedia, 11(1):12–19, January 2004.
[3] M. Addlesee, R. Curwen, S. Hodges, et al.
Implementing a Sentient Computing System. IEEE
Computer, 34(8):50–56, August 2001.
[4] B. Brumitt, B. Meyers, J. Krumm, et al. EasyLiving:
Technologies for Intelligent Environments. In Proc. of
HUC 2000, volume 1927 of LNCS, 12–27. Springer,
September 2000.
[5] J. Chen, A. H. Kam, J. Zhang, et al. Bathroom
Activity Monitoring Based on Sound. In Proc. of
Pervasive 2005, volume 3468 of LNCS, May 2005.
[6] C. Decker, M. Beigl, A. Krohn, et al. eSeal - A System
for Enhanced Electronic Assertion of Authenticity and
Integrity. In Proc. of Pervasive 2004, volume 3001 of
LNCS, 18–32. Springer, April 2004.
[7] A. K. Dey, D. Salber, and G. D. Abowd. A Conceptual
Framework and a Toolkit for Supporting the Rapid
Prototyping of Context-Aware Applications.
Human-Computer Interaction, 16(2–4):97–166, 2001.
[8] A. Fano and A. Gershman. The Future of Business
Services in the Age of Ubiquitous Computing. Comm of
the ACM, 45(12):83–87, December 2002.
[9] H.-W. Gellersen, M. Beigl, and H. Krull. The
MediaCup: Awareness Technology embedded in an
Everyday Object. In Proc. of HUC’99, volume 1707 of
LNCS, 308–310, 1999.
[10] MIT Project Oxygen. http://oxygen.lcs.mit.edu/.
[11] Mr. Java Project.
http://www.media.mit.edu/ci/projects/mrjava.html
[12] A. Schmidt, K. A. Aidoo, A. Takaluoma, et al.
Advanced Interaction in Context. In Proc. of HUC’99,
volume 1707 of LNCS, 12–27. Springer, September 1999.
[13] B. Shneiderman. Leonardo’s Laptop: Human Needs
and the New Computing Technologies. MIT Press,
October 2002.
[14] Things That Think. http://ttt.media.mit.edu/.
[15] M. Weiser. The Computer for the 21st Century.
Scientific American, 265(3):66–75, September 1991.