Usability study in Adaptive Mobile Interface
Prof. K. H. Walse
Dept. of Computer Sci. & Engg.
Anuradha Engineering College
Chikhli, Dist. Buldana, M.S., India
kwalse1234@gmail.com
Dr. R. V. Dharaskar
Director
MPGI, Nanded, M.S., India
rvdharaskar@rediffmail.com
Dr. V. M. Thakare
Professor & Head,
P.G. Department of Computer Science
SGB Amravati University
Amravati, M.S., India
Abstract- The term usability refers to making systems easier to
use and matching them more closely to user needs. The advent of
mobile devices imposes great challenges for user-friendly displays
and effective browsing of web content. This paper reviews
usability and its principles, adaptive mobile content, and adaptive
mobile content in action, and discusses how to exploit solutions
able to support mobile multimodal adaptive user interfaces for
dynamic accessibility. We focus on an approach based on the use
of declarative user interface languages and oriented to Web
applications accessed through the emerging ubiquitous
environments.
Through a usability study, we can better fulfill people's needs
on adaptive mobile devices by making their basic tasks easier to
perform; hence the need to study usability.
Index Terms- Usability, Mobile Web, User Interfaces, Mobile
Device, Adaptive User Interface, Model-based Usability
Evaluation.
I. INTRODUCTION
The emerging ubiquitous environments call for supporting
access through a variety of interactive devices. Web
applications are widely used and a number of approaches and
techniques have been proposed to support their adaptation to
mobile devices. Little attention has been paid so far to
identifying more general solutions able to adapt Web
applications to various combinations of mobile multimodal user
interfaces. We have developed a solution for this purpose that
aims to support any Web application independent of the type of
authoring environment used for its development. We have
considered adaptation to various possible contexts of use
(varying in terms of interactive devices, user preferences, and
environmental aspects) and the approach can be useful for
supporting disabled users. Adaptation in mobile contexts is a
topic that has recently stimulated various research
contributions. For example, W3Touch is a tool that aims to
collect user performance data for different device
characteristics in order to help identify potential design
problems for touch interaction. In mobile devices, it has been
exploited to support adaptation based on two metrics related to
missed links in touch interaction and zooming issues to access
Web content. However, such work has not considered
accessibility issues [4].
The rest of this paper is organized as follows. Section II
discusses usability and usability problems, the following
sections cover usability evaluation, usability planning and
adaptive interfaces, and the remaining sections elaborate on
these topics and conclude.
II. USABILITY
Usability means making products and systems easier to
use and matching them more closely to user needs and
requirements. The term usability is interpreted in different
ways by authors, even within the same scientific
community. Usability has been identified with ease of use
and learning, while excluding utility (Shackel, 1984;
Nielsen, 1993). In other cases, usability is used to denote
ease of use and utility, while ignoring learning. In software
engineering, usability is considered an intrinsic property of
the software product, whereas in HCI, usability is
contextual: a system is not intrinsically usable or unusable.
Instead, usability arises relative to contexts of use. It is the
extent to which a product can be used by specified users to
achieve specified goals with effectiveness, efficiency and
satisfaction in a specified context of use. Commonly cited
usability attributes are learnability, efficiency of use,
memorability, few errors, and satisfaction [1].
A. Usability requirements
Shackel (1990) recognizes that there are four
components of any work situation: user, task, system and
environment. In the case of human computer interaction, the
system is the computer system. Environment includes
physical aspects such as appropriate heating, lighting,
equipment layout, operating circumstances and so on as
well as psychological aspects such as the provision of help
and training and socio-political features such as the
organizational environment in which the interaction takes
place. Usability is concerned with achieving a harmony
between these components. Shackel proposes that usability
can be seen in terms of four operational criteria:
effectiveness, learnability, flexibility and attitude.
Effectiveness is specified with respect to the performance
(as measured by a characteristic of the interaction such as
the time taken to complete the task or the number of errors
made) of a range of tasks by some percentage of users
within some proportion of the environments in which the
system will operate. Learnability is defined in terms of the
time taken to learn (to some specified level of competence)
given a specified amount of training. Learning also includes
the time taken to relearn the system if details are forgotten.
Flexibility covers the amount of variation in the tasks and/or
environments which can be accommodated by the design.
Attitude concerns the acceptable levels of human cost
(tiredness, effort, etc.) which are required so that users are
satisfied enough to continue to use and to enhance their use
of the system[1].
III. THE USABILITY MODEL
The Usability Model comprises five stages, four of
which are implicitly joined in a loop.
Fig. 1: ISO 13407 Model Overview [6].
A. Principles of Usability
Perceivable: Information and user interface components
must be perceivable by users.
Operable: User interface components must be operable
by users.
Understandable: Information and operation of user
interface must be understandable by users.
Robust: Content must be robust enough that it can be
interpreted reliably by a wide variety of user agents [6].
B. Usability Challenges
Some of the typical properties of user-adaptive systems can
lead to usability problems that may outweigh the benefits of
adaptation to the individual user. Discussions of these problems
have been presented by a number of authors (e.g., Höök,
2000; Lanier, 1995; Norman, 1994; Schaumburg, 2001;
Shneiderman, 1995; Wexelblat & Maes, 1997) [7].
IV. USABILITY PROBLEMS IN SMART
ENVIRONMENTS
Smart environments differ from desktop applications in
various aspects, which call for a corresponding adaptation of
usability evaluation methods. Based on the characteristics of
smart environments we derive appropriate evaluation methods.
Afterwards we show how to apply these methods both for
providing users with information about the current state of the
system and for providing the usability expert with evaluation
support [19].
A. Introduction to Usability Evaluation in Smart
Environments
The advanced features of smart environments are able to
provide a comfortable usage experience, but also introduce
new possible usability issues. The reasons for usability
problems of proactive systems can be divided into four
potential error components: imprecise sensor values (e.g.
wrong location values), misinterpretations of sensor values
(e.g. when applying a faulty user movement model to clean
the raw sensor data), intention recognition errors (e.g. when
predicting the wrong user task) and planning errors (e.g.
when delivering the wrong functionality).
To identify these error components we suggest a
usability evaluation process comprising three subsequent
stages:
1) comparing interaction traces (Hilbert, 2000) with a
predefined expected behavior to identify possible
usability issues;
2) analysis of captured sensor data and manual
annotations to investigate the reason for the
problem;
3) investigation of the analysis metrics and
visualizations to solve the issue.
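As an illustration of the first stage, the following Python sketch compares an observed task trace against a predefined expected trace position by position; the trace representation and function names are assumptions for illustration and are not taken from the cited tools.

```python
# Hypothetical sketch of stage 1: comparing an observed task trace against a
# predefined expected behaviour. Names and example traces are invented.
from typing import List, Tuple

TaskTrace = List[str]  # ordered names of performed tasks

def find_deviations(observed: TaskTrace, expected: TaskTrace) -> List[Tuple[int, str, str]]:
    """Return (position, expected_task, observed_task) triples where the traces differ."""
    deviations = []
    for i, (exp, obs) in enumerate(zip(expected, observed)):
        if exp != obs:
            deviations.append((i, exp, obs))
    # expected tasks that were never performed also count as deviations
    for i in range(len(observed), len(expected)):
        deviations.append((i, expected[i], "<missing>"))
    return deviations

# Example: the user skipped "confirm_selection"
expected = ["open_menu", "select_item", "confirm_selection", "close_menu"]
observed = ["open_menu", "select_item", "close_menu"]
print(find_deviations(observed, expected))
```

A real evaluation would align the traces against the temporal relations of the underlying task model rather than by position alone; the sketch only conveys the idea of flagging deviations for later analysis.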
Within smart environments a task can be accomplished
cooperatively by a number of users with the support of their
different devices. In addition, a certain user can start a task
on one device (e.g. a mobile phone with speech input)
completing the task later with another device (e.g. a laptop
with a keyboard). In this case separate interaction traces of
the devices can hardly be compared. Therefore we suggest
interpreting the interaction trace according to an underlying
task model as a task trace (Hilbert, 2000). A task trace is
understood as an arbitrary sequence of performed tasks.
Deviations from the defined temporal order of tasks
may occur and need further investigation during evaluation.
Designing a usability test case comprises two activities.
First, the environment has to be modeled as a CTML model;
afterwards a usability expert defines the test plan, as is
common practice in usability evaluation. For the
execution of a usability test case, we distinguish usability
evaluation at different development stages. An interactive
walkthrough helps to expose weaknesses within the
designed artifacts and to revise the underlying CTML
models [20].
B. Visualization and Analysis for Usability Expert
After capturing a trace of executed tasks and the
corresponding sensor data, our approach provides support
for identification and analysis of usability issues. To cope
with the vast amount of captured data we distinguish
between two solutions: on the one hand, removing data
that is out of the evaluation scope through filtering; on
the other hand, keeping all data but setting the focus on data
of evaluation interest through aggregation [21].
For aggregation of the task trace we apply a semantic
lens method. Analogous to an optical lens, a semantic lens is
defined by a focus point, a lens size and a lens function
(Griethe, 2005). Applied to a task trace, the task of interest
is focused, the size of the lens is the number of previous and
successive tasks covered by the lens, and the lens
function defines how the aggregation works. The lens
function defines the level of aggregation for each position
within the lens area (Propp, 2007b). An example of the
application of a semantic lens is shown below.
Fig. 2: Complete Example Task Trace [21].
The data is captured as a trace of a time stamp for each
completed task. The aggregation mechanism analyses the trace
to find subsequences which have a common parent within the
task tree. Depending on the focus function certain tasks are
aggregated and represented by a parent or even more abstract
task. The usability expert is able to choose the focus in the time
scale and vary the size of the focus accordingly. Adjusting the
focus function provides a more or less detailed view. The
filtered and aggregated task trace can be visualized with
different techniques. One simple trace is depicted in Fig. 3.
Fig. 3: Visualization of the Task Trace for a Usability Expert [22].
Our intention is to provide specific visualization techniques for
different purposes of evaluation and to offer a toolbox
containing adaptable visualizations. Additionally, we provide a
timeline view to compare different users according to the
duration of accomplishing different tasks [22].
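The following Python sketch illustrates the semantic-lens idea described above: tasks near the focus keep full detail, tasks outside the lens are abstracted to their parent in the task tree, and adjacent aggregated entries are merged. The task tree, trace, and lens function are illustrative assumptions, not the actual evaluation tooling.

```python
# Hypothetical sketch of semantic-lens aggregation over a task trace. The task
# tree (leaf -> parent), trace, and lens function are illustrative assumptions.
from typing import Dict, List

PARENT: Dict[str, str] = {           # leaf task -> more abstract parent task
    "open_app": "start_session", "login": "start_session",
    "enter_name": "fill_form", "enter_address": "fill_form",
    "press_submit": "fill_form",
}

def lens_level(distance: int, lens_size: int) -> int:
    """Lens function: full detail inside the lens, aggregate to the parent outside."""
    return 0 if distance <= lens_size else 1

def apply_lens(trace: List[str], focus: int, lens_size: int) -> List[str]:
    """Keep tasks near the focus in full detail, abstract the rest to their parent."""
    return [task if lens_level(abs(i - focus), lens_size) == 0
            else PARENT.get(task, task)
            for i, task in enumerate(trace)]

def collapse(trace: List[str]) -> List[str]:
    """Merge adjacent entries that were aggregated to the same parent task."""
    out: List[str] = []
    for task in trace:
        if not out or out[-1] != task:
            out.append(task)
    return out

trace = ["open_app", "login", "enter_name", "enter_address", "press_submit"]
# focus on "enter_name" (index 2); with lens_size=0 only the focused task keeps detail
print(collapse(apply_lens(trace, focus=2, lens_size=0)))
# -> ['start_session', 'enter_name', 'fill_form']
```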
V. USABILITY WITHIN TASK MODEL-BASED
SOFTWARE ENGINEERING
In this section we will firstly introduce the vocabulary used
in the following. Afterwards, the evaluation approach is
presented.
A. Disambiguation
According to [16] a task model TM is seen as the sum of
possible task traces TT. The hierarchical tree, composed
of actually executed tasks as leaf nodes and abstract tasks
as inner nodes, represents the logical structure of the root
task, which is divided into subtasks. Within the activities
described by TM there is no contemporaneous execution of
any two tasks. At any moment during the execution only
one task can be running. This restriction causes some
difficulties in describing cooperative work and ongoing
activities with several devices in a smart environment. To
allow the specification of interactive systems with more
than one user acting simultaneously, CTT has been
extended to Collaborative ConcurTaskTrees (CCTT) [15].
The main principle in CCTT is to introduce a coordination
task tree specifying the relations and interaction between
several other task trees that describe the different users or
roles involved. In those role-dependent task trees a new
kind of node (connection tasks) is introduced to specify
temporal dependencies to connection tasks within another
role's task model. The coordination task model describes
these temporal dependencies. In our approach to model the
behavior of users and the functionalities of devices we use
such separate models for each role and device. Beyond this,
Sinnig et al. [18] further extended CCTT in order to
consider the fact that each role is typically fulfilled by
different users. Therefore, for each user a copy (instance) of
the corresponding role task model is created. Various
instances are executed concurrently during runtime and the
task constraint language (TCL) specifies synchronization
points between several instances.
For evaluation purposes the view on the execution of a
specific task is more detailed taking into account the
different possible states of a task. States can be enabled,
disabled, running, suspended, done, and skipped. Each state
change of a task, be it a leaf node or inner node, is called a
task event and the execution of a model based system is
described as a sequence of such events called Task Event
Trace (TET). Events occurring as a result of other events
(e.g. the finishing of task "A" enables the execution of task
"B") are placed after the causing event in the sequence.
During the preparation of a usability evaluation of a smart
environment system an expected behavior is specified, called
the Expected Task Event Trace (ETET). This ETET is used
during the evaluation and the subsequent analysis to
compare the actual behavior of users and devices with the
expected one.
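To make the vocabulary concrete, the following Python sketch encodes task states, task events, and the comparison of a recorded task event trace (TET) against an expected one (ETET). The class and field names are illustrative assumptions, not the notation of the cited work.

```python
# Hypothetical sketch of task events and task event traces (TET/ETET) as
# described above; class and field names are invented for illustration.
from dataclasses import dataclass
from enum import Enum
from typing import List

class TaskState(Enum):
    ENABLED = "enabled"
    DISABLED = "disabled"
    RUNNING = "running"
    SUSPENDED = "suspended"
    DONE = "done"
    SKIPPED = "skipped"

@dataclass
class TaskEvent:
    task: str            # leaf or inner node of the task model
    new_state: TaskState

# A task event trace (TET) is a sequence of state changes; events caused by
# other events are placed after the causing event, e.g. finishing "A" enables "B".
tet: List[TaskEvent] = [
    TaskEvent("A", TaskState.RUNNING),
    TaskEvent("A", TaskState.DONE),
    TaskEvent("B", TaskState.ENABLED),   # caused by A being done
]

# The expected task event trace (ETET) plays the same role for the planned
# behaviour and is compared against the recorded TET during analysis.
etet: List[TaskEvent] = [
    TaskEvent("A", TaskState.RUNNING),
    TaskEvent("A", TaskState.DONE),
    TaskEvent("B", TaskState.ENABLED),
    TaskEvent("B", TaskState.RUNNING),
]

mismatch = len(tet) < len(etet) or any(a != b for a, b in zip(tet, etet))
print("deviation from expected behaviour:", mismatch)
```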
B. Model-based Usability Evaluation
There are some approaches to employ task models in
usability evaluations. We will have a look at two of them and
afterwards introduce the specifics of model based development
of smart environment. The tool RemUsine with its extension
MultiDevice [12] uses task models to describe the expected
(planned) behavior of users and compares it to the output of
another tool component: the tool logger, which is supposed to
be available at client-side. The logging tool stores several types
of system events during the test session. To provide automatic
analysis of the actual user behavior, possible system events
have to be mapped to tasks represented by leaf nodes in the task
model. This association has to be done once for several user
sessions. The tool then provides assistance in analysis by
pointing out at which parts of the tracked user actions the
associated task execution violates temporal or logical relations
in the task model. The component Mobile Logger records
different types of system and environment variables and
includes a dialog-based input form for entering these
environment conditions. Finally, the tool supports the evaluator
in analyzing this potentially huge amount of data by offering
different graphical visualizations. Another tool, developed
in close alignment with the model-based development approach
outlined above, is the ReModEl ("Remote Model-based
Evaluation") client-server framework. One client-side module
captures any task-related events within an application developed
following the semi-automatic generation and replacement process. These
events are sent to the server as they occur and are stored for
subsequent requests. An evaluation expert can connect to the
server with the same client software but different modules and
observe the events related to one or more specific executions.
Thus, same-time but different-place evaluations are provided.
The client-module gives a lot of information about the
execution, e.g. an animated task tree. There are other modules
that allow, for example, communication between several clients
and support subsequent analysis [12].
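The one-time association of logged system events with leaf tasks of the task model, as described for the logging component above, can be pictured with the following Python sketch; the event names, task names, and mapping are invented for illustration and do not reproduce the cited tools.

```python
# Hypothetical sketch of a one-time mapping from logged system events to leaf
# tasks of a task model, in the spirit of the event-to-task association above.
EVENT_TO_TASK = {
    "button_send_clicked": "send_message",
    "textfield_body_changed": "compose_message",
    "menu_contacts_opened": "select_recipient",
}

def to_task_trace(logged_events):
    """Translate a raw event log into a task trace, dropping unmapped events."""
    return [EVENT_TO_TASK[e] for e in logged_events if e in EVENT_TO_TASK]

log = ["menu_contacts_opened", "textfield_body_changed", "window_resized",
       "button_send_clicked"]
print(to_task_trace(log))
# ['select_recipient', 'compose_message', 'send_message'] -- this trace can then
# be checked against the temporal relations defined in the task model.
```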
VI. MODEL-BASED USABILITY EVALUATION FOR
SMART ENVIRONMENTS
Our objective is to provide usability evaluation methods for
model-based smart environments at all development stages.
Because the models are an inherent part of the system,
there is no need to parse any log files in order to extract
task-related information. Instead, we can utilize the task
model engine as the source for relevant events. During design
time, this engine is used to simulate and animate the underlying
models and during run time it acts as the logic within the smart
environment providing assistance to the users. The approach
presented in this paper integrates usability evaluation activities
in the development process. Furthermore, we believe that
software developers and usability experts do not only benefit
from working on the same models, but also profit from working
in the same environment and interdigitating their work. Figure
4 shows the process in principle: based on the models
(described in section 2), a test case is developed as described in
section 3. In section 4, the execution of a test case is explained.
Finally, section 5 discusses the analysis of the gathered
data [18].
Fig. 4: Usability Evaluation Process[18].
We define an evaluation scenario as a set of users and
devices, each characterized by properties and specific task
models. Every user owns one or more roles and all roles are
characterized by a certain task model. Every device is
associated with one or more types described by a set of
properties and a usage model in a CTT-like notation, which
defines how a device is used. To evaluate a smart
environment based on a specific modeling technique, the
task model chunks that describe user behavior and device
usage are mapped to CTT notation and additional
information is annotated. The aim is to track the interaction
between user and environment and to validate the interaction
against the model in an analysis stage [18].
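The evaluation-scenario structure just described (users with roles, devices with types and usage models) can be sketched as plain data structures; the Python dataclasses and file names below are illustrative assumptions, not the actual CTML tooling.

```python
# Hypothetical sketch of an evaluation scenario: users with roles and devices
# with types and usage models. Names and files are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Role:
    name: str
    task_model: str           # reference to a CTT-like task model

@dataclass
class User:
    name: str
    roles: List[Role]
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Device:
    name: str
    types: List[str]          # each type is described by properties
    usage_model: str          # CTT-like model of how the device is used
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class EvaluationScenario:
    users: List[User]
    devices: List[Device]

scenario = EvaluationScenario(
    users=[User("Alice", roles=[Role("presenter", "present_slides.ctt")])],
    devices=[Device("Phone-1", types=["smartphone"], usage_model="phone_usage.ctt")],
)
print(len(scenario.users), "user(s),", len(scenario.devices), "device(s)")
```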
VII. PLANNING A USABILITY EVALUATION
Within Eclipse, all artefacts and documents necessary to
conduct a usability evaluation are defined in a Usability
Test Case file. This includes a description of the evaluation
preparation and environment as well as a definition of the
test case that is to be executed. Accordingly, the test plan
specifies the required resources, focuses the points to test,
and serves as a communication tool between the different
members of the working team. It includes a high-level
description of the test's purpose, a list, as precise as possible,
of all the questions and objectives that are to be addressed by
the test, a description (profile) of all users involved, a
detailed description of the test execution, a list of tasks to be
carried out by the users in an appropriate level of
abstraction, and a description of the used equipment and of
the level of participation or neutrality of the test conductors.
In the context of model-based smart environments, the
task models of both user roles and device types are also
included. The definition of the test execution can
include instructions for the usability experts to ask questions
of the users during and/or after the test execution. These
questions are linked to tasks in the corresponding task model and
the links can contain filter criteria to define that certain
questions are to be asked only under specific circumstances.
All information about one single execution of such a
Usability Test Case is gathered in a Usability Test Case
Execution file. Here, the recorded task event traces and the
description of the actual setup are stored. It also contains
comments and notes made by the usability experts during
the evaluation [14].
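The contents of a Usability Test Case and its Execution file, as listed above, can be sketched as plain Python dictionaries; the key names, task identifiers, and filter text are illustrative assumptions and do not reproduce the actual Eclipse file format.

```python
# Hypothetical sketch of the contents of a Usability Test Case definition and a
# corresponding execution record; keys and values are invented for illustration.
usability_test_case = {
    "purpose": "Evaluate meeting-room assistance for presenters",
    "questions_and_objectives": ["Can users start a presentation without help?"],
    "user_profiles": [{"role": "presenter", "experience": "novice"}],
    "test_execution": "Participant enters the room and starts a presentation.",
    "tasks": ["enter_room", "connect_device", "start_presentation"],
    "equipment": ["projector", "smartphone"],
    "conductor_participation": "neutral observer",
    "task_models": {"presenter": "presenter.ctt", "smartphone": "phone_usage.ctt"},
    # questions linked to tasks, with optional filter criteria
    "linked_questions": [
        {"task": "connect_device",
         "question": "Was the connection procedure clear?",
         "filter": "only if connection took longer than 30 s"},
    ],
}

# A Usability Test Case Execution file then bundles one run of this test case.
execution = {
    "test_case": usability_test_case["purpose"],
    "recorded_task_event_traces": [],   # filled during the session
    "actual_setup": {"room": "lab 2", "devices": ["Phone-1"]},
    "expert_notes": [],
}
print(sorted(usability_test_case))
```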
VIII. ADAPTIVE INTERFACES
The goal of an adaptive interface may be stated as
follows: Given a set of content elements, a relevance
measure, a set of layout constraints, and a user experience
goal, show an interface layout that maximizes the utility of
the display for the specified user experience. The proposed
adaptive user interface can recommend the applications of a
smartphone that have the highest chance of being executed
by the user. The probabilities of execution of applications
are calculated by the spatiotemporal structure learning
algorithm, which is explained in the subsection "The
Inference Engine." The system learns the application
choices based on five contextual variables with this
algorithm. Therefore, even if the user interfaces of all users'
smartphones are initially identical, the user interfaces are
gradually adapted to each user as users use them. The figure
shows a high-level concept of the adaptive user interface
[11].
Fig 5: Conceptual diagram of the adaptive user interface[11].
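To give a feel for context-based application recommendation, the following Python sketch ranks applications by a simple Naive-Bayes-style probability of execution given observed context variables. It is not the spatiotemporal structure learning algorithm of [11]; the context variables and counts are invented for illustration.

```python
# Minimal Naive-Bayes-style sketch of ranking apps by probability of execution
# given contextual variables; not the algorithm of [11], data is invented.
from collections import defaultdict

class AppRecommender:
    def __init__(self):
        self.app_counts = defaultdict(int)                       # counts for P(app)
        self.ctx_counts = defaultdict(lambda: defaultdict(int))  # counts for P(var=value | app)

    def observe(self, app, context):
        self.app_counts[app] += 1
        for var, value in context.items():
            self.ctx_counts[app][(var, value)] += 1

    def score(self, app, context):
        total = sum(self.app_counts.values())
        p = self.app_counts[app] / total
        for var, value in context.items():
            # Laplace smoothing so unseen context values do not zero the score
            p *= (self.ctx_counts[app][(var, value)] + 1) / (self.app_counts[app] + 2)
        return p

    def recommend(self, context, k=3):
        return sorted(self.app_counts,
                      key=lambda a: self.score(a, context), reverse=True)[:k]

rec = AppRecommender()
rec.observe("maps", {"time": "morning", "location": "street"})
rec.observe("music", {"time": "morning", "location": "street"})
rec.observe("maps", {"time": "morning", "location": "car"})
print(rec.recommend({"time": "morning", "location": "car"}))
```

As usage data accumulates, the ranking diverges per user even if all interfaces start out identical, which is the behaviour the adaptive home screen described above relies on.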
IX. FRAMEWORK FOR ADAPTATION
There are three important aspects of interface adaptation that
must be considered. In adaptable interfaces the user is
responsible for tailoring the interface; adaptive interfaces, on
the other hand, adapt to the user autonomously. The three
aspects are: identification of variables that call for adaptation,
determination of necessary modifications to the interface, and
selection of a decision inference mechanism. It is these aspects
that differ between fields; any two applications differ in the
definition of these abstract aspects. According to Benyon
and Murray, adaptive interfaces should consist of a User Model,
a Domain Model, and an Interaction Model. Here, the user
model describes the user in terms of abilities, profile
information, and domain knowledge. Domain models define the
task structure of the system and the user, such as user goals, the
logical construct of the system, and the basic interaction
mechanism. The interaction model consists of a historical
record of the user's interaction with the system in order to carry
out the adaptation, handle the inference process, and finally
incorporate the evaluation mechanism of the performance for
either the user, or the system, or both. Benyon's definition of the
adaptive interface has not been fully implemented in many
applications. The conceptual framework of the adaptive
interface is of little or no help to an adaptive interface designer.
Most of the components are domain-dependent and generally it
is difficult to apply this framework to practical applications. An
abstraction of concepts is a great deal of help as it allows the
big picture of interface adaptation to be seen. At this point
we are confronted with the requirements of adaptive
interfaces [3].
Fig 6: Adaptive Interface Model (Benyon and Murray)[3].
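A minimal sketch of the three models named above, expressed as plain Python data structures, is given below; the field names are illustrative assumptions rather than Benyon and Murray's formal definitions.

```python
# Minimal sketch of the user, domain, and interaction models of an adaptive
# interface; field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserModel:
    abilities: Dict[str, str]
    profile: Dict[str, str]
    domain_knowledge: List[str]

@dataclass
class DomainModel:
    task_structure: List[str]        # user goals and sub-goals
    logical_constructs: List[str]    # logical construct of the system
    interaction_mechanisms: List[str]

@dataclass
class InteractionModel:
    history: List[str] = field(default_factory=list)  # record of past interactions

    def log(self, event: str) -> None:
        self.history.append(event)

    def evaluate(self) -> int:
        """Toy evaluation hook: here simply the number of recorded interactions."""
        return len(self.history)

user = UserModel(abilities={"vision": "low"}, profile={"age": "65+"},
                 domain_knowledge=["email"])
interaction = InteractionModel()
interaction.log("opened settings via search instead of menu")
print(user.profile, interaction.evaluate())
```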
X. ADAPTIVE MOBILE INTERFACES
In recent years, computing is moving toward pervasive,
ubiquitous environments, in which a variety of devices,
software agents, and services are expected to seamlessly
integrate and cooperate with each other to deliver
appropriate content and services of users' interest without
time and location constraints. The unique features of the
wireless network and mobile devices, however, present a
number of critical challenges for taking advantage of the
convenience of mobile devices and enabling Web access.
For example, small screen size and limited memory of
mobile devices pose challenges for presenting Web pages
effectively, while low-bandwidth wireless networks force
information systems to adapt content to the dynamically
changing network environment. Adaptive user interfaces
have a long history rooted in the emergence of such eminent
technologies as Artificial Intelligence, Soft Computing,
Graphical User Interfaces, Java, the Internet, and Mobile
Services. More specifically, the advent and advancement of
the Web and Mobile Learning Services have brought forward
adaptivity as an immensely important issue for both efficacy
and acceptability of such services.
XI. ARCHITECTURE FOR ADAPTIVE SYSTEMS
The designer has to consider the range of ways in which
systems can be made adaptive and to select a capability
appropriate to the usability problem at hand. The most basic
architecture of an adaptive system is illustrated in Figure 7. An
adaptive system requires three models, or representations.
The sophistication of the adaptive mechanism depends on
the quality of these models. The model of the system
describes the characteristics which can be altered i.e. the
aspects of the system which are adaptable. A system model
which represents only the physical aspects of interaction
such as screen displays, dialogue content or the effects of
function keys will only be able to adapt at this level. A
system model which represents the logical structure or
functioning of the system will be capable of adaptation at a
logical level. For example, HAM-ANS (Morik, 1989) deals
with a logical description of hotel rooms whereas the
adaptive menu system (Greenberg and Witten, 1985) only
adapts the physical arrangement of menu items. A further
level of adaptivity may be obtained if the model of the
system describes the system at the task level i.e. what the
system can be used for. A complex system may be kept
functionally simple for a new user if it can adapt at the task
level. These three levels of description are necessary
because each reveals some generalisation which would
otherwise not be available. The model of the system
describes the characteristics which can be altered i.e. the
aspects of the system which are adaptable. A system model
which represents only the physical aspects of the interaction
such as screen displays, dialogue content or the effects of
function keys will only be able to adapt at this level. A
system model which represents the logical structure or
functioning of the system will be capable of adaptation at a
logical level.
Fig.7: General model of an adaptive system [3].
A. What is Adaptive Mobile Content?
The Internet is rapidly going mobile and you need to keep
pace with it to maintain your audience. Without a mobile
solution you will begin to lose more of your audience every
day. Vertuelle builds 'smart-content' solutions that effortlessly
adapt to mobile devices. We maintain one set of code for
centralized editing. This is the cornerstone of our adaptive
mobile content solution.
B. Empowering Your Audience to Access Your Content
Anywhere
Adaptive Mobile Content empowers your audience to
access your content anywhere. The problem with most web
content is that it is practically impossible to view effectively on
a mobile device. As mobile browsing increases, you will lose
more and more of your audience, unless you have a flexible
mobile content delivery platform. Vertuelle has the solution to
rapidly propel your content forward into this emerging mobile
world. Our Adaptive Mobile Content solution delivers
dynamically formatted content that can be efficiently viewed by
devices connecting via slow 1X, fast EDGE, faster 3G, or even
faster Wi-Fi. Be seen anywhere, with optimized content
delivery to mobile devices. Our solution even optimizes
images, video, and thumbnails for mobile devices
automatically.
C. One set of code for all devices
Mobile devices currently have more limited support for
web content. No matter how your website is designed, your
content will almost certainly have to be reformatted before
it reaches the device to achieve the best results possible.
Each time a web browser, or a mobile device connects to the
content developed by Vertuelle, the device's capabilities are
checked. The server determines the best content format for
the device and sends back content that is optimized and
formatted for optimal viewing. Vertuelle’s web-based
solutions deliver multiple formats using the same central set
of web based code. This approach is smart, economical,
easy to manage, and fast to deploy.
D. Adaptive Mobile Content in Action
When a device connects to the Adaptive Mobile
application, the server begins by detecting the device and its
capabilities. Devices differ significantly in their screen size,
plug-in support (often limited on mobile devices), video
playback support, etc. Each device type has a profile that is
used to determine what file formats and web page template
to use. Web server content templates reformat the web
pages specifically for the connected device. A template is
designed for each of the supported device types. It is even
possible to deliver specific versions for different web
browsers, if necessary. A template design is efficient
because site edits and design changes are restricted to a set
of template files. If site changes are required, only a few
files need to be updated and the entire site design is
changed. This is because our content is delivered using a
database-driven approach. This approach to digital media
ensures immediacy, efficiency, flexibility, and broad device
support. The outcome of this methodology is a rapidly
changing, dynamic, and widely available web presence.
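The device-detection and template-selection flow described above can be sketched as follows; the device profiles, template names, and User-Agent matching are invented for illustration and do not describe Vertuelle's actual implementation.

```python
# Hypothetical sketch of profile-based template selection on the server side;
# profiles, templates, and the user-agent matching are invented examples.
DEVICE_PROFILES = {
    "smartphone": {"template": "mobile.tpl", "image_max_width": 480, "video": "mp4-low"},
    "tablet":     {"template": "tablet.tpl", "image_max_width": 1024, "video": "mp4-high"},
    "desktop":    {"template": "desktop.tpl", "image_max_width": 1920, "video": "mp4-high"},
}

def detect_device(user_agent: str) -> str:
    """Very rough capability detection based on the User-Agent string."""
    ua = user_agent.lower()
    if "mobile" in ua or "phone" in ua:
        return "smartphone"
    if "tablet" in ua or "ipad" in ua:
        return "tablet"
    return "desktop"

def render(content: dict, user_agent: str) -> str:
    profile = DEVICE_PROFILES[detect_device(user_agent)]
    # a real server would pass the content through the selected template engine;
    # here we only report which template and media settings would be used
    return (f"render '{content['title']}' with {profile['template']} "
            f"(images <= {profile['image_max_width']}px, video: {profile['video']})")

print(render({"title": "Home"}, "Mozilla/5.0 (iPhone; Mobile)"))
```

Because site edits are confined to the template files and the content is database-driven, only the profile table and templates need to change when a new device class appears.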
XII. DEVELOPING THE USER INTERFACE
Usability concerns are typically not among the major
concerns of software engineers during the early stages of
design. It is common to find software engineers designing the
user interface as a last layer to be placed on top of the system,
above the function or business logic, in order to enable users to
access its functionality. This vision rests on the assumption that all the
application’s logic is at the functional level, and is independent
of the user interface. The user interface then is simply a passive
information transmission layer between the user and the
“application”. Such approaches compromise the quality of the
software system that is being produced both in terms of the
quality of the user’s experience and the quality and
maintainability of the implemented code. User interface layers
are, in practice, required to implement control logic. In current
applications, the control logic of the user interface is usually
tightly coupled with the logic of the functional layer. This
happens both in terms of the state and behaviour of user
interface objects as they reflect the underlying function. Both
layers must be adequately designed so that they can co-operate
in order to provide a high quality user interaction. An
inadequate or non-existent design specification leads to a user
interface layer that is developed in a more or less ad-hoc
fashion. This problem is exacerbated by current IDE tools
which provide support for the design and construction of the
graphical aspects of the user interface, but provide little support
for the design of the behavioural aspects. This leads to a
situation where the design or, more precisely, the
implementation, will require frequent update as problems are
found and improvements requested [2].
XIII. INTEGRATING USABILITY ANALYSIS INTO
SOFTWARE DEVELOPMENT
We have been working on a method for the design of the
user interface that involves joint analysis by software engineers
and human-factors experts. The approach taken is that depicted
in Figure 8. In it, design and verification become part of the
same development process, where verification is used to inform
design decisions. Four main steps can be identified for this type
of approach:
selecting what to verify;
building an appropriate model;
performing the verification;
analyzing the results.
Whenever usability issues must be considered, a model is built
that captures salient interactive features of the design. The
model is used to capture the intended design, and at this stage
different alternatives can be considered. It must be stressed that
these models are not prescriptive but descriptive. That is, they
are used not to express how the system should be implemented;
rather they are used to express what system should be
implemented. At this stage the system implementation is not of
interest, instead the focus is on the interaction between the
system and its users. The aim of the approach is that it might be
applied from the early design stages, when the question is
“what should be built?”, rather than “how should we build
it?". Once a satisfactory model is reached, the desired usability
properties must be expressed and afterwards verified against the
model. From an analysis point of view, the most interesting
situations are those where the verification fails. In this case, the
causes of the failure must be analyzed; this leads to a new
iteration of the process where either the model or the property
is modified. In order to enable the analysis of the
behaviour of non-trivial systems, models are built from
components using a modeling language with rigorous
semantics. Components are described using the notion of an
interactor, in the style of an object-like entity which is capable
of rendering (part of) its state into some presentation medium.
The state of each interactor is described by a set of attributes
and a rendering relation.
Fig.8: Integration of verification in development [2].
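A minimal sketch of an interactor-style component, with a state given by attributes plus a rendering relation that projects part of that state onto a presentation medium, is shown below; the class is an illustrative assumption, not the formal notation of [2].

```python
# Minimal sketch of an interactor: a state given by attributes plus a rendering
# relation that maps (part of) that state to a presentation medium.
class Interactor:
    def __init__(self, **attributes):
        self.attributes = attributes          # the interactor's state
        self.rendered = []                    # attributes that take part in rendering

    def render_attribute(self, name):
        """Declare that an attribute is part of the rendering relation."""
        self.rendered.append(name)

    def presentation(self):
        """Project the rendered part of the state onto the presentation medium."""
        return {name: self.attributes[name] for name in self.rendered}

login_dialog = Interactor(username="", password="", attempts=0)
login_dialog.render_attribute("username")     # password and attempts stay hidden
login_dialog.attributes["username"] = "alice"
print(login_dialog.presentation())            # {'username': 'alice'}
```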
XIV. USER INTERFACE PROCESSORS VS.
GENERAL-PURPOSE COMPUTERS
Duchamp has constructed a simple four-category
taxonomy for mobile computers: terminal vs. workstation
and disconnected-often vs. disconnected-rarely. It is not
clear which of these four categories will produce the best
mobile device for the users of these machines. While some
may argue that mobile devices should be general-purpose
computers, we opted instead for simple user interface
processors. A user interface processor (or a “disconnected-
rarely terminal” in Duchamp’s taxonomy) only performs
drawing and low-level event processing, leaving the rest of
an application’s computation to a remote server. Other
“disconnected-rarely terminals” include the Berkeley
Infopad and the Xerox ParcTab. User-centered design and
task analysis techniques stress that we should first determine
what tasks the intended user population would likely
perform on mobile devices. It was argued at a recent mobile
computing panel that much of the usage of mobile devices
will be to access the rapidly changing data at the home
office, rather than authoring new data. The current uses of
communications technology (i.e., pagers and telephones)
seem to support this conclusion. Placing most of the
computational power on the mobile device seems to be
overkill if this is how the machines are actually used. In
designing a device that has stringent constraints on price,
power consumption, and size, one must omit components
that are not absolutely necessary. Running applications on a
remote host and using the mobile device primarily for the
user interface solves some of the hard problems faced by
designers of mobile computer operating systems. First, the
system reinitialization problem becomes less complicated.
When the mobile device crashes, it can be rebooted and the
user can continue the application with no loss of data or
state. This assumes that the remote hosts are more reliable
than the mobile devices. The second major problem that the
user interface processor approach solves is the data
consistency problem. Instead of needing intelligent caching
and consistency protocols as in Coda, a user will have
access to the most current versions of their private and home
office data as long as their communications link is stable.
This is a reasonable assumption in a single building or
campus setting. The user interface processor approach fits
well with the idea that autonomous local file systems will
become increasingly undesirable. Although we focus on our
experience in building a user interface management system
for BNU, much of this discussion applies to all mobile
computers, regardless of where they fit in Duchamp’s
taxonomy [5].
XV. USAGE MODEL OF MOBILE INTERFACES IN
PERVASIVE COMPUTING ENVIRONMENTS
In this section we describe a specified schema which can
help software developers to implement a usable mobile
interface. A model based on our schema can illustrate a user’s
current situation in a pervasive computing environment and
required adaptations of the mobile interface to always meet a
user’s requirements. A usage model as our schema consists of
all relevant aspects which may have influence on a user’s usage
behaviour. In the development process of a planned mobile
interface, a concrete usage model iteratively emerges. The
resulting model extends the released mobile interface with a set
of rules that give knowledge of required adaptations of
interaction and presentation capabilities in each contextual
situation of a user. For example, the rule set specifies which
presentation to display when a user has interacted with the
graphical user interface of the mobile phone. Another example
might be a rule defining which interaction to support when the
physical environment has changed, e.g. when the noise level is
currently extremely high. We see four main components in our
schema from which the set of rules emerges: a model of a persona, a
model of the mobile interface, a model of the user context and a
model of the environmental context. A model of the persona
describes the user and her goals whereas the mobile interface
includes a description of the application including supported
interaction and presentation capabilities. The user and
environmental context can change a user’s situations in the
pervasive computing environment and can require an
adaptation of the mobile interface in the form of an adaptation of
the interaction and presentation [8].
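The rule set that maps contextual situations to adaptations of interaction and presentation can be sketched as follows; the conditions, thresholds, and adaptations are invented examples in the spirit of the usage model described above (e.g. the high-noise and privacy situations), not the actual rule format of [8].

```python
# Hypothetical sketch of a rule set mapping contextual situations to interface
# adaptations; conditions, thresholds, and adaptations are invented examples.
RULES = [
    # (condition on the context, adaptation of interaction/presentation)
    (lambda ctx: ctx.get("noise_level", 0) > 80,
     {"interaction": "graphical", "presentation": "visual_only"}),
    (lambda ctx: ctx.get("situational_goal") == "privacy",
     {"interaction": "graphical", "presentation": "muted"}),
    (lambda ctx: ctx.get("activity") == "walking",
     {"interaction": "speech", "presentation": "large_text"}),
]

def adapt(context: dict) -> dict:
    """Return the adaptation of the first matching rule, or defaults otherwise."""
    for condition, adaptation in RULES:
        if condition(context):
            return adaptation
    return {"interaction": "default", "presentation": "default"}

print(adapt({"noise_level": 95}))     # extremely noisy environment
print(adapt({"activity": "walking"}))
```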
A. Persona
Whenever mobile interfaces have to be developed, the
diversity of real users must be considered in the form of different
personas. Each persona gives useful input about a group of real
users, including information about their knowledge,
expectations and goals. We split the specification of a persona into
two parts, concerning a user's mental model and a user's goals,
because we consider both aspects relevant when developing a
set of rules for an adaptive mobile interface [8].
Mental Model- The mental model describes general
information about the user, such as information about a
user's general knowledge and memories. A user's
general attitude and behaviour are based on this mental
model and strongly influence a user's goals.
Users having a very negative attitude toward our planned
application will not use it at all [8].
Fig 9: Usage Model of Mobile Interfaces to Pervasive Computing
Environments [8].
Main and Situational Goals- We distinguish a user's main
goals, which can be attained by using the mobile
interface and its provided services, such as the main
goal of loading information about a smart object.
Apart from the main goals, a user has further goals
depending on the user's situation, such as the goal to
prevent any further cognitive load because the
user's activity level is currently high. Situational
goals can arise in several ways, such as privacy
goals, security goals and goals to prevent cognitive
or physical load. The mobile interface has to meet
the main goals by supporting the right services, but also
the situational goals by supporting appropriate
interaction and presentation capabilities of the
mobile interface. Thus, it is necessary to have
knowledge of each main and situational goal [8].
B. Mobile Interface
Appropriate services must be provided by the mobile
interface to complete tasks and attain a user’s main goals. A
task can be performed by executing a sequence of actions
and providing feedback. To also meet situational goals, the
mobile interface contains an interaction and presentation
model which gives information about adaptations. The
interaction and presentation model contains rules for
adaptations of the interface to meet main and situational
goals. The model might contain a rule to support the user
with graphical interaction capabilities instead of physical
ones whenever a situational goal is privacy. Thus, the interface
can support the user with appropriate interaction and
presentation capabilities and avoid unnecessary actions by
the user to change the mobile interface, because the interface
can automatically adapt to a user's needs.
C. User Context
Whenever users change their current situation, such as by
changing their location, they trigger user context which
influences their goals. User context can be triggered in
different ways. We consider it useful to differentiate the
user context into graphical and physical user context [8].
Graphical User Context- Users can trigger context by
interacting with the graphical user interface presented
on the mobile interface, which may be text-based or
menu-based. We call this context graphical user
context. After a user has triggered a graphical user
context, the interface can adapt, e.g. by displaying
another screen or playing a sound signal.
Physical User Context- Physical user context is
triggered whenever users physically interact with their
pervasive computing environment. Users can either
directly interact with their environment, e.g. by
touching a smart object via the mobile phone, or
indirectly trigger physical user context.
XVI. ADAPTED USER-CENTRED DEVELOPMENT
PROCESS FOR MOBILE PERVASIVE INTERFACES
An adapted iterative user-centred design process is required,
whose basic objective is to find the best-conforming usage model
to create a user-centred mobile interface to a pervasive
computing environment. As the basis for our adapted process
we used the human-centred design process formally specified
in ISO 13407 [10]. The main goal of the ISO specification is
to place the user at the centre of the development process. The
user is involved as often as possible to optimize applications for
a user's goals and needs. This ISO specification defines several
steps of a process to find a user's requirements: understand and
specify the context of use, specify the user and organizational
requirements, produce a design solution, and evaluate the design
against requirements. Identified issues are used as input to change
the context of use and to iterate the process as long as a user's
requirements have not been correctly matched. The figure
shows our adapted user-centred development process for
mobile interfaces to pervasive computing environments. The
process consists of a conceptual and an analysis phase as well as
an iterative design phase. The iterative design phase consists of
specification, implementation, evaluation and analysis.
A. Conceptual Phase
Whenever developers plan to implement a mobile interface
to a pervasive computing environment, it is recommended to
first specify a basic concept to reduce the number of possible
solutions. The result of the conceptual phase should be a basic
description of the mobile interface, a specification of the target
group, the requirements of users and developers as well as a
very basic specification of the usage model.
B. Analysis Phase
After having a conceptual specification of the mobile
interface, there is a need to evaluate it before starting to consider
its explicit modelling and implementation. People of the target
group must be asked for their attitude toward the concept.
Moreover, there is a need to find out users' abilities. A result of
the analysis phase is the reviewed specification of the concept.
In addition, a specification of the personas and the scenarios for
later user tests should be an outcome of the analysis phase.
These different personas are based on the identified users'
abilities and attitudes.
C. Iterative Design Phase
The objective of the iterative design phase is to find a usage
model and a set of rules for the required adaptations of the planned
mobile interface so that it always best meets a user's needs. For each
persona identified in the analysis phase, a set of rules has to be
defined.
Specification- During the specification phase the
usage model and the set of rules are specified and
adapted for the planned mobile interface to a
pervasive computing environment.
Implementation- The set of rules from the
specification phase is used for the implementation
of different versions of the mobile interface.
Evaluation- After a new version of the planned
mobile interface has been developed, the interface
and its set of rules must be evaluated in expert
and real-user tests. For the real-user tests,
the specified scenarios of the analysis phase are
used. These scenarios should include all testing
situations needed to investigate the relevant user and
environmental context.
Analysis- The results of the evaluation can be
analyzed to find out whether the usage model and
rule set conform to a user's needs and expectations or
not. The resulting issues give valuable input for
modifications in the specification [8].
XVII. CONCLUSION
In mobile environments, the design and implementation of
the user interface needs to accommodate the diversity of
contexts and the scarcity of resources. Existing approaches tend
to generate application-specific solutions whereby the
internal mechanisms that support adaptation are entwined
with the functional code of the application.
Although adaptive user interfaces seem to be the
solution able to deal with the growing diversity of usage
contexts, devices and users, there is also a downside.
Adequate support can be developed to improve the benefits
and reduce the costs. However, further research is necessary
to determine the factors influencing the exact cost-benefit
trade-offs for usability of adaptive user interfaces.
REFERENCES
[1] Joëlle Coutaz and Gaëlle Calvary, "HCI and Software
Engineering for User Interface Plasticity", in
Human-Computer Interaction Handbook:
Fundamentals, Evolving Technologies, and Emerging
Applications, Third Edition, Julie A. Jacko (Ed.), 2012,
pp. 1195-1220.
[2] José Creissac Campos, Michael D. Harrison, and
Karsten Loer, "Verifying user interface behaviour with
model checking", Department of Computer Science,
University of York, Heslington, York YO10 5DD, UK.
[3] David Benyon, "Adaptive Systems: a solution to
usability problems", Computing Department, Open
University, Milton Keynes, MK7 6AA, UK.
[4] Nebeling M., Speicher M. and Norrie M. C., "W3Touch:
Metrics-based Web Content Adaptation for Touch", to
appear in Proceedings CHI'13, Paris.
[5] James A. Landay and Todd R. Kaufmann, "User Interface
Issues in Mobile Computing", School of Computer Science,
Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh,
PA 15213, USA.
[6] Anjoo Navalkar, "Usability Engineering – Quality Approach
(ISO 13407)".
[7] Anthony Jameson, "Adaptive Interfaces and Agents",
DFKI, German Research Center for Artificial Intelligence /
International University in Germany.
[8] Karin Leichtenstern, Elisabeth André, "User-Centred
Development of Mobile Interfaces to a Pervasive Computing
Environment", Institute of Computer Science, University of
Augsburg, Eichleitnerstr. 30, 86135 Augsburg, Germany.
[9] A. Cooper, About Face: The Essentials of User Interface
Design, John Wiley & Sons, Inc., New York, NY, USA,
1995.
[10] ISO-13407:
http://www.usabilitynet.org/tools/13407stds.htm.
[11] Hosub Lee, Young Sang Choi, and Yeo-Jin Kim, Samsung
Electronics Co., Ltd., "An Adaptive User Interface Based
on Spatiotemporal Structure Learning".
[12] Paternò, F., Russino, A., Santoro, C.: Remote
evaluation of Mobile Applications. TAMODIA 2007,
Toulouse, France, Vol. 4849, ISBN 978-3-540-77221-7,
pp.155-168, 2007.
[13] Buchholz, G., Engel, J., Märtin, C., Propp, S.: Model-Based
Usability Evaluation - Evaluation of Tool Support. HCII
2007, Beijing, China, pp.1043-52, 2007.
[14] Rubin, J., Handbook of usability testing, Wiley technical
communication library, Hudson, T. (ed.), 1994.
[15] Mori, G., Paternò, F., Santoro, C.: CTTE: Support for
Developing and Analyzing Task Models for Interactive
System Design. IEEE Trans. Softw. Eng. 28, pp. 797-813,
2002.
[16] Paternò, F.: Model-Based Design and Evaluation of
interactive applications. Springer ISBN 1-85233-155-0,
1999.
[17] Sinnig, D., Wurdel, M., Forbrig, P., Chalin, P., Khendek, F.:
Practical Extensions for Task Models. TAMODIA 2007, pp.
42-55, Toulouse, France, 2007.
[18] Stefan Propp, Gregor Buchholz, Peter Forbrig, Task
Model-based Usability Evaluation for Smart Environments,
University of Rostock, Institute of Computer Science, Albert-
Einstein-Str. 21, 18059 Rostock, Germany.
[19] Maik Wurdel, Stefan Propp and Peter Forbrig, HCI Task
Models and Smart Environments, University of Rostock,
Department of Computer Science, Albert-Einstein-Str.
21, 18059 Rostock, Germany.
[20] Hilbert, D. M. and D. F. Redmiles (2000). "Extracting
usability information from user interface events." ACM
Comput. Surv. 32(4): 384-421.
[21] Malý, I. and P. Slavík (2007). Towards Visual Analysis
of Usability Test Logs Using Task Models. Task Models
and Diagrams for Users Interface Design: 24-38.
[22] Propp, S. and G. Buchholz (2007b). Visualization of Task
Traces. Interact 2007 Workshop on New Methods in User-
Centered System Design. Rio de Janeiro, Brazil.
Usability evaluation can be accomplished in different ways, depending on individual information interests and specific constraints. In some cases the test user and the usability evaluator are located at different places, for instance in mobile environments or in the case of Internet websites, where the user can’t be observed as in a laboratory situation. The usage of multi-modal interfaces introduces some additional constraints. To overcome the problems, techniques of remote usability testing are applied. The data recorded during the test is structured und afterwards analyzed. A user centric approach structures the data based on tasks that are intended by the user. A task model describes the tasks composed of subtasks and temporal relationships between them. This paper introduces and evaluates two tools, AWUSA and ReModEl, which use task modeling for remote usability evaluation.