ABSTRACT: Intelligent computer applications need to adapt their behaviour to contexts and users, but conventional classifier adaptation methods require long data collection and/or training times. Therefore classifier adaptation is often performed as follows: at design time the application developers define typical usage contexts and provide a reasoning model for each of these contexts, and at runtime an appropriate model is selected from the available ones. Typically, the definition of usage contexts and reasoning models relies heavily on domain knowledge. In practice, however, many applications are used in such diverse situations that no developer can predict them all, let alone collect adequate training and test databases for each one. Such applications have to adapt to a new user or an unknown context at runtime, just from interaction with the user, preferably in a fairly lightweight way, that is, requiring limited user effort to collect training data and limited time to perform the adaptation. This paper analyses adaptation trends in several emerging domains and outlines promising ideas proposed for making multimodal classifiers user- and context-specific without significant user effort, detailed domain knowledge, or complete retraining of the classifiers. Based on this analysis, the paper identifies important application characteristics and presents guidelines for taking these characteristics into account in adaptation design.
ABSTRACT: Interaction in smart environments should be adapted to the users’ preferences, e.g., by utilising modalities appropriate for the situation. While manual customisation of a single application may be feasible, this approach will require too much user effort in the future, when a user interacts with numerous applications with different interfaces, such as a smart car, a smart fridge, or a smart shopping assistant. Supporting user groups that jointly interact with the same application poses additional challenges: humans tend to respect the preferences of their friends and family members, and thus the preferred interface settings may depend on all group members. This work proposes to decrease the manual customisation effort by addressing the cold-start adaptation problem, i.e., predicting the interface preferences of individuals and groups for new (unseen) combinations of applications, tasks, and devices, based on knowledge of the preferences of other users. For the predictions we suggest several reasoning strategies and employ a classifier selection approach to automatically choose the most appropriate strategy for each interface feature in each new situation. The proposed approach is suitable for cases where long interaction histories are not yet available, and it is not restricted to similar interfaces and application domains, as we demonstrate by experiments on predicting the preferences of individuals and groups for three different application prototypes: a recipe recommender, a cooking assistant, and a car servicing assistant. The results show that the proposed method handles the cold-start problem in various types of unseen situations fairly well: it achieved an average prediction accuracy of 72±1%. Further studies on user acceptance of predictions with two different user communities have shown that prediction is a desirable feature for applications in smart environments, even when the predictions are not very accurate and when users do not perceive manual customisation as very time-consuming.
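The classifier selection idea described above can be illustrated with a minimal sketch. All strategy names, the scoring scheme, and the toy data below are our own illustrative assumptions, not the paper's actual strategies: for each interface feature, every candidate reasoning strategy is scored by leave-one-out accuracy on the preferences already known, and the best-scoring strategy is used for the new, unseen situation.

```python
# Hypothetical sketch of classifier selection among reasoning strategies.
# Strategy names and data are illustrative assumptions, not from the paper.
from collections import Counter

def majority_strategy(history, user, situation):
    """Predict the setting most users chose anywhere."""
    votes = [pref for (_, _), pref in history.items()]
    return Counter(votes).most_common(1)[0][0]

def own_history_strategy(history, user, situation):
    """Predict the user's own most frequent setting in other situations."""
    votes = [pref for (u, _), pref in history.items() if u == user]
    return Counter(votes).most_common(1)[0][0] if votes else None

STRATEGIES = [majority_strategy, own_history_strategy]

def select_and_predict(history, user, situation):
    # Score each strategy by leave-one-out accuracy on the known preferences,
    # then apply the winner to the unseen (user, situation) combination.
    best, best_score = STRATEGIES[0], -1.0
    for strat in STRATEGIES:
        hits = 0
        for key, true_pref in history.items():
            rest = {k: v for k, v in history.items() if k != key}
            if rest and strat(rest, key[0], key[1]) == true_pref:
                hits += 1
        score = hits / len(history)
        if score > best_score:
            best, best_score = strat, score
    return best(history, user, situation)

# Toy data: (user, situation) -> preferred modality for one interface feature.
history = {
    ("alice", "cooking"): "speech",
    ("alice", "driving"): "speech",
    ("bob", "cooking"): "touch",
    ("bob", "driving"): "speech",
}
print(select_and_predict(history, "carol", "servicing"))
```

In this toy run the majority strategy scores best on the known data, so the cold-start prediction for the unseen user falls back to the overall majority setting.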
No preview · Article · Dec 2013 · Journal on Multimodal User Interfaces
ABSTRACT: To recognise the same human reaction (for example, strong excitement) in different contexts, the customary behaviours in these contexts have to be taken into account; e.g., a happy sports audience may cheer for a long time, while a happy theatre audience may produce only short bursts of laughter so as not to interrupt the performance. Tailoring recognition algorithms to contexts can be achieved by building either a context-specific or a generic system. The former is individually trained for each context to recognise sets of characteristic responses, whereas the latter, in contrast, adapts to the context via a significantly more lightweight modification of parameters. This paper follows the latter approach and proposes a simple modification of a hidden Markov model (HMM) classifier that enables end users to adapt the generic system to a context, or to the personal perception of an annotator, by labelling a fairly small number of data samples from each context. For better adaptability to the limited number of user annotations, the proposed semi-supervised HMM classifier employs the maximum posterior marginal rather than the more conventional maximum a posteriori decision rule. The proposed user- and context-adaptable semi-supervised HMM classifier was tested on recognising the excitement of a show audience in three contexts (a concert hall, a circus, and a sport event), which differ in how excitement is expressed. In our experiments the proposed classifier recognised the reactions of a non-neutral audience with 10% higher accuracy than conventional HMM- and support vector machine-based classifiers.
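The maximum posterior marginal (MPM) decision rule mentioned above can be sketched as follows. This is a generic illustration of MPM decoding for a discrete HMM, not the paper's implementation: instead of decoding the single most probable state sequence (the maximum a posteriori rule, computed by Viterbi), forward-backward posteriors are computed and each frame is labelled with its individually most likely state. The toy model parameters are invented.

```python
# Generic MPM decoding for a discrete-observation HMM (illustrative sketch).
import numpy as np

def mpm_decode(pi, A, B, obs):
    """pi: initial probs (S,); A: transitions (S,S); B: emissions (S,O)."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                        # unnormalised posteriors
    gamma /= gamma.sum(axis=1, keepdims=True)   # per-frame state posteriors
    return gamma.argmax(axis=1)                 # MPM: argmax per frame

# Toy 2-state model ("neutral" vs "excited") with 2 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
states = mpm_decode(pi, A, B, [0, 0, 1, 1, 1])
print(states)
```

Because each frame is decided independently from its marginal posterior, MPM minimises the expected number of per-frame errors, which is a natural fit when only a small number of labelled samples per context is available.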
No preview · Article · Jun 2012 · Multimedia Systems
ABSTRACT: So-called "smart products" try to recognise the user's context and to deliver relevant information on their own initiative, e.g., to advise buying windscreen washer fluid or stirring an overheating meal. As the variety of usage situations grows, it may become difficult for users to configure interaction manually in every new case, e.g., to specify via which modalities different message types should be delivered. This work proposes several strategies to predict the interaction preferences of individual users and user groups for a new context, based on the preferences of these and other users in other contexts and the preferences of other users in the target context. In experiments with smart product configurations set by 21 test subjects for different contexts (new and known tasks in the cooking and car servicing domains, performed alone and in a group), the best of the proposed preference mediation strategies predicted on average 75% of the settings chosen by individuals and groups.
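Two of the kinds of mediation strategy described above can be sketched in a few lines. The strategy names and toy data are our own illustrative assumptions, not the paper's: one strategy predicts a group's setting from the members' own settings in other contexts, the other from the settings other users chose in the target context.

```python
# Hypothetical illustration of two simple preference-mediation strategies.
from collections import Counter

def from_members_history(prefs, group, target_ctx):
    """Majority vote over group members' settings in all other contexts."""
    votes = [s for (u, c), s in prefs.items() if u in group and c != target_ctx]
    return Counter(votes).most_common(1)[0][0] if votes else None

def from_other_users(prefs, group, target_ctx):
    """Most common setting chosen by non-members in the target context."""
    votes = [s for (u, c), s in prefs.items()
             if u not in group and c == target_ctx]
    return Counter(votes).most_common(1)[0][0] if votes else None

# (user, context) -> chosen output modality for one message type (toy data).
prefs = {
    ("ann", "cooking_alone"): "speech",
    ("ann", "cooking_group"): "screen",
    ("ben", "cooking_alone"): "speech",
    ("eve", "car_servicing"): "screen",
}
group = {"ann", "ben"}
print(from_members_history(prefs, group, "car_servicing"))  # members' habits
print(from_other_users(prefs, group, "car_servicing"))      # target context
```

The two strategies can disagree, as in this toy run; which one to trust in a given situation is exactly the selection problem the experiments evaluate.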
ABSTRACT: A cooking assistant is an application that needs to find a trade-off between providing efficient help to its users (e.g., reminding them to stir a meal if it is about to burn) and avoiding user annoyance. This trade-off may vary across contexts, such as cooking alone or in a group, or cooking a new or a known recipe. The results of the user study presented in this paper show which features of a multimodal interface users perceive as socially acceptable or unacceptable in different situations, and how this perception depends on the user's age.
ABSTRACT: Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia (MUM'10). Mobile communication technology has become an integral part of our everyday lives. Yet, do we recognise the patterns of how we communicate with each other? In this paper we describe our work on increasing users' self-awareness of their communication with their social network via a mobile phone. We have developed two mobile phone applications which log and visualise different elements of these patterns, enabling users to recognise the activity and responsiveness of their communication behaviour more easily. We conducted a user study with 50 participants, in which we compared the two applications and derived insights into visualising personal mobile communication patterns. Our research showed that although people were mostly well aware of the patterns of their social activity, the majority of them still found the applications interesting enough to view on a daily basis. We also report findings on increased awareness of communication gaps, unexpected activity histories, and unbalanced communication behaviour with respect to incoming and outgoing calls/messages.
ABSTRACT: The majority of recommender systems require explicit user interaction (ranking of movies and TV programmes and/or their metadata, such as genres, actors etc.), which requires user time and effort. Furthermore, such ranking is often done separately by each person, while merging these manually acquired individual preferences in multi-user environments remains largely an unsolved problem. This work presents a method for learning a joint model of a multi-user environment from implicit interactions: the programme choices which family members make together and separately. The proposed method makes it possible to adapt to the practices of each particular family and to protect family privacy, because the joint family model is learned for each family separately. Furthermore, since the accuracy of machine learning methods is family-dependent and none of the machine learning methods outperforms the others for all families, a fairly lightweight classifier ensemble selection approach is applied for better adaptation to the specifics of each family. In tests on the real-life TV viewing histories of 20 families, acquired over 5 months, the classifier ensemble achieved an accuracy comparable with that of systems which require explicit user ratings: an average recall of 57% at an average precision of 30%, despite only a few programme metadata descriptors being available.
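The per-family selection idea can be sketched with minimal stand-in predictors. The two rules and the toy history below are our own illustrative assumptions, not the paper's classifiers: each candidate predictor is scored on the family's own viewing history, and the best-scoring one is kept for that family, since no single method wins for every family.

```python
# Rough sketch of per-family classifier selection with invented stand-in rules.

def rule_genre(history, programme):
    """Predict 'watch' if the programme's genre was ever watched before."""
    watched_genres = {p["genre"] for p, w in history if w}
    return programme["genre"] in watched_genres

def rule_slot(history, programme):
    """Predict 'watch' if anything was watched in the same time slot."""
    watched_slots = {p["slot"] for p, w in history if w}
    return programme["slot"] in watched_slots

def pick_for_family(history, candidates):
    """Leave-one-out accuracy on the family's own history selects the rule."""
    def score(rule):
        hits = sum(
            rule(history[:i] + history[i + 1:], p) == w
            for i, (p, w) in enumerate(history)
        )
        return hits / len(history)
    return max(candidates, key=score)

history = [  # (programme metadata, was it watched by this family?)
    ({"genre": "news", "slot": "evening"}, True),
    ({"genre": "sports", "slot": "evening"}, True),
    ({"genre": "soap", "slot": "morning"}, False),
    ({"genre": "news", "slot": "morning"}, True),
]
best = pick_for_family(history, [rule_genre, rule_slot])
print(best.__name__)
```

For this toy family the genre rule fits the history better than the time-slot rule; a different family's history could select the opposite, which is the point of doing the selection per family.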
No preview · Article · Jul 2009 · Multimedia Systems
ABSTRACT: Unobtrusive user authentication is more convenient than explicit interaction and can also increase system security because it can be performed frequently, unlike the current “once explicitly and for a long time” practice. Existing unobtrusive biometrics (e.g., face, voice, gait) do not perform sufficiently well for high-security applications, however, while reliable biometric authentication (e.g., fingerprint or iris) requires explicit user interaction. This work presents experiments with a cascaded multimodal biometric system, which first performs unobtrusive user authentication and requires explicit interaction only when the unobtrusive authentication fails. Experimental results obtained for a database of 150 users show that even with a fairly low performance of unobtrusive modalities (Equal Error Rate above 10%), the cascaded system is capable of satisfying a security requirement of a False Acceptance Rate less than 0.1% with an overall False Rejection Rate of less than 0.2%, while authenticating unobtrusively in 65% of cases.
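The cascade logic can be sketched in a few lines. The thresholds and the score model below are invented for illustration: the unobtrusive modality accepts or rejects only when its match score is confidently high or low, and otherwise escalates the decision to an explicit high-accuracy modality such as a fingerprint reader.

```python
# Toy sketch of cascaded authentication (thresholds and scores are invented).

def cascaded_authenticate(unobtrusive_score, explicit_check,
                          accept_thr=0.9, reject_thr=0.1):
    """Return (accepted, explicit_interaction_needed)."""
    if unobtrusive_score >= accept_thr:
        return True, False           # confident accept, no user effort
    if unobtrusive_score <= reject_thr:
        return False, False          # confident reject, no user effort
    return explicit_check(), True    # uncertain: fall back to explicit biometric

# explicit_check stands for, e.g., a fingerprint match; here a stub.
print(cascaded_authenticate(0.95, lambda: True))   # -> (True, False)
print(cascaded_authenticate(0.5, lambda: True))    # -> (True, True)
print(cascaded_authenticate(0.02, lambda: True))   # -> (False, False)
```

Tuning the two thresholds trades off how often the explicit stage is invoked against the overall False Acceptance and False Rejection Rates, which is the trade-off the experiments quantify.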
Full-text · Article · Feb 2009 · Image and Vision Computing
ABSTRACT: International Journal of Communications, Network and System Sciences, Vol. 2, No. 3, 211-221. Modern mobile devices have several network interfaces and can run various network applications. In order to remain always best connected, events need to be communicated through the entire protocol stack in an efficient manner. Current implementations can handle only a handful of low-level events that may trigger actions for mobility management, such as signal strength indicators and cell load. In this paper, we present a framework for managing mobility triggers that can deal with a greater variety of triggering events, which may originate from any component of the node's protocol stack as well as from mobility management entities within the network. We explain the main concepts that govern our trigger management framework and discuss its architecture, which aims at operating in a richer mobility management framework, enabling the deployment of new applications and services. We address several implementation issues, such as event collection and processing, storage, and trigger dissemination, and introduce a real implementation for commodity mobile devices. We review our testbed environment and provide experimental results showcasing a lossless streaming video session handover between a laptop and a PDA using mobility and sensor-driven orientation triggers. Moreover, we empirically evaluate and analyse the performance of our prototype. We position our work and implementation within the Ambient Networks architecture and common prototype, centring in particular on the use of policies to steer operation. Finally, we outline current and future work items.
ABSTRACT: The majority of recommender systems require explicit user interaction (ranking of movies and TV programs and/or their metadata, such as genres, actors etc.), which requires user time and effort. Furthermore, such ranking is often done separately by each person, while merging these manually acquired preferences in multi-user environments remains largely an unsolved problem. This work presents a method to learn a model of a multi-user environment in an intelligent home from implicit interactions: the choices which family members make together and separately. In tests on the TV viewing histories of twenty families, acquired over two months, the method achieved a prediction accuracy comparable with that of systems which require explicit user ratings: the set of TV programs actually viewed during each test session (the average set size was 2.2 programs per viewing session) was recommended among the five top choices in 60% of cases on average, despite training on small data sets.
ABSTRACT: This work presents a user modelling service for a Smart Home, an intelligent context-aware environment providing personalized proactive support to its inhabitants. The diversity of Smart Home applications imposes various technical and implementation requirements, such as the need to model the dependency of user preferences on context in a way that is unified and convenient both for users and for application developers. This paper introduces the service architecture and the currently implemented functionalities: stereotype-based profile initialisation; a GUI for the acquisition of context-dependent and context-independent preferences, which provides an easy way to create one's own concepts for the context ontology and to map them onto already existing concepts; and a method to learn context-dependent user preferences from the interaction history.
ABSTRACT: The need for authenticating users of ubiquitous mobile devices is becoming ever more critical with the increasing value of the information stored in the devices and of the services accessed via them. Passwords and conventional biometrics such as fingerprint recognition offer fairly reliable solutions to this problem, but these methods require explicit user authentication and are used mainly when a mobile device is being switched on. Furthermore, conventional biometrics are sometimes perceived as privacy threats. This paper presents an unobtrusive method of user authentication for mobile devices in the form of recognition of the walking style (gait) and voice of the user while carrying and using the device. While speaker recognition in noisy conditions performs poorly, combined speaker and accelerometer-based gait recognition performs significantly better. In tentative tests with 31 users the Equal Error Rate varied between 2% and 12% depending on noise conditions, typically less than half of the Equal Error Rates of the individual modalities.
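One common way to combine two verification modalities, sketched here purely for illustration (the weighting scheme and numbers are our assumptions, not the paper's fusion method), is a weighted sum of normalised match scores, with the speech weight reduced as the audio gets noisier.

```python
# Minimal illustration of score-level fusion of speaker and gait verification.
# The SNR-based weighting and thresholds are invented assumptions.

def fuse_scores(speech_score, gait_score, snr_db, threshold=0.5):
    """Return (fused_score, accepted); scores are match scores in [0, 1]."""
    # Trust the speech score more when the signal-to-noise ratio is high.
    w_speech = min(max(snr_db / 30.0, 0.0), 1.0)
    fused = w_speech * speech_score + (1.0 - w_speech) * gait_score
    return fused, fused >= threshold

print(fuse_scores(0.9, 0.6, snr_db=30))  # clean audio: speech dominates
print(fuse_scores(0.2, 0.8, snr_db=3))   # noisy audio: gait dominates
```

The second call shows the motivation for fusion: even when the speaker score alone would fail, the gait score carries the decision in noisy conditions.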
ABSTRACT: Personal information needs depend on long-term interests and on current and future situations (contexts): people are mainly interested in weather forecasts for future destinations, and in toy advertisements when a child's birthday approaches. As computers' capabilities for being aware of users' contexts grow, the users' willingness to manually set rules for context-based information retrieval will decrease. Thus computers must learn to associate user contexts with information needs in order to collect and present information proactively. This work presents experiments with training an SVM (Support Vector Machine) classifier to learn user information needs from calendar information.
ABSTRACT: Personal information needs and multimedia preferences depend on long-term user interests as well as on the current situation (context) of a person. For example, adults are mainly interested in toy advertisements when a child's birthday approaches, and the selection of videos to watch strongly depends on who is present in the room: parents with children, or adults only. As the capabilities of personal computers and smart environments for being aware of users' contexts grow, the users' willingness to manually set rules for context-based information and/or multimedia retrieval will decrease. Thus computers must learn to associate user contexts with information needs in order to collect and present information proactively. This work presents initial experiments with two reasoning methods, SVM (Support Vector Machines) and CBR (Case-Based Reasoning), for learning the dependency of user preferences on context, which we aim to recognise in smart homes. Experimental results show that each of the methods has its own advantages, and that combining the recommendations provided by both methods achieves a fairly high recall rate with acceptable precision.
ABSTRACT: This work presents a method and experiments with privacy protection in a multimedia and information retrieval system currently being developed in the Amigo project. Since the Amigo environment is capable of recognising user situations (contexts), the recommender system takes both long-term user interests and contexts into account when providing recommendations. We propose to utilise context recognition for privacy protection as well, and suggest a method which either allows or suspends recommending an item via a non-personal UI, depending on the current context and the retrieval history. This work studies how privacy protection affects the precision and recall of recommendations. Two privacy-protection techniques were explored: protection based on the user's social context (other people around the user), and protection based on the user's location. Experimental results on data collected via user interviews show that social context-based protection works better than location-based protection, and that during normal family life privacy protection does not decrease system performance significantly; in some cases system performance even improved.
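The social context-based gating can be sketched as a simple predicate. The item flags and context fields below are illustrative assumptions, not the Amigo system's actual data model: a recommendation marked as private is suspended on a shared, non-personal UI whenever the social context includes people outside the family.

```python
# Hypothetical sketch of context-based privacy gating for a shared UI.

def allow_on_shared_ui(item, context):
    """Suspend private recommendations when non-family members are present."""
    if not item["private"]:
        return True                      # public items are always allowed
    return not context["guests_present"]  # private items need a private context

item = {"title": "medical documentary", "private": True}
print(allow_on_shared_ui(item, {"guests_present": True}))   # suspended
print(allow_on_shared_ui(item, {"guests_present": False}))  # allowed
```

Because suspension only delays a recommendation rather than discarding it, such a gate mainly affects recall in contexts with guests, which matches the precision/recall trade-off the experiments measure.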
ABSTRACT: Forthcoming wireless communication systems, well represented by the term "beyond 3G", are likely to impose new requirements that go beyond the traditional view of today's networking paradigm. In particular, mobility procedures will no longer be restricted to changing the point of attachment to the network. The work presented in this paper aims at proving, following a fully experimental approach, the feasibility of some architectural components of a mobility control space, which has been designed in the context of the Ambient Networks project. In particular, in this study we focused on and successfully realised two concepts: a facility for triggering mobility events, and support for moving networks.
ABSTRACT: Mobile device context-aware features should be closely coupled to end-user demands. This is achieved by enabling end-user development of context-aware applications. A context framework and a tool are presented for facilitating easy customization of context-aware features in existing mobile terminal applications. A blackboard-based context framework for mobile devices is extended with a component for handling user-defined context-action rules and a component for activating application actions. Example cases of context-aware features customized with the implemented framework and tool are presented, with a discussion of the potential scope of end-user development on mobile devices.
ABSTRACT: A wearable context-aware terminal with a network connection and spoken command input is presented. The terminal can be used, for example, by janitors or other maintenance personnel for retrieving and logging information related to a location, such as a room. The main context cues used are the user's identity and location. The user is identified by biometrics, which are also used to prevent unauthorized use of the terminal and of the information accessible through it. Location information is acquired using the signal strength information of the existing Wireless Local Area Network (WLAN) infrastructure. Since the wearable terminal is envisaged to be used by maintenance personnel, it was considered important to offer the possibility of hands-free operation using spoken commands. Tentative experiments show that this approach might be useful for maintenance personnel.
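WLAN signal-strength localisation of the kind mentioned above is often done by fingerprint matching; the following sketch assumes that approach and invents all fingerprints and readings (the abstract does not specify the matching method): the room whose stored signal-strength fingerprint lies closest to the current scan is taken as the user's location.

```python
# Simplified nearest-fingerprint WLAN localisation (all values invented).
import math

fingerprints = {  # room -> mean RSSI (dBm) per visible access point
    "room_101": {"ap1": -40, "ap2": -70},
    "room_102": {"ap1": -65, "ap2": -45},
}

def locate(scan):
    """Return the room whose fingerprint is closest to the current scan."""
    def dist(fp):
        return math.sqrt(sum((scan[ap] - fp[ap]) ** 2 for ap in fp))
    return min(fingerprints, key=lambda room: dist(fingerprints[room]))

print(locate({"ap1": -42, "ap2": -68}))
```

A scan measuring strong ap1 and weak ap2 falls near the room_101 fingerprint, so that room is reported as the location.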