The Effect of Co-adaptive Learning & Feedback in Interactive Machine Learning
Michael Zbyszyński
Goldsmiths, University of London
London, UK
m.zbyszynski@gold.ac.uk
Balandino Di Donato
Goldsmiths, University of London
London, UK
b.didonato@gold.ac.uk
Atau Tanaka
Goldsmiths, University of London
London, UK
a.tanaka@gold.ac.uk
ABSTRACT
In this paper, we consider the effect of co-adaptive learning on the training and evaluation of real-time,
interactive machine learning systems, referring to specific examples in our work on action-perception
loops, feedback for virtual tasks, and training of regression and temporal models. Through these studies
we have encountered challenges when designing and assessing expressive, multimodal interactive
systems. We discuss those challenges to machine learning and human-computer interaction, proposing
future directions and research.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the
full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior
specific permission and/or a fee. Request permissions from permissions@acm.org.
Glasgow ’19, May 04, 2019, Glasgow, UK
©2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
KEYWORDS
human-centred machine learning, co-adaptation
INTRODUCTION
Human interaction with technology can be described by a process of co-adaptation [8], where human
users adapt to technological tools while simultaneously shaping those tools to better fit their own
needs. Co-adaptation is especially evident in HCI systems which use interactive machine learning
(IML), where users are cyclically recording training examples, training new models, and evaluating
model performance. Evaluation of model performance is enabled by feedback, either directly from the
main output of a model or from secondary audio or visual feedback related to the model’s performance.
Human users decide if a model is adequately trained, or if training data need to be adjusted before
another model is trained. In this position paper, we present work where we have observed significant
complexity in the human-machine interaction loop, involving co-adaptive learning from both ML
models and humans engaged with these models.
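As a schematic illustration of this cyclical workflow (not any specific system of ours), the sketch below uses scikit-learn and synthetic data; the feature sizes, the regressor, and the stopping criterion are all illustrative assumptions.

```python
# Minimal sketch of the IML cycle described above: record examples, train a
# model, evaluate via feedback, and let the human decide whether to iterate.
# The "user" is simulated with random data purely for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def record_examples(n=20):
    # Placeholder for a human demonstrating gesture -> output pairs.
    X = rng.uniform(size=(n, 8))          # e.g. 8 sensor features per frame
    y = X.sum(axis=1, keepdims=True)      # e.g. 1 synthesis parameter
    return X, y

X_train, y_train = record_examples()
for iteration in range(3):                               # user-driven retraining cycles
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0).fit(X_train, y_train.ravel())
    X_live, y_live = record_examples()                    # new performance with the model
    feedback = model.predict(X_live)                      # drives audio/visual feedback
    if np.mean(np.abs(feedback - y_live.ravel())) < 0.1:  # human judges "good enough"
        break
    X_train = np.vstack([X_train, X_live])                # otherwise adjust training data
    y_train = np.vstack([y_train, y_live])
```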
Our work focuses on creating real-time, multimodal interactive systems controlled by biosignals –
specifically electromyography (EMG) – along with other sensors (e.g. IMUs). EMG sensors measure
the electrical activity of skeletal muscles, and together with IMUs can be used to analyse body
movements and gestures. These real-time systems are a special case for IML. In contrast to the IML
image classification problem discussed in Fails and Olsen [4], in our use cases both the training data
and the inputs to trained models are generated on-the-fly by human performers. Furthermore, EMG
offers particular challenges to HCI design [11]. Individuals have different anatomies and employ their
muscles differently, so a “one-size-fits-all” interaction mapping of EMG data is very limited.
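A minimal sketch of the kind of amplitude feature extraction commonly applied to EMG before it reaches a model is shown below; the channel count, sample rate, and window sizes are hypothetical values, not those of our systems.

```python
# A common first step for EMG-driven interaction: a per-channel amplitude
# envelope computed as windowed RMS over the raw signal.
import numpy as np

def rms_envelope(emg, window=40, hop=10):
    """emg: (n_samples, n_channels) raw EMG; returns (n_frames, n_channels) RMS."""
    frames = []
    for start in range(0, emg.shape[0] - window + 1, hop):
        seg = emg[start:start + window]
        frames.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.array(frames)

# Example: 2 seconds of simulated 8-channel EMG at 200 Hz (assumed values).
rng = np.random.default_rng(1)
raw = rng.normal(scale=0.5, size=(400, 8))
features = rms_envelope(raw)   # one feature vector per analysis frame
```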
We have experimented with various methods of feedback – audio or visual, related or unrelated
to the primary interface goals – and have observed that feedback can inform users about their
performance as well as the performance of the particular IML models they are interacting with.
Providing users with this information has the potential to tighten the interaction loop and improve
the perceived quality of IML-centred interfaces. However, introducing feedback poses challenges
for studying the system: Are we evaluating the model’s performance, or are we studying the user’s
ability to work the model? In a tightly bound co-adaptation loop where human learning and machine
learning are coupled, how can we design experiments that can eectively distinguish which eect is
in force? These challenges inform our position and suggest design perspectives for such systems.
RELATED WORK
Fails and Olsen [4] define interactive machine learning to be machine learning with a human in the
learning loop, observing the result of learning and providing input meant to improve the learning
outcome. This human engagement creates an opportunity for co-adaptation. An example of co-adaptation
is Kulesza, Amershi et al.’s [7] description of concept evolution: a dynamic where a user’s
goals change while interacting with a system. Even though a trained model might not perform as
imagined, it is possible that the imperfections suggest another way to interact with a problem space.
In certain real-world applications, there might not be a ground truth that would inform the accuracy
of trained models, or it might change over a period of interaction. Expressive models are not necessarily
accurate models.
Cartwright and Pardo [3] also address concept evolution, in the context of musical tasks. They
further identify the problem that editing a training data set becomes increasingly tedious as the
size of the data set grows, and recommend minimising the number of examples that need evaluating.
Fiebrink [5] noted co-adaptation while studying the evaluation practices of end users building IML
systems for real-world gesture analysis problems. She observed that users employed evaluation
techniques to judge an algorithm’s relative performance and improve upon trained models, as well as
learning to adapt their performance to provide more effective training data. Subjects’ strategies for
providing training data evolved over the training sessions.
Feedback in the context of machine learning has been examined by Françoise et al. [6], who propose
interactive visual feedback that exposes the behaviour and internal values of models, rather than
just their results. They consider whether visualisations can improve users’ understanding of machine
learning and provide valuable insights into embodied interaction. Similarly, Ravet et al. [9] identify
difficulties for users interacting with high-dimensional motion data and propose solutions for using
ML with these data, including visual representation of the impact of learning algorithm parameter
tuning on modelling performance.
Figure 1: Learning procedure. Gestural data, velocity, and acceleration feature space is associated with an articulation label. The representation feeds a GMM initialised with the means of each class and adapted using Expectation-Maximisation.
EXAMPLES
Action-perception
In a study exploring feedback in an action-perception loop [10], we used Gaussian Mixture Models
(GMM) trained with different orchestral conducting gestures (1-legato, 2-normal, 3-staccato). Participants
were asked to make a simplified conducting gesture, following the beat while a melody was
being played for each different articulation. The participant could rehearse until she felt confident and
then record the training examples (Figure 1). After training, the participant was presented with one
of the melody versions used for training; the articulation of that version was the target articulation.
The user was also provided visual feedback of the output of the model. During performance, a slider
showed the fixed, target articulation value together with the current inferred one.
The study was designed with the objective of characterising the quality of trained models by
evaluating accuracy during performance sessions. However, the results of that evaluation revealed
unexpected complexity. While algorithms were able to model participants’ intended articulations,
participants also adapted their performance to the system. Adaptation was evident because the
accuracy of models as calculated through cross-validation against recorded examples was lower than
the average accuracy measured during new performances with the models. This suggests that in a
continuous action-perception loop, users responded to visual feedback by adapting their physical
performance to cause a model to output the correct articulation value for a given task.
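For readers unfamiliar with the procedure summarised in Figure 1, the sketch below shows one way such a classifier could be set up: component means initialised from the per-class means of labelled feature frames, then refined with Expectation-Maximisation. The data shapes and values are synthetic, and this is a sketch rather than the exact implementation used in [10].

```python
# Sketch of the Figure 1 idea: per-frame gesture features labelled with an
# articulation class train a GMM whose component means are initialised from the
# class means and refined with EM. Synthetic data for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n_classes, n_features = 3, 6                      # legato / normal / staccato
X = rng.normal(size=(300, n_features))
y = rng.integers(0, n_classes, size=300)          # articulation labels 0..2
X += y[:, None] * 2.0                             # separate the classes a little

class_means = np.array([X[y == k].mean(axis=0) for k in range(n_classes)])
gmm = GaussianMixture(n_components=n_classes, means_init=class_means,
                      covariance_type="full", random_state=0).fit(X)

# At performance time, each incoming frame yields a (soft) articulation estimate.
frame = X[0:1]
probs = gmm.predict_proba(frame)   # posterior over articulations; could drive a slider
label = int(gmm.predict(frame)[0])
```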
Virtual tasks
We carried out a task-based study built around a simple grasping task performed using muscle tension. There was no
machine learning in this study, but we employed auditory feedback to help subjects learn to perform
muscle actions in a more consistent manner. More consistency could lead to more useful training
data for IML applications. Subjects were asked to imagine holding a cup of water with just enough
tension so it would not slip through their fingers. Too little tension and the cup slips; too much and
it crushes or breaks (Figure 2). The subtlety of expressive grasping could be compelling in a virtual
reality scenario.
Figure 2: Virtual Task. Participant in a rest
position (above) and when performing the
task (below).
In our study, auditory feedback was provided as a secondary communication channel and compen-
sated for the lack of haptic feedback in virtual space. The main tasks were defined using the metaphor
of a glass and communicated using video of a researcher performing the same task. Auditory feedback
enabled us to focus users on the specifics of their behaviours so that they would understand what
they were doing and how it could lead to a result.
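The sketch below illustrates one plausible form such secondary auditory feedback could take: a normalised tension estimate mapped to the pitch and level of a feedback tone, with "slip" and "crush" regions. This is not our study software, and the thresholds and mapping are assumed values for illustration.

```python
# Illustrative mapping from a normalised muscle-tension estimate (e.g. mean RMS
# over EMG channels, scaled by a calibration maximum) to auditory feedback
# parameters for the grasping task. Thresholds are hypothetical.
import numpy as np

SLIP_THRESHOLD = 0.2    # below this, the virtual cup would slip   (assumed value)
CRUSH_THRESHOLD = 0.8   # above this, the virtual cup would break  (assumed value)

def tension_to_feedback(tension):
    """tension in [0, 1]; returns parameters for a feedback tone."""
    tension = float(np.clip(tension, 0.0, 1.0))
    freq = 200.0 + 600.0 * tension               # higher tension -> higher pitch
    if tension < SLIP_THRESHOLD:
        level, state = 0.2, "slipping"
    elif tension > CRUSH_THRESHOLD:
        level, state = 1.0, "crushing"
    else:
        level, state = 0.6, "holding"
    return {"freq_hz": freq, "level": level, "state": state}

print(tension_to_feedback(0.5))   # {'freq_hz': 500.0, 'level': 0.6, 'state': 'holding'}
```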
Workshopping regression and temporal modelling
In a more recent workshop activity, we asked users to play an imitation game to train an IML system
with real-time human input. This was a sound tracing [2] activity; we asked participants to physically
represent a sound through hand and arm gestures. These gestures were used to train different
models, allowing users to compare a series of regression-based approaches with a temporal modelling
algorithm. The temporal modelling implemented Hidden Markov Models to model a sequence of
time-based input from beginning to end. Three dierent regression models looked at 1) the whole
input as a single set of training examples 2) four static examples using salient anchor points from
the stimulus as examples, or 3) an automated windowing system capturing short periods of dynamic
input centred around the same anchor points.
The stimulus imitated in the training phase became the auditory feedback in the testing phase,
with trained models controlling the parameters of the synthesizer that generated the initial stimulus.
Participants in our workshop were able to try the dierent algorithms using a consistent IML workflow
without knowing the technical details of any particular algorithm (Figure 3). They commented on
the affordances of different techniques – some facilitating the reproduction of the original stimulus,
others enabling exploration – and critiqued the fluidity of response of the models. They discussed the
choice of algorithms as a trade-off between faithfully reproducing the stimulus and creating a space
for exploration to produce new, unexpected ways to articulate sounds.
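The sketch below shows how the three regression training-set constructions described above could be derived from a single demonstrated gesture/sound pair. The anchor positions, window length, and choice of regressor are illustrative assumptions rather than the exact workshop implementation.

```python
# Three regression training-set constructions from one sound-tracing demonstration:
# 1) every frame, 2) static anchor frames only, 3) short windows around anchors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
T = 200                                   # frames in one demonstration
gesture = rng.normal(size=(T, 10))        # e.g. EMG + IMU features per frame
synth_params = rng.uniform(size=(T, 4))   # synthesis parameters of the stimulus

anchors = [0, 66, 133, T - 1]             # salient stimulus points (assumed)
half_win = 5                              # frames either side of an anchor (assumed)

whole = (gesture, synth_params)                            # 1) whole input
static = (gesture[anchors], synth_params[anchors])         # 2) four static examples
idx = np.concatenate([np.arange(max(a - half_win, 0), min(a + half_win + 1, T))
                      for a in anchors])
windowed = (gesture[idx], synth_params[idx])               # 3) windows around anchors

models = {name: MLPRegressor(hidden_layer_sizes=(32,), max_iter=4000,
                             random_state=0).fit(X, y)
          for name, (X, y) in
          {"whole": whole, "static": static, "windowed": windowed}.items()}
# At test time, each model maps live gesture frames to synthesis parameters,
# and the resulting sound becomes the feedback channel.
```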
Figure 3: Workshop participants using re-
gression and temporal modelling.
DISCUSSION & CONCLUSION
When asked to accomplish a specific task (e.g. crumple a piece of paper, or hold a virtual glass),
users are not typically aware of which muscles they use to perform it. There are many
ways that forearm muscles can be employed to create the same apparent hand motion; users do not
intentionally choose one method over another. This lack of awareness complicates the generation of
training data for IML, as well as evaluation of a trained model. Users are not always aware of their
exact performance, or what elements of that performance are influencing the output. Feedback can be
an important tool to help users understand both their own performance and that of a model, leading
to better outcomes.
By design, such feedback causes users to adapt their performance over the course of an IML session.
But co-adaptive learning complicates our ability to objectively evaluate trained models. Models
respond better in interactive performance than when evaluated with recorded examples because
subjects “play” them. This discrepancy suggests that in a continuous action-perception loop, users
respond to feedback by adapting their physical performance to cause a model to perform properly for
a given task.
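Schematically, the two accuracy figures contrasted here could be computed as below: cross-validation on the recorded examples versus accuracy measured over a live session in which the user adapts toward the model. The data, classifier, and the way adaptation is simulated are synthetic placeholders.

```python
# Offline (cross-validated) versus live accuracy for a gesture classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
y_rec = rng.integers(0, 3, size=90)                       # recorded example labels
X_rec = rng.normal(size=(90, 6)) + y_rec[:, None]         # recorded example features

clf = KNeighborsClassifier(n_neighbors=5)

# "Offline" view: k-fold cross-validation against recorded examples only.
cv_accuracy = cross_val_score(clf, X_rec, y_rec, cv=5).mean()

# "Live" view: the user performs new gestures while hearing/seeing the output,
# adapting their movement toward what the model recognises (exaggeration here
# stands in for that adaptation).
clf.fit(X_rec, y_rec)
y_live = rng.integers(0, 3, size=30)
X_live = rng.normal(size=(30, 6)) + y_live[:, None] * 1.3
live_accuracy = accuracy_score(y_live, clf.predict(X_live))
print(cv_accuracy, live_accuracy)
```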
Designing an adaptive system is challenging because the final stage of interaction design is placed
in the hands of the users [1]. Systems should not require sophisticated understanding of machine
learning from potential users. Rather, they must contextualise an evolving interaction in an exploratory
space that allows a user to deliberately and meaningfully manipulate the affordances of the
system.
Through our work, we have developed the position that interactive machine learning is an invaluable
paradigm for implementing bespoke user interactions, but it needs to be contextualised in a layer
of design that covers the whole UX from conceptualising and learning input actions to shaping and
refining rich media outputs. Our position is relevant to IML for real-time interaction situations, such
as gaming, virtual reality, or creative performance, where the user is generating new training or input
data constantly and can adapt those data to the output of the system.
In response to the examples presented here, we have the following perspective on the development
of IML-based interactions:
• Feedback is important for helping users evaluate and use an interactive system.
• Feedback does not need to be part of the main output of a system; it can be a secondary channel.
• Feedback leads to co-adaptation.
• Interaction design can accommodate concept evolution.
• Both humans and machines learn, together.
Real-time IML is especially appealing when it helps users to develop an expressive interaction
without leaving the problem space of that interaction. The systems we design should help them
consider that space, rather than distract them with details of the underlying technologies. That consideration
involves co-adaptive learning through evolving user goals and iteration of machine learning models.
As designers, we ask: How might we present a real-time, IML workflow to users? How might we
enable learning the possibilities of a given system? How might we design feedback to demonstrate
the performance and potential of the system and illuminate details of the human performance?
ACKNOWLEDGMENTS
This project has received funding from the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation programme (Grant agreement No 789825).
REFERENCES
[1] F. Bernardo, M. Zbyszynski, R. Fiebrink, and M. Grierson. 2017. Interactive machine learning for end-user innovation. In 2017 AAAI Spring Symposium Series.
[2] B. Caramiaux, P. Susini, T. Bianco, et al. 2011. Gestural Embodiment of Environmental Sounds: an Experimental Study. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME'11). Oslo, Norway, 144–148.
[3] M. Cartwright and B. Pardo. 2016. The Moving Target in Creative Interactive Machine Learning. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). San Jose, California, USA.
[4] J. A. Fails and D. R. Olsen Jr. 2003. Interactive machine learning. In Proceedings of the 8th International Conference on Intelligent User Interfaces. 39–45.
[5] R. Fiebrink, P. R. Cook, and D. Trueman. 2011. Human Model Evaluation in Interactive Supervised Learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). Vancouver, BC, Canada, 147–156.
[6] J. Françoise, F. Bevilacqua, and T. Schiphorst. 2016. Supporting User Interaction with Machine Learning through Interactive Visualizations. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). San Jose, California, USA.
[7] T. Kulesza, S. Amershi, R. Caruana, D. Fisher, and D. Charles. 2014. Structured labeling for facilitating concept evolution in machine learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 3075–3084.
[8] W. E. Mackay. 1990. Users and customizable software: A co-adaptive phenomenon. Ph.D. Dissertation.
[9] T. Ravet, N. d'Alessandro, J. Tilmanne, and S. Laraba. 2016. Motion Data and Machine Learning: Prototyping and Evaluation. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). San Jose, California, USA.
[10] A. Sarasua, B. Caramiaux, and A. Tanaka. 2016. Machine learning of personal gesture variation in music conducting. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 3428–3432.
[11] A. Tanaka and M. Ortiz. 2017. Gestural Musical Performance with Physiological Sensors, Focusing on the Electromyogram. In The Routledge Companion to Embodied Music Interaction, Micheline Lesaffre, Pieter-Jan Maes, and Marc Leman (Eds.). Routledge, 422–430.