An Actionable Approach to Understand Group Experience
in Complex, Multi-surface Spaces
Roberto Martinez-Maldonado1,2
Roberto.Martinez-Maldonado@uts.edu.au
Peter Goodyear2
Peter.Goodyear@sydney.edu.au
Judy Kay3
Judy.Kay@sydney.edu.au
Kate Thompson2,4
Kate.Thompson@griffith.edu.au
Lucila Carvalho2
Lucila.Carvalho@sydney.edu.au
1Connected Intelligence Centre, University of Technology Sydney, Australia
2Faculty of Education and Social Work, 3Faculty of Engineering and Information Technologies,
The University of Sydney, Australia
4School of Education and Professional Studies, Griffith University, Australia
ABSTRACT
There is a steadily growing interest in the design of spaces
in which multiple interactive surfaces are present and, in
turn, in understanding their role in group activity. However,
authentic activities in these multi-surface spaces can be
complex. Groups commonly use digital and non-digital
artefacts, tools and resources, in varied ways depending on
their specific social and epistemic goals. Thus, designing
for collaboration in such spaces can be very challenging.
Importantly, there is still a lack of agreement on how to
approach the analysis of groups’ experiences in these
heterogeneous spaces. This paper presents an actionable
approach that aims to address the complexity of
understanding multi-user multi-surface systems. We
provide a structure for applying different analytical tools in
terms of four closely related dimensions of user activity: the
setting, the tasks, the people and the runtime co-
configuration. The applicability of our approach is
illustrated with six types of analysis of group activity in a
multi-surface design studio.
Author Keywords
multi-surface; ubicomp ecologies; horizontal display;
shared-display; groupware; collocated collaboration
ACM Classification Keywords
D.2.10 Design; H.5.2 User Interfaces; J.4: Social and
Behavioral Sciences
INTRODUCTION
Interactive surfaces have become a key part of everyday life
for many of us. Mobile-handheld devices are now
widespread and large interactive screens are becoming
more accessible and pervasive [57]. People commonly
interact using a heterogeneous ecology of tools and
resources, both digital and material. So there is a steadily
growing interest in the design of spaces in which multiple
interactive surfaces can be used concurrently [1] and also in
understanding their role in collaborative activity [11]. The
affordances of multi-surface spaces have been explored in a
wide range of contexts such as design (e.g. [55]), data
exploration (see [1] for a review), simulation (e.g. [43]),
cooperative group work (e.g. [52]) and learning (e.g. [27]).
A driver of this work is the need to address group activity in which multiple people interact with multiple devices (*-*), which is more complex than single-user interfaces (1-1) or multiple people interacting at a shared device (*-1). In such
multi-surface environments, the heterogeneous ecology
extends beyond the devices and user interfaces, to the
materials, the multiple roles that are adopted in relation to
the task, the tools, or the group process. Analysis of such an
ecology is challenging, especially given that the constituent technologies may be only partly integrated, or not integrated at all.
There is a substantial body of research on understanding
how people collaborate with single display groupware [57]
but there has been little agreement on approaches to the
analysis of situations in which multiple devices are used by
groups in an interactive space. User experience evaluations
offer an important class of tool. However, most are targeted
at single users [5], and provide little information about a
range of elements that are likely to shape user activity.
Observational studies and user interface evaluations
sometimes ignore the setting, tasks or roles within the
unfolding group activity, and so often oversimplify the
relationships between them. There is a need for multi-method approaches that can analyse group activity (what people actually do in these spaces) as a whole, and that can capture the complexity of group activity in heterogeneous multi-surface spaces.
In short, analysis and understanding of user activity in
multi-surface interactive spaces is challenging, especially if
it properly acknowledges the complexity of activities
involving multiple users interacting with multiple surfaces,
blending heterogeneous tools, resources, work roles and
rules, to tackle complicated knowledge-rich tasks.
The CSCW and HCI communities have recognised the
opportunities, technical challenges and problems with
interactive multi-surface and multi-device ecologies [1, 3,
11, 52]. However, they have also acknowledged the lack of
understanding of the challenges faced in facilitating
effective collaboration which depends, in turn, on
understanding the activity of groups using multiple devices
and interactive surfaces [8, 11], especially in complex work
domains and when deployed in real world contexts. It is
timely to develop methods and theoretical frameworks that
can facilitate a systematic analysis to identify the problems
and possibilities of current use, and the impact of the
integration of new technologies (such as interactive
surfaces) for increasingly complex user activity in
heterogeneous ecologies of devices. The contribution of this
paper is the formulation of an actionable approach to
support the selection of available analytical tools and the
interpretation of results, to cover four closely related
dimensions of group activity: the setting where the activity
unfolds, the tasks being tackled, the roles adopted by the
people involved and the co-configuration of the intended design at runtime.
The approach is targeted to designers and researchers to
document and tease apart the many dimensions of group
activity - and enable them to gain insights into elements of
the system (the ecology of devices, the social interactions
and the task itself) that work well, and those that seem to
pose difficulties. Norman [36] observed that a key feature
that distinguishes an activity-centred approach from others
(e.g. user-centred) is that it requires both a deep
understanding of users, and also of the technology, the
tools, and the context of the activities. Thus, an activity-
centred approach provides a better fit to understanding
group activity in multi-surface spaces, as a whole. Our
approach is grounded in an activity-centred framework
which offers a holistic perspective on what people actually
do in group activity, and the tools, resources and social
interactions that become bound up in that activity [10]. The
application of this framework can also help bridge
theoretical models that explain collaborative activity with
specific HCI analysis techniques, particularly if those
theories are abstract and not directly actionable. We
illustrate our approach in action by analysing group activity
in a multi-surface design studio. The illustrative case study
includes the application of multiple methods to investigate:
(i) user experience, (ii) tools usage, (iii) users’ attention,
(iv) space usage, (v) roles and (vi) the processes involved in
completing the task.
The rest of the paper is organised as follows. The next
section provides an overview of selected multi-surface
spaces and methods available to analyse collaboration,
mostly used with single display groupware. Then, we
present the principles underpinning our approach. After
this, we describe the study that illustrates how the approach
can be put into action. This includes sets of analyses,
which, when combined, provide insights into the unfolding
group activity of four teams within our multi-surface studio.
We conclude with a discussion of the application of our
approach and ideas for future research.
RELATED WORK
State of the Art: Complex Surface Systems
The widespread proliferation of surface devices in the
workplace, learning settings and everyday activities, has
moved us towards a world where computers begin to
disappear - cognitively and emotionally [45]. If the
cognitive and emotional ‘load’ around the use of devices
disappears, then groups can focus more on interactions with
information, communication of their ideas and cooperation
with other people [53]. Since the development of the first
spaces that distributed information and functions across
multiple screens and devices (e.g. [45, 46]), there has been
a steadily increasing interest in the design of complex
multi-surface systems. One of the most relevant and highly
referenced of these systems is WeSpace [52]. This
supported small group exploration of information, featuring
multiple shared displays and personal computers. The
highlight of this study was that it addressed the complexity
of analysing group work in a realistic scenario by
combining ethnographic methods (e.g. observations of
patterns of usage) with logged interaction data and explicit
feedback from users. Multi-surface spaces have also been
attractive in educational contexts [16]. A variety of tabletop
and tablet-based classroom ecologies have been designed to
support teachers’ orchestration of small-group tasks [27].
More recent research on multi-surface spaces has focused on the integration of various devices into a central system, which facilitates the distribution of information among users. ReticularSpaces [3] proposed unifying the UI of heterogeneous devices for organising information and providing access to both collocated and remote users.
Shared Substance [17] followed a data-centred (rather than
an activity-centred) approach to inform the design of a
multi-surface environment used in areas that require groups
of users to explore and interact with heterogeneous content.
ActivitySpace [23] allowed the integration of several
devices (a tabletop, tablets and laptops) into a physical
workplace by offering three related but de-coupled layers,
which contain: the devices, activity management functions
and resources. HuddleLamp [38] allowed the use of gestures across mobile devices, adapting the role of each device based on its detected orientation or distance from other devices. Finally, VisPorter [12] allowed access to multiple perspectives on visual information on various displays (tablets, a vertical display and a tilted tabletop).
Most of the examples mentioned above reflect the current
focus on improving the technology to provide enriched
experiences in multi-surface spaces. However, there is still
a need to develop conceptual tools to better understand why
some things work well, while others fail, by going beyond
these individual explorations, especially when different
technologies are used in realistic, heterogeneous settings.
Figure 1. Our actionable approach: the ACAD framework, used to
connect specific analysis methods to analyse group activity
Current Methods to Study Group Work in Interactive
Surface-based Settings
There has been a proliferation of studies exploring group
interaction in multi-surface environments. We now list
some examples that show the diversity of methods used in
multi-device spaces and to understand group work mediated
by interactive surface groupware. Ethnographic methods in
HCI and CSCW [6] have been used to observe extensively
and in detail the design, development and/or usage of
technologies in particular settings (e.g. [4, 21]). This has
included the exploration of complex group activity in multi-
device and/or multi-display spaces (e.g. [9, 13, 28, 29, 52]).
Attempts have also been made to extend usability tests,
originally aimed at single users, to provide information
about group experiences in a multi-user system [5]. The
evaluation of a multi-surface system has been linked to
measures of task performance [43]. The analysis of human
mobility and proxemic relationships has also been used
analyse and support cross-device interactions in multi-
device environments [30].
There have been several studies analysing group work
primarily in single surface settings (e.g. multi-touch
tabletops). For example, Tang et al. [50] applied multiple
methods of analysis that considered the arrangement of
people around a tabletop and even the impact of social
scaffolding (e.g. suggesting roles to users). Davis et al. [14]
conducted interaction video-analysis to discover patterns of
collaboration in a tabletop-based museum exhibit. Ryall et
al. [41] explored how group and display size can affect the
task and strategies of groups interacting at a tabletop.
Most of the studies mentioned above evaluate their systems
or analyse group activity using widely varied approaches.
An overview of other methods used to study collaborative
activity can be found in [42]. However, as some methods commonly focus on answering very specific research questions, they can miss the bigger picture and overlook other key elements that may shape a group’s activity.
APPROACH
Our approach is grounded in the principle that group activity is shaped not only by the design of the tools or technology, but also by the loosely coupled relationships between the many tools available, the users, the tasks, and the ways people tune the design at runtime. In
order to define these aspects of group activity, we use the
Activity-Centred Analysis and Design (ACAD) framework
[19] at the core of our approach. The ACAD framework,
which is informed by research on learning networks and
activity theory, can be used to scaffold the selection of
specific analysis methods in order to analyse group
experience as a whole. These elements are depicted in Figure 1, arranged with increasing abstraction (from top to bottom), and help in understanding key aspects of group activity in an interactive multi-surface, multi-device space. They are described in detail in the next sections.
Theoretical Perspectives on Group Activity
A variety of theoretical perspectives have been proposed to
understand the social aspects of HCI activity. Grounding
the design for, and analysis of, group experience in
complex multi-surface environments on a theoretical
framing can help structure our understanding of individual
and collaborative use of technology in relation to
collaborative group practice [24]. Examples of these
theoretical models include the following. Design anthropology [20] is already well established in CHI and CSCW research, with foundational work by Bonnie Nardi, Lucy Suchman and others (e.g. [35, 47]). Similarly, the self-improving ecologies approach (e.g. [15]) situates the
knowledge for design and improvement within a system,
rather than on top as some kind of external control.
Distributed cognition [22] has been used to analyse and
design multi-user systems [39], and, more specifically, for
multi-surface learning ecologies [27]. Activity theory [25]
has been crucial for understanding technology-mediated
activities in HCI because it pays special attention to the
integration of artefacts into social practices. Instrumental
genesis [37] explains how users cannot be considered as
fixed, interchangeable subjects but as entities that evolve as
they interact with tools and other people that are also
evolving. The work reviewed by Rogers [40] provides a
high level differentiation and summary of theory in HCI.
Nevertheless, the recent third wave of HCI research [7] challenges the use of theoretical models to directly explain users’ activity. It has shown that pragmatic/cultural-historical approaches, focused on experience, can achieve similar or even better results than theory-driven approaches, especially when those theories are abstract or high level, such as activity theory or distributed cognition. By using the
ACAD framework, we aim to achieve a more holistic view
of group activity in multi-surface environments. Our
approach can provide guidance for selecting low-level analysis methods, as well as a way to link these robust theories with the selected methods of analysis.
In addition to the ACAD framework, we also considered
alternative frameworks. For example, the activity-based
computing (ABC) framework [2] is conceptually close to
the ACAD framework in the sense that it decomposes
users’ activity into tasks, materials, time, and users.
However, the purpose of the ABC framework is to inform
the implementation of ‘computational activities’ of a
distributed system that ensures adequate synchronisation of
data and methods across devices. Another example is the
Blended Interaction framework [24]. Informed by
embodied cognition theory, it helps to explain the design of
user interfaces perceived as ‘natural’. The Blended
Interaction framework structures the design space into four
sub-spaces: individual and social interaction, the workflow,
and the physical space. The authors of this framework observed
that connecting the analysis to sound cognitive theories
facilitates the interpretation of the empirical observations,
potentially leading to more detailed and iterative, but better,
designs. We selected the ACAD framework because it
provides a holistic view of group activity, focusing on the
tasks that the users are given (epistemic), the ways that the
users divide up labour (social), and use the various tools,
surfaces and materials in the space (set), and how ultimately
the design is enacted (co-configuration). Moreover, we
illustrate how this conceptual model can be connected with
specific HCI analysis techniques to explain group activity.
The ACAD Framework
The ACAD framework has mainly been applied to link
design and analysis of complex, group learning situations.
The framework considers group activity as emergent and
situated. ‘Activity’ here means what people are actually
doing, physically and mentally. Activity is socially,
epistemically and physically situated. Users’ activity is not
directly designable, but other elements (e.g. tools, tasks and
work roles), which are also relevant to the understanding of
an unfolding group activity, can be designed. Activity is different from task: the task is the prescribed work, what (officially) should be done, and it can be designed [54]. At runtime, the physical, social and epistemic
elements are dynamically entangled together as the group
activity unfolds. But at design time, the physical, social and
epistemic can be treated as discrete design components.
Design needs to pay attention to each of them, even though
what is designed and set in place will be reconfigured in use
by participants at runtime.
For ease of reference, within the ACAD framework, we refer to the first three components as: (1) the set (physical) component, which includes the place in which participants’ activity unfolds, the physical and digital space and objects, and the input devices, screens, software, material tools, awareness tools, artefacts and other resources that need to be available; (2) the social component, which includes the variety of ways in which people might be grouped together (e.g. dyads, trios, larger groups), scripted roles, divisions of labour, etc.; and (3) the epistemic component, which includes both implicit and explicit knowledge-oriented elements that shape the participants’ tasks and working methods. ACAD states that the users’ individual
skillsets (e.g. improvisation, argumentation, tool use skills),
social relationships (e.g. power relations, working methods,
divisions of labour), and even their domain knowledge are
not fixed. Rather, they evolve during the collaborative
activity, over time. What is designed in advance is then
customised, selected from, added to, re-interpreted or
otherwise modified by the people involved in the ensuing
activity. We refer to this fourth component as (4) co-
configuration [19]. This key component may help explain
why certain design intentions sometimes do not play out as expected at runtime. It can also help us understand how some
differences among groups of users are not just a function of
social circumstances, but also of how information flows
around the group, and the choices they make to co-
configure the other three (at least partly) designable
components of the ACAD framework.
Actionable Elements: a Range of Analysis Methods
The application of the ACAD framework can help make
sense of multiple analyses applied to the same dataset or to
provide focus to observations. As stated above, multiple
measures of collaboration, interaction, group preferences,
usability, tools usage, user experience, task completion etc.,
have been used to design or demonstrate the effectiveness
of multi-user, multi-device systems. Each of these may be
automatically, semi-automatically or manually collected in
order to provide insights about a specific aspect of groups’
activity. However, having a systemic view of the key
components of users’ activity within these spaces requires a
conceptual framework that can provide meaning to multiple
analysis methods used together.
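To make this concrete, the sketch below (in Python, our choice of language) shows one way an analyst might record which methods speak to which ACAD components, so that coverage gaps become visible before data collection. The data structure and the exact method labels are illustrative assumptions on our part, not part of the framework itself.

    # Illustrative sketch only: record which analysis methods speak to which
    # ACAD components, so coverage gaps are visible before a study begins.
    # Component names follow the framework; method labels follow this paper.

    ACAD_COMPONENTS = ("set", "social", "epistemic", "co-configuration")

    analysis_plan = {
        "user experience questionnaire (UMUX)": {"set"},
        "tools usage and attention coding": {"set", "co-configuration"},
        "space usage (proxemics)": {"set", "social"},
        "task workflow coding": {"epistemic", "co-configuration"},
        "roles and divisions of labour": {"social", "co-configuration"},
    }

    def uncovered_components(plan):
        """Return the ACAD components not addressed by any planned method."""
        covered = set().union(*plan.values())
        return [c for c in ACAD_COMPONENTS if c not in covered]

    print("uncovered:", uncovered_components(analysis_plan) or "none")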
In calling our approach ‘actionable’ we are taking account
of the relationship between the time and resources needed
to undertake the analysis and the payoff for design and/or
development of the systems. Crucially, our approach (a) can
target just key parts of activity in a workflow and (b) can
use a more extensive, or conversely a stripped down, toolset
to capture different aspects of the activity, depending on
time and analytic skills available. This modifiability comes
from the layering shown in Figure 1. As this shows, some components are necessary for the approach, such as the core focus on actual activity and the ways of understanding
such activity as physically and socially situated. The next
section illustrates the application of our approach to analyse
and describe group activity in a multi-surface, multi-device
space.
ILLUSTRATIVE STUDY
The illustrative case study consists of a series of sessions in
the context of group educational design, in a multi-surface
design studio. An educational design describes the tasks,
materials, pedagogies and social dynamics for teachers and
their students aimed at providing learning opportunities in
students’ face-to-face or online activities, over a particular
time period [18]. As in other areas, such as architecture or software development, patterns can be used as reusable solutions to commonly occurring educational problems. Educational design patterns can represent learning places (e.g. a lecture room), learning approaches (e.g. building a blog as an educational exercise), or dictate more complex social dynamics (e.g. jigsaw or pyramid) [18]. A pattern language is a structured collection of these patterns. Educational design is usually performed by teachers themselves, but in a university context it is also common to find dedicated learning designers [26]. In the next sub-sections we describe the space and the study that illustrate our approach.

Figure 2. The CoCoDes tool’s UI, main view, featuring: a) an editing area; b) a bar of quick template patterns; c) a weekly timeline; d) swim lanes for the main learning spaces; e) menus to add patterns from the catalogue of patterns; f) menus of candidate designs; and g) layout menus

Figure 3. Left: The Design Studio, featuring: a) an interactive tabletop, b) an interactive whiteboard, c) a wall-projected computer with wireless input devices, d) a live-visualisations dashboard, e) tablets, f) a writeable white-wall and g) paper-based materials
Apparatus: The Design Studio
The Design Studio is equipped with various digital and
physical tools to support the design activity of small teams.
Figure 3 illustrates the work area of the Design Studio,
featuring four shared digital devices: an interactive tabletop
(a), an interactive whiteboard (IWB) (b), a
personal computer connected to a projector
(c) and a dashboard (d). The Design Studio
also features tablet devices (e); a large
writeable wall (f); and paper and drawing
materials (g). These are all optional tools
available to the participants.
For these studies we used a multi-touch
collaborative educational design system, the
CoCoDes [33]. CoCoDes offers a large
multi-touch interface that supports small
teams performing early stage conceptual
design work on tertiary education courses (see Figure 2, a). CoCoDes is firmly based
on educational design patterns and the use of
a pattern language (PL) to represent student
tasks (b), learning resources (e) or learning
spaces (d). CoCoDes provides digital
elements that can be manipulated by direct
touch (by dragging digital objects and
touching buttons), allowing bimanual input and
fluid interaction so user-designers can rapidly
build multiple candidate versions of an educational design
(f).
By deploying CoCoDes on the interactive
tabletop and the IWB (Figure 3, a, b), the same
design can be shown in both displays, or two
different candidate designs can be shown in each
device. This allows users to: i) use the tabletop as
the main working device, keeping a high level
view on the IWB, ii) split the task so different
team members work on two designs in parallel or
iii) compare two different designs, each shown on a different device. The user interface provides a
flipped timeline where users can arrange patterns
on a weekly basis (Figure 2, c). The orientation of
all or selected patterns can be rotated 180° when
the application is loaded on a tabletop, allowing
users to work side-by-side or face-to-face.
Multiple physical keyboards can be attached to
the system, to allow fast input by multiple users (three to
the tabletop and one to the IWB for our studies). A vision-
based touch tracking system [32] links each touch on the
tabletop with the user and his/her keyboard’s input.
The shared dashboard (Figure 3, d) shows real-time visual
indicators of the candidate designs created using the
interactive tabletop or the IWB. These indicators include a
list of educational patterns added to each candidate design,
a pie chart that shows how a student’s time would be
divided among learning spaces (face-to-face and online),
and a histogram showing a student’s weekly workload.
Design: Task, Tools and Roles
Figure 4. Tools usage and attention: accumulated effective time by all group members of each team using or looking at each tool

Table 1. Results of UMUX applied individually to the 12 users of the study. Responses are based on a Likert scale from 1 to 7, where 1 means strong disagreement and 7 strong agreement

To illustrate our approach, we focus our analysis on an open-task study of four teams (A, B, C and D), each with three
participants (4 male and 8 female). We recruited the
participants through word-of-mouth from the Faculty of
Education & Social Work of The University of Sydney.
They had various levels of expertise in teaching (4 were
advanced, 5 competent and 3 novice teachers) and
educational design (5 were advanced and the rest had
limited experience), and all knew each other beforehand.
All had experience in the Design Studio and at least one
member in each team had used CoCoDes before. Eight of
the 12 participants had used an interactive tabletop before
(touchscreen directories, art exhibits and design projects),
seven had used an interactive whiteboard before (mainly
smartboards at school), five had used both and all used
tablets regularly.
The goal of each team was to produce two high-level competing candidate designs of a 13-week Engineering course held at the University of Sydney (e.g. the same course based on traditional lectures versus a blended learning experience). Each team member was given one of
three possible roles (Lecturer-L, Learning Designer-LD and
Quality Assurance Officer-QAO). According to their role,
each had specific goals and information about the course.
Some goals provided to the participants complemented
others’ goals, and some were conflicting. Thus, the task
involved the resolution of conflicting information and
goals, agreement about the different design versions to be
built, compliance with institutional metrics (e.g. a minimum
of face-to-face contact between students and instructors),
and the construction of the designs using CoCoDes.
All participants were given the following paper materials: a
design brief (indicating the requirements and constraints of
the course design); and a catalogue of patterns (a pattern
language describing relevant patterns for the course). Each
team member was provided with a tablet device that
included: digital copies of the design brief and the pattern
language; and access to the official online system that
provides detailed descriptions of university courses. Teams
had up to one hour to complete the task. After the group
activity, a 30-minute semi-structured interview was
conducted with each team. Then, each participant
completed a questionnaire about their usage of the tools and
the space. Sessions were video-recorded and transcribed.
Method
As an illustrative exercise, we apply six methods
to investigate: (i) user experience, (ii) tools usage,
(iii) users’ attention, (iv) space usage, (v) roles
and (vi) the task completion processes. We
conceptually grouped the methods around the
components of the ACAD framework to respond
to three high level research questions that
illustrate a variety of aspects of group activity in a
multi-surface environment:
Q1: How were the space and the tools used?
Q2: What was the design process each group followed to
complete the task?
Q3: What strategies were followed by each group in terms
of their social roles and divisions of labour?
Analysis 1: Tools and space
Q1: How were the space and the tools used?
User experience metrics can provide insights about user
satisfaction, usability and accessibility of the tools available
in the multi-surface space. To gain insights into the
experience of the participants using the CoCoDes interface,
they were asked to respond to a usability questionnaire (the
UMUX [5], which has four 1-to-7 Likert questionnaire
items). Table 1 summarises the results, showing that
overall, participants were satisfied with the effectiveness of
the system and agreed that it was easy to use (rows 1 and
3). However, three users (from three different groups: A, B
and D) found the experience to be somewhat frustrating
(row 2). Five team members (the three members of Team A and two of Team D) reported spending too much time correcting things with the system (row 4: responses above 3 on the scale).
Overall, although UMUX provides a rapid overview of
users’ satisfaction, it does not offer deeper insights into the
groups’ experience, or why some tools were used and
others not, considering the heterogeneity of the space and
the various interfaces available in the Design Studio.
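For readers unfamiliar with the instrument, the following sketch shows the conventional way the four 1-to-7 UMUX item responses are combined into a single 0-100 score (odd items are positively worded, even items negatively worded). The example responses are hypothetical and are not drawn from Table 1.

    def umux_score(responses):
        """Combine four 1-7 UMUX item responses into a 0-100 usability score.

        Conventional scoring: odd (positively worded) items contribute
        (response - 1); even (negatively worded) items contribute
        (7 - response). The 0-24 total is rescaled to 0-100.
        """
        assert len(responses) == 4 and all(1 <= r <= 7 for r in responses)
        contributions = [
            (r - 1) if i % 2 == 0 else (7 - r)  # items 1 & 3 positive; 2 & 4 negative
            for i, r in enumerate(responses)
        ]
        return sum(contributions) * 100 / 24

    # Hypothetical participant: effective and easy to use, but some time
    # spent correcting things (as several of our participants reported).
    print(umux_score([6, 2, 6, 5]))  # -> ~70.8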
Figure 5. Space usage: the circles represent the time each user spent in each area of the multi-surface space according to their roles: Lecturer (red), Learning Designer (blue) and Quality Assurance Officer (green). RT = regular table; IT = interactive tabletop; IWB = interactive whiteboard

Figure 6. Teams’ task workflows. RT: regular table. IT: interactive tabletop. Wall: writeable wall. Coloured circles represent roles as in Figure 5

In order to better understand this, we analysed the videos of the sessions, recording the time and duration of each
participant’s interaction with, and attention focused
on, each tool, and their location in the physical space
of the Design Studio. The use of tools was measured
as the effective time when a participant was holding or
interacting with a tool and attention was measured as
the time a participant spent only focusing their gaze on
a tool without interacting physically with it. Figure 4
presents the results of this analysis. Overall, the paper-
based course description was used the most (avg: 26%,
std: 10), followed by the interactive tabletop (20%,
std: 13 use; and 6%, std: 6 attention), the personal
computer projected on the wall (12%, std: 6 attention),
the tablets (10%, std: 8 use), the IWB (5%, std: 5 use;
and 3.3%, std: 2 attention) and the writeable wall (4%,
std: 4 use; and 4%, std: 4 attention). Other tools were
also occasionally used and often more than one tool
was used at the same time.
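Once a session has been coded in this way, the per-tool percentages reported above can be computed mechanically. The sketch below assumes a simple (participant, tool, mode, start, end) record format of our own devising; the data shown are hypothetical.

    from collections import defaultdict

    # Hypothetical coded intervals: (participant, tool, mode, start_s, end_s),
    # where mode is "use" (holding/interacting) or "attention" (gaze only).
    codes = [
        ("P1", "interactive tabletop", "use", 0, 780),
        ("P1", "paper course description", "use", 780, 1200),
        ("P2", "interactive tabletop", "attention", 60, 300),
        ("P3", "tablet", "use", 0, 360),
    ]

    def tool_time_shares(codes, session_length_s):
        """Accumulated effective time per (tool, mode), as % of session time."""
        totals = defaultdict(float)
        for _participant, tool, mode, start, end in codes:
            totals[(tool, mode)] += end - start
        return {key: 100 * secs / session_length_s for key, secs in totals.items()}

    for (tool, mode), pct in sorted(tool_time_shares(codes, 3600).items()):
        print(f"{tool:26s} {mode:9s} {pct:5.1f}%")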
Results showed that: i) each team used tools in
different ways although the task and roles given to
users were the same; and ii) participants used the large
devices to different extents and in combination with
the other personal devices and physical materials
available. Thus, the set component was heavily co-configured by the groups at runtime. Each group used a different combination of large devices (IWB, interactive tabletop, wall-projected computer), smaller devices (tablet, dashboard) and non-digital tools (writeable wall, paper-based print-outs of the course description and pattern language). Team A mainly used the interactive tabletop, in combination with all the smaller devices
available (tablets, the dashboard and the wall projected
computer), and the paper course description. Team B also
mainly used one large device (the interactive tabletop) as
well as the dashboard and the projected computer. Teams C
and D used both large devices, but different combinations
of the other tools. These two teams also wrote on and used
the writeable wall. Team C used the dashboard and the
projection as well as the paper-based materials. Team D
used the dashboard, more than other groups, and the non-
digital writeable wall as well as the paper-based materials.
In order to better explain how participants interacted in the
physical space, and inspired by the automated analysis of
proxemic relationships [30], the location of each participant
in the Design Studio was
manually recorded and
analysed. The data is
visualised in Figure 5. In all
teams, the participants mostly
worked around the interactive
table (57%, std: 28 of the
total activity time per group
member). The remaining
time was divided between the
space around the regular table
(20%, std: 9), the IWB (10%,
std: 10) and the writeable
wall (12%, std: 12). The four groups used the space quite differently (Figure 5). Members of Team A primarily
worked side by side (sxs) at one edge of the interactive
tabletop. In Team B, one team member (blue) worked face-
to-face (f2f) with the other two (sxs), as all members moved
between the regular table (RT) and the interactive tabletop.
Members of Teams C and D positioned themselves in much
more varied formations. They used the space around the
IWB and the writeable wall. In Team C only one member
(blue) worked at the IWB, and the others worked around
the regular and interactive tabletop (f2f) and side by side
(sxs) by the writeable wall. The use of space in Team D,
was more irregular and complex, with all team members
moving around and distributing their time working mostly
sxs at the tables, the IWB and the wall. All groups re-
configured the location of the chairs and the hand-held tools
according to how they worked around the large surface
devices.
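The dwell times behind Figure 5 can be derived in the same way. The following sketch, again with a record format of our own devising and hypothetical data, accumulates each role’s time per area, which is what the circle sizes in Figure 5 encode.

    from collections import defaultdict

    # Hypothetical location log: (role, area, start_s, end_s). Area labels
    # follow Figure 5: RT (regular table), IT (interactive tabletop), IWB, Wall.
    locations = [
        ("Lecturer", "IT", 0, 1500),
        ("Lecturer", "IWB", 1500, 2100),
        ("Learning Designer", "IT", 0, 2100),
        ("Quality Assurance Officer", "RT", 0, 600),
        ("Quality Assurance Officer", "IT", 600, 2100),
    ]

    dwell = defaultdict(float)  # (role, area) -> accumulated seconds
    for role, area, start, end in locations:
        dwell[(role, area)] += end - start

    for (role, area), seconds in sorted(dwell.items()):
        print(f"{role:26s} {area:4s} {seconds / 60:5.1f} min")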
Analysis 2: Design process
Q2: What was the design process each group followed to
complete the task?
Figure 7. Different group strategies depending on users’ roles and divisions of labour. L = Lecturer; LD = Learning Designer; QAO = Quality Assurance Officer
All teams were asked to
complete the same design
task (to build two candidate
designs). They were free to
choose the tools they would
use and the design process
was unstructured. In order to
better understand the design
processes followed by teams
to achieve the task, we
identified and video-coded
four main sub-processes that
emerged from users’ activity
and that corresponded to the
main sub-tasks posed to
participants, which included:
i) the initial discussion to
organise the team work; ii)
the negotiation of
conflicting objectives; iii)
the actual design of the two
options; and iv) a meta-
analysis comparing both candidate designs. Figure 6 shows
the workflow representations of this analysis for each team,
illustrating how each team co-configured the epistemic component at runtime. Team A followed the
simplest process with just three states: an initial group
discussion at the regular table, followed by the continued
design of the two candidate designs at the interactive
tabletop, completing one before starting the second (see
top-left state diagram in Figure 6). Team B followed a similar design process, with the addition of a meta-analysis comparing the two candidate designs
displayed on different devices (one at the tabletop and the
second at the IWB) and using the visualisations shown in
the dashboard. Another difference is that in this group only
one member actually built the designs using the tabletop,
while the other two participated in the verbal interaction,
working as advisors (Figure 6, top-right).
The design process followed by Team C included an
explicit sub-process during which members negotiated their
individual agendas by writing their agreed objectives on the
writeable wall. Thereafter, they divided the work so one
member worked at the IWB while the other two worked at
the interactive tabletop to generate two similar candidate
designs. During this time they used the projection for
information retrieval about the course, as well as the paper-
based materials. Then they merged the work so the three
participants completed one of the designs at the tabletop,
occasionally updating the second design using the IWB
(Figure 6, bottom-left). Finally, the design process followed
by Team D was more complex and less linear than the other
teams. Team members negotiated their objectives by
writing them on the writeable wall, and built the designs
using both the IWB and the interactive tabletop, often in
parallel but also iteratively, also using both large devices to
examine the designs. Members were aware of what each
other was doing, and kept their list of objectives and the
designs updated simultaneously. They also used the paper-
based materials for information retrieval. They completed
the task with a meta-analysis of both candidate designs and
by checking that they addressed their agreed objectives
(Figure 6, bottom-right).
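Workflow diagrams such as those in Figure 6 can be derived mechanically once a session has been coded as an ordered sequence of sub-processes. The sketch below uses a hypothetical sequence, loosely modelled on Team A’s linear process, to extract the states and the transitions between them.

    # Hypothetical coded session, loosely modelled on Team A's linear process.
    # Each entry is (sub_process, location); the sub-process codes follow the
    # scheme above: discussion, negotiation, design, meta-analysis.
    session = [
        ("discussion", "RT"),
        ("design of candidate 1", "IT"),
        ("design of candidate 2", "IT"),
    ]

    def transitions(sequence):
        """Return consecutive state-to-state transitions, in order."""
        return list(zip(sequence, sequence[1:]))

    print("states:", sorted({state for state, _loc in session}))
    for (s1, loc1), (s2, loc2) in transitions(session):
        print(f"{s1} @ {loc1} -> {s2} @ {loc2}")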
Analysis 3: Roles and divisions of labour
Q3: What strategies were followed by each group in terms
of their social roles and divisions of labour?
This analysis was grounded in the CSCW view of roles as
human constructs created and sustained during the
interactive activity [44]. Thus, to understand how the roles
proposed to users were enacted (and co-configured), and
inspired by previous work aimed at automatically detecting
the emergence of leadership and divisions of labour (e.g.
[48]), we conducted a video-analysis of the sessions that included a search for: 1) the degree of differentiation in behaviour depending on each member’s role (e.g. whether the suggested roles were enacted by the participants during the activity); 2) the presence of strong leaders; 3) divisions of labour and responsibilities (e.g. whether participants worked all together or divided some tasks); and 4) the monitoring ratio, measured as the division of time between attention and task-work using the shared devices (IWB, tabletop, dashboard, projection and wall). In this section we
present the key findings for each team.
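The monitoring ratios reported below simply reduce two coded durations to a small integer ratio. A minimal sketch of that arithmetic, with hypothetical durations:

    from math import gcd

    def monitoring_ratio(attention_s, taskwork_s):
        """Express attention : task-work time as a reduced integer ratio."""
        a, t = round(attention_s), round(taskwork_s)
        d = gcd(a, t) or 1  # guard against a zero-length session
        return a // d, t // d

    # Hypothetical durations: 5 min monitoring vs 15 min of task-work -> (1, 3).
    print(monitoring_ratio(300, 900))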
Table 2. Brief overview of the results of the analyses applied to the four teams. The last column indicates in bold which component(s) of the ACAD framework are highlighted in each analysis

Team A was characterised by a low differentiation of roles. All team members spent more time performing task-work using CoCoDes than monitoring the task (ratios between attention and task-work were 1:3, 1:6 and 1:2 for the three team members). The Quality Assurance Officer (QAOA) and the Learning Designer (LDA) used the tablets
and paper-based materials while working on the tabletop.
They worked sxs without splitting the work during the
design task (e.g. Figure 7, top-left). Members of Team B,
like Team A, did not move from the same physical
disposition around the tabletop. They worked f2f and sxs as
in Figure 7, top-right. However, members of this team
assumed their roles very strictly. The Lecturer (LB) built the two designs while the LDB (holding a tablet in Figure 7, top-right) and the QAOB acted in advisory roles and did not touch the tabletop. The LDB justified this in the interviews by indicating that they left the LB to do most of the work to let him take ownership of the designs they were building. The
team members assumed the allocated roles as if they were
in an authentic collaborative design situation.
By contrast, members of Team C split the task during part
of the session (Figure 7, bottom-left). Members of this team
used the writeable wall to externalise their agreed joint
goals. The Lecturer (LB) was a strong leader and spent most
of the time coordinating and guiding the other members
towards his allocated goals (the attention to task-work ratio
of the LB was 4:1 whilst other members’ were 1:1 for both
the LDB and the QAOB). Finally, for Team D the workload
was distributed among all the roles. Team members moved
quite frequently in the space, using the writeable wall to
keep track of all the changes in the design being constructed
in both the IWB (e.g. Figure 7, bottom-right) and the
interactive tabletop. The QAOD focused on monitoring the
accomplishment of the team’s objectives while the other
team members spent less time monitoring the displays and
more time building the designs (the attention to task-work ratio was 1:1 for the QAOD, whilst it was 1:2 and 1:3 for the Lecturer (LD) and the Learning Designer (LDD) of this team, respectively).
Discussion: the Framework Applied across Analyses
This section describes how the integration of the analysis
methods, aligned to the four components of group activity,
can explain each team’s activity in our design studio and,
where possible, make meaningful comparisons between
teams.
Table 2 presents a brief overview of the analyses described
above. Team A constrained itself to only working sxs at the
tabletop, without moving around in the space, with
members holding tablets or paper
materials and using the projected PC
to guide and monitor their activity
(set component). The design
intention was that they would use the
writeable wall to write up their
negotiated goals and compare their final designs; however, they did not.
Team A adopted the simplest linear
process of all the groups to build the
two candidate designs requested
(epistemic). Interpreting this in line
with the ACAD framework, we can
say that Team A re-configured the
task at runtime according to their understanding. Team A
also did not differentiate roles in the group (social) and the
three users felt frustrated at not achieving their individual
goals, struggling with user interface aspects.
In Team B, in contrast, the roles strongly shaped the
interaction. Team members also worked only at the
tabletop, but adopting a f2f formation. The team made a
runtime decision to assign the role of ‘doer’ to one member,
and ‘advisors’ to the other two (social). These roles
influenced the way in which members used the tools
available. The advisors were looking at the tablets, papers,
the other vertical screens, and the dashboard, while the doer
interacted with the tabletop (set). Team B also adopted a
linear process in completing the task, with an additional
phase of analysis of their two designs (epistemic). As with
Team A, the simplicity with which these teams approached
the task shaped the way they used the space and the tools
available (epistemic shaping the tools usage). Team B
followed the intended design more systematically with
respect to the roles. However, they did not follow other
aspects suggested as part of the task (for example, the
suggested negotiation of individual goals - epistemic). The
similarities and differences in the activity of the users in the two groups were observable in their similar linear processes but different physical formations and tools usage.
Teams C and D also had commonalities. They both used the
tools and space more extensively, including the writeable
wall to externalise and negotiate their individual goals and
then to monitor the design activity. In both teams, users did
not keep the same formations in the space, using both the
IWB and the tabletop to design and visualise the two
candidate designs (set). The process adopted by Team C
was both linear and parallel at times (epistemic). In Team
C, a strong leader coordinated the task approach (social).
Influenced by the loose enactment of roles and the presence
of a leader, the team showed different formations, sxs and
f2f, and diversity in the tools used, and changes in the
workflow (social component shaping the tools use and the
task). By contrast, the distribution of the workload was
more even in Team D, with some noticeable differentiation
of the roles of each member (social). The roles also
influenced the ways in which the space and tools were used
by this team, adopting sxs formations at all the large
devices (reflecting the similarity of their workload) and
showing a more complex process with periods of parallel
work. Like Team B, this group also heavily used the
dashboard to support the meta-analysis of their candidate designs, as a result of the stricter differentiation of roles.
Overall, the analysis approach captured a set of elements
that each contribute valuable dimensions to understanding
the complexity of the group processes in this heterogeneous
setting. The ACAD dimensions provide a clear overview of the different aspects that can shape a group’s activity (as in Table 2), going beyond the set design alone (e.g. user
interface design). This is important as it can inform the
further re-design and refinement of the user interfaces and
the space, in light of the social and epistemic context.
Alternatively, the analysis also illustrates the flexibility of
this particular space, and how it enabled people to use such
different strategies.
CONCLUSIONS
The design of effective multi-surface spaces requires a clear
understanding of the multiple dimensions that can shape
group activity. There is a need for conceptual approaches to
address the critical challenge of designing effective
interfaces while keeping an explicit connection with the
underpinning context of usage. In this paper, we proposed
an actionable approach, to provide structure to the analysis
and understanding of group activity in multi-surface spaces.
The approach we have developed relates theories of group
activity to the ACAD framework and proposes the
application of multiple methods of analysis. The approach
is deliberately and explicitly pragmatic, based firmly on the knowledge needs of the people who want to understand
the actual impact of the design of the system of interest. To
the best of our knowledge, the ACAD approach is unique in
so far as it offers ways to disentangle activity from the
epistemic, physical/digital and social entities that situate it.
While these situating entities are (at least in part) designable, activity itself is not. Yet activity is what matters: it is how tasks actually get completed.
The richness of the approach has been illustrated with a
study of four teams doing an open and complex design task.
Three example analysis questions were addressed to
understand the complexity of, and the multiple factors that
can affect, the runtime activity. The varied ways each team
performed in the studio, even though they all were asked to
enact the same task and roles, and had the same set of tools
available, illustrate the multiple factors involved in groups’
activity in multi-device spaces. It can be seen that it is no
easy task to get a global view of groups’ activity, nor to
measure or even gain an appreciation of the complexity of
the activity in the interactive space. This is beyond the
capacity of a single analysis method.
In order to situate our approach in the wider literature of ethnographic approaches to group work, it is worth noting that ACAD provides ways to sensitise the
researcher or the designer to look at the physical, the
digital, the social and the epistemic dimensions of activity
and to distinguish these elements from emergent activity.
Thus, it may be said that ethnography provides both a
theoretical framing and a preference for observational and
interview techniques, but ACAD can sensitise the
ethnographer to look at relations between certain aspects of
the group activity. In short, our approach can be situated
with respect to ethnographic methods in two ways: it can
include qualitative ethnographic research methods as part of
a set of mixed methods to understand complex multi-user
activity; or, alternatively, our approach can be used by an
ethnographer for theoretically framing the activity analysis.
(A recent example was reported by Yeoman [56]).
We acknowledge that the ACAD framework’s more
comprehensive analysis of group activity in multi-surface
environments may be more time consuming. But it also
gives a richer picture of the collaborative activity and the
uses of the rich set of digital tools and devices. This can
provide insights into the collaboration and inform iterative
re-design of the spaces. Moreover, it is becoming increasingly feasible to automate some of the analysis; even information about social roles could be elicited, potentially with cross-validation within the group.
Some aspects of the collaborative group activity can be
automatically captured, for example, by automatically
tracking mobility and group formations [30], or allowing
the detection of leaders and followers [48]. Alternatively,
sensing technology can also be used to semi-automate the
analysis and reduce the effort necessary to make sense of
users' activity. Some of these tools include the use of
multichannel video-based analysis tools (e.g. VACA [10]
and EXCITE [31]), the digitisation and synchronisation of
hand-written observations (e.g. [51]), or the use of visual
representations of users’ interactions (e.g. VICPAM [34]
and VisTACO [49]).
Further work is needed to explore the application of the
approach in other contexts and interactive spaces to
demonstrate how the tools, artefacts, roles, divisions of
labour, individual practices, and approaches to the task can
re-shape users’ group activity in these kinds of emerging
technology-rich spaces. We believe that this work provides
a valuable foundation for the research needed to understand
complex collaborative interactions in multi-surface digital
ecosystems.
ACKNOWLEDGMENTS
This work was funded by the Australian Research Council
(Grant FL100100203). The studies were conducted under
protocol 2012/2794 approved by The University of Sydney
Human Research Ethics Committee. The most up to date
participant consent forms can be requested by email
(Peter.Goodyear@Sydney.edu.au).
REFERENCES
1. Zahra S. H. Abad, Craig Anslow and Frank Maurer.
2014. Multi Surface Interactions with Geospatial Data:
A Systematic Review. In Proceedings of the 9th ACM
International Conference on Interactive Tabletops and
Surfaces (ITS '14), 69-78.
http://dl.acm.org/citation.cfm?doid=2669485.2669505.
2. Jakob E. Bardram. 2005. Activity-based computing:
support for mobility and collaboration in ubiquitous
computing. Personal Ubiquitous Computing, 9, 5
(September 2005), 312-322.
http://dx.doi.org/10.1007/s00779-004-0335-2.
3. Jakob E. Bardram, Sofiane Gueddana, Steven Houben
and Søren Nielsen. 2012. ReticularSpaces: activity-
based computing support for physically distributed and
collaborative smart spaces. In Proceedings of the ACM
SIGCHI Conference on Human Factors in Computing
Systems (CHI '12), 2845-2854.
http://dx.doi.org/10.1145/2207676.2208689.
4. Richard Bentley, J. A. Hughes, D. Randall, T. Rodden, P.
Sawyer, D. Shapiro and I. Sommerville. 1992.
Ethnographically-informed systems design for air traffic
control. In Proceedings of the ACM Conference on
Computer-Supported Cooperative Work (CSCW '92),
123-129. http://dx.doi.org/10.1145/143457.143470.
5. Mehmet Ilker Berkman and Adem Karahoca. 2012. A
direct touch table-top display as a multi-user
information kiosk: Comparing the usability of a single
display groupware either by a single user or people
cooperating as a group. Interacting with Computers, 24,
5 (September 1, 2012), 423-437.
http://iwc.oxfordjournals.org/content/24/5/423.abstract.
6. Jeanette Blomberg and Helena Karasti. 2013. Reflections
on 25 Years of Ethnography in CSCW. Computer
Supported Cooperative Work (CSCW), 22, 4-6 (August
2013), 373-423. http://dx.doi.org/10.1007/s10606-012-
9183-1.
7. Susanne Bødker. 2006. When second wave HCI meets
third wave challenges. In Proceedings of the 4th Nordic
Conference on Human-Computer Interaction
(NordiCHI '06), 1-8.
http://dx.doi.org/10.1145/1182475.1182476.
8. Susanne Bødker and Clemens Nylandsted Klokmose.
2011. The Human-Artifact Model: An Activity
Theoretical Approach to Artifact Ecologies. Human
Computer Interaction, 26, 4, 315-371.
http://dx.doi.org/10.1080/07370024.2011.626709.
9. John Bowers and David Martin. 1999. Informing
Collaborative Information Visualisation Through an
Ethnography of Ambulance Control. In Proceedings of
the 6th European Conference on Computer Supported
Cooperative Work (ECSCW ’99), 311-330.
http://dx.doi.org/10.1007/978-94-011-4441-4_17.
10. Brandon Burr. 2006. VACA: a tool for qualitative video
analysis. In Proceedings of the CHI '06 Extended
Abstracts on Human Factors in Computing Systems
622-627. http://dx.doi.org/10.1145/1125451.1125580.
11. Pedro Campos and Alfredo Ferreira. 2015.
Collaboration Meets Interactive Surfaces: A Brief
Introduction (CSCW). Computer Supported
Cooperative Work, 24, 2-3 (June 2015), 75-78.
http://dx.doi.org/10.1007/s10606-015-9222-9.
12. Haeyong Chung, Chris North, Jessica Zeitz Self, Sharon
Chu and Francis Quek. 2014. VisPorter: facilitating
information sharing for collaborative sensemaking on
multiple displays. Personal and Ubiquitous Computing,
18, 5 (June 2014), 1169-1186.
http://dx.doi.org/10.1007/s00779-013-0727-2.
13. Stéphane Conversy, Hélène Gaspard-Boulinc, Stéphane
Chatty, Stéphane Valès, Carole Dupré and Claire
Ollagnon. 2011. Supporting air traffic control
collaboration with a TableTop system. In Proceedings
of the ACM Conference on Computer-Supported Cooperative Work (CSCW '11), 425-434.
http://dx.doi.org/10.1145/1958824.1958891.
14. Pryce Davis, Michael Horn, Florian Block, Brenda
Phillips, E. Margaret Evans, Judy Diamond and Chia
Shen. 2015. “Whoa! We’re going deep in the trees!”:
Patterns of collaboration around an interactive
information visualization exhibit. International Journal
of Computer-Supported Collaborative Learning, 10, 1
(March 2015), 53-76. http://dx.doi.org/10.1007/s11412-
015-9209-z.
15. Robert Ellis and Peter Goodyear. 2010. Students'
experiences of e-learning in higher education: the
ecology of sustainable innovation. Routledge, New
York.
16. Michael Evans and Jochen Rick. 2014. Supporting
Learning with Interactive Surfaces and Spaces. In
Handbook of Research on Educational Communications
and Technology, J. Michael Spector, M. David Merrill,
Jan Elen and M. J. Bishop (Eds.). Springer, New York,
689-701.
17. Tony Gjerlufsen, Clemens Nylandsted Klokmose,
James Eagan, Clément Pillias and Michel Beaudouin-
Lafon. 2011. Shared substance: developing flexible
multi-surface applications. In Proceedings of the ACM
SIGCHI Conference on Human Factors in Computing
Systems (CHI '11), 3383-3392.
http://dx.doi.org/10.1145/1978942.1979446.
18. Peter Goodyear and Symeon Retalis. 2010. Technology-
enhanced learning: design patterns and pattern
languages. Sense Publishers, Rotterdam.
19. Peter Goodyear and Lucila Carvalho. 2014. Framing the analysis of learning network architectures. In The architecture of productive learning networks, Lucila Carvalho and Peter Goodyear (Eds.), Routledge, New York, NY, 48-70.
20. Wendy Gunn, Ton Otto and Rachel Charlotte Smith.
2013. Design anthropology: theory and practice. A&C
Black, London, UK.
21. Christian Heath and Paul Luff. 1991. Collaborative
Activity and Technological Design: Task Coordination
in London Underground Control Rooms. In Proceedings
of the 2nd European Conference on Computer
Supported Cooperative Work (ECSCW ’91), 65-80.
http://dx.doi.org/10.1007/978-94-011-3506-1_5.
22. James Hollan, Edwin Hutchins and David Kirsh. 2000.
Distributed cognition: toward a new foundation for
human-computer interaction research. ACM
Transactions on Computer-Human Interaction
(TOCHI), 7, 2 (June 2000), 174-196.
http://dx.doi.org/10.1145/353485.353487.
23. Steven Houben, Paolo Tell and Jakob E. Bardram.
2014. ActivitySpace: Managing Device Ecologies in an
Activity-Centric Configuration Space. In Proceedings of
the 9th ACM International Conference on Interactive
Tabletops and Surfaces (ITS '14), 119-128.
http://dx.doi.org/10.1145/2669485.2669493.
24. Hans-Christian Jetter, Harald Reiterer and Florian
Geyer. 2014. Blended Interaction: understanding natural
human–computer interaction in post-WIMP interactive
spaces. Personal and Ubiquitous Computing, 18, 5
(June 2014), 1139-1158.
http://dx.doi.org/10.1007/s00779-013-0725-4.
25. Victor Kaptelinin and Bonnie A. Nardi. 2006. Acting
with technology: Activity theory and interaction design.
MIT Press, Cambridge, MA.
26. Michael J. Keppell. 2007. Instructional Design: Case
Studies in Communities of Practice. IGI Global,
Hershey, PA.
27. Ahmed Kharrufa, Roberto Martinez-Maldonado, Judy
Kay and Patrick Olivier. 2013. Extending tabletop
application design to the classroom. In Proceedings of
the 8th ACM International Conference on Interactive
Tabletops and Surfaces (ITS '13), 115-124.
http://dx.doi.org/10.1145/2512349.2512816.
28. Paul Luff, Marina Jirotka, Naomi Yamashita, Hideaki
Kuzuoka, Christian Heath and Grace Eden. 2013.
Embedded interaction: The accomplishment of actions
in everyday and video-mediated environments. ACM
Transactions on Computer-Human Interaction
(TOCHI), 20, 1 (March 2013), 1-22.
http://dx.doi.org/10.1145/2442106.2442112.
29. Paul K. Luff, Naomi Yamashita, Hideaki Kuzuoka and
Christian Heath. 2015. Flexible Ecologies And
Incongruent Locations. In Proceedings of the ACM
SIGCHI Conference on Human Factors in Computing
Systems (CHI '15), 877-886.
http://dx.doi.org/10.1145/2702123.2702286.
30. Nicolai Marquardt, Ken Hinckley and Saul Greenberg.
2012. Cross-device interaction via micro-mobility and f-
formations. In Proceedings of the 25th ACM Symposium
on User Interface Software and Technology (UIST '12),
13-22. http://dx.doi.org/10.1145/2380116.2380121.
31. Nicolai Marquardt, Frederico Schardong and Anthony
Tang. 2015. EXCITE: EXploring Collaborative
Interaction in Tracked Environments. In Proceedings of
the International Conference on Human-Computer
Interaction (INTERACT '15), 89-97.
http://dx.doi.org/10.1007/978-3-319-22668-2_8.
32. Roberto Martinez-Maldonado, Anthony Collins, Judy
Kay and Kalina Yacef. 2011. Who did what? who said
that? Collaid: an environment for capturing traces of
collaborative learning at the tabletop. In Proceedings of
the 6th ACM International Conference on Interactive
Tabletops and Surfaces (ITS '11), 172-181.
http://dx.doi.org/10.1145/2076354.2076387.
33. Roberto Martinez-Maldonado, Peter Goodyear, Yannis
Dimitriadis, Kate Thompson, Lucila Carvalho, Luis
Pablo Prieto and Martin Parisio. 2015. Learning about
Collaborative Design for Learning in a Multi-Surface
Design Studio. In Proceedings of the International
Conference on Computer-Supported Collaborative
Learning (CSCL '15), 174-181.
http://infoscience.epfl.ch/record/209190.
34. Roshanak Zilouchian Moghaddam and Brian Bailey.
2011. VICPAM: A Visualization Tool for Examining
Interaction Data in Multiple Display Environments. In
Human Interface and the Management of Information.
Interacting with Information, Michael J. Smith and
Gavriel Salvendy (Eds.). Springer Berlin Heidelberg,
278-287.
35. Bonnie A. Nardi. 1996. Context and consciousness: activity theory and human-computer interaction. MIT Press, Cambridge, MA.
36. Donald A. Norman. 2005. Human-centered design considered harmful. Interactions, 12, 4 (August 2005), 14-19. http://dx.doi.org/10.1145/1070960.1070976.
37. Pierre Rabardel and Gaëtan Bourmaud. 2003. From
computer to instrument system: a developmental
perspective. Interacting with Computers, 15, 5 (October
2003), 665-691.
http://iwc.oxfordjournals.org/content/15/5/665.abstract.
38. Roman Rädle, Hans-Christian Jetter, Nicolai Marquardt,
Harald Reiterer and Yvonne Rogers. 2014.
HuddleLamp: Spatially-Aware Mobile Displays for Ad-
hoc Around-the-Table Collaboration. In Proceedings of
the 9th ACM International Conference on Interactive Tabletops and Surfaces (ITS '14), 45-54.
http://dx.doi.org/10.1145/2669485.2669500.
39. Yvonne Rogers and Judi Ellis. 1994. Distributed
Cognition: an alternative framework for analysing and
explaining collaborative working. Journal of
Information Technology, 9, 2 (June 1994), 119-128.
http://www.palgrave-
journals.com/jit/journal/v9/n2/abs/jit199412a.html.
40. Yvonne Rogers. 2004. New theoretical approaches for
HCI. Annual Review of Information Science and Technology, 38, 1 (September 2005), 87-143.
http://onlinelibrary.wiley.com/doi/10.1002/aris.1440380
103/abstract.
41. Kathy Ryall, Clifton Forlines, Chia Shen and Meredith
Ringel Morris. 2004. Exploring the effects of group size
and table size on interactions with tabletop shared-
display groupware. In Proceedings of the ACM
Conference on Computer-Supported Cooperative Work
(CSCW '04), 284-293.
http://dx.doi.org/10.1145/1031607.1031654.
42. Kjeld Schmidt and Liam Bannon. 2013. Constructing
CSCW: The First Quarter Century. Computer Supported
Cooperative Work (CSCW), 22, 4-6 (August 2013), 345-
372. http://dx.doi.org/10.1007/s10606-013-9193-7.
43. Bertrand Schneider, Matthew Tobiasz, Charles Willis
and Chia Shen. 2012. WALDEN: multi-surface multi-
touch simulation of climate change and species loss in
Thoreau's woods. In Proceedings of the 7th ACM
International Conference on Interactive Tabletops and
Surfaces (ITS '12), 387-390.
http://dx.doi.org/10.1145/2396636.2396707.
44. Randall B. Smith, Ranald Hixon and Bernard Horan.
1998. Supporting flexible roles in a shared space. In
Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW '98), 197-206.
http://dx.doi.org/10.1145/289444.289494.
45. Norbert A. Streitz, Peter Tandler, Christian Müller-Tomfelde and Shin’ichi Konomi. 2001. Roomware: Towards the Next Generation of Human-Computer Interaction based on an Integrated Design of Real and Virtual Worlds. In Human-Computer Interaction in the New Millennium, John M. Carroll (Ed.), Addison Wesley, New York, NY, 553-578.
46. Norbert A. Streitz, Jörg Geißler, Torsten Holmer,
Shin'ichi Konomi, Christian Müller-Tomfelde,
Wolfgang Reischl, Petra Rexroth, Peter Seitz and Ralf
Steinmetz. 1999. i-LAND: an interactive landscape for
creativity and innovation. In Proceedings of the ACM
SIGCHI Conference on Human Factors in Computing
Systems (CHI '99), 120-127.
http://dx.doi.org/10.1145/302979.303010.
47. Lucy Suchman. 1987. Plans and situated actions: The
problem of human-machine communication. Cambridge
University Press, Cambridge, UK.
48. Noriko Suzuki, Tosirou Kamiya, Ichiro Umata,
Sadanori Ito, Shoichiro Iwasawa, Mamiko Sakata and
Katsunori Shimohara. 2013. Detection of Division of
Labor in Multiparty Collaboration. In Human Interface
and the Management of Information. Information and
Interaction for Learning, Culture, Collaboration and
Business, Sakae Yamamoto (Ed.), Springer Berlin
Heidelberg, 362-371.
49. Anthony Tang, Michel Pahud, Sheelagh Carpendale and Bill Buxton. 2010.
VisTACO: visualizing tabletop collaboration. In
Proceedings of the 5th ACM International Conference
on Interactive Tabletops and Surfaces (ITS '10), 29-38.
http://dx.doi.org/10.1145/1936652.1936659.
50. Anthony Tang, Melanie Tory, Barry Po, Petra Neumann
and Sheelagh Carpendale. 2006. Collaborative coupling
over tabletop displays. In Proceedings of the ACM
SIGCHI Conference on Human Factors in Computing
Systems (CHI '06), 1181-1190.
http://dx.doi.org/10.1145/1124772.1124950.
51. Nadir Weibel, Adam Fouse, Edwin Hutchins and James
D. Hollan. 2011. Supporting an integrated paper-digital
workflow for observational research. In Proceedings of
the 16th International Conference on Intelligent User
Interfaces (IUI '11), 257-266.
http://dx.doi.org/10.1145/1943403.1943443.
52. Daniel Wigdor, Hao Jiang, Clifton Forlines, Michelle Borkin and Chia Shen.
2009. WeSpace: the design development and
deployment of a walk-up and share multi-surface visual
collaboration system. In Proceedings of the ACM
SIGCHI Conference on Human Factors in Computing
Systems (CHI '09), 1237-1246.
http://dx.doi.org/10.1145/1518701.1518886.
53. Terry Winograd and Fernando Flores. 1986.
Understanding computers and cognition: A new
foundation for design. Addison Wesley, Menlo Park, CA.
54. Alain Wisner. 1995. Understanding problem building: ergonomic work analysis. Ergonomics, 38, 3 (1995), 595-605. http://www.tandfonline.com/doi/abs/10.1080/00140139508925133.
55. Jialiang Yao, Terrence Fernando, Hissam Tawfik,
Richard Armitage and Iona Billing. 2006. Towards a
Collaborative Urban Planning Environment. In
Computer Supported Cooperative Work in Design II,
Wei-ming Shen, Kuo-Ming Chao, Zongkai Lin, Jean-Paul A. Barthès and Anne James (Eds.). Springer Berlin
Heidelberg, 554-562.
56. Pippa Yeoman. 2015. Habits and habitats: An ethnography of learning entanglement. PhD Dissertation, The University of Sydney, Sydney, Australia.
57. Nicola Yuill and Yvonne Rogers. 2012. Mechanisms
for collaboration: A design and evaluation framework
for multi-user interfaces. ACM Transactions on
Computer-Human Interaction (TOCHI), 19, 1 (March
2012), 1-25.
http://dx.doi.org/10.1145/2147783.2147784.