Beyond Distributed Representation: Embodied Cognition
Design Supporting Socio-Sensorimotor Couplings
Jelle van Dijk
SDU Design
Univ. of Southern Denmark,
6400 Sønderborg, Denmark
jelle.vandijk@hu.nl
Remko van der Lugt
Technology and Innovation
Utrecht Univ. of Appl. Sciences
P.O.box 182, 3500 AD,
Utrecht, Netherlands
remko.vanderlugt@hu.nl
Caroline Hummels
Industrial Design
Eindhoven Univ. of Technology
P.O.box 513, 5600 MB
Eindhoven, Netherlands
c.c.m.hummels@tue.nl
ABSTRACT
Embodied Cognition has been proposed as a relevant theory
for tangible and embedded interaction [14]. Based on two two-year Research-through-Design cases, we identify
three variations of the theory: 1) Distributed Representation
and Computation, 2) Socially Situated Practices and 3)
Sensorimotor Coupling & Enactment. Both social
situatedness and sensorimotor coupling proved relevant for
design and for understanding user behavior in context. We
show how the ‘social’ and the ‘sensorimotor’ are part of
one integrated sensemaking process we call ‘socio-
sensorimotor coupling’. We argue that the intuitively appealing idea of using tangibles for external
representation actually hinders designing for sensemaking
as socio-sensorimotor coupling. We present a vision of
Embodied Cognition Design, which goes beyond a
representational interpretation, aiming to intervene more
directly into the socio-sensorimotor loop.
Author Keywords
Embodied Cognition, situatedness, practice, sensorimotor
coupling, interaction, design, theory, tangible, augmented
ACM Classification Keywords
H.5.2. User Interfaces: Theory and methods.
General Terms
Design; Theory; Human Factors.
INTRODUCTION
“To understand is to experience the harmony between what
we aim at and what is given, between the intention and the
performance – and the body is our anchorage in a world”
Merleau-Ponty [23]
Embodied Cognition (EC) is a theory of how people think,
act and in general make sense of the world [2]. With the
rise of such new fields as augmented reality, ubiquitous
computing, tangible interaction, context-aware and
wearable computing, we are witnessing an unprecedented
trend towards integrating physical form and digital process.
The challenges designers are faced with are actually closely
related to the theoretical issues in EC [15, 9]. EC has
therefore been presented as a relevant set of principles that
may inspire interaction design [9, 15, 11, 22].
EC draws from a diversity of areas, ranging from robotics
[6] to anthropology [13]. It is therefore not surprising that
there are considerable differences, and even conflicting
claims, in how the main ideas are elaborated. As we aim to
show in this paper, these differences have consequences for
design, which show up concretely in how and what to
design, as well as in how to make sense of data from user-
studies. In what follows, we use lessons learned from two
long-term Research-through-Design cases that formed part
of a PhD project [33], in order to answer the following
question: What does it mean to design from an EC
perspective? That is, what can EC bring to design?
PAPER OUTLINE
In what follows, we first introduce EC theory and three
variations of it that we found to be distinct as well as
relevant for our design projects. Next, we introduce the
design cases. We discuss the theory by reflecting on
concrete design problems and user observations in our
cases. We show why one of the variations, the
‘representational’ view, proved to be problematic. We
explain how the two other variations, one focused on ‘social
interaction’ and the other on ‘sensorimotor coupling’, can
be combined, through design, in one integrated perspective.
This, then, forms the basis of our vision of Embodied
Cognition Design, and we provide directions for such a
design in the final part of the paper. But first, the theory.
EMBODIED COGNITION: IN THEORY AND IN DESIGN
EC developed as a rejection of the cognitivist picture of the
mind as an information-processing machine (the brain) performing computations (reasoning) on internal
representations of the outside world [2, 3]. EC rejects the
modularity and sequentiality in classical models, in which
cognition is assumed to start first with ‘sensory input’, to be
processed internally in distinct mental modules, to result
finally in an appropriate ‘motor output’ [2]. Instead, EC
takes the body-in-action as a starting point, being neither
‘inner’ nor ‘outer’, but somewhere in between.
Phenomenologist Merleau-Ponty describes the peculiar
status of the body as follows:
“I move external objects with the aid of my body, which
takes hold of them in one place and shifts them to another.
But my body itself I move directly, I do not find it at one
point of objective space and transfer it to another, I have no
need to look for it, it is already with me … The
relationships between my decision and my body are, in
movement, magic ones.” [23, pp. 107-108]
With the body as a grounding structure, EC portrays
cognition essentially as a coordination, achieved through a
self-organizing network of elements [5]. This network
reaches beyond the brain to include bodily constraints,
homeostatic levels, sensorimotor properties, as well as
dynamic relations between body and the physical- and
social environment [2, 12, 17]. In all, body, brain and the
environment are seen as part of the cognitive system - part
of what makes cognition happen (See figure 1).
Figure 1: Sketch of the Embodied Cognition perspective.
Cognition emerges from interactions between brain, body and
the physical- and social environment © Jelle van Dijk
EMBODIED COGNITION: THREE FLAVORS
In this section we distinguish between three ‘flavors’ of
EC theory, reviewing literature for each. After that, we
introduce the design cases that grounded this tripartition.
Even though a large proportion of EC theory is
incorporated, parts of it are neglected, as the analysis
evolved in service of the actual design cases and their
practical demands. Hence, Lakoff and Johnson’s theory of
Embodied Metaphor is left aside (see [4]). However,
metaphorical designs such as in [4] can be seen as part of
Distributed Representation and Computation (explained
below). Activity Theory [20], although anti-Cartesian, lies
equally beyond the scope of this paper.
Distributed representation and computation (DRC)
Seeds of EC can be found in Norman’s notion of
‘knowledge in the world’ [24]. Norman focuses on external
representation. For example, he describes his habit of
putting his bag against the front door in order not to forget
to take the bag to work [24]. Philosopher Andy Clark
dubbed this the ‘007 principle’: the environment provides
one with information on a ‘need-to-know basis’ [2, p. 46].
In distributed cognition [17], both representing information and processing it (computation) are distributed between brain and environment. Hutchins shows how intelligent behavior on board a ship is a cooperative achievement of a system consisting of the brains of people as well as the
physical tools used [17]. This means people not only
represent information externally, but also use the
environment more actively to reason with. In this regard,
David Kirsh explains how pragmatic actions directly
contribute to achieving a goal, while epistemic actions
reorganize the environment to reduce cognitive load [21].
Taking out a pen and paper would be an epistemic action
that makes a hard calculation less difficult, enabling one to
solve the problem on paper instead of in the head. Clark
calls such tools and props ‘cognitive scaffolds’.
Manipulation of physical objects provides cues that enable
us to solve problems easily; problems that would be more
difficult when using only brain-internal computation [2].
Finally, people tend to live in pre-structured ‘life-worlds’
[1] within which task-related actions consume less
cognitive processing than would be expected in isolation.
For example, tasks often have a dedicated physical location
(cooking is done in a kitchen), tools needed are found close
together at the task location, routine maintenance in the
background helps to do tasks more easily, and so on [1].
Figure 2. The Distributed Representation and Computation
perspective (DRC). Details in text. © Jelle van Dijk
A sketch of DRC is given in figure 2. Cognition is a
computational-representational process, extending out into
the world to include objects and other people for external
representation and computation, reducing cognitive load.
Design from a DRC perspective
Many tangible interaction designs support exactly the
scaffolding strategies as described in the DRC framework.
Consider Ishii and Ullmer’s classic paper on the tangible interface [19]. The tangibles passiveLENS and activeLENS each create an interface between the physical body and, in this case, a digital street-plan (figure 3). DRC theory can explain how this reduces cognitive load. ‘Phicons’ (physical icons) are expected to outperform graphical icons, since they exploit people’s natural bodily skills:
“Tangible User Interfaces (TUIs) are built upon [human sensing and manipulation] skills, and situate … digital information in physical space. The key idea of TUIs is to give physical forms to digital information. The physical forms serve as both representations and controls for their digital counterparts.” [18, p. x, emphasis added]
Figure 3. Tangible interaction in passiveLENS (left) and activeLENS (right) (Brygg Ullmer, with kind permission)
Indeed, tangible interfaces like these may provide user-friendly access to digital information. This is one
way to see EC theory as supporting interaction design.
SOCIALLY SITUATED PRACTICE (SSP)
Research originating in anthropology and social science
[27, 9] has investigated the way tools become incorporated
in socially situated practices. SSP stresses the value of
concrete circumstances and opportunities that may arise ‘in
action’ [27, 9, 13]. For example, Lucy Suchman [27] argues
that people do not first internally create a ‘plan for action’
that is then executed. Instead, a person is found already
acting in the face of concrete circumstances in the world. In
doing so, plans evolve in an improvised manner. Along the
way people adapt, re-organize and use external artifacts:
“… cognitive phenomena have an essential relationship to a
publicly available, collaboratively organised world of
artefacts and actions, and … the significance of artefacts
and actions, and the methods by which their significance is
conveyed, have an essential relationship to their particular,
concrete circumstances.” [27, p. 50]
Importantly, where DRC treats people and physical objects
essentially alike, as computational units in a distributed
information-processing system [17], SSP shows how
objects ‘get taken up’ in a social activity:
“…the real cornerstone of knowledge is people. ... [A]
distinction [needs to be made]... between the idea that
knowledge can be represented and stored and the view that
it has to be contextualized and made relevant to the settings
in which it has to be applied. Meaning is not inherent to
information; information is made meaningful.” [9, p.185]
For SSP, ‘cognitive scaffolds’ can only exist in the context
of a social setting. Without social interrelations, roles,
norms, culture, politics, there would be no meaning at all in
using artifacts [27, see especially p. 277].
For DRC, physical artifacts are locally available media for
storing knowledge ‘in the world’. SSP instead emphasizes
how such artifacts (for example, cardboard ‘flight-strips’,
used by air-traffic controllers) function as active
components in the way work gets organized. ‘What it takes
to be a representation is to be used as a representation in the
course of some activity…in systems of practice’ [9, p. 208].
In this regard, the public availability of artifacts makes
them ‘accountable’, that is, ‘observable and reportable’ by
other members of the community of practice [9].
A sketch of SSP is given in figure 4: Cognition is seen as an
ongoing achievement of social coordination. Physical
artifacts function as mediating objects in the way people
deal with each other in the context of a situated practice.
Figure 4: The Socially Situated Practice perspective (SSP).
Details in text. © Jelle van Dijk
Design from an SSP perspective
SSP turns the design question on its head: instead of
designing how a user can access the digital world, it is the
computer that needs to connect to people’s existing
embodied practices somehow. As Klemmer et al. [22] state:
“Clearly, the digital world can provide advantages. To
temper that, we argue that because there is so much benefit
in the physical world, we should take great care before
unreflectively replacing it. … solutions that carefully
integrate the physical and digital worlds — leaving the
physical world alone to the extent possible — are likely to
be more successful by admitting the improvisations of
practice that the physical world offers.” [22, p. 147]
Likewise, Fernaeus et al. [11] argue that interaction with
physical objects directs action to the social setting, and has
meaning in and of itself, apart from potential mappings to
digital states [11, p. 228].
Consider, as an example, the Reactable, a tabletop tangible interface for creating electronic music (figure 5).
Reactable was not designed in explicit reference to SSP, but
we can see many of its elements resurfacing. Yes, one could
also describe Reactable in terms of DRC. Each ‘tangible’
maps to a particular digital sound (representation) or manipulation (computation). Yet, SSP helps to show how Reactable is much more than an interface to this mapping.
Figure 5: Reactable (Picture courtesy of Xavier Sivecas).
That is, using Reactable is a skill, including social
interactions, and the shared performance is coordinated by
drawing on the public visibility of each musician’s actions. Recently, a study provided empirical support for the idea
that people indeed create meaning collaboratively using
Reactable as a shared space for sensemaking [35].
SENSORIMOTOR COUPLING & ENACTMENT (SCE)
The skills mentioned in the Reactable example hinted at a
third strand of research in EC, focusing on sensorimotor
activity. The Sensorimotor Coupling & Enactment
perspective (SCE) originated in ‘behavior-based’ robotics
[5, 6]. These robots are driven by sensorimotor couplings in
direct interaction with the environment. Instead of
internally representing and planning action, such robots
navigate ‘us[ing] the world as its own model’, to quote
Rodney Brooks [6]. To explain SCE, Clark [2] gives a nice
example of a baseball outfielder. Instead of calculating first
the goal position and running speed to catch the ball, an
outfielder simply starts running, meanwhile making sure
that the ball maintains a straight horizontal line in his visual
field. By continually adjusting running speed to maintain that straight line, the outfielder ends up in the right spot, at the right time, to catch the ball [2].
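To make the structure of such a coupling concrete, consider a minimal sketch (our own illustration, not from Clark [2]; the gain and the angle values are arbitrary assumptions). The runner never computes a landing point; running speed is simply corrected so that the ball’s optical position keeps changing steadily, one crude way of keeping it on a straight line in the visual field:

```python
# Minimal sketch of the outfielder's sensorimotor loop (illustrative only).
# No landing point is computed; running speed is nudged so the ball's
# optical angle keeps changing at a steady rate.

def outfielder_step(speed, angle, prev_angle, gain=0.5, target_rate=0.01):
    """One perception-action cycle: compare the ball's optical motion to the
    desired steady rate and adjust running speed accordingly."""
    rate = angle - prev_angle      # how the ball's image moved since the last glance
    error = target_rate - rate     # deviation from a steady optical trajectory
    return speed + gain * error    # speed up or slow down to cancel the deviation

speed = 0.0
angles = [0.10, 0.12, 0.13, 0.135, 0.138]   # made-up optical samples (radians)
for prev, cur in zip(angles, angles[1:]):
    speed = outfielder_step(speed, cur, prev)
    print(round(speed, 4))
```

The point of the sketch is only that perception (the optical angle) and action (running speed) correct each other continuously; there is no internal model of where the ball will land.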
We can use SCE to understand the notion of affordance
[12]. An affordance is the way the world shows up for a
perceiver as directly affording some action, based on the
sensorimotor coupling in place. So, for instance, a river
might show up as ‘crossable’ or ‘non-crossable’ depending on whether one is running or standing still in front of it. How one
sees the world depends on how one is acting in it, while
action and perception get coupled over time as
coordinations [2, 3]. As part of SCE, an affordance is
certainly not a message, encoded in physical form in the
object, communicating ‘how it should be used’ [3].
Unfortunately, this is how Don Norman introduced
affordances to the HCI community, thereby implicitly
subsuming the concept under a DRC perspective [24].
Related to affordances is Varela’s notion of enactment [34].
To ‘enact’ a world means to create meaning through the
process of sensorimotor coupling [34, 28]. In a way, the
word sense-making should be taken literally:
“[We see] cognition as the creation and appreciation of meaning or sense-making in short … [M]eaning is in the engagements in which an organism builds its world.” [7, p. 358]
Theories of enactment draw from the philosophical position
of phenomenology, which recently gained renewed interest
in interaction design [9, 14, 23, 25, 27, 34].
A sketch of SCE is given in figure 6. Cognition is seen as a
temporal coupling between action and perception, sustained
through continuous bodily interactions with the
environment. Through this process meaning is enacted.
Figure 6. The Sensorimotor Coupling & Enactment
perspective (SCE). Details in text. © Jelle van Dijk
DESIGN BASED ON THE SCE PERSPECTIVE
Industrial designers have explored a vision called rich- or
embodied interaction [8], closely related to sensorimotor
theory, emphasizing how meaning is not predefined, but
arises in the interaction between user and product [8]. As an
example, we consider Stienstra et al.’s digitally augmented
speed-skate [26], which continuously maps skate-action to
acoustic feedback over headphones:
“The amount of pressure delivered is sonified through the
intensity and loudness of the band-pass filter; …from the
absence of sound while lacking pressure to the intense
loudness … while put on full pressure. … Balancing on the
backside … translates in a low sound while balancing on
the front … translates in a high sound” [26]
In this concept, digital information is not ‘accessed’
through embodied interaction, but instead digital
information is fused back into embodied interaction with
the world, supporting sensorimotor coupling. This feedback
loop need not contain predefined meaning. Feedback will
over time come to be recruited for, in this case, skating, and
so come to ‘make sense’ for the skater. Meaning is created
in the interaction [8]. Or: the skater enacts meaning [34],
even if he may not be able to describe explicitly how.
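A minimal sketch of such a continuous mapping, in the spirit of the quoted description (our own illustration; the sensor ranges, frequencies and function names are assumptions, not Stienstra et al.’s implementation [26]):

```python
# Illustrative sonification mapping for an augmented speed-skate.
# Pressure drives loudness; front/back balance drives pitch. No symbolic
# 'message' is encoded; the skater is left to enact a meaning over time.

def sonify(pressure, balance):
    """Map skate sensor readings to sound parameters.

    pressure: 0.0 (no pressure) .. 1.0 (full pressure)
    balance: -1.0 (backside of the blade) .. +1.0 (front of the blade)
    """
    loudness = pressure                          # silent at no pressure, loud at full pressure
    pitch_hz = 220.0 + (balance + 1.0) * 220.0   # backside ~220 Hz (low), front ~660 Hz (high)
    return loudness, pitch_hz

for pressure, balance in [(0.0, -1.0), (0.5, 0.0), (1.0, 0.8)]:
    print(sonify(pressure, balance))
```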
It is time to take stock. We have discussed EC as consisting
of three variations, which is in fact how the theory showed
up in our design cases. Let us now present the cases
themselves and further discuss the theory in light of these.
THE DESIGN CASES
The analysis in this paper is based on two two-year
Research-through-Design cases [33]. Both consisted of
three iterations resulting in working prototypes, including
various user-studies with prototypes, co-design sessions,
interviews and observations at stakeholder sites. In this
section we briefly introduce the cases. Details of these cases
and the user studies are reported elsewhere [29, 30, 31, 32].
NOOT, FLOOR-IT and creative group meetings
The systems NOOT and FLOOR-IT were designed to
support so-called ‘creative group meetings’. The idea was
to build further on the way brainstorm participants readily
use physical objects and spatial organizations as tangible
aids to gradually develop better, shared understanding of
the task at hand. That is, using sticky-notes, whiteboard,
physical props and the like, participants not only form
ideas; they also develop a better understanding of what the actual challenge is that should be addressed. Our design
question was how to support this sensemaking process
using interactive technology. The underlying research
question was how to apply EC theory to such a challenge.
NOOT (figure 7), in its final form, supports shared
reflection [29]. It enables people to catch a fleeting moment
of ‘reflection’ by using a tangible clip to create a time-mark
in an audio-recording of the session. Participants may
revisit these earlier moments using a playback device. This
way, earlier moments can be elaborated and shared later on.
FLOOR-IT (figure 8) allows people to build a personal
‘trace of thought’ by taking picture-snapshots of
meaningful physical elements in the space (e.g. a sticky-
note, sketch or mock-up). These personal traces are
projected around the body and thereby form a publicly
addressable ‘context’ that participants may use to build a
shared insight together [30].
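As an indication of how such body-anchored traces could be organized, the sketch below keeps each participant’s snapshots in a simple trace structure and lays them out around the tracked body position (our own illustration; the names, coordinates and circular layout are assumptions, not the FLOOR-IT implementation):

```python
# Illustrative sketch: a personal 'trace of thought' as a list of snapshots,
# projected on a circle around the participant's tracked position so that
# the trace moves along with the body.

import math
from dataclasses import dataclass, field

@dataclass
class Trace:
    owner: str
    snapshots: list = field(default_factory=list)   # e.g. photo file names

    def layout(self, body_x, body_y, radius=0.8):
        """Return a floor position (in metres) for each snapshot, arranged
        evenly on a circle around the tracked body position."""
        n = len(self.snapshots)
        return [(body_x + radius * math.cos(2 * math.pi * i / n),
                 body_y + radius * math.sin(2 * math.pi * i / n))
                for i in range(n)]

trace = Trace(owner="participant_1", snapshots=["sticky_note.jpg", "sketch.jpg"])
print(trace.layout(body_x=2.0, body_y=1.5))   # recomputed whenever the body moves
```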
CONSTRUCTING THEORY: REFLECTING ON PRACTICE
We now discuss how the three variations of EC emerged
from the design projects, and how we ended up going
beyond DRC, favoring a combination of SSP and SCE. We
present only a selection of design insights, illustrating how
the theoretical analysis was shaped by the practice.
DRC, SSP or SCE?
At the start, we implicitly worked mostly from a DRC
perspective, thinking about a kind of external, situated
storage medium for brainstorm insights. As the projects
evolved, the social setting as described in SSP became more
significant than the goal of ‘information storage’: when we
observed the practice in situ, we could not ignore the fact
that whatever was going on, it was in any case a thoroughly social affair, with people relating to other people in
everything they did. At the same time, as designers, we
needed direction at the concrete level of embodied
interaction. We needed to design for movement, temporal
dynamics, bodily position, physical form, and the
interactive behavior of the system.
Figure 7. NOOT. Physical tags with RFID connect to time-
points in continuous audio-recording. Tags can be placed in a
relevant spatial setting, e.g. on a sticky-note or on the
whiteboard. Using a playback device one may revisit earlier
moments in the conversation. © Jelle van Dijk
Figure 8. FLOOR-IT. Self-snapped pictures taken during the brainstorm are projected around participants as ‘traces of one’s thoughts’. Traces move along with the body and are manipulated by foot gestures. They can be used to connect to other people, supporting the formation of shared insight.
Here, SCE proved particularly meaningful. In observing
people using our prototypes, we saw how they develop and
maintain sensorimotor couplings that emerge in ongoing
interaction. We realized our tools must connect to this
coupling process.
Designing ‘beyond’ DRC
We give three examples showing how we gradually moved
away from DRC. We started the NOOT project aiming to
design an interactive ‘cognitive scaffold’ [2, 21]. For
example, we thought about ‘digitalizing’ sticky-notes,
presented on an interactive wall. Later, we saw that
participants talk a lot during a brainstorm, but write down
only little of it [29]. What added value would be created by
digitalizing these ‘poor’ representations? We realized
sticky-notes already work more as ‘triggers’ in the
conversation itself, than as storage containers of the
‘output’. This insight led to NOOT: a tool to record the
conversation and provide actionable entry-points for
revisiting the conversational history [29].
Later on, we discussed how long an audio-sample for any
NOOT-tangible should be. Surely we did not want to miss
out on the crucial bit of talk! However, this meant the
system had to ‘know’ when to start and stop recording at
‘just the right moments’. Not wanting to invoke futuristic
AI technologies, we abandoned recording ‘samples’
altogether and connected each NOOT-tangible to a time-
point in the entire recording. A scrolling function invites
exploration of the entire session from each tangible starting
point. This decision rejected the idea of NOOT-objects as
‘tangible information carriers’; rather, the total set in its
spatial organization offers a tangible mapping between the
action-space and the conversation history [29].
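The shift from ‘samples’ to ‘time-points’ can be illustrated with a small sketch (our own, hypothetical reconstruction; names and values are assumptions, not the NOOT implementation): each tangible only stores a moment in one continuous recording, and a scroll offset lets users explore the whole session from that entry point.

```python
# Illustrative sketch of the 'time-point' decision: a NOOT tangible carries a
# timestamp into one continuous session recording instead of a bounded clip.

from dataclasses import dataclass

@dataclass
class NootTag:
    rfid: str            # identifier read from the tangible clip
    mark_seconds: float  # moment in the session recording when the tag was used

def playback_position(tag, scroll_offset_seconds, session_length_seconds):
    """Start playback at the tag's time-mark; scrolling moves freely through
    the entire recording, clamped to the session boundaries."""
    position = tag.mark_seconds + scroll_offset_seconds
    return min(max(position, 0.0), session_length_seconds)

tag = NootTag(rfid="04:A2:19:7F", mark_seconds=1260.0)   # made-up values
print(playback_position(tag, scroll_offset_seconds=-90.0,
                        session_length_seconds=5400.0))   # prints 1170.0
```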
In the co-design activities leading to FLOOR-IT, we
observed how people would talk about sticky-notes and
flip-charts as if these were ‘the ideas and insights’, and we
saw people transporting paper between sub-sessions, and
taking materials home at the end of the day. At the same
time, these artifacts hardly represented the actual insights
gained in the session, and moreover, people were aware of
this. They stated they would ‘probably never look at the
materials again’, and they indicated it would be hard to
‘remember what it all meant’, later. As said, these artifacts
are useful mostly within a session, as mediating objects
through which people negotiate a shared understanding ‘in
situ’. We decided to enhance the space to that effect,
instead of trying to create better storage devices [30, 31].
In sum, DRC-style thinking led to problems (Why
digitalize sticky-notes? How to catch the right content in
the sample? Why save the ‘results’ if nobody will use
them?). The solution was to move away from DRC,
towards supporting social- and sensorimotor coupling.
Integrating the social- and the sensorimotor perspective
Qualitative observations of a facilitator using NOOT over
seven sessions revealed that the value of NOOT lies in how the act of grabbing and using a NOOT is a sensorimotor
routine. This routine makes a person more sensitive to a
reflective mode of perceiving the ongoing activity.
Furthermore, the public visibility of using a NOOT may
invite other people to respond with reflection as well [31].
In FLOOR-IT, people’s ‘traces of thought’ are taken up as
mediating objects in interactions with other people. As a
whole, this set-up helps people to get a grip on the
challenge, using the floor as a ‘shared action space’ [13]. In
a user study (20 brainstorm triads, each videotaped during a 20-minute brainstorm) we compared a working prototype of FLOOR-
IT with a control version where images were projected on a
wall [32]. Qualitative analysis revealed that traces are used
by people to position themselves in relation to others [32].
Social positioning proved crucial to the way the group
developed shared insight. When people’s pictures were
projected on the wall, this induced problems: it was difficult
for people to take up a position vis-à-vis one another as the
wall distracted attention from social interaction (Figure 9,
left). In FLOOR-IT, as traces were connected to people’s
bodies, participants would connect to each other fluidly,
using the traces as scaffolding elements for doing so
(Figure 9, right). In sum: how users are able to ‘couple’ to their traces in the activity itself may either dissociate social interaction from the interaction with technology, or help to integrate the social and the technological interaction into one unified, coherent activity.
Figure 9: Left: a wall-display strongly draws attention and inhibits social coupling (the woman at the left fails to ‘hook on’ to the conversation). Right: FLOOR-IT allows people to relate fluidly to each other socially, as mediated by the floor.
In summary, both NOOT and FLOOR-IT illustrate how
social interactions in a group are sustained by sensorimotor
couplings, and how this ‘in situ’ activity underlies how
people make sense collaboratively. This leads us to view
cognition as essentially a process of socio-sensorimotor
coupling [32]. The cases also provide a first indication of
how an Embodied Cognition Design can create
technological support for socio-sensorimotor coupling [33].
EMBODIED COGNITION DESIGN: DIRECTIONS
If digital technology is no longer an external memory or
computational aid in a distributed cognition, we must
rethink the relation between digital process and the
embodied setting in which it is embedded. In tangible
interaction, physical form is often linked to digital process
by pre-determined, metaphorical mappings [18, 4]. In this
DRC-style design, bodily interaction is used as a means to
present or manipulate something in the digital realm.
Metaphorical mappings tend to ignore SSP and SCE’s
insistence on the fact that meaning is created ‘in situ’,
through situated social positioning and sensorimotor
coupling. That is, meanings are not predefined by the mapping relation the designer chooses [9]. (More fundamentally, a predefined mapping begs the question of where the meaning of the digital object that the tangible represents itself comes from, as it is simply presupposed [3].) The challenge, then, is to couple interactive technology directly to this socio-sensorimotor loop itself.
Based on the analysis so far, we see four ‘entry-points’ that
may help designers do just this (See Figure 10):
Figure 10. Four entry-points for designing in support of EC by
intervening in the sensing-acting, socially relating or tracing
aspect of the complete socio-sensorimotor loop. Details in text.
1. SENSE-TO-ACT transforms or creates new
opportunities for the way a person can sense the
environment. This means creating new, artificial ‘sensors’,
allowing a person to respond to aspects of the environment that they hitherto could not (see the sketch following this list).
2. ACT-TO-SENSE creates new opportunities for
physically manipulating the environment. This is not unlike
what conventional tools do (with the hammer [14] and the
blind-man’s cane [23] as classic examples). Both new ‘sensing possibilities’ and new ‘action possibilities’ change the complete loop: new ways of sensing afford new actions, and new possibilities for action create new sensations. In fact
all of the four entry-points are just that: they are entry-
points to what is ultimately one integrated coupling loop.
3. RELATE provides new ways of social coordination with other people in face-to-face contact, to build and sustain relations with others as part of the sensemaking
process. Relevant research has been done on the role of
digital technology in mediating social interaction [9]. We
suggest taking those insights and situating them in actual,
embodied space. This means less of a focus on language
and ‘message passing’ over a communication channel, and
renewed interest in nonverbal communication and social
coordination in action [13].
4. TRACE: Through action, people leave traces in the
environment, which subsequently may guide further
actions. As both NOOT and FLOOR-IT show, interactive
technology can provide for new kinds of traces in the
environment that people may then subsequently take up as
scaffolds in further activities [30]. The focus here is less on the representational content of such traces (if there is any at all) than on the way such traces get taken up into the socio-sensorimotor loop, that is, how these physical aspects of the environment help coordinate the activities of the people in the situation.
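To make the first entry-point concrete, the sketch below couples a hypothetical distance sensor to vibrotactile feedback in a continuous loop (our own illustration; the hardware functions are placeholders, and no such device exists in the cases reported here). The loop encodes no message; it only gives the wearer a new way of sensing to which they can respond in action.

```python
# Illustrative SENSE-TO-ACT sketch: an artificial proximity 'sense' coupled
# continuously to vibration, with no symbolic content in the mapping.

import time

def read_distance_cm():
    """Placeholder for a range sensor (assumed hardware); returns centimetres."""
    return 120.0

def set_vibration(strength):
    """Placeholder for a vibration motor driver; strength between 0.0 and 1.0."""
    print(f"vibration strength: {strength:.2f}")

def coupling_loop(max_range_cm=200.0, period_s=0.05, steps=3):
    """Continuously map proximity to vibration strength: closer means stronger."""
    for _ in range(steps):
        distance = min(read_distance_cm(), max_range_cm)
        set_vibration(1.0 - distance / max_range_cm)
        time.sleep(period_s)

coupling_loop()
```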
TECHNOLOGY AS MATERIAL
In general, we propose to see sensors, actuators and digital
processes as ‘material’ to work with, together with physical
material [22, 11]. The question is what ‘digital materials’
can offer within the whole of digital- and mechanical form.
One opportunity may be that digital processes can bring into view, as a single whole, a collection of temporally or spatially disparate events that, given only our biological body and physical tools, would never become one unified experience. That is,
digital technology can put things together that are normally
unconnected, allowing the socio-sensorimotor loop to
couple to it as a whole. For example, NOOT presents
‘moments in conversation history’ as one spatial
configuration that people may then perceive and react to.
CONCLUSION
Many designs assume cognition to be essentially a
computational-representational process, even if one allows
the possibility that this process is distributed over brain and
environment, as in DRC. DRC aligns intuitively with the
vocabulary used to describe computational technology:
computers are computational-representational systems par
excellence, and it is tempting to describe user practices in
these same terms. We do not claim that this is wrong. Good
products have been designed based on DRC. Our point is
that there seems to be a bias towards DRC in tangible
interaction design, perhaps because we are working with
digital ‘materials’. Yet, assuming the framework of EC,
there is much more to human embodiment than external
representation. Moreover, DRC tends to obscure or neglect the other two strands of EC theory: the social and the sensorimotor perspective. Designing explicitly for socio-
sensorimotor couplings, we argue, brings new design
opportunities and new ways of understanding user behavior.
The body-in-action is a central part of human sensemaking
practices, even if, in our digital age, we may sometimes
come to forget this. It is uncovered in musical performance,
sports, dance, craftsmanship, professional know-how, and
in the way we cope with everyday affairs [10]. It is all the
more exciting, then, to find that in interactive systems
design, using (partly) digital systems, we see a renewed
‘anti-Cartesian’ trend, reconnecting present-day technology
to our embodied way of being-in-the-world [8, 9, 16, 22,
25]. With our practice-based analysis of EC theory we hope
to have provided some further directions towards this end.
ACKNOWLEDGMENTS
We thank all students involved, Creativity Company, Van
Berlo Design Company, YOUMEET and Future Centre
LEF and the TEI reviewers for their valuable comments.
This research was partly funded by CELL and Utrecht
University of Applied Sciences.
REFERENCES
1. Agre, P. & Horswill, I. (1997). Lifeworld analysis.
Journal of artificial intelligence research, 6, 111-145.
2. Clark, A. (1997) Being there: Putting brain, body and
world together again. Cambridge, MA: MIT.
3. Chemero, A. (2009). Radical embodied cognitive
science. Cambridge, MA: MIT.
4. Bakker, S., Antle, A., Van der Hoven, E., (2012)
Embodied metaphors in tangible interaction. Personal &
Ubiquitous Computing, 16, 433-449.
5. Beer, R.D. (2008). Dynamical systems and embedded
cognition. In: K. Frankish and W. Ramsey (Eds.), The
Cambridge Handbook of Artificial Intelligence.
Cambridge: Cambridge University Press.
6. Brooks, R.A., (1991) Intelligence without
representation, Artificial Intelligence, 47, 139–159.
7. De Jaegher, H. (2009). Social understanding through direct
perception? Yes, by interacting. Consciousness and
Cognition, 18, 535–542.
8. Djajadiningrat, J.P., Wensveen, S.A.G., Frens, J.W. and
Overbeeke, C.J. (2004). Tangible products: redressing
the balance between appearance and action. Personal
and Ubiquitous Computing, 8 (5), 294-309.
9. Dourish, P. (2001) Where the Action Is: The
Foundations of Embodied Interaction. Cambridge: MIT.
10. Dreyfus, H.L. (2002). Intelligence without
representation: Merleau-ponty's critique of mental
representation. Phen. and the Cog. Sci., 1, 367-83.
11. Fernaeus, Y., Tholander, J., & Jonsson, M. (2008).
Toward a new set of ideals: consequences of the practice
turn in tangible interaction. Proc of TEI’08, Feb 18-20,
223-230, New York: ACM.
12. Gibson, J.J. (1979). The Ecological Approach to Visual
Perception. Boston: Houghton Mifflin.
13. Goodwin, C. (2000). Action and embodiment within
situated human interaction. Journal of pragmatics, 32,
1489-1522.
14. Heidegger, M. (1927). Sein und Zeit. Tübingen: Max
Niemeyer Verlag. Reprinted in 1986.
15. Hornecker, E. and Buur, J. (2006). Getting a grip on
tangible interaction: a framework on physical space and
social interaction. Proc. of Human Factors in Comput.
Sys. ‘06, Apr. 22-27, 437-446, New York: ACM.
16. Hummels, C.C.M., Frens, J. (2008) Designing for the
unknown: A design process for the future generation - of
highly interactive systems and products. Proc. EPDE, 4-
5 Sept, 204-209.
17. Hutchins, E. (1995) Cognition in the wild. Cambridge:
MIT Press.
18. Ishii, H. (2008) Tangible bits: Beyond pixels. In: Proc.
of TEI’08, Feb 18-20, pp. XV-XXV. New York: ACM.
19. Ishii, H., and Ullmer, B. (1997). Tangible Bits: Towards
Seamless Interfaces between People, Bits, and Atoms.
Proc. of CHI’97, 234-241. New York: ACM.
20. Kaptelinin, V., and Nardi, B., (2006). Acting with
Technology: Activity Theory and Interaction Design.
Cambridge: MIT.
21. Kirsh, D. (2010). Thinking with external
representations. AI & Society, 25, 441-454.
22. Klemmer, S.R., Hartman, B. and Takayama, L. (2006).
How bodies matter: five themes for interaction design.
Proc. of DIS‘06, June 26–28, 140-149, New York:
ACM.
23. Merleau-Ponty, M. (1962). Phenomenology of
perception. New York: Routledge.
24. Norman, D.A. (2002). The design of everyday things.
New York: Basic Books.
25. Robertson, T. (1997). Cooperative Work and Lived
Cognition: A Taxonomy of Embodied Actions. Proc 5th
ECCSCW, 205-220, Dordrecht: Kluwer Academic.
26. Stienstra, J.T., Overbeeke, C.J. & Wensveen, S.A.G.
(2011). Embodying complexity through movement
sonification: case study on empowering the speed-
skater. Proc of 9th ACM Italian Chapter Int. Conf. CHI,
13-16 Sept., 39-44. New York: ACM.
27. Suchman, L.A. (2007). Human-Machine
Reconfigurations: Plans and Situated Actions 2nd
expanded edition. New York and Cambridge UK:
Cambridge University Press.
28. Torrance, S. (2006). In search of the enactive. Phen. and the Cog. Sci., 4, 357–368.
29. Van Dijk, J., Van der Roest, J., Van der Lugt, R. and
Overbeeke, C.J. (2011) NOOT: A tool for sharing
moments of reflection during creative meetings. Proc.
C&C’11, Nov. 3–6, New York: ACM.
30. Van Dijk, J. and Vos, G.W. (2011) Traces in Creative
Spaces. Proc C&C’11, Nov. 3–6, New York: ACM.
31. Van Dijk, J. and Van der Lugt, R. (2013) Scaffolds for
shared understanding. AI EDAM, 27, 107–117.
32. Van Dijk, J., Van der Lugt, R. and Hummels, C.C.M.
(2013) Tracing shared insight. WIP workshop, TEI’13,
Feb 10-13, Barcelona, Spain.
33. Van Dijk, J. (2013) Creating Traces, Sharing Insights:
Explorations in Embodied Cognition Design.
Unpublished PhD thesis, Eindhoven University of Technology.
Retrieved from: www.jellevandijk.org
34. Varela, F. J., Thompson, E., & Rosch, E. (1991). The
embodied mind. Cambridge, MA, USA: MIT.
35. Xambó, A., Hornecker, E., Marshall, P., Jordà, S.,
Dobbyn, C. and Laney, R. (In press) 'Let's jam the Reactable': Peer learning during musical improvisation with a tabletop tangible interface. ACM Transactions on Computer-Human Interaction.