Making Place for Social Norms in the
Design of Human-Robot Interaction
Ingar BRINCK a,1, Christian BALKENIUS a, and Birger JOHANSSON a
a Dept. of Philosophy and Cognitive Science, Lund University, Sweden
1 Corresponding author: Ingar Brinck, Department of Philosophy and Cognitive Science, Lund University; E-mail: ingar.brinck@fil.lu.se
Abstract. We argue that social robots should be designed to behave similarly to
humans, and furthermore that social norms constitute the core of human interaction.
Whether robots can be designed to behave in human-like ways turns on whether
they can be designed to organize and coordinate their behavior with others’ social
expectations. We suggest that social norms regulate interaction in real time, and that agents rely on dynamic information about their own and others' attention, intention, and emotion to perform social tasks.
Keywords. robots, social norm, interaction dynamics, social expectation
1. The Normative Perspective: What Social Robots Should Do
Social robots are intended to facilitate human life by interacting directly with humans. How they are made to interact relates to the function or role they have been assigned: to assist the elderly, perform fetch-and-carry tasks, give directions, help carry goods, be educators, and much more. The details of the design concern both what robots actually can do and what developers think they should be able to do. What they should do is determined by the task description together with normative considerations about what behavior is adequate to perform the task and appropriate for robot-human interaction.
We think that robots should be designed to imitate human sociality and afford mutually adaptive interaction that involves the robot as a partner instead of a tool. The rationale is to make interaction and communication as effortless and seamless as possible without interfering with the functionality of the robot. Choosing a design principle that requires humans to develop novel skills for robot interaction would be counterproductive and hamper efficiency as well as effectiveness. As interaction with social robots will increase in the future, such a strategy may even affect the human capacity for engagement negatively, creating a reverse interaction problem. Humans are natural-born communicators and collaborators, which makes it reasonable to take human behavior as the starting-point, thereby avoiding basing the design on an assumption of a principled gap or difference between robot and human. In the attempt to encourage humans to interact with robots, social robotics has encountered difficulties that evidence the importance of developing robots with social abilities similar to those of humans [1, 2]. Ideally speaking, robots would need to be
• intuitive and engaging to motivate humans to interact, and proactive to succeed in their tasks;
• capable, cooperative, and caring for humans to accept or even seek their assistance;
• sensitive to others' mood and feelings so as not to evoke negative emotions such as stress, frustration, or anger.
Fischer et al. [3] report that people address robots in the manner that the robot addresses them. Thus the functionality of the robot has a direct impact on the quality of the interaction: impoverished communicative skills will cause impoverished interaction. Moreover, robots need to perform even basic actions in the same way as humans. Behavior that is not similar enough, even if it can be identified at a general, abstract level, hinders the pragmatics of communication, prevents the reading of intention and target from bodily movements and actions, and causes misunderstandings and distrust [4]. It also casts doubt on the performative capacities of the robot, making it appear strange and unreliable.
Another strong point in support of building robots with human-like functionalities is that they can serve as tools for understanding human sociality. Robotics contributes to research in the cognitive and social sciences, e.g., by running large-scale simulations or testing high-risk hypotheses. Positive outcomes can then be followed up with more time- and resource-consuming methods such as laboratory experiments and longitudinal field studies. Besides allowing the examination of hypotheses that otherwise would have been discarded, robotics can investigate phenomena that are impossible to test in humans for physical or ethical reasons. For example, a robot that could control autonomic reactions such as pupil dilation would make it possible to manipulate non-verbal interaction in novel ways and investigate the effects of strategies unavailable to humans; moreover, it could be used to investigate second-person social cognition without the experimenter losing experimental control [4].
One might think that designing robots that appeal to people's (positive) emotions would boost human-robot interaction. While this may facilitate engagement (e.g., a robot's explicit expression of perceived likeness with a human will increase the latter's willingness to interact), the precise benefit is unclear. One might object that putting humans in the position of an adult addressing a young infant or pet gears the interaction towards low-level emotional engagement at the expense of logical thinking (infants and pets not being known to reason in rational terms), and invites a less vigilant, non-rational stance that is detrimental to the subject's reasoning and decision-making capacities. In contrast, robots that look like adult humans would not incur the risk of systematically changing or distorting the nature of social interaction.
We think that robots should interact with humans using the same principles as humans do and be sensitive to similar kinds of behavior in others. To answer the question whether robots can in fact be designed to do so, we first need to consider the nature of human-human interaction and clarify what skills it requires. Then we can discuss whether social robots may exhibit them.
1.1. The Core of Human Interaction: Social Norms
There is ample evidence from the cognitive and social sciences that human interaction is
fundamentally normative: it is tailored to what is considered right or wrong, correct or incorrect, appropriate or inappropriate, and cannot be reduced to mere causal or statistical
regularities. Many forms of interaction are institutionalized and pertain to the political, economic, social, and cultural structures of society [5]. They are defined by rules and prescribed by laws that enforce behavior, violations being subject to formalized punishment. Other types of interaction are based in conventions, arbitrary behavioral regularities that arise as solutions to coordination problems and then come to apply universally, such as which side of the road people should drive on [6]. Conventions depend on people's empirical expectations about how other people will behave.
The forms of interaction that are relevant to our present topic involve social norms, i.e., informal and fluctuating, situation-dependent patterns of behavior [7]. Social norms are based in personal normative beliefs about how the world should be (about right and wrong) and in social expectations about (i) what other people will do, and (ii) what other people think one should do oneself. Whereas (i) is an empirical expectation, (ii) is a normative expectation [8].
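To make the two kinds of expectation concrete, here is a minimal sketch in Python of how an agent's representation of a social norm might keep them apart; the class, field names, and threshold rule are our illustrative assumptions, not an implemented system.

    from dataclasses import dataclass

    @dataclass
    class SocialNorm:
        # A situation-bound behavioral pattern, following the two-expectation
        # analysis of social norms sketched above.
        situation: str    # e.g., "public elevator"
        behavior: str     # e.g., "face the doors, keep quiet"
        empirical: float  # strength of the belief that others will conform (0..1)
        normative: float  # strength of the belief that others think one should conform (0..1)

        def motivates_compliance(self, threshold: float = 0.5) -> bool:
            # A conditional preference: conform only if one expects others to
            # conform AND expects others to expect one's own conformity.
            return self.empirical > threshold and self.normative > threshold

    elevator = SocialNorm("public elevator", "face the doors, keep quiet", 0.9, 0.8)
    assert elevator.motivates_compliance()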
Social norms emerge spontaneously in places where people tend to gather and are
tied to an activity or situation where behavior is cluttered or unsystematic. They organize
and harmonize how individuals behave towards each other, giving rise to patterns of
reciprocal actions that improve how agents collectively manage (also when not engaged
in joint action). Social norms enable interaction to converge on the fly. If the activity repeats, the behavior may become standardized, permitting the prediction and planning of action.
Social norms reduce the cognitive costs associated with interaction, streamlining it by letting the agents focus their attention elsewhere. They are a key facilitator of joint action. Examples include how you ought to behave when taking a public elevator, shopping in the supermarket, dining with your new boss, attending meetings, talking to the police or a doctor, etc. Whether in one-shot or recurring activities, it is not hard to see their usefulness in communities where agents show great variety in behavior and tend to interact sporadically, possibly in ever-changing constellations.
As opposed to institutionalized norms and conventions that apply universally in a community, social norms are flexible, open-ended, and adjusted to contextual conditions. They can apply locally and temporarily, among just a few individuals, or persist for long periods of time, holding for groups that are defined not by their members but by function (say, families or a certain tribe or subculture), passing on traditions, values, and rituals related to habit and history. Notice that the same behavior may be correct in one situation (in the gym) but incorrect in another (in church), and that agents can observe several social norms across situations, even inconsistent ones [9].
Social norms typically are not in the agent’s self-interest and sometimes agents incur
a cost in following them, yet they function as attractors or steady-states in the state space
of behavior. People prefer conforming to not conforming, and care about doing as people
in general ought to do. Compliance is bolstered by negative and positive emotion, e.g., others' disapproval and one's own reactions of shame or guilt at violations. From an objective standpoint social norms are arbitrary: there is nothing about the actual world that motivates or grounds them.
On the received view, norms are based on rational deliberation and consist in the mutual higher-order propositional intentions, preferences, and reasons for action of a group of agents [10, 11]. But this gets things backwards: social norms emerge from behavior and depend on social expectations, not the opposite. We picture social norms as dynamic, emergent patterns of interpersonal behavior [12–14]. We think of cognition as embodied, i.e., functionally and constitutively dependent on physical bodies, and embedded, i.e., functionally and constitutively dependent on the material, socio-cultural, and historical context in which it takes place, including practices and artefacts and their spatial and temporal organization [15, 16]. The emphasis is on the relationship among the agents that grounds behavior patterns (ultimately second-person engagement) and on how it is jointly sustained in real time, sometimes supported by artefacts and infrastructures of the shared environment acting as cues to behavior [8, 17–19]. Complying with a social norm requires accessing dynamic information in real time about others' intentions and goals (interest), their emotions (attitude and evaluation), and the situation that the interaction concerns.
2. Sources of Social Information
Leaving the explanation of how material contexts afford cues for norms for another time, we distinguish three sources of social information: (a) gaze and head direction and facial expression of emotion, (b) place in physical space, and (c) bodily orientation, posture, and movement. Below, we describe what kinds of social information matter for acting appropriately and navigating social space together, whether by engaging in joint action involving a shared target or, in the looser sense, by coordinating behavior as a means for reaching distinct, individual targets.
2.1. Gaze and Face
As an agent's attention moves, so does her gaze, seeking out targets of action and acting as a pointer to spatial locations. The direction of eye gaze makes an agent's current interest manifest to others. Head direction has a similar function, adding information to gaze: a quick glance without an accompanying head movement is less salient than one where both head and posture move in the direction of the target.
Eye gaze is used frequently in intentional, non-verbal communication. Directed at another person's eyes, eye gaze establishes attention contact and signals the intention to interact. Prolonged eye contact can also signal agreement during an interaction or convey reassurance or trust. Joint attention entails simultaneously attending to a target as a consequence of attending to each other's attentional states, the agents being mutually aware of sharing their focus of attention. It can be used to establish a joint referent or target of action and then initiate communication about it, and it has a central place in interaction. Directed gaze, eye contact, and joint attention are important for timing and turn-taking, and they confer a common structure on interaction by centering it on jointly attended objects.
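To illustrate, a minimal sketch in Python of how a robot might operationalize attention contact and joint attention from estimated gaze vectors; the function names and the angular tolerance are hypothetical, not drawn from the cited literature.

    import numpy as np

    def gaze_angle(gaze_dir, eye_pos, target):
        # Angle (radians) between a gaze direction and the direction to a target.
        to_target = target - eye_pos
        cos = np.dot(gaze_dir, to_target) / (np.linalg.norm(gaze_dir) * np.linalg.norm(to_target))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def attention_contact(gaze_a, pos_a, gaze_b, pos_b, tol=0.15):
        # Mutual gaze: each agent's gaze is directed at the other's eyes.
        return gaze_angle(gaze_a, pos_a, pos_b) < tol and gaze_angle(gaze_b, pos_b, pos_a) < tol

    def joint_attention(gaze_a, pos_a, gaze_b, pos_b, target, tol=0.15):
        # Both agents fixate the same target; in a fuller model this would follow
        # an episode of attention contact, making the sharing mutually manifest.
        return gaze_angle(gaze_a, pos_a, target) < tol and gaze_angle(gaze_b, pos_b, target) < tol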
Faces communicate valence, or intrinsic attractiveness or aversiveness. Positive and
negative facial expressions of emotion also convey the agent’s evaluation of an attended
target and the resulting attitude that signals motivation. In cases such as the display of fear
or disgust, an agent’s evaluation can be adopted immediately by observers by mimicking
her behavior. For more subtle expressions, there is a progressive development of shared
valence that emerges from the ongoing interaction between the agents.
Emotion transfers by contagion when an observer mimics another agent's emotion expression and then catches the emotion. This results in the alignment of emotion and attitude, and indirectly of action readiness, preparing for joint action [20]. A similar mechanism occurs in attention contagion, which aligns agents with respect to a target in shared space and results in the transfer of interest [21]. Thus social information in gaze and head direction can be imparted and registered implicitly (automatically) or explicitly (knowingly).
Research in neuroscience has shown that social attention, i.e., when two or more agents attend simultaneously within the same context of action, though not necessarily jointly or to the same target, increases the quality and quantity of information [22]. Automatic and mandatory, social attention has significant effects on social interaction, which suggests a strong tendency in humans to share information that facilitates joint action and the synchronization of behavior.
Facial expressions of emotion and gaze can have multiple functions or meanings. These depend on the external context of action, on their target or cause, and crucially on the internal context: a continuous dynamic flow of emotions or of saccades and fixations, in which the present one acquires a value by its contrast to previous and anticipated emotions and directions of gaze. Transitions between emotions can be abrupt or gradual, extreme or subtle, and in a negative or positive direction, the present emotion being modulated along several axes.
Apart from facial expressions, pupil size and other autonomic signals are important for communicating evaluation and attitude, as they leak information about the processing of external stimuli or of information in memory [23]. Moreover, pupil size signals trust and interest, which are important for establishing a context for negotiating and converging on social norms.
2.2. Place in Space
Following Lewin [24], we suggest that all activities can be seen as taking place in a space, whether physical, social, or relational. In Lewin's terminology, such a space is called a field and is generated by the entities in the space as perceived by the individual. These entities have both a location in the space and a valence. Although valence is often seen as positive or negative, many entities can be both at the same time. A central concept of Lewin's field theory is the approach-avoidance conflict that occurs when an object or goal produces both attractive and repellent forces. This is particularly the case for a social space. All forms of social interaction require that an appropriate spatial distance is maintained [25], which can be seen as an equilibrium of attractive and repellent factors in the social interaction [26]. The suitable distance varies greatly with the type of interaction, group, and culture, and is actively negotiated during the interaction. Burgoon and Jones [27] proposed that this regulation depends on "(1) the amount of deviation [from expectations], (2) the reward-punishment power of the initiator, and (3) the threat threshold of the reactant".
There is a dynamic interrelation between the processes that converge on an appropriate distance and the processes that establish valence. Standing too far away from another person signals that the negative valence assigned to that person is too high, while standing too close signals that the positive valence assigned is too high, or perhaps a lack of sensitivity to social norms [28].
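As a toy illustration of such an equilibrium, the following Python sketch lets an interpersonal distance settle where a constant attraction balances a repulsion that decays with distance; the force shapes and constants are our assumptions, not a model taken from [26] or [28].

    import math

    def net_force(d, attract=1.0, repel=2.0, scale=0.5):
        # Positive values push the agents apart, negative values pull them together.
        # Repulsion is strong up close and decays with distance; attraction is constant.
        return repel * math.exp(-d / scale) - attract

    def settle_distance(d=3.0, step=0.05, iters=200):
        # Relax toward the distance where the two factors balance,
        # here d* = scale * ln(repel / attract).
        for _ in range(iters):
            d += step * net_force(d)
        return d

    print(round(settle_distance(), 2))  # about 0.35 with the toy constants above

Raising the repulsive term, e.g., to reflect negative valence or a low threat threshold in Burgoon and Jones's sense [27], widens the equilibrium distance.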
Placing oneself at a position in physical space indicates interest in objects or agents near that location, or possibly disinterest in, or even fear of, objects or agents at other locations. Being at a particular location in space thus communicates something about the structure of one's field, and inasmuch as it reflects valences it is also directly connected to social norms. Positioning highlights a place and can be used to establish a norm for the on-going interaction, say, reserving the place, giving it a certain function, using it to establish an order of doing things, etc.
2.3. Orientation, Posture and Movement
In addition to gaze direction, the orientation of the body is important in communicating the target of action, both when the action is directed toward a target and when it is turned away from it or in the opposite direction. Orientation can also be influenced by the target without being directed toward or away from it, as for instance when negotiating an obstacle.
Bodily orientation can signal interest (leaning towards or away from somebody or something) as well as emotion (liking or dislike). Small changes in bodily orientation relative to another body display agreement or disagreement. We continually signal our reactions to others' actions in the course of the on-going interaction by subtle movements and by repositioning ourselves in physical space relative to other agents. The smallest movement or postural change has valence, two central axes being approach-avoidance and sharp-smooth (shape), a third being fast-slow (speed). The values that a behavior acquires along these axes modulate each other and manifest themselves together. Position, orientation, and movement are all crucial for establishing and maintaining social interaction with other agents [29].
As soon as we reach for something, we communicate the valence of the object, and the precise movement profile contains information about what we will do with it. A movement aimed at picking up an object slows down before reaching it, while a movement that will hit the object accelerates up until contact. More complex movements can show hesitation or uncertainty about the status of the object, or constitute a request for information, as when a child reaches for an object, stops the motion, and looks at its parent for confirmation that it is allowed to take the object.
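A minimal sketch in Python of reading intent from such a profile, by checking whether the hand decelerates or accelerates as it closes in on the object; the thresholds and function name are hypothetical.

    import numpy as np

    def reach_intent(speeds):
        # Classify a reach from uniformly sampled hand speeds ending at contact:
        # deceleration near contact suggests picking up, acceleration suggests hitting.
        end_phase = speeds[-max(3, len(speeds) // 4):]
        trend = np.polyfit(np.arange(len(end_phase)), end_phase, 1)[0]
        if trend < -0.01:
            return "pick up"   # slowing down before contact
        if trend > 0.01:
            return "hit"       # still accelerating at contact
        return "uncertain"     # flat profile: hesitation, look for further cues

    print(reach_intent(np.array([0.2, 0.5, 0.7, 0.6, 0.4, 0.2])))  # -> "pick up"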
A simple demonstration that motion profiles can convey both the direction toward an object and its valence was given by Braitenberg's [30] hypothetical vehicles, which use very simple mechanisms to steer toward or away from different stimuli. Precise combinations of acceleration and deceleration toward an object are perceived as expressing different emotions such as fear, aggression, liking, and even love. An orientation away from an object or agent similarly identifies the location of the object, and the movement shows whether the object should be passively avoided or perhaps feared, as when running away from it. These different movement profiles and their relation to points of valence can be directly connected to Lewin's field theory [31].
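The wiring schemes behind these profiles are simple enough to state in a few lines of Python; the sketch below follows Braitenberg's vehicles 2a, 2b, and 3a [30], with sensor activations assumed to lie in [0, 1] and to grow with stimulus proximity.

    def braitenberg_step(left_sensor, right_sensor, wiring):
        # Return (left_motor, right_motor) speeds for a two-sensor, two-motor vehicle.
        if wiring == "fear":        # 2a: ipsilateral excitation, turns away from the stimulus
            return left_sensor, right_sensor
        if wiring == "aggression":  # 2b: crossed excitation, turns toward it and speeds up
            return right_sensor, left_sensor
        if wiring == "love":        # 3a: ipsilateral inhibition, approaches and slows to a halt
            return 1.0 - left_sensor, 1.0 - right_sensor
        raise ValueError(wiring)

    # Stimulus to the left (left sensor more active):
    print(braitenberg_step(0.8, 0.2, "fear"))        # left wheel faster: veers right, away
    print(braitenberg_step(0.8, 0.2, "aggression"))  # right wheel faster: turns left, toward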
3. The Descriptive Perspective: What Social Robots Can Do
Many of the properties of interacting agents described above can be or have been implemented in robots, and several research groups are working in this direction (see e.g. [32]). Shared practices and emerging routines involve reciprocal expectations about behavior. Such expectations facilitate or enhance social interaction and, in addition, allow violations of the routines in ways that make sense to the other agent and can change the direction of the interaction.
There have been several attempts to implement the control of appropriate social distance in robots. Pacchierotti et al. [33] implemented a number of proxemic principles to allow a robot to pass a human in a corridor in a natural way. Similarly, Lindner and Eschenbach [34] presented a formalization of social space, and Torta et al. [35] implemented a behavior-based navigation architecture based on proxemic principles. Vroon et al. [36] have investigated norms for social positioning in a study of how a robot should position itself relative to a group of agents to behave appropriately. Although people normally do not maintain a social distance to objects [25], the situation is different for robots that are treated in a manner similar to humans. Walters et al. [37] summarize studies of approach distance to different types of robots and show that it depends on various factors, including the robot's appearance, movement, and even its voice. Takayama and Pantofaru [38] showed that experience with robots (and pets) influenced how closely a person would approach a robot. The gaze direction of the robot was also a factor, with the effect differing between men and women: men would maintain a closer distance to the robot than women when the robot was looking directly at them. How these factors could change over time was addressed by Mitsunaga et al. [39], who implemented a reinforcement-based system that allows a robot to maintain appropriate social distance, gaze meeting, and movement speed. It adapts its behavior based on detected discomfort signals consisting of head movement and diverted gaze. This system can be seen as a first step toward the mechanisms for control of social distance proposed by Burgoon and Jones [27]. An early robotic implementation of this kind of 'social reinforcement' was described by Mataric [40].
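A crude sketch in Python of the adaptation idea: perturb interaction parameters and keep changes that lower an observed discomfort signal. Mitsunaga et al. [39] use policy gradient reinforcement learning and real perceptual cues; the hill climbing and the synthetic discomfort function below are simplifying assumptions.

    import random

    def discomfort(p):
        # Stand-in for the detected discomfort signal (head movement, diverted gaze);
        # in a real system this comes from perception, not from a known optimum.
        return abs(p["distance"] - 0.9) + abs(p["gaze_meeting"] - 0.7) + abs(p["speed"] - 0.4)

    def adapt(p, sigma=0.05, trials=500):
        # Keep a random perturbation of the parameters whenever it lowers discomfort.
        best = discomfort(p)
        for _ in range(trials):
            q = {k: v + random.gauss(0.0, sigma) for k, v in p.items()}
            if discomfort(q) < best:
                p, best = q, discomfort(q)
        return p

    params = {"distance": 1.2, "gaze_meeting": 0.5, "speed": 0.6}  # meters, fraction, m/s
    print(adapt(dict(params)))  # drifts toward the (hidden) comfort optimum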
Joosse et al. [7] raise a problem for implementing proxemics. Humans are very sensitive to others' discontent and adept at correcting mistakes; e.g., if they set their minds to it, they are very good at handling cultural and normative differences in bodily behavior in real time. The basis of these skills is not clear, and research into normative and cultural differences in social robotics suggests that culturally different motion behaviors are necessary for a robot guiding people at an airport. Yet cultural background may influence spatial behaviors in many ways, and a limit must be set to how many of them the robot needs to know. But how?
In many situations it is necessary to understand the goals of others, not only to maintain distance but also to avoid interfering with those goals. A number of methods for the recognition of goals or intent have been developed [41]. One example comes from Johansson [42]: in a study with a group of robots, each robot tries to detect the goals of the others so that their anticipated movements can be taken into account to avoid collisions. This is similar to a cocktail-party situation, where constant negotiation is necessary about where to stand, how to move from one location to another without colliding, etc. By tracking the movement of another agent, it becomes possible to infer its goal. With this information, the robot can anticipate how the other robot will move in the future and include that in its own behavior selection.
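A minimal sketch in Python (not Johansson's implementation) of inferring another robot's goal from its tracked motion and rolling the inference forward to anticipate its path:

    import numpy as np

    def infer_goal(track, goals):
        # Pick the candidate goal best aligned with the current heading,
        # estimated from the last two tracked positions.
        heading = track[-1] - track[-2]
        heading = heading / np.linalg.norm(heading)
        scores = [np.dot(heading, (g - track[-1]) / np.linalg.norm(g - track[-1])) for g in goals]
        return goals[int(np.argmax(scores))]

    def predict_path(track, goal, steps, speed=0.1):
        # Anticipated positions under straight-line motion toward the inferred goal.
        pos, path = track[-1].astype(float), []
        for _ in range(steps):
            direction = goal - pos
            pos = pos + speed * direction / np.linalg.norm(direction)
            path.append(pos.copy())
        return path

    track = np.array([[0.0, 0.0], [0.1, 0.1]])
    goals = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
    print(infer_goal(track, goals))  # -> [1. 1.], the goal along the current heading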
A particularly interesting case is when two robots have the same goal. If they did not take each other into account, they would select identical movements and immediately collide. If both take the other robot into account in exactly the same way, the result is no better. Instead, the robots need some strategy for deciding who will go first. In Johansson's study, one such strategy was to assign the robots different ranks that are used to decide who goes first. Such ranks could presumably be learned through the type of interactions described above. Another possibility would be to treat this as a coordination problem that is solved when behaviors converge and settle into a convention [6].
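A minimal sketch of the rank strategy (Python; the names and the waiting policy are our illustrative assumptions):

    from dataclasses import dataclass

    @dataclass
    class Robot:
        name: str
        rank: int  # could itself be learned through repeated interactions

    def right_of_way(a, b):
        # With a shared goal, the higher-ranked robot goes first; the other yields.
        return (a, b) if a.rank > b.rank else (b, a)

    first, second = right_of_way(Robot("r1", rank=2), Robot("r2", rank=1))
    print(f"{first.name} proceeds; {second.name} waits or replans around it")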
Another example where a robot learns about social norms is the Builder Robot [43]. This robot uses a simple manipulator to build designs out of various building blocks. The designs are not programmed into the robot. Instead, the robot is allowed to explore its environment and look at designs already built. Through this process, the robot learns to expect particular designs. When it later encounters a building block that it has seen as part of a design, it will generate expectations of that design and try to reproduce it. When its expectations are not met, they are used to set the goal of the robot [44, 45]. This mechanism can be used to pick up instrumental norms about object use (cf. [46]). For example, if the experimenter routinely places red blocks on top of blue ones, the robot will come to expect a red block every time it sees a blue one. If this expectation is not met, the robot will try to find a red block and put it on top of the blue one. If there is something else on top, the robot will instead remove it.
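A simplified sketch in Python of the expectation mechanism (not the Builder Robot's actual code): pairings observed during exploration become expectations, and a violated expectation becomes a goal.

    from collections import Counter, defaultdict
    from typing import Optional

    pair_counts = defaultdict(Counter)  # base block color -> counts of observed tops

    def observe(base, top):
        # Record one observed design, e.g., a red block on top of a blue one.
        pair_counts[base][top] += 1

    def expected_top(base) -> Optional[str]:
        # The robot's current expectation for what sits on `base`, if any.
        return pair_counts[base].most_common(1)[0][0] if pair_counts[base] else None

    def goal_on_violation(base, actual_top) -> Optional[str]:
        # A violated expectation is used to set the robot's goal [44, 45].
        exp = expected_top(base)
        if exp is None or exp == actual_top:
            return None
        if actual_top is None:
            return f"place a {exp} block on the {base} block"
        return f"remove the {actual_top} block from the {base} block"

    for _ in range(5):
        observe("blue", "red")                 # the experimenter's routine
    print(goal_on_violation("blue", None))     # -> place a red block on the blue block
    print(goal_on_violation("blue", "green"))  # -> remove the green block from the blue block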
Interestingly, what starts out as empirical expectations about object use, based in the causal properties of the object, develops into social expectations about how people should behave and about how others expect an agent to behave. The instrumental norm creates expectations about how people should behave, viz., check that a red block is in place, and if it is not, arrange for one to be. Since the behavior of the robot depends on its expectations, it can be changed by modifying those expectations, say, if the experimenter now starts to build other designs instead. In that case, the robot will slowly change its expectations and start to build the new designs instead. The idea is that its expectations about the building blocks will change as an effect of its social expectations changing, and not the opposite; typically, humans adapt their behavior to other agents' social expectations and not to their own instrumental expectations. Depending on the interaction between the robot and the experimenter, either of them can pick up behaviors from the other and influence how the different building blocks are used. This is similar to effects seen when robots are allowed to learn language autonomously [47].
Dynamical systems theory offers a framework for understanding social norms as self-organizing or synergy effects: processes that presuppose entrainment, the synchronization of behavior by rhythm and mimicry. By aligning the agents, entrainment facilitates the joint attention and emotional engagement that permit agents to negotiate behavior in real time during on-going reciprocal interaction, and then to converge. Each interaction episode will leave traces of the process and its outcome in the situation where it took place, traces that can later cue associated behavior and evoke past routines.
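Entrainment can be illustrated with two Kuramoto-style coupled oscillators whose phases lock despite different natural rhythms; the Python sketch below is a standard textbook model, with the coupling constant chosen purely for illustration.

    import math

    def entrain(w1, w2, k=0.8, dt=0.01, steps=5000):
        # Two coupled oscillators with natural frequencies w1, w2 (rad/s).
        # For k > |w1 - w2| / 2 the phase difference locks: a toy model of
        # two agents' behavior falling into a shared rhythm.
        p1, p2 = 0.0, 2.0
        for _ in range(steps):
            d1 = w1 + k * math.sin(p2 - p1)
            d2 = w2 + k * math.sin(p1 - p2)
            p1, p2 = p1 + dt * d1, p2 + dt * d2
        return (p1 - p2) % (2 * math.pi)

    print(round(entrain(1.0, 1.3), 2))  # settles near a fixed phase difference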
References

[1] G. Hoffman and C. Breazeal, Robots that work in collaboration with people, in AAAI Symposium on the Intersection of Cognitive Science and Robotics, 2004.
[2] B. Kühnlenz, S. Sosnowski, M. Buß, D. Wollherr, K. Kühnlenz, and M. Buss, Increasing helpfulness towards a robot by emotional adaption to the user, International Journal of Social Robotics, vol. 5, no. 4, pp. 457–476, 2013.
[3] K. Fischer, K. Foth, K. Rohlfing, and B. Wrede, Is talking to a simulated robot like talking to a child?, in 2011 IEEE International Conference on Development and Learning (ICDL), vol. 2, pp. 1–6, IEEE, 2011.
[4] A. Sciutti, C. Ansuini, C. Becchio, and G. Sandini, Investigating the ability to read others' intentions using humanoid robots, Frontiers in Psychology, vol. 6, 2015.
[5] J. R. Searle, The construction of social reality. Simon and Schuster, 1995.
[6] D. K. Lewis, Convention: A Philosophical Study. Cambridge, MA: Harvard University Press, 1969.
[7] M. Joosse, M. Lohse, and V. Evers, Lost in proxemics: spatial behavior for cross-cultural HRI, in Proceedings of HRI 2014, pp. 184–185, 2014.
[8] C. Bicchieri, The grammar of society. New York: Cambridge University Press, 2006.
[9] T. Fent, P. Groeber, and F. Schweitzer, Coexistence of social norms based on in- and out-group interactions, Advances in Complex Systems, vol. 10, no. 2, pp. 271–286, 2007.
[10] M. E. Bratman, Shared intention, Ethics, vol. 104, no. 1, pp. 97–113, 1993.
[11] R. Tuomela, The importance of us: A philosophical study of basic social notions. Stanford University Press, 1995.
[12] I. Brinck, Developing an understanding of social norms and games: Emotional engagement, nonverbal agreement, and conversation, Theory & Psychology, vol. 24, no. 6, pp. 737–754, 2014.
[13] P. Lo Presti, An ecological approach to normativity, Adaptive Behavior, vol. 24, no. 1, pp. 3–17, 2016.
[14] S. Torrance and T. Froese, An inter-enactive approach to agency: participatory sense-making, dynamics, and sociality, Humana Mente, vol. 15, pp. 21–53, 2011.
[15] E. Hutchins, The cultural ecosystem of human cognition, Philosophical Psychology, vol. 27, no. 1, pp. 34–49, 2014.
[16] H. Heft, The social constitution of perceiver-environment reciprocity, Ecological Psychology, vol. 19, no. 2, pp. 85–105, 2007.
[17] F. Guribye, From artifacts to infrastructures in studies of learning practices, Mind, Culture, and Activity, vol. 22, no. 2, pp. 184–198, 2015.
[18] E. Hutchins, Material anchors for conceptual blends, Journal of Pragmatics, vol. 37, no. 10, pp. 1555–1577, 2005.
[19] C. Rodríguez and C. Moro, Coming to agreement: Object use by infants and adults, in The Shared Mind: Perspectives on Intersubjectivity (J. Zlatev, T. P. Racine, C. Sinha, and E. Itkonen, eds.), ch. 5, pp. 89–114, Amsterdam and Philadelphia: John Benjamins, 2008.
[20] E. Hatfield, L. Bensman, P. D. Thornton, and R. L. Rapson, New perspectives on emotional contagion: a review of classic and recent research on facial mimicry and contagion, Interpersona, vol. 8, no. 2, pp. 159–179, 2014.
[21] A. Falck, I. Brinck, and M. Lindgren, Interest contagion in violation-of-expectation-based false-belief tasks, Frontiers in Psychology, vol. 5, 2014.
[22] C. Becchio, L. Sartori, and U. Castiello, Toward you: the social side of actions, Current Directions in Psychological Science, vol. 19, no. 3, pp. 183–188, 2010.
[23] M. E. Kret, Emotional expressions beyond facial muscle actions: a call for studying autonomic signals and their impact on social perception, Frontiers in Psychology, vol. 6, 2015.
[24] K. Lewin, Principles of topological psychology. Read Books Ltd, 1936.
[25] E. T. Hall, The hidden dimension. New York: Doubleday, 1966.
[26] M. Argyle and J. Dean, Eye-contact, distance and affiliation, Sociometry, vol. 28, no. 3, pp. 289–304, 1965.
[27] J. K. Burgoon and S. B. Jones, Toward a theory of personal space expectations and their violations, Human Communication Research, vol. 2, no. 2, pp. 131–146, 1976.
[28] J. K. Burgoon, T. Birk, and M. Pfau, Nonverbal behaviors, persuasion, and credibility, Human Communication Research, vol. 17, no. 1, pp. 140–169, 1990.
[29] A. Kendon, Conducting interaction: Patterns of behavior in focused encounters, vol. 7. CUP Archive, 1990.
[30] V. Braitenberg, Vehicles: Experiments in synthetic psychology. Cambridge, MA: MIT Press, 1984.
[31] C. Balkenius, Natural intelligence in artificial creatures, vol. 37. Lund University Cognitive Studies, 1995.
[32] K. Dautenhahn, Socially intelligent robots: dimensions of human–robot interaction, Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 362, no. 1480, pp. 679–704, 2007.
[33] E. Pacchierotti, H. I. Christensen, and P. Jensfelt, Human-robot embodied interaction in hallway settings: a pilot user study, in ROMAN 2005, IEEE International Workshop on Robot and Human Interactive Communication, pp. 164–171, IEEE, 2005.
[34] F. Lindner and C. Eschenbach, Towards a formalization of social spaces for socially aware robots, in International Conference on Spatial Information Theory, pp. 283–303, Springer, 2011.
[35] E. Torta, R. H. Cuijpers, J. F. Juola, and D. van der Pol, Design of robust robotic proxemic behaviour, in International Conference on Social Robotics, pp. 21–30, Springer, 2011.
[36] J. Vroon, M. Joosse, M. Lohse, J. Kolkmeier, J. Kim, K. Truong, G. Englebienne, D. Heylen, and V. Evers, Dynamics of social positioning patterns in group-robot interactions, in 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 394–399, IEEE, 2015.
[37] M. L. Walters, K. Dautenhahn, R. te Boekhorst, K. L. Koay, D. S. Syrdal, and C. L. Nehaniv, An empirical framework for human-robot proxemics, in Proceedings of New Frontiers in Human-Robot Interaction, pp. 144–149, SSAISB, 2009.
[38] L. Takayama and C. Pantofaru, Influences on proxemic behaviors in human-robot interaction, in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5495–5502, IEEE, 2009.
[39] N. Mitsunaga, C. Smith, T. Kanda, H. Ishiguro, and N. Hagita, Robot behavior adaptation for human-robot interaction based on policy gradient reinforcement learning, in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 218–225, IEEE, 2005.
[40] M. J. Mataric, Learning to behave socially, in From Animals to Animats 3: Proceedings of the Third International Conference on Simulation of Adaptive Behavior (D. Cliff, P. Husbands, J.-A. Meyer, and S. W. Wilson, eds.), vol. 617, Cambridge, MA: MIT Press, pp. 453–462, 1994.
[41] Y. Demiris, Prediction of intent in robotics and multi-agent systems, Cognitive Processing, vol. 8, no. 3, pp. 151–158, 2007.
[42] B. Johansson, Anticipation and attention in robot control, vol. 142. Lund University Cognitive Studies, 2009.
[43] B. Johansson and C. Balkenius, Building the builder robot, LUCS Minor 16, Lund University Cognitive Science, 2016.
[44] K. J. Friston and K. E. Stephan, Free-energy and the brain, Synthese, vol. 159, no. 3, pp. 417–458, 2007.
[45] K. J. Friston, J. Daunizeau, J. Kilner, and S. J. Kiebel, Action and behavior: a free-energy formulation, Biological Cybernetics, vol. 102, no. 3, pp. 227–260, 2010.
[46] C. Balkenius and S. Winter, Explorations in synthetic pragmatics, in Understanding Representation in the Cognitive Sciences (A. Riegler, M. Peschl, and A. von Stein, eds.), pp. 199–208, Springer, 1999.
[47] L. Steels, Language games for autonomous robots, IEEE Intelligent Systems, vol. 16, no. 5, pp. 16–22, 2001.