The In-between Machine
The Unique Value Proposition of a Robot or Why we are Modelling the Wrong Things
Johan F. Hoorn1, Elly A. Konijn1,2, Desmond M. Germans3, Sander Burger4 and Annemiek Munneke4
1Center for Advanced Media Research Amsterdam, VU University, De Boelelaan 1081, Amsterdam, Netherlands
2Dept. of Communication Science, Media Psychology, VU University Amsterdam, Netherlands
3Germans Media Technology & Services, Amsterdam, Netherlands
4KeyDocs, Amsterdam, Netherlands
{j.f.hoorn, e.a.konijn}
Keywords: Human Care, Interaction Design, Loneliness, Modelling, Social Robotics.
Abstract: We avow that we as researchers of artificial intelligence may have properly modelled psychological theories
but that we overshot our goal when it came to easing loneliness of elderly people by means of social robots.
Following the event of a documentary film shot about our flagship machine Hanson’s Robokind “Alice”
together with supplementary observations and research results, we changed our position on what to model
for usefulness and what to leave to basic science. We formulated a number of effects that a social robot may
provoke in lonely people and point at those imperfections in machine performance that seem to be tolerable.
We moreover make the point that care offered by humans is not necessarily the most preferred – even when
or sometimes exactly because emotional concerns are at stake.
Human care is the best care. If we want to support
the elderly with care robots, most will assume that
robots should be modelled after humans. Likewise,
in our lab, we are working on models for emotion
generation and regulation (Hoorn, Pontier, &
Siddiqui, 2012), moral reasoning (Pontier & Hoorn,
2012), creativity (Hoorn, 2014), and fiction-reality
discrimination (Hoorn, 2012) with the purpose to
make a fully functional artificial human that is
friendly, morally just, a creative problem solver, and
aware of delusions in the user (cf. Alzheimer). All
this may be very interesting from a psychological
viewpoint; after all, if we can model systems after
human behaviour and test persons confirm that those
systems respond in similar ways, we can make an
argument that the psychological models are pretty good.
Our project on care robots and particularly our
work with Hanson’s Robokind “Alice”
drew quite some
media attention, among which a national broadcaster
that wanted to make a documentary (Alice Cares,
Burger, 2015). The documentary follows robot Alice
who is visiting elderly ladies, living on their own
and feeling lonely. Alice has the lively face of a
young girl and can be fully animated, smiling,
frowning, looking away, and the like, in response to
the interaction partner whom she can see through her
camera-eyes. Perhaps more importantly, she can
listen and talk. The results of this uncontrolled ‘field
experiment,’ taken together with other observations,
our own focus-group research, interviews, and
conversations, as well as the research literature,
brought us to a shift in what should be modelled if
we want robots to be effective social companions for
lonely people, rather than accurate psychological
models walking by.
To start with a scientific disclaimer, what we are
about to present is no hard empirical evidence in any
sense of the word but at least it provided us with a
few leads into a new direction of thinking, which we
want to share.
The set-up of the documentary was such that in
the first stage, the elderly ladies (about 90 years old
and mentally in very good shape) came to the lab
with their care attendants and conversed with Alice
in an office environment. In the second stage, Alice
was brought to their homes several times over a
period of about two months, where the ladies
continued the conversation with Alice.
For technical reasons, we used a Wizard of Oz
set-up in which a technician operated Alice behind
the scenes as a puppeteer (in a different room,
unseen by the ladies). While Alice filmed the
conversation through her camera-eyes, a separate
film camera in the room recorded the conversation
as well. The participating ladies were fully informed,
yet awareness of the camera seemed to dissipate
after a while.
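For developers, the set-up just described can be sketched as a small relay loop: the robot produces no language of its own, and a hidden operator supplies every utterance. The class and method names below are illustrative assumptions, not the control software actually used with Alice.

```python
# Minimal Wizard-of-Oz relay: the robot merely voices what a hidden
# human operator types. All names here are illustrative assumptions,
# not the software used in the documentary set-up.

from dataclasses import dataclass, field
from typing import List


@dataclass
class RobotPuppet:
    """Stands in for the robot's text-to-speech and face-animation channel."""
    log: List[str] = field(default_factory=list)

    def speak(self, utterance: str) -> str:
        # In a real set-up this would drive the TTS engine and animations.
        self.log.append(utterance)
        return f"[robot says] {utterance}"


@dataclass
class WizardOfOz:
    """Relays operator input to the robot; the user never sees the operator."""
    robot: RobotPuppet

    def operator_types(self, text: str) -> str:
        return self.robot.speak(text)


robot = RobotPuppet()
wizard = WizardOfOz(robot)
print(wizard.operator_types("Good morning! What did you do today?"))
```

The point of the sketch is that everything the user experiences as conversation passes through `speak`; the 'intelligence' sits entirely with the puppeteer.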
In viewing the recorded materials, most striking
was the discrepancy between what the women said
about Alice cognitively and what they experienced
emotionally. Offline, while not on camera, it was
almost as if their social environment kept them from
speaking enthusiastically about Alice, as if they were
ashamed that they actually loved talking
to a robot. In their homes, even before Alice was
switched on or before the camera ran, the ladies
were immediately busy with Alice, greeting her and
wondering where she had been and what she had seen.
All women tended to approach Alice as a
helpless child, like a grandchild, but apparently were
not surprised that this child posed rather adult and
sometimes almost indiscreet questions about
loneliness or life situations. When Alice looked
away at the wrong moment, one lady said “What are
you looking at? You’re not looking at me while I
talk to you.” She did not frame it as an error of the
robot, which it was. She brought it up as an
observation, a kind of attentiveness, pointing out
certain behaviour to the child. Fully aware of the fact
that Alice could not eat or drink, the old lady still
wanted to offer food and drink to Alice. While she
had her coffee, she said to Alice “You cannot have
cookies, can you? A pity for you … well, now I
have to eat it.” The smile and looks at Alice revealed
sharing a good joke. Interestingly, a similar event
occurred a few weeks later: The lady had prepared
two slices of cake on a dish while she watched TV
together with Alice. She asked Alice: “You still
can’t have cake, can you?” This time, however, it
was not a joke; the old lady showed regret. This
should really be seen as a compliment; the wish to
enjoy the food together with Alice may tell us
something about how the robot felt as an
interpersonal companion.
As Alice stayed longer in the house, the need
to talk vanished. Yet, the ladies did like it that
‘someone’ was there; that some social entity was
present. This may refer to the difference between
someone paying you a short visit or a person living
with you: It may indicate that one feels at ease and
need not entertain one’s company. At times, one of
the ladies read the newspaper aloud to Alice just to
share the news with ‘someone.’ The ladies sang with
her, showed her photo books of the family, did
physiotherapy, and watched the World
Championships with her.
It seemed that the less socially skilled had greater
benefit from Alice. Because of Alice, the ladies
drew a lot of attention: on the streets and in public
places. People called them up to ask how things
were with Alice. People sent newspaper articles
about robot care. That alone made the ladies less
lonely, but obviously this novelty effect will decay
as Alice becomes more common; for now, though, it
worked quite well. Alice also worked for those who
needed physical activation. One of the ladies said she
would practice more often, also in the long run, if
Alice asked her daily; she would really like to do it
for Alice. Another lady had wanted to write to a friend
for two weeks but had not got round to it. When Alice
asked about that friend, the lady was a bit ashamed
and started writing right away.
An aspect we also observed in another TV report
(De Jager & Grijzenhout, 2014) is that a social robot
works as a trusted friend. People confide in them
and tell them painful life events and distressing
family histories they hardly ever tell to a living
person. When the – in this case Nao – robot Zora
asked “Are you crying?” this was enough to make
one of the ladies crack and pour her heart out (De
Jager & Grijzenhout, 2014).
The lonelier the lady, the more easily a social robot
was accepted. We know that an old lady with an
active social life did not care about a companion
robot – here Zora – not even after a long period of
exposure (De Jager & Grijzenhout, 2014). On the
other hand, we talked to a 92-year-old woman with a
large family, who stated: “I have so many visitors
and then I have to be polite and nice all the time. A
robot I can shut off.”
Part of the acceptance of Alice among lonely
people appears purely pragmatic: Better something
than nothing – a prosthetic leg is better than no leg at
all. The initial resistance disappeared over time.
Another aspect that contributed to the acceptance of
the robot was that nobody in their social
environment reminded them of talking to a robot –
they could live the illusion and enjoy it. Without
exception, each lady was surprised when seeing
Alice again that she had a plastic body and that she
was so small. They said things like: “Last time,
Alice was wearing a dress, wasn’t she?”; “I thought
she was taller the last time?” Perhaps, because
Alice’s face has a human-like appearance with a soft
skin, this impression may have transferred to other
parts, whereas her body work definitely is ‘robotic’
– as if she were ‘naked’? The hesitance of one lady
continued for a longer period of time. Her daughter
kept warning her: “Beware, Mom, those robots
remember everything.” That same daughter
informed her mother that all Alice said was typed in
backstage. Nevertheless, even this lady enjoyed
singing with Alice in the end. The rest of the ladies
did not mind the technology or how it was done. It
was irrelevant to them, although sometimes they
realized ‘how skilled you must be to program all
this.’
All women mentioned that Alice could not walk
but it did not matter too much – “many of my
generation cannot walk either, not anymore”, one of
them commented. Actually, it made things simple
and safe because the ladies always knew where she
was. In the same vein, Alice was extremely patient
about them moving around slowly, responding late,
and taking long silent pauses. Without judgment or
frustration, Alice repeated questions or repeated
answers, which made her an ideal companion.
Speech errors or sometimes even an interruption
by the Acapela text-to-speech engine that ‘this was a
trial version’ did not disturb the ladies a bit. If a
human does not speak perfectly or sometimes makes
random statements, you also do not break contact.
Different voices were not disturbing. The only
difficulty the women experienced was with
amplitude, awkward sentence intonation, or
mispronunciation of words.
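Since amplitude, intonation, and pronunciation were the stumbling blocks rather than content, tuning prosody may pay off more than clever semantics. As a hedged illustration, the W3C SSML standard (supported by many text-to-speech engines) lets one slow the rate, raise the volume, and insert pauses; the helper below merely assembles such markup, and the parameter values are assumptions about what suited our listeners, not measured settings.

```python
# Illustrative helper wrapping an utterance in W3C SSML prosody markup.
# The default rate/volume/pause values are assumptions, not settings
# derived from the documentary recordings.

def ssml_for_elderly(text: str, rate: str = "slow", volume: str = "loud") -> str:
    """Slow, loud speech with a pause between sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    body = '<break time="600ms"/>'.join(f"<s>{s}.</s>" for s in sentences)
    return f'<speak><prosody rate="{rate}" volume="{volume}">{body}</prosody></speak>'


print(ssml_for_elderly("Good morning. Did you sleep well"))
```

Whether a given engine honours every prosody attribute varies, but the principle stands: phonetic presentation is a configuration problem, not a modelling problem.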
Human help has its drawbacks too. From our
own focus-group research and conversations with
elderly people, we learned that human help is not
always appreciated, particularly when bodily contact
is required or someone has to be washed (Van
Kemenade, in prep.). During a conversation with the
lady of 92 about home care, she admitted to having
dismissed her home help because they ‘rummage in your
wardrobe’ and ‘go through your clothes.’ She ‘did
not need an audience’ while undressing, because
they ‘see you bare-chested.’ The difficulty of
rubbing ointment on her sore back she solved with a
long shoehorn. This, she thought, was better than
having a stranger touch her skin. She preferred a
robot to ‘such a bloke at your bed side.’
People accept an illusion if the unmet need is big
enough. Loneliness has become an epidemic in our
society (Killeen, 1998) and the need for
companionship among the very lonely may override
the awareness that the robot is not a real person.
That is, whether the robot is a human entity or not
becomes less relevant in light of finding comfort in
its presence and its conversations; in its apparent
humanness (cf. Hoorn, Konijn, & Van der Veer,
2003). The robot is successful in the fulfilment of a
more important need than being human.
On a very basic level, the emotions that come
with relevant needs direct information processing
through the lower pathways in the brain (i.e., the
amygdala); the more intuitive and automatic
pathway, which also triggers false positives. Under
high levels of fear, for instance, people may perceive
a snake in a twig. Compared to non-emotional states,
emotional states facilitate the perception of realism
in what actually is not real or fiction (Konijn et al.,
2009; Konijn, 2013). The fiction-side of the robot
(‘It’s not a real human’) requires processing at the
higher pathways, residing in the sensory cortex, and
sustaining more reflective information processes.
The lower pathway is much faster than the higher
pathway and the amygdala may block ‘slow
thinking’ (i.e., a survival mechanism needed in case
of severe threat and danger). Thus, the emotional
state of lonely people likely triggers the amygdala to
perceive the benefits of need satisfaction (relieving a
threat). Joyful emotions prioritize the robot’s
companionship as highly relevant and therefore
(temporarily) block the reflective thoughts regarding
the robot’s non-humanness, or at least discard that
aspect as irrelevant. This dualism in taking
for real what is not is fed by the actuality and
authenticity of the emotional experience itself:
‘Because what I feel is real, what causes this feeling
must be real as well’ (Konijn, 2013). And of course,
as an entity, the robot is physically real; it just is not
human.
Not being human may have great advantages and
makes the social robot an in-between machine: in-
between non-humanoid technology and humans. The
unique value proposition of a social robot to lonely
people is that the humanoid is regarded a social
entity of its own, even when shut down. It satisfies
the basic needs of interpersonal relationships, which
sets it apart from conventional machines, while
inducing a feeling of privacy that a human cannot
warrant. As such, the social robot is assumed to keep
a secret and clearly is not seen as part of the
personnel or caretakers who should not know certain
things that are told to the robot. For example, one of
the ladies told the robot she was throwing away her depression
medication as she did not think of herself as
depressed (De Jager & Grijzenhout, 2014).
As said, our robot Alice recorded everything
with her camera eyes. However, over the course of
interacting with Alice, it became less relevant that
the robot had camera eyes and that the caretakers
could monitor all those human reactions you will not
get when people talk straight into a conventional
(web) camera. With such camera eyes, for example,
one can check someone’s health condition and
psychological well-being. Clearly, the participants
experienced a genuine social presence that was yet
not human. This was an advantage because they
could confide in someone without having to fear
human indiscretion and associated social
consequences. The ladies were more inclined to
make confessions and tell what goes on inside than
in face-to-face contact (where they feel pressed to
‘keep up appearances’). As one of them affirmed
“It’s horrible to be dependent but you have to accept
and be nice.”
In the following, we formulate several functions
that social robots may have and that make them
different from human attendants. Under conditions
of severe loneliness, social robots may invite
intimate personal self-disclosure. This is similar to
the so-called stranger-on-a-train effect (Rubin,
1975). Sometimes people open their hearts to
complete strangers or they tell life stories to their
hair dresser or exercise coach, an inconsequential
other in the periphery of one’s network (cf.
Fingerman, 2009). A social robot may perfectly take
that role of being an inconsequential other in the
network of the lonely.
Private with my robot. Somewhat related to the
previous is that the robot guarantees privacy in the
sense of avoiding human physical contact. Older
people are often ashamed of their body (Van
Kemenade, in prep.) and feel more comfortable with
a robot at intimate moments and would even prefer a
robot over human caretakers (whereas the caretakers
think the other way around). The robot does not
judge, does not meddle, and does not pry.
Social robots exert a dear-diary effect because
they do not demand any social space like humans
do. The user can fill up the entire social space
without having to respect the needs and emotions of
the other. You can share experiences and memories,
sing old tunes, look at old photographs, tell stories of
the past, and the small things that happened today; a
social robot will never tire of listening to or telling
the same over and over again if you want it to. Like
a diary, you can say whatever you want and the only
thing the other does is listen patiently. She is all
there for you and never judges.
The impertinent cute kid. Within the first minutes
of interaction, social robots such as Alice or Zora are
allowed to ask very intimate questions (e.g., “How
do you rate the quality of your life?” or “Do you feel
lonely?”); something which in human-human
communication would be highly inappropriate. With
robots like Alice, this might be acceptable because
she looks innocent and really cute and is small like a
child. Therefore, she may be easily forgiven in a
way one forgives a (grand)child. In effect, the
elderly ladies responded quite honestly even when
the answer was not socially desirable: To Alice:
“Nobody ever visits me”, “I don’t like that home
support comes too early in the morning.” To Zora: “I
want to stop living.” In other words, social robots
can get down to business right away, obtaining more
reliable results than questionnaires and anamnesis.
Social robots such as Alice provoke endearment,
the grandchild effect, urging one to nurture and nourish
it (and share cookies!). It is an object of affection
and activation; something to take care of instead of
being taken care of (cf. Tiger Electronics’ Furby). In
this circumstance, it will foster feelings of autonomy
and independence.
I will do it for you. Social robots may serve as
a bad conscience or, put more positively, as
reminders and activators. By simply inquiring about
a friend, the robot raised sufficient social pressure to
activate the lady to finally start writing that letter.
The same happened with the physical exercises:
That lady trained just to please her beloved Alice.
The puppy-dog effect. Many people walk the dog
so they meet people and can have a chat. Social
robots work in quite the same way. If you take them
out, be prepared for some attention, awe as well as
fascination. People will talk to you to inquire about
‘how the robot is doing.’
We showed the Zora movie to a former care
professional, who stated (personal communication,
Sept. 28, 2014): “Before watching Zora, I thought it
would painfully show how disengaged we are from
those in need of care. Give them a talking doll and
they are happy again. We don’t laugh anymore about
a woman who treats her beautiful doll as if it were a
child because we call it a care robot.” After
watching the report, he admitted that: “Well.
Perhaps it is because I am an ex professional but this
makes me even sadder. Those people are so lonely
that they embrace a robot. The staff has no time to
have a chat and from my experience, I know they
often lack the patience to take their time and
respectfully talk to the inhabitants. On the other
hand, the question remains whether you should
deny a robot to someone who is happy with it.”
Apart from the formal and informal caretakers,
no ethical concerns were mentioned by the users
themselves. The old ladies conversing with Alice did
not feel that their autonomy was reduced, their
feelings were hurt, or that injustice was done by
conversing with a robot. Privacy in the sense of
disclosing personal information also was not an
issue unless they were repeatedly told they should
worry. Although the elderly ladies had their wits
about them and knew they were communicating with a
robot, with a professional camera in the room, and
other people listening in, it did them well and there
was not much more to it.
Other things that were of less importance were
technical flaws such as language hiccups, wrong
responses, delayed or missing responses, or
conceptual mix-ups. Perhaps their friends and age-
mates are not that coherent either all the time.
Things that did matter language-wise were loudness,
pronunciation, and intonation. In other words,
getting your phonetics right appeared more
important than installing high-end semantic web
technology.
Unexpectedly, we hardly encountered uncanny-
valley effects (Mori, 1970): no terrifying realism, no
feelings of reduced familiarity. Insofar as such
effects were mentioned, they came as questions and
were very short-lived, after which the ladies were
happy to take Alice for a genuine social entity –
although not a human one.
Human physical likeness did not matter too
much either. Alice’s body work is robotic plastic,
her arms and hands did not move, and she did not
walk. Her face was more humanoid than, for example,
Zora’s, but that robot, too, invoked responses such as
self-disclosure, just as the more life-like Alice did.
This paper discussed strategies for the development
of robots as companions for lonely elderly people. It
built on a reflection motivated by the observations
made in the course of the making of a documentary
film about a robot visiting elderly ladies (Burger,
2015). It discussed the findings from the
perspective of the best requirements for social robots
interacting with humans in this uncontrolled ‘field
experiment.’ We challenged some pre-conceived
ideas about what makes a robot a good companion
and although it is a work in progress, the proposed
conclusions seem evocative. We hope our ideas will
catch the attention of many researchers and
developers and will raise lots of discussion.
In 1999, the middle-size league of RoboCup
was won by C. S. Sharif from Iran, with DOS-
controlled robots that played kindergarten soccer
(search ball - kick ball - goal). They shattered all the
opponents, whose advanced technology kept them
busy with positioning, radar-image analysis and
processing, and inventing complicated strategies.
With the applications we build today for our social
robots (e.g., care brokerage, moral reasoning), we
pretty much do the same.
For the lonely ladies, it did not matter so much
what Alice did or said, as long as she was around
and they could talk a little, taking all imperfections
for granted and becoming affectively connected.
It seems, then, that the existing intelligence and
technology we develop does not really tackle the
problem of the social isolation of the ladies. We
piously speak of designing humanness in our
machines, asking ourselves, what makes us human?
We simulate emotions, model the robot’s creativity,
its morals, and its sense of reality. But the job is
much easier than that and perhaps we should tone
down a little on our ambitions and direct our
attention to the users’ unmet needs. We compiled a
MuSCoW list in Table 1.
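A MuSCoW (must/should/could/won't) prioritization such as ours is easy to keep machine-readable, so that a robot build can be checked against it. The sketch below uses a handful of items from our list; the dictionary layout and the compliance rule are illustrative assumptions, not a released specification.

```python
# A MuSCoW requirements list kept as plain data, with a trivial
# compliance check. Items are a subset of our Table 1; the check
# logic (all musts, no won'ts) is an illustrative assumption.

MUSCOW = {
    "must": {"camera eyes", "have patience", "invite self-disclosure", "good memory"},
    "should": {"closed scripts", "be operable"},
    "could": {"capability to eat and drink"},
    "wont": {"demand of social space", "human care"},
}


def complies(features: set) -> bool:
    """A build complies if it offers every 'must' and nothing from 'won't'."""
    return MUSCOW["must"] <= features and not (MUSCOW["wont"] & features)


alice = {"camera eyes", "have patience", "invite self-disclosure",
         "good memory", "closed scripts"}
print(complies(alice))  # True: every must present, no won't items
```

Keeping the list as data rather than prose makes it cheap to re-check every prototype against the users' unmet needs.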
As psychologists modelling human behaviour,
we are doing fine and simulations seem legitimate
realizations of established theory (e.g., Llargues
Asensio et al., 2014). However, as engineers,
designers, and computer scientists we seem to be
missing the point. Is what is human good for you?
No! Human-superiority thinking is misplaced.
Human care is not always the best care. Humans
show many downsides in human-human interaction.
We should regard robots as social entities of their
own; with their own possibilities and limitations.
This is a totally different design approach than the
human-emulation framework. What we do is way
too sophisticated for what lonely people want. We
should model what the puppeteer does to instill the
effects of the stranger-on-a-train, the impertinent
cute kid, or the dear-diary effect. That of course does
assume knowledge about human behaviour but boils
down to conversation analysis rather than
psychological models of empathy, bonding, emotion
regulation, and the like. Perhaps we should have
known this already given the positive social results
of robot animals with autistic children (e.g., Kim et
al., 2013). In closing, making robots more like us does
not mean making them similar, let alone identical. The
shadow of a human glimpse will do.
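Modelling 'what the puppeteer does' can start from little more than a closed script keyed to the effects above. A minimal sketch follows; the topic names and prompt wording are our own illustrative choices, not the documentary's actual dialogue.

```python
# Minimal closed-script conversation engine: each named effect maps to
# a few canned prompts, cycled without any deep semantic model.
# Topics and wording are illustrative assumptions.

from itertools import cycle

SCRIPTS = {
    "stranger_on_a_train": ["How do you rate the quality of your life?",
                            "Do you feel lonely?"],
    "dear_diary": ["Tell me again about that photograph.",
                   "What happened today?"],
    "i_will_do_it_for_you": ["Have you written to your friend yet?",
                             "Shall we do your exercises now?"],
}


class ScriptedCompanion:
    def __init__(self, scripts: dict):
        # One independent prompt cycle per effect.
        self._cycles = {name: cycle(prompts) for name, prompts in scripts.items()}

    def prompt(self, effect: str) -> str:
        """Return the next canned prompt for the requested effect."""
        return next(self._cycles[effect])


companion = ScriptedCompanion(SCRIPTS)
print(companion.prompt("stranger_on_a_train"))
```

That a loop this simple may suffice is exactly the point: the work lies in conversation analysis - choosing the prompts and their timing - not in modelling empathy from first principles.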
Table 1: MuSCoW for social robots.
Must: Camera eyes; … and speakers; Invite self-disclosure; Have patience; Good memory; Be child-like; Invite social and physical …
Should: Full body … of user; Have closed scripts (i.e., coffee, family, friends, health, …); Be operable …
Could: Capability to eat and drink
Won’t: Demand of social space; Human care
This position paper was funded within the
SELEMCA project of CRISP (grant number: NWO
646.000.003), supported by the Dutch Ministry of
Education, Culture, and Science. We thank the
anonymous reviewers for their valuable suggestions.
Burger, S., 2015. Alice cares, KeyDocs – NCRV.
De Jager, J., Grijzenhout, A., 2014. In Zora’s company,
Dit is de Dag. EO: NPO 2. Hilversum. Available at:
Fingerman, K. L., 2009. Consequential strangers and
peripheral partners: The importance of unimportant
relationships. Journal of Family Theory and Review,
1(2), 69-82. doi:10.1111/j.1756-2589.2009.00010.x.
Hoorn, J. F., 2014. Creative confluence, John Benjamins.
Amsterdam, Philadelphia, PA.
Hoorn, J. F., 2012. Epistemics of the virtual, John
Benjamins. Amsterdam, Philadelphia, PA.
Hoorn, J. F., Konijn, E. A., Van der Veer, G. C., 2003.
Virtual reality: Do not augment realism, augment
relevance. Upgrade - Human-Computer Interaction:
Overcoming Barriers, 4(1), 18-26.
Hoorn, J. F., Pontier, M. A., Siddiqui, G. F., 2012.
Coppélius’ concoction: Similarity and
complementarity among three affect-related agent
models. Cognitive Systems Research, 15-16, 33-49.
doi: 10.1016/j.cogsys.2011.04.001.
Killeen, C., 1998. Loneliness: an epidemic in modern
society. Journal of Advanced Nursing, 28(4), 762-770.
Kim, E. S., Berkovits, L. D., Bernier, E. P., Leyzberg, D.,
Shic, F., Paul, R., Scassellati, B., 2013. Social robots
as embedded reinforcers of social behavior in children
with autism. Journal of Autism and Developmental
Disorders, 43(5), 1038-1049.
Konijn, E. A., 2013. The role of emotion in media use and
effects. In Dill, K. (Ed.). The Oxford Handbook of
Media Psychology (pp. 186-211). Oxford University
Press, New York/London.
Konijn, E. A., Walma van der Molen, J. H., Van Nes, S.,
2009. Emotions bias perceptions of realism in
audiovisual media. Why we may take fiction for real.
Discourse Processes, 46, 309-340.
Llargues Asensio, J. M., Peralta, J., Arrabales, R.,
Gonzalez Bedia, M., Cortez, P., Lopez Peña, A., 2014.
Artificial Intelligence approaches for the generation
and assessment of believable human-like behaviour in
virtual characters. Expert Systems with Applications,
41(16), 7281-7290.
Mori, M., 1970. The uncanny valley, Energy, 7(4), 33-35.
Pontier, M. A., Hoorn, J. F. 2012. Toward machines that
behave ethically better than humans do. In N. Miyake,
B. Peebles, and R. P. Cooper (Eds.), Proceedings of
the 34th International Annual Conference of the
Cognitive Science Society, CogSci’12, 2012, Sapporo,
Japan (pp. 2198-2203). Austin, TX: Cognitive Science
Society.
Rubin, Z., 1975. Disclosing oneself to a stranger:
Reciprocity and its limits. Journal of Experimental
Social Psychology, 11(3), 233-260.
Van Kemenade, M. A. M., et al., in prep. Moral concerns
of caregivers about the use of three types of robots in
... Another comparison is about the Relevance of a feature to the goals, concerns, and needs of the observing agency. If legs are irrelevant to the goal of having a conversation, malfunctioning legs have no meaning for affect (see [34]). If the goal is to walk in the park, bad legs become relevant and disappointment may occur. ...
... In addition, let us assume that Alice also does not give the medical condition of the agent much consideration when comparing for similarity. This leads to the following choices for the weight measures: µ(male) = µ(female) = 1.0, · · · = µ( [18,21]) = µ( [22,27]) = µ( [28,33]) = µ( [34,47]) = µ( [34,49]) = µ([50, 55]) = · · · = 1 /3, µ(ugly) = 1 /5, µ(unsightly) = 2 /5, . . . , µ(beautiful) = 5 /5, µ(mole) = 1 /5 (55) Using equal weights for the appraisal variables, i.e. µ(i) = 1.0, i ∈ D, and the weights from (55), we can finally evaluate the weights for the sets in (51) -(54), according to (8) as ...
... In addition, let us assume that Alice also does not give the medical condition of the agent much consideration when comparing for similarity. This leads to the following choices for the weight measures: µ(male) = µ(female) = 1.0, · · · = µ( [18,21]) = µ( [22,27]) = µ( [28,33]) = µ( [34,47]) = µ( [34,49]) = µ([50, 55]) = · · · = 1 /3, µ(ugly) = 1 /5, µ(unsightly) = 2 /5, . . . , µ(beautiful) = 5 /5, µ(mole) = 1 /5 (55) Using equal weights for the appraisal variables, i.e. µ(i) = 1.0, i ∈ D, and the weights from (55), we can finally evaluate the weights for the sets in (51) -(54), according to (8) as ...
Full-text available
After 20 years of testing a framework for affective user responses to artificial agents and robots, we compiled a full formalization of our findings so to make the agent respond affectively to its user. Silicon Coppelia as we dubbed our system works from the features of the observed other, appraises these in various domains (e.g., ethics and affordances), then compares them to goals and concerns of the agent, to finally reach a response that includes intentions to work with the user as well as a level of being engaged with the user. This ultimately results into an action that adds to or changes the situation both agencies are in. Unlike many other systems, Silicon Coppelia can deal with ambiguous emotions of its user and has ambiguous ‘feelings’ of itself, which makes its decisions quite human-like. In the current paper, we advance a fuzzy-sets approach and show the inner workings of our system through an elaborate example. We also present a number of simulation experiments, one of which showed decision behaviors based on biases when agent goals had low priorities. Silicon Coppelia is open to scrutiny and experimentation by way of an open-source implementation in Ptolemy.
... These solutions may partly be found in the field of new communication technologies. A highly promising new development in this regard is social robotics, which has thus far shown to enhance (social) interaction to relieve loneliness, to increase therapy adherence (i.e., remind people to take medication), and to motivate people to stay fit [4][5][6][7]. Social robots may play a supportive role as interaction partners in future healthcare. Therefore, it seems important and timely to study how individuals perceive and accept such robots as interaction partners or a social entity. ...
... A significant multivariate effect was found for manipulated coping potential, Wilk's λ .71, F(5, 86) 6 .08, and other-agency, F(1, 90) 11.75, p .001, η 2 p .12. ...
... The emotion-focused coping strategy scored in between the other coping strategies, but significantly differed from all of them (ps < .005). 6 Degrees of freedom for this within-subject effect were corrected using the Greenhouse-Geisser correction because the assumption of sphericity was violated. 7 Degrees of freedom for this within-subject effect were corrected using the Greenhouse-Geisser correction because the assumption of sphericity was violated. ...
The increasing pressure on healthcare systems calls for innovative solutions, such as social robots. However, healthcare situations often are highly emotional while little is known about how people’s prior emotional state may affect the perception and acceptance of such robots. Following appraisal theories of emotion, the appraisal of coping potential related to one’s emotions was found to be important in acting as mediator between emotional state and perceptions of a robot (Spekman et al. in Comput Hum Behav 85:308–318, 2018.; in Belief in emotional coping ability affects what you see in a robot, not the emotions as such, Dissertation, Vrije Universiteit Amsterdam, Amsterdam, 2018), though this has not yet been tested in relation to actual emotional coping nor in an actual encounter with a robot. Hence, the current study focused on how actual emotional coping influences subsequent robot perceptions in two experiments. In Study 1 (N = 101) and Study 2 (N = 110) participants encountered a real humanoid robot after a manipulation to induce various emotions and coping potential. Manipulations in both studies were effective, yet the results in Study 1 were potentially confounded by a novelty effect of participants’ first encounter with a real robot that talked to them. Therefore, in Study 2, participants interacted briefly with the robot before the actual experiment. Results showed an interaction effect of prior emotions and (manipulated) coping potential on robot perceptions, but not the effects expected based on previous studies. An actual interaction with a robot thus seems to provoke different reactions to the robot, thereby overruling any emotional effects. These findings are discussed in light of the healthcare context in which these social robots might be deployed.
... 341). According to this kind of dissimilarity hypothesis, robots should better be seen as novel social entities with non-human but highly appreciated qualities of their own [14]. ...
... At face value, one expects that a patient would rather talk to a human doctor than to a machine. In earlier work, however, communication with robots is sometimes preferred over human communication, even when or sometimes precisely because emotional concerns are at stake (e.g., [14,50]). One could argue, then, that because the doctor is human, patients expect the highest accomplishment: better than a robot. ...
... The robot was a Hanson Robokind R50 "Alice" with a human-like girlish face and mechanical bodywork, which was visible in half total (Fig. 1), the same way as in [6]. We chose this machine because of its expressive face and the good results we obtained in previous studies [14]. There was also a practical reason: this was, at the time, the only machine we could work with. ...
To test how far the Media Equation and Computers Are Social Actors (CASA) validly explain user responses to social robots, we manipulated how a bad health message was framed and the language that was used. In the wake of Experiment 2 of Burgers et al. (Patient Educ Couns 89(2):267–273, 2012., a human versus a robot doctor delivered health messages framed positively or negatively, using affirmations or negations. Using frequentist (robots are different from humans) and Bayesian (robots are the same) analyses, we found that participants liked the robot doctor and the robot's message better than the human's. The robot also elicited greater compliance with the medical treatment. For the level of expected quality of life, the human and robot doctor tied. The robot was not seen as affectively distant but rather as involving, ethical, and skilled, and people wanted to consult her again. Note that the doctor robot was not a serious-looking physician but a little girl with the voice of a young woman. We conclude that both the Media Equation and CASA need to be altered when it comes to robot communication. We argue that if certain negative qualities are filtered out (e.g., strong emotion expression), credibility will increase, which lowers affective distance to the messenger. Robots sometimes outperform humans on emotional tasks, which may relieve physicians of a most demanding duty: disclosing unfavorable information to a patient.
... Android interactions may be useful in a wide range of situations, including elder care, behavioral interventions, counseling, nursing, education, information desks, customer service, and entertainment. For example, an earlier study has reported that a humanoid robot, which was controlled by manipulators and exhibited facial expressions of various emotions, was effective in comforting lonely older people (Hoorn et al., 2016). The researchers found that the robot satisfied users' needs for emotional bonding as a social entity, while retaining a sense of privacy as a machine (Hoorn et al., 2016). ...
... With regard to behavioral interventions, several studies showed that children with autism spectrum disorder preferred robots and androids to human therapists (e.g., Adams and Robinson, 2011; for a review, see Scassellati, 2007). ...
Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids’ facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola’s emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
... We conducted a social experiment with robot Alice on the couch of three grandmothers [21]. This experiment was recorded in the documentary Alice Cares, by now known worldwide, directed by Sander Burger in 2014 [22]. ...
... In addition, the robot should not talk too much about itself but invite the users to share their feelings. Above all, the robot itself must be placed in a position of dependence and assist the needy in an unobtrusive manner [21]. ...
Guaranteeing safety for humans in shared workspaces is not trivial. Not only must all possible situations be provably safe, but the human must feel safe as well. While robots are gradually leaving their cages, due to strict safety requirements, engineers often only replace physical cages with static safety zones—when the safety zone is entered, the robot is forced to stop. This can lead to excessive robot downtime. Note to Practitioners —We present a concept for guaranteeing non-collision between humans and robots whilst maximising robot uptime and staying on-path. We evaluate how users react to this approach, in a trial over three non-consecutive days, compared to a control approach of static safety zones. We measure working efficiency as well as human factors such as trust, understanding of the robot, and perceived safety. Using our approach, the robot is indeed more efficient compared to static safety zones and the effect persists over multiple trials on separate days. We also observed that understanding of the robot’s movement increased for our method over the course of trials, and the perceived safety of the robot increased for both our method and the control.
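The contrast the abstract draws, between a static safety zone (a hard stop as soon as the human enters) and an approach that keeps the robot moving while guaranteeing non-collision, can be illustrated with a minimal speed-limit sketch. The distances and the linear speed law below are illustrative assumptions, not the paper's certified method.

```python
# Hypothetical comparison of a static stop zone versus a
# distance-dependent speed limit; all numbers are illustrative.

def static_zone_speed(distance, zone_radius=1.5, v_max=1.0):
    """Static safety zone: full stop as soon as the human enters the zone."""
    return 0.0 if distance < zone_radius else v_max

def scaled_speed(distance, d_stop=0.5, d_full=2.0, v_max=1.0):
    """Distance-dependent limit: slow down smoothly instead of stopping,
    reaching zero only at the hard stop distance d_stop."""
    if distance <= d_stop:
        return 0.0
    if distance >= d_full:
        return v_max
    return v_max * (distance - d_stop) / (d_full - d_stop)

# At 1.0 m separation the static policy halts the robot entirely,
# while the scaled policy lets it continue at reduced speed.
print(static_zone_speed(1.0), scaled_speed(1.0))
```

The sketch shows why downtime drops: the scaled policy only reaches zero speed at the innermost distance, so the robot keeps working (more slowly) across the whole band where the static policy would already have stopped it.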
In building on theories of Computer-Mediated Communication (CMC), Human–Robot Interaction, and Media Psychology (i.e., the Theory of Affective Bonding), this paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of the interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced with a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which basically follows CMC assumptions; if human interference remains undetected, Human–Robot Communication (HRC) comes into play, holding the robot for an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what is actually a machine. The main contribution of this paper is an integration of CMC, HRC, and Media Psychology, outlining a full-blown theory of robot communication connected to friendship formation, accounting for communicative features, modes of processing, as well as psychophysiology.
When people use electronic media for their communication, Computer-Mediated Communication (CMC) theories describe the social and communicative aspects of people’s interpersonal transactions. When people interact via a remote-controlled robot, many of the CMC theses hold. Yet, what if people communicate with a conversation robot that is (partly) autonomous? Do the same theories apply? This paper discusses CMC theories in confrontation with observations and research data gained from human–robot communication. As a result, I argue for an addition to CMC theorizing when the robot as a medium itself becomes the communication partner. In view of the rise of social robots in coming years, I define the theoretical precepts of a possible next step in CMC, which I elaborate in a second paper.
In a strongly condensed formula, social innovation means that many people do something differently and a new lived practice springs from it. The causes of observably new behavior that goes beyond individual action are manifold. They mainly lie in dissatisfaction with existing practices or living conditions; in some cases, however, innovative action arises from acute distress.
The global demand on technological services that make people independent of others is growing. Social robots seem an outstanding candidate to offer services for self-management and companionship because they can deliver abstract information in an understandable way and are treated as trusted partners. Recently, I initiated the Robot Brain Server (RBS) project, which handles the data, data security, and Artificial Intelligence that drives the robots. RBS takes a hybrid-centered design approach in which software developers work with the public at large to produce a new generation of artificial cognitive service systems to support specialists in care, education, hospitality, and other service professions.
With the increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. The moral decision-making matches the analysis of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when connected to a cognitive model of emotional intelligence and affective decision-making, it can be explored how moral decision-making impacts affective behavior.
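A moral reasoner that weighs utilitarian outcome value against prima facie duties can be caricatured as a duty-weighted score over candidate actions. This is a minimal sketch under stated assumptions: the duty names, weights, and action profiles below are hypothetical and do not reproduce the paper's actual model.

```python
# Hypothetical weighted-duty reasoner: each action is described by how well
# it satisfies prima facie duties (values in [-1, 1]); duty weights encode
# their relative priority. All names and numbers are illustrative.

DUTY_WEIGHTS = {"autonomy": 0.4, "beneficence": 0.35, "non_maleficence": 0.25}

def moral_score(action_profile, weights=DUTY_WEIGHTS):
    """Duty-weighted sum: higher means morally preferable under these weights."""
    return sum(weights[d] * action_profile.get(d, 0.0) for d in weights)

def choose(options):
    """Pick the action with the highest duty-weighted score."""
    return max(options, key=lambda name: moral_score(options[name]))

# Classic care dilemma: respect a patient's refusal versus insist on treatment.
options = {
    "respect_refusal":     {"autonomy": 1.0,  "beneficence": -0.5, "non_maleficence": 0.2},
    "insist_on_treatment": {"autonomy": -1.0, "beneficence": 0.8,  "non_maleficence": 0.4},
}
print(choose(options))
```

With these (assumed) weights, the strong violation of patient autonomy outweighs the expected benefit of insisting, so the reasoner prefers respecting the refusal; shifting the weights shifts the verdict, which is exactly the kind of sensitivity expert ethicists would be asked to calibrate.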
This study investigated whether emotions induced in TV-viewers (either as an emotional state or co-occurring with emotional involvement) would increase viewers' perception of realism in a fake documentary and affect the information value that viewers would attribute to its content. To that end, two experiments were conducted that manipulated (a) participants' emotions and (b) the framing of a TV-documentary (fiction vs. reality-based). The results of Study 1, a 2 (Mood Induction: yes vs. no) × 2 (Fiction-Based vs. Reality-Based) design, indicated that when they believed the program was fictional, emotional viewers attributed more realism and higher information value to the TV-program than non-emotional viewers did. In Study 2, a 3 (Positive vs. Negative vs. No-Mood Induction) × 2 (Fiction-Based vs. Reality-Based) design, a significant effect of emotional involvement (empathy) on perceptions of realism and information value was found. Potential explanations for the finding that emotional viewers seemed more inclined to take fiction for real than non-emotional viewers are discussed in light of recent literature on perceiving realism and emotion theory.
In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.
Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem nowadays. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell apart humans and non-human agents. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game, where both human players and non-player characters interact in complex virtual environments. Based on our past experience both in the BotPrize competition and in other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA–CRANIUM and SOAR cognitive architectures and another based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition (Arrabales et al., 2012), and have shown a significant improvement in the humanness ratio. Additionally, we have confronted all these bots with both First-person believability assessment (the BotPrize original judging protocol) and Third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour.
In aiming for behavioral fidelity, artificial intelligence can no longer ignore the formalization of human affect. Affect modeling plays a vital role in faithfully simulating human emotion and in emotionally evocative technology that aims at being real. This paper offers a short exposé of three models concerning the regulation and generation of affect: CoMERG, EMA and I-PEFiCADM, each of which is successfully applied in its own right in the agent and robot domain. We argue that the three models partly overlap and, where distinct, complement one another. To enable their integration, we provide an analysis of the theoretical concepts, resulting in a more precise representation of affect simulation in virtual humans, which we verify with simulation tests.
Two experiments explored the determinants of self-disclosure between strangers in airport departure lounges. Experiment I focused on the effects of demand characteristics on self-disclosure reciprocity. Subjects were asked to provide either “handwriting samples” or written “self-descriptions.” More intimate and longer disclosures were provided in the self-description condition. Subjects in the self-description condition also tended to reciprocate the intimacy level of the experimenter's prior disclosure to a greater degree. These results were attributed to a process of modeling, in response to the demand characteristics of the situation. Experiment II employed the handwriting paradigm to probe the limits of self-disclosure reciprocity. The experimenter first disclosed himself at either a low, medium, or high level of intimacy, and he did so either nonpersonalistically (he simply copied a standard message) or personalistically (he pretended to create the message specifically for the subject). It was predicted that in the nonpersonalistic conditions subjects would again model the experimenter's level of intimacy. In the personalistic conditions, however, considerations of trust were expected to supplement or supplant the modeling mechanism. In particular, the personalistic, high intimacy message was expected to give rise to suspicion rather than trust and, as a result, to elicit a reduced degree of self-disclosure. The results with respect to the length of the subjects' messages conformed closely to the predicted pattern. On a qualitative measure of intimacy, there was a less perfect fit between predictions and results. Other results from both studies concerned the impact of sex roles upon patterns of self-disclosure. In Experiment II it was also found that out-of-town visitors wrote longer messages than did local residents, suggesting the operation of a “passing stranger” effect.
Social networks in the 21st century include a wide array of partners. Most individuals report a few core ties (primarily family) and hundreds of peripheral ties. Weak ties differ from intimate ties in emotional quality, stability, density (i.e., who knows whom), and status hierarchies. Undoubtedly, close ties are essential for human survival. Yet peripheral ties may enhance life quality and allow people to flourish. Weak ties may serve (a) distinct functions from intimate ties (e.g., information, resources, novel behaviors, and diversion), (b) parallel functions to intimate ties (e.g., defining identity and positions within social hierarchies, helping when a family member is ill, providing a sense of familiarity), and (c) reciprocal influences between peripheral partners and family members (e.g., bioecological theory). Family science might benefit from investigating consequential strangers who pepper daily life.