Co-designing a Conversational
Interactive Exhibit for Children
University of São Paulo
São Paulo, Brazil
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for proﬁt or commercial advantage and that copies bear this notice and the full citation
on the ﬁrst page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
Copyright held by the owner/author(s).
IDC ’20 Extended Abstracts, June 21–24, 2020, London, UK
In this paper, we describe and reﬂect on the process of co-
designing an Artiﬁcial Intelligence (AI) exhibition aimed to
teach children basic AI concepts in the Catavento Science
Museum in São Paulo, Brazil. We focus on two activities of the co-design process with the museum staff: one which sought to design the flow of the experience and get a sense of the target audience, and another which intended to provide
content for the AI exhibition. We describe the activities and
show how they assisted in the design process and opened
possibilities for the design team to develop an exciting ex-
perience, tailored to children, promoting informal learning of
Artiﬁcial Intelligence concepts.
Co-designing AI systems; conversational user interfaces; exhibitions for children.
There are several challenges to the design of Artiﬁcial In-
telligence systems, particularly for conversation-based AI [2]. Some are related to technological constraints, such as natural language understanding in voice-based conversational systems, but others are related to the data available and acquired to make the system work.
Museums and cultural heritage places are using those technologies to engage visitors in their spaces [7, 1, 4]. In addition to the technological challenges and constraints, using AI in museums also introduces new difficulties due to the use of open, public spaces in the experience: time length, environmental noise, crowds, and visitors' engagement.
Designing a museum exhibition supported by AI technologies requires an interdisciplinary team able to handle
technology constraints, heritage interpretations, content
creation, and visitors’ interaction in public spaces.
Figure 1: The exhibition sketch.
Figure 2: Demo in the lab: The
three robots talking to each other
and displaying the answer
confidence level. (Parents signed a consent form for the use of data and images for research purposes.)
Figure 3: Demo in the lab: Child
typing an example question to
teach the robots.
In this paper, we describe and reﬂect on the process of co-
designing an exhibition, supported by AI technologies, with
an interdisciplinary team, in the Catavento Science Mu-
seum located in São Paulo, Brazil. The aim of the exhibition
is to introduce basic AI concepts for children between 9 and
14 years old. In the hands-on exhibition, children are in-
vited to teach science content to three machines (robots)
which will then participate in a quiz show to test what they
have learned. The interaction is in natural language and
children also have the chance to play the role of the host of the show to evaluate whether the machines learned the content.
We co-designed this experience with museum curators and
guides to have a prototype concept ready to test with children. We briefly describe here the design process, focusing on the two activities which contributed most extensively to the progress of the project. We detail and discuss the ideas which emerged from each activity and how those contributed to understanding the particularities of designing an informal-learning AI experience for children in a museum.
We describe here the co-design activities of a conversational Artificial Intelligence exhibit which aims to teach children essential Artificial Intelligence concepts, such as the ones discussed in [11]. The project team was interdisciplinary and had extensive experience in the design and deployment of conversational systems for public spaces. Our
team included HCI researchers, designers, and computer
scientists. The process considered museum curators and staff as co-creators, and together we co-designed several steps of the experience, pacing and tailoring the process to our target age group. In the process, we had the participation of four museum curators, two technicians, and 118 museum guides, all of whom signed a consent form. No children participated in this work.
The process followed the Design Research methodology [9] and a human-centered and participatory approach [5, 6].
It consisted of a range of techniques involving empirical work and collaborative co-creation workshops inside the museum (see Figure 4). The design research process was composed of Preliminary Research, a Prototyping phase, an Assessment phase, and Reflection and Documentation [9]. The process was iterative and informed the design
concept of the exhibition along the way (see a sketch of the
exhibit in Figure 1). In this paper, we focus mostly on the
Prototyping phase, where we had an active participation of
the museum staff.
We were invited by the museum curators to promote an Ar-
tiﬁcial Intelligence exhibit which was previously presented
in another venue (described in [4]). After several interactions with museum curators and visitors during the Preliminary Research, it became clear that the adult theme of the previous exhibit would not be suitable for the public of the museum, mainly children between 9 and 13 years old. Together with the museum curators, in brainstorming sessions we selected Artificial Intelligence as the core theme for the future permanent exhibition. We also agreed that the AI exhibition would serve to recall the main science concepts children had explored with guides and teachers on the museum visit.

Figure 4: The Design Process of the Conversational AI exhibition.
The Science Museum
Catavento is a science museum located in São Paulo, Brazil. It encompasses more than 250 exhibits. The museum receives more than 2,500 visitors daily. Besides traditional walk-in visitors, the museum also provides guided tours for groups of 20 to 22 students.
In summary, the goal of the exhibit is to teach Artiﬁcial In-
telligence concepts in an interdisciplinary way, exploring
content learned by children in other museum sections. The
installation aims to explore three main basic AI concepts
in the interaction [11]: (1) AI systems use knowledge acquired from human beings; (2) AI systems do not know everything and make mistakes; and (3) AI systems are corrected and improved by human beings. We focus on
providing what we consider the minimum understanding of
AI to enable citizens to make better decisions about its use
in society, and to understand and question the black-box
nature of most modern AI systems.
The envisioned exhibit space resembles, in many aspects,
the famous AI event of 2011 in which the IBM Watson computer defeated two world champions in the TV quiz show Jeopardy!. In the space (see Figures 1 and 2), three AI systems
represented by robot heads compete in a quiz game about
science topics the children have just learned in their visit to
the museum. During the game, the children see some of
the robots being unable to answer some questions. Then
the children are invited to improve the AI systems of the
robots by working on three touch displays, each connected
to one of the heads (Figure 3).
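The core mechanic, in which a machine matches an incoming question against previously taught examples and reports a confidence score, can be pictured as a toy program. This is an illustrative simplification only: the exhibit itself is built on IBM Watson Assistant, and the `ToyRobot` class, its string-similarity scoring, and the sample content below are hypothetical stand-ins.

```python
# Toy sketch of the exhibit mechanic: a robot matches a visitor question
# against previously taught examples and reports a confidence score.
from difflib import SequenceMatcher

class ToyRobot:
    def __init__(self, name):
        self.name = name
        self.examples = {}  # example question -> answer

    def teach(self, question, answer):
        """Children add example questions: AI systems use knowledge
        acquired from human beings."""
        self.examples[question.lower()] = answer

    def answer(self, question):
        """Return (answer, confidence). Low confidence on unseen
        questions illustrates that AI systems make mistakes."""
        if not self.examples:
            return None, 0.0
        q = question.lower()
        best = max(self.examples,
                   key=lambda ex: SequenceMatcher(None, ex, q).ratio())
        return self.examples[best], SequenceMatcher(None, best, q).ratio()
```

Teaching more example questions raises the robot's confidence for similar visitor questions, mirroring the three concepts (1)-(3) above.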
We created an initial script for the basic experience described above, which was tested in a role-playing activity. The inclusion of role-playing activities in the design process is not a novelty [10]; it is a recurrent practice employed by designers and HCI researchers to make the process more experimental and creatively generative, bringing the project to another level [10]. In our case, applying this technique revealed the need for certain elements to make the experience more pleasurable and for better integration of the experience with the physical space.
Figure 5: Museum staff and
project team discussing the script
with the director.
Figure 6: Role-playing activity in
the real setting of the future exhibition.
Figure 7: Mapped visitor questions
and answers by guides.
We conducted two afternoons of role-playing activities
with the museum staff. On the first day, we tested the exhibit mode in which visitors ask questions to the robots and listen to the answers. This mode will also be available for spontaneous visitors not participating in the guided visits. On the second day, we simulated the guided visit, in which children in groups would teach the robots examples of human questions, matching the information the robots know about the sections of the Science Museum. In this simulation, guides explain the basics of AI and how such machines learn and acquire confidence to answer human questions.
Museum curators, guides, and educators from the museum,
along with developers and designers, were the actors of the
role-playing activity. First, they read the script together, and
one of the members of our team acted as a director. The
script was adapted and discussed out loud with the charac-
ters of the scene: three robots, one human guide to explain
the activity, and children in each training station. They were
advised to embody the behavior of the characters, for instance, pretending to be a 9-year-old child asking questions
(see Figures 5, 6).
The activity took place in the space of the future exhibition, with mock-ups of the training stations, and a prototype of the talking heads prepared by the museum was used to better set up the scene. The original script and prints of the digital screens were prepared by the project team to simulate the interactions (Figure 6). Each session took three hours on average. We repeated the script three times in each session, with breaks to discuss and improve the experience based on suggestions from the whole team. In the last 30 minutes, we did a debriefing, followed by steps on how to proceed to polish the experience and make it more engaging for children.
Among the results of the role-playing activity, we cite:
(1) Robots: by improvising, we noticed the need for the robots to respond more often when an interaction on the screens was completed. We also noticed the need to create three engaging personalities and expressions for the robots (Funny, Patronizing, and Intelligent).
(2) Human guides: we identified that the guides need an interface to control the experience and manage children's attention (for instance, volume control of the robots' voices, locking the training stations, turning emergency lights on and off, and screen sharing).
(3) Integration with the physical space: the sequence of visual screens in the pulpit was modified to fit the sequence of actions, and adding robot voice reactions to them was deemed crucial. Signs and graphic panels were included in the environment to make the experience more intuitive to visitors. Additionally, we simulated the movement of people in the space and decided when to make the training stations available for children's interaction at each step of the experience. The results of the role-playing activity were incorporated into the envisioned experience, described in the side box.
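As an illustration, the guide-facing controls identified in the role-playing activity could be grouped into a single state object. This is a hypothetical sketch: field names, defaults, and methods are assumptions, not the exhibit's actual implementation.

```python
# Hypothetical sketch of the guide's control interface for the exhibit.
from dataclasses import dataclass

@dataclass
class GuideControls:
    robot_volume: int = 5         # assumed 0-10 scale for the robots' voices
    stations_locked: bool = True  # training stations start locked
    emergency_lights: bool = False
    screen_sharing: bool = False

    def start_training(self):
        """Unlock the stations when the guide opens the training step."""
        self.stations_locked = False

    def start_quiz(self):
        """Lock the stations and share screens while the quiz runs."""
        self.stations_locked = True
        self.screen_sharing = True
```

Grouping the controls this way lets the guide manage children's attention from one place, as the role-playing sessions suggested.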
Datathon: Creating Content with Museum Guides
Creating content for conversational systems is a challenge
in many contexts. The cold-start problem, i.e., not having enough data for the system to work properly, is one of the pitfalls of conversational AI systems based on Machine Learning models [8]. To address this issue, we promoted
ﬁve workshop sessions (called Datathons) with museum
guides to create the sentences the robots would answer to
the public and also predict possible questions visitors would
ask when interacting with the exhibition.
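The sentences produced in the Datathons become training examples for the assistant. As a sketch of how guide-written question cards could seed an intent-based model, the fragment below builds the "intents"/"examples" JSON shape used by Watson Assistant v1 workspaces; the intent names and card contents are invented for illustration, and the actual workspace schema used in the project is not described here.

```python
import json

# Hypothetical question cards grouped by topic (intent), as a pair of
# guides might produce them during a Datathon.
cards = {
    "insects_legs": ["How many legs does an insect have?",
                     "Do all insects have six legs?"],
    "earth_age": ["How old is planet Earth?",
                  "When was the Earth formed?"],
}

def cards_to_intents(cards):
    """Convert topic -> questions into the Watson Assistant v1
    'intents' structure (each example question as {'text': ...})."""
    return {"intents": [
        {"intent": name, "examples": [{"text": q} for q in questions]}
        for name, questions in cards.items()
    ]}

print(json.dumps(cards_to_intents(cards), indent=2))
```

Question variations written on the exchanged cards map directly to additional `examples` entries, which is what mitigates the cold-start problem.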
Another challenge was to make the questions and answers in the system suitable for the museum public. The museum guides deal with visitors' questions every day, have extensive experience with a broad age range, and have the particular skill of adapting scientific information to explain it to diverse visitors. Therefore, we found they were the best partners to collaborate in the content creation.
Figure 8: Guides exchanging
cards and discussing the content.
Figure 9: Guides mapping
questions from the museum space.
Figure 10: Datathon activity:
Guides learning how to create chatbots.
In total, 111 guides participated in five datathons of 20-30 participants each. Each of the five sessions consisted, first, of a warm-up design fiction exercise to understand their expectations of the meaning of, and reactions to, bringing conversational AI systems to the museum [3]. Second, in small groups, guides were asked to choose a visitor age between 9 and 13 years old. Third, each of them received three small paper cards and was instructed to write real visitor questions, one on each card, suitable to the chosen age. Fourth, we asked them to write on the back of each card the answer to the written question (Figure 7). Fifth, they exchanged cards with colleagues in the same group and wrote possible variations of the same question on each new card received (Figure 8). Sixth, with the exchanged card, they read the answer, corrected it if necessary, and shortened it. Guides were free to discuss in their groups after exchanging cards. Seventh, they were invited to do a role-playing activity to share their created questions and answers. In the role-playing, one guide acted as a robot, answering the questions, and another pretended to be a child of the age defined on the card, allowing them to adjust the content.
After two sessions, we changed the location of the activity from an empty room to the science floor of the museum. The quality of the questions increased, and the guides seemed able to frame the questions more as visitors would ask them, using more colloquial language. The panels on the museum floors helped to trigger new questions not previously thought of when the activity was hosted in a room with chairs (Figure 10).
Afterward, they were invited to attend a training session on the IBM Watson Assistant API service, which allowed them to learn how to build and train conversational agents such as the ones in the exhibition space. Working in pairs on a shared computer, guides used the polished questions and answers created in the previous activity to implement a chatbot with the API service. After each pair built its chatbot, two museum curators were invited to evaluate the chatbots, giving each team a score based on the quality of the answers and the performance of the chatbot. Participants whose bots ranked among the top three won symbolic prizes (books).
Overall, we collected 1,099 questions covering eleven different themes, including Earth, Evolution, Insects, Matter, and Life, as well as wayfinding questions (e.g., "Where is the ladies' room?"). We clustered the questions into similar domains and created three versions of the answers, tailored to each robot's personality. The robot answers were revised by the curators and the executive committee of the museum, and then added to the IBM Watson Assistant workspace used in the project.
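The three persona-tailored answer versions can be pictured as a simple lookup keyed by question cluster and personality. The sample content and helper below are hypothetical, not the exhibit's curated answers.

```python
# Hypothetical answer variants for one clustered question, tailored to the
# three robot personalities (Funny, Patronizing, Intelligent).
PERSONA_ANSWERS = {
    "how many legs does an insect have?": {
        "Funny": "Six! That is three pairs of tiny dancing shoes.",
        "Patronizing": "Six, of course. Everyone should know that.",
        "Intelligent": "Insects have six legs, arranged in three pairs.",
    },
}

def answer_for(question, persona):
    """Return the persona-specific answer, or None when the question is
    outside the trained clusters (the system does not know everything)."""
    variants = PERSONA_ANSWERS.get(question.lower().strip())
    return None if variants is None else variants[persona]
```

Keeping one answer per personality per cluster lets the same curated content drive three distinct robot voices.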
Lessons Learned and Applicability
Involving cultural heritage professionals and museum staff
was essential to create an interactive exhibition consid-
ering the dynamics of the space, audience engagement,
and the challenges of guided visits with children. The role-playing activity seems to have given designers, researchers, and museum curators a way to materialize the experience and deal with the particularities of developing conversational systems in informal settings. The Datathon helped to start the conversational AI experience with an initial training data set tailored to the audience and engaged the museum guides in the process. We hope this process might serve as an example for projects with similar challenges. We plan to conduct evaluation studies with children in the museum before the exhibition opens in the second semester of 2020.

Basic description of the experience
(1) The guide explains that the robots learn by matching the question asked to previously provided examples. (2) The guide asks questions to the three robots in a science quiz game. (3) The guide tells the children they can improve the machines' performance by providing more question examples and shows them how to do that. (4) Each of the three groups of children works on one of the touchscreen displays to provide more examples and improve the performance of a particular machine using a conversational interface (see Figure 3). (5) The guide repeats the quiz game, asking the questions the kids provided to the robots; the improved performance of the machines is visually shown. (6) The guide congratulates all and discusses with the children the concepts learned.
References
[1] Stefania Boiano, Ann Borda, Giuliano Gaia, Stefania Rossi, and Pietro Cuomo. 2018. Chatbots and New Audience Opportunities for Museums and Heritage Organisations. In EVA London 2018.
[2] Heloisa Candello, Benjamin Cowan, and Cosmin Munteanu. 2020. CUI@CHI: Mapping Grand Challenges for the Conversational User Interface Community. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
[3] Heloisa Candello, Mauro Pichiliani, Mairieli Wessel, Claudio Pinhanez, and Michael Muller. 2019a. Teaching Robots to Act and Converse in Physical Spaces: Participatory Design Fictions with Museum Guides. In Proceedings of the Halfway to the Future Symposium 2019. 1–4.
[4] Heloisa Candello, Claudio Pinhanez, Mauro Pichiliani, Paulo Cavalin, Flavio Figueiredo, Marisa Vasconcelos, and Haylla Do Carmo. 2019b. The Effect of Audiences on the User Experience with Conversational Interfaces in Physical Spaces. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, Glasgow, Scotland, UK, 90:1–90:13.
[5] Luigina Ciolfi, Gabriela Avram, Laura Maye, Nick Dulake, Mark T. Marshall, Dick van Dijk, and Fiona McDermott. 2016. Articulating co-design in museums: Reflections on two participatory processes. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing.
[6] Hugo Fuks, Heloisa Moura, Debora Cardador, Katia Vega, Wallace Ugulino, and Marcos Barbato. 2012. Collaborative museums: an approach to co-design. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. 681–684.
[7] Giuliano Gaia, Stefania Boiano, and Ann Borda. 2019. Engaging Museum Visitors with AI: The Case of Chatbots. In Museums and Digital Culture. Springer.
[8] Xuan Nhat Lam, Thuc Vu, Trong Duc Le, and Anh Duc Duong. 2008. Addressing cold-start problem in recommendation systems. In Proceedings of the 2nd International Conference on Ubiquitous Information Management and Communication. 208–211.
[9] Susan McKenney and Thomas C. Reeves. 2014. Educational design research. In Handbook of Research on Educational Communications and Technology.
[10] Kristian T. Simsarian. 2003. Take it to the next stage: the roles of role playing in the design process. In CHI ’03 Extended Abstracts on Human Factors in Computing Systems. 1012–1013.
[11] David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn. 2019. Envisioning AI for K-12: What Should Every Child Know about AI?. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 9795–9799.