Co-designing a Conversational
Interactive Exhibit for Children
Heloisa Candello
IBM Research
São Paulo, BR
hcandello@br.ibm.com
Sara Vidon
IBM Research
Sara.Vidon@ibm.com
Claudio Pinhanez
IBM Research
São Paulo, BR
csantosp@br.ibm.com
Mairieli Wessel
University of São Paulo
São Paulo, BR
mairieliw@gmail.com
Mauro Pichiliani
IBM Research
São Paulo, BR
mpichi@br.ibm.com
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for third-party components of this work must be honored.
For all other uses, contact the owner/author(s).
Copyright held by the owner/author(s).
IDC ’20 Extended Abstracts, June 21–24, 2020, London, UK
ACM 978-1-4503-8020-1/20/06.
https://doi.org/10.1145/3397617.3397840
Abstract
In this paper, we describe and reflect on the process of co-designing an Artificial Intelligence (AI) exhibition aimed at teaching children basic AI concepts in the Catavento Science Museum in São Paulo, Brazil. We focus on two activities of the co-design process with the museum staff: one which sought to design the flow of the experience and get a sense of the target audience, and another which intended to provide content for the AI exhibition. We describe the activities and show how they assisted in the design process and opened possibilities for the design team to develop an exciting experience, tailored to children, promoting informal learning of Artificial Intelligence concepts.
Author Keywords
Co-designing AI systems; conversational user interfaces; exhibitions for children.
Introduction
There are several challenges in the design of Artificial Intelligence systems, particularly for conversation-based AI [2]. Some are related to technological constraints, such as natural language understanding in voice-based conversational systems, but others are related to the data available and acquired to make the system work.
Museums and cultural heritage sites are using these technologies to engage visitors in their spaces [7, 1, 4]. In addition to the technological challenges and constraints, using AI in museums also introduces new difficulties due to the use of open and public spaces in the experience: time length, environmental noise, crowds, and visitors' engagement. Designing a museum exhibition supported by AI technologies requires an interdisciplinary team able to handle technology constraints, heritage interpretation, content creation, and visitors' interaction in public spaces.
Figure 1: The exhibition sketch.
Figure 2: Demo in the lab: The
three robots talking to each other
and displaying the answer
confidence level. (Parents signed a
consent form for the use of data
and images for research
purposes.)
Figure 3: Demo in the lab: Child
typing an example question to
teach the robots.
In this paper, we describe and reflect on the process of co-
designing an exhibition, supported by AI technologies, with
an interdisciplinary team, in the Catavento Science Mu-
seum located in São Paulo, Brazil. The aim of the exhibition
is to introduce basic AI concepts to children between 9 and 14 years old. In the hands-on exhibition, children are invited to teach science content to three machines (robots), which then participate in a quiz show to test what they have learned. The interaction is in natural language, and children also have the chance to play the role of the host of the show to evaluate whether the machines have learned the content taught.
We co-designed this experience with museum curators and
guides to have a prototype concept ready to test with chil-
dren. We briefly describe here the design process, focusing on the two activities which contributed most extensively to the progress of the project. We detail and discuss the ideas which emerged from each activity and how they contributed to our understanding of the particularities of designing an informal-learning AI experience for children in a museum space.
Methodology
We describe here the co-design activities of a conversa-
tional Artificial Intelligence exhibit which aims to teach chil-
dren essential Artificial Intelligence concepts, such as the
ones discussed in [11]. The project team was interdisciplinary and had extensive experience in the design and deployment of conversational systems for public spaces. Our team included HCI researchers, designers, and computer scientists. The process considered museum curators and staff as co-creators, and together we co-designed several steps of the experience, adjusting the pace and tailoring the process to our target age group. In the process, we had the participation of four museum curators, two technicians, and 118 museum guides. All of them signed a consent form. No children participated in this work.
The process followed the Design Research methodology [9] and a human-centered and participatory approach [5, 6]. It consisted of a range of techniques involving empirical work and collaborative co-creation workshops inside the museum (see Figure 4). The design research process was composed of Preliminary Research, a Prototyping phase, an Assessment phase, and Reflection and Documentation [9]. The process was iterative and informed the design concept of the exhibition along the way (see a sketch of the exhibit in Figure 1). In this paper, we focus mostly on the Prototyping phase, where we had the active participation of the museum staff.
Design Concept
We were invited by the museum curators to present an Artificial Intelligence exhibit which had previously been shown in another venue (described in [4]). After several interactions with museum curators and visitors during the Preliminary Research, it became clear that the adult theme of the previous exhibit would not be suitable for the museum's public, mainly children between 9 and 13 years old. Together with the museum curators, in brainstorming sessions, we selected Artificial Intelligence as the core theme for the future permanent exhibition. We also agreed that the AI exhibition would serve to recall the main science concepts children had explored with guides and teachers on the museum floor.
Figure 4: The Design Process of the Conversational AI exhibition.
The Science Museum
Catavento is a science mu-
seum located in São Paulo,
Brazil. It encompasses more
than 250 exhibits. The mu-
seum receives more than
2,500 visitors daily. Besides
traditional walk-in visitors, the museum also provides guided tours for groups of 20 to 22 students.
In summary, the goal of the exhibit is to teach Artificial In-
telligence concepts in an interdisciplinary way, exploring
content learned by children in other museum sections. The
installation aims to explore three main basic AI concepts
in the interaction [11]: (1) AI systems use knowledge ac-
quired from human beings; (2) AI systems do not know everything and make mistakes; and (3) AI systems are corrected and improved by human beings. We focus on
providing what we consider the minimum understanding of
AI to enable citizens to make better decisions about its use
in society, and to understand and question the black-box
nature of most modern AI systems.
The envisioned exhibit space resembles, in many aspects,
the famous AI event of 2011 where the IBM Watson com-
puter defeated two world champions in the TV quiz game
Jeopardy. In the space (see Figure 1, 2), three AI systems
represented by robot heads compete in a quiz game about
science topics the children have just learned in their visit to
the museum. During the game, the children see some of
the robots being unable to answer some questions. Then
the children are invited to improve the AI systems of the
robots by working on three touch displays, each connected
to one of the heads (Figure 3).
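To make this learning loop concrete for readers, the sketch below shows, in heavily simplified form, how one robot head could match a visitor's question against stored example questions, report a confidence score, and improve as children add examples at the training stations. It is an illustration only: the exhibit itself relies on IBM Watson Assistant, and the class names, the word-overlap similarity, and the confidence threshold used here are our own assumptions.

```python
# Illustrative sketch only: a toy question-matching "robot head".
# The real exhibit uses IBM Watson Assistant; names, threshold, and the
# word-overlap similarity below are assumptions made for this example.
from dataclasses import dataclass, field


def similarity(a: str, b: str) -> float:
    """Jaccard overlap of lowercased word sets (a stand-in for real NLU)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


@dataclass
class RobotHead:
    name: str
    threshold: float = 0.5                        # below this, the robot admits it does not know
    examples: dict = field(default_factory=dict)  # intent -> list of example questions
    answers: dict = field(default_factory=dict)   # intent -> answer in this robot's personality

    def teach(self, intent: str, example_question: str, answer: str) -> None:
        """Concepts 1 and 3: knowledge comes from people and is improved by them."""
        self.examples.setdefault(intent, []).append(example_question)
        self.answers[intent] = answer

    def answer(self, question: str):
        """Concept 2: the robot is not always confident and sometimes fails."""
        best_intent, confidence = None, 0.0
        for intent, examples in self.examples.items():
            score = max(similarity(question, ex) for ex in examples)
            if score > confidence:
                best_intent, confidence = intent, score
        if best_intent is None or confidence < self.threshold:
            return "I have not learned that yet. Can you teach me?", confidence
        return self.answers[best_intent], confidence


if __name__ == "__main__":
    robot = RobotHead(name="Funny")
    robot.teach("earth_layers", "what are the layers of the earth",
                "Crust, mantle and core -- like a planet-sized candy!")
    # A phrasing far from the single stored example yields low confidence...
    print(robot.answer("tell me what is inside our planet"))
    # ...until a child adds a closer example question at the training station.
    robot.teach("earth_layers", "what is inside our planet",
                "Crust, mantle and core -- like a planet-sized candy!")
    print(robot.answer("tell me what is inside our planet"))
```

In this toy version, adding one well-chosen example question is enough to turn an "I don't know" into a confident answer, which is the cause-and-effect relation the exhibit tries to make visible to children.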
Role-playing Activity
We created an initial script for the basic experience de-
scribed above which was tested in a role-playing activity.
The inclusion of role-playing activities in the design process is not a novelty [10]; it is a recurrent practice employed by designers and HCI researchers to make the process more experiential and creatively generative, bringing the project to another level [10]. In our case, applying this technique revealed the need for certain elements to make the experience more pleasurable and for better integration of the experience with the physical space.
Figure 5: Museum staff and
project team discussing the script
with the director.
Figure 6: Role-playing activity in
the real setting of the future
exhibition.
Figure 7: Mapped visitor questions
and answers by guides.
We conducted two afternoons of role-playing activities with the museum staff. On the first day, we tested the exhibit mode in which visitors ask questions to the robots and listen to the answers. This mode will also be available for spontaneous visitors not participating in the guided visits. On the second day, we simulated the guided visit, in which children in groups teach the robots examples of human questions, matching the information the robots know about the sections of the Science Museum. In this simulation, guides explain the basics of AI and how such machines learn and acquire confidence to answer human questions.
Museum curators, guides, and educators from the museum,
along with developers and designers, were the actors of the
role-playing activity. First, they read the script together, and
one of the members of our team acted as a director. The
script was adapted and discussed out loud with the characters of the scene: three robots, one human guide to explain the activity, and children at each training station. Participants were advised to embody the behavior of the characters, for instance, pretending to be a 9-year-old child asking questions (see Figures 5, 6).
The activity was conducted in the space of the future exhibition, with mock-ups of the training stations, and a prototype of the talking heads prepared by the museum was used to better set up the scene. The original script and prints of the digital screens were prepared by the project team to simulate the interactions (Figure 6). Each session took on average 3 hours. We repeated the script three times in each session, with breaks to discuss and improve the experience based on suggestions from the whole team. In the last 30 minutes, we did a debriefing, followed by steps on how to polish the experience and make it more engaging for children.
Among the results of the role-playing activity we cite: (1) Robots: by improvising, we noticed the need for the robots to respond more often when an interaction on the screens was completed. We also noticed the need to create three engaging personalities and expressions for the robots (Funny, Patronizing, and Intelligent). (2) Human guides: we identified that the guides need an interface to control the experience and manage children's attention (for instance, volume control of the robots' voices, locking the training stations, turning emergency lights on/off, and screen sharing). (3) Integration with the physical space: the sequence of visual screens in the pulpit was modified to fit the sequence of actions, and adding robot voice reactions to them was deemed crucial. Signs and graphic panels were included in the environment to make the experience more intuitive to visitors. Additionally, we simulated the movement of people in the space and decided when to make the training stations available for children's interaction in each step of the experience. The results of the role-playing activity were incorporated into the envisioned experience, described in the side box.
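As an illustration of the guide-facing controls identified in item (2) above, and not of the interface that was actually built, the sketch below models the requested functions as a simple console state; every name and default value here is a hypothetical choice of ours.

```python
# Hypothetical sketch of a guide control panel, derived from the needs listed
# above (volume, locking stations, emergency lights, screen sharing).
# None of these names come from the actual exhibit software.
from dataclasses import dataclass, field


@dataclass
class TrainingStation:
    station_id: int
    locked: bool = True          # stations stay locked until the guide opens them
    screen_shared: bool = False  # mirror this station's screen on the main display


@dataclass
class GuideConsole:
    robot_volume: int = 70       # 0-100, voice volume of the three robots
    emergency_lights_on: bool = False
    stations: list = field(
        default_factory=lambda: [TrainingStation(i) for i in range(1, 4)])

    def open_training_phase(self) -> None:
        """Unlock all stations when the guide invites children to teach the robots."""
        for station in self.stations:
            station.locked = False

    def spotlight(self, station_id: int) -> None:
        """Share one station's screen, e.g. to show a good example question."""
        for station in self.stations:
            station.screen_shared = (station.station_id == station_id)


if __name__ == "__main__":
    console = GuideConsole()
    console.open_training_phase()
    console.spotlight(2)
    print(console)
```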
Datathon: Creating Content with Museum Guides
Creating content for conversational systems is a challenge in many contexts. The cold-start problem, i.e., not having enough data for the system to work properly, is one of the pitfalls of AI conversational systems based on Machine Learning models [8]. To address this issue, we promoted five workshop sessions (called Datathons) with museum guides to create the sentences the robots would answer to the public and also to predict possible questions visitors would ask when interacting with the exhibition.
Another challenge was to tailor the questions and answers in the system to the museum's public. The museum guides are trained to deal with visitors' questions every day and have extensive experience with a broad age range; adapting scientific information to explain it to diverse visitors is a particular skill of theirs. Therefore, we found they were the best partners to collaborate on the content creation.
Figure 8: Guides exchanging
cards and discussing the content.
Figure 9: Guides mapping
questions from the museum space.
Figure 10: Datathon activity: Guides learning how to create chatbots.
In total, 111 guides participated in five Datathons of 20-30 participants each. Each of the five sessions consisted, first, of a warm-up design-fiction exercise to understand their expectations and reactions to bringing conversational AI systems to the museum [3]. Second, in small groups, guides were asked to choose a visitor age between 9 and 13 years old. Third, each of them received three small paper cards and was instructed to write real visitor questions, one on each card, suitable for the chosen age. Fourth, we asked them to write on the back of each card the answer to the written question (Figure 7). Fifth, they exchanged cards with colleagues in the same group and wrote possible variations of the same question on each new card received (Figure 8). Sixth, with the exchanged cards, they read the answers, corrected them if necessary, and shortened them. Guides were free to discuss in their groups after exchanging the cards. Seventh, they were invited to do a role-playing activity to share the questions and answers they had created. In the role-playing, one guide acted as a robot, answering the questions, and another pretended to be a child of the age defined on the card, allowing them to adjust the content.
After two sessions, we changed the location of the activity from an empty room to the science floor of the museum. The quality of the questions increased, and the guides seemed able to frame the questions more as visitors would ask them, using more colloquial language. The panels on the museum floors helped to trigger new questions not previously thought of when the activity was hosted in a room with chairs (Figure 10).
Afterward, they were invited to attend a training session on the IBM Watson Assistant API service, which allowed them to learn how to build and train conversational agents such as the ones in the exhibition space. Working in pairs on a shared computer, guides used the polished questions and answers created in the previous activity to implement a chatbot using the API service. After each pair built its chatbot, two museum curators were invited to evaluate the chatbots, giving each team a score based on the quality of the answers and the performance of the chatbot. Participants whose bots ranked among the top three won symbolic prizes (books).
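For readers unfamiliar with the service, the fragment below sketches how a chatbot built in such a session could be queried programmatically. It assumes the ibm-watson Python SDK and the V1 "workspace" interface mentioned in the text; the API key, service URL, workspace ID, and version date are placeholders, and the exact calls the guides used may have differed.

```python
# Hedged sketch: querying a Watson Assistant (V1) workspace like the ones the
# guides built. Credentials, URL, version date, and workspace ID are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV1(version="2019-02-28", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# Send one visitor-style question and read back the detected intent.
response = assistant.message(
    workspace_id="YOUR_WORKSPACE_ID",
    input={"text": "Why do insects have six legs?"},
).get_result()

# The top intent's confidence is the kind of score the exhibit surfaces to children.
if response.get("intents"):
    top = response["intents"][0]
    print(top["intent"], round(top["confidence"], 2))
print(response.get("output", {}).get("text", []))
```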
Overall, we collected 1,099 questions covering eleven different themes, including Earth, Evolution, Insects, Matter, and Life, as well as wayfinding questions (e.g., "Where is the ladies' room?"). We clustered the questions into similar domains and created three versions of the answers, tailored to each robot's personality. The robot answers were revised by the curators and the executive committee of the museum, and then added to the IBM Watson Assistant workspace used in the project.
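As an illustration of how the collected material could be organized before loading it into the assistant (we do not reproduce the actual workspace format here), the sketch below groups example questions by theme-level intent and keeps one answer per robot personality; the intents, questions, and answers are invented stand-ins for the guides' 1,099 real questions.

```python
# Hypothetical structure for the Datathon output: clustered example questions
# plus one answer per robot personality. All content strings are invented;
# the real data set is the 1,099 questions collected by the guides.
import json

PERSONALITIES = ("funny", "patronizing", "intelligent")

training_data = {
    "earth_layers": {
        "examples": [
            "what are the layers of the earth",
            "what is inside our planet",
            "does the earth have a core",
        ],
        "answers": {
            "funny": "Crust, mantle and core -- like a giant layered cake!",
            "patronizing": "Everyone should know this: crust, mantle and core.",
            "intelligent": "The Earth has three main layers: crust, mantle and core.",
        },
    },
    "wayfinding_restroom": {
        "examples": ["where is the ladies' room", "where can I find a restroom"],
        "answers": {p: "The restrooms are near the main entrance." for p in PERSONALITIES},
    },
}

# A quick consistency check before loading the content into the assistant:
for intent, data in training_data.items():
    assert data["examples"], f"{intent} has no example questions"
    assert set(data["answers"]) == set(PERSONALITIES), f"{intent} is missing a personality answer"

print(json.dumps(training_data, indent=2, ensure_ascii=False))
```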
Basic description of the exhibition experience:
1. The guide explains that the robots learn by matching the question asked to previously stored examples.
2. The guide asks questions to the three robots in a science quiz game.
3. The guide tells the children they can improve the machines' performance by providing more question examples and shows them how to do that.
4. Each of the three groups of children works on one of the touchscreen displays to provide more examples and improve the performance of a particular machine using a conversational interface (see Figure 4).
5. The guide repeats the quiz game, asking the questions the kids provided to the robots. The improved performance of the machines is visually shown.
6. The guide congratulates everyone and discusses with the children the concepts learned.
Lessons Learned and Applicability
Involving cultural heritage professionals and museum staff was essential to create an interactive exhibition that takes into account the dynamics of the space, audience engagement, and the challenges of guided visits with children. The role-playing activity gave designers, researchers, and museum curators a way to materialize the experience and deal with the particularities of developing conversational systems in informal settings. The Datathon helped to start the conversational AI experience with an initial training data set tailored to the audience and engaged the museum guides in the process. We hope this process may serve as an example to follow in projects with similar challenges. We plan to conduct evaluation studies with children in the museum before the exhibition opens in the second semester of 2020.
REFERENCES
[1] Stefania Boiano, Ann Borda, Giuliano Gaia, Stefania Rossi, and Pietro Cuomo. 2018. Chatbots and New Audience Opportunities for Museums and Heritage Organisations. In EVA London 2018. DOI:
http://dx.doi.org/10.14236/ewic/eva2018.33
[2] Heloisa Candello, Benjamin Cowan, and Cosmin
Munteanu. 2020. CUI@CHI: Mapping Grand
Challenges for the Conversational User Interface
Community. In Proceedings of the 2020 CHI
Conference on Human Factors in Computing Systems.
3476–3487.
[3] Heloisa Candello, Mauro Pichiliani, Mairieli Wessel,
Claudio Pinhanez, and Michael Muller. 2019a.
Teaching Robots to Act and Converse in Physical
Spaces: Participatory Design Fictions with Museum
Guides. In Proceedings of the Halfway to the Future
Symposium 2019. 1–4.
[4] Heloisa Candello, Claudio Pinhanez, Mauro Pichiliani,
Paulo Cavalin, Flavio Figueiredo, Marisa Vasconcelos,
and Haylla Do Carmo. 2019b. The Effect of Audiences
on the User Experience with Conversational Interfaces
in Physical Spaces. In Proceedings of the 2019 CHI
Conference on Human Factors in Computing Systems
(CHI ’19). ACM, Glasgow, Scotland UK, 90:1–90:13.
DOI: http://dx.doi.org/10.1145/3290605.3300320
[5] Luigina Ciolfi, Gabriela Avram, Laura Maye, Nick
Dulake, Mark T Marshall, Dick van Dijk, and Fiona
McDermott. 2016. Articulating co-design in museums:
Reflections on two participatory processes. In
Proceedings of the 19th ACM Conference on
Computer-Supported Cooperative Work & Social
Computing. 13–25.
[6] Hugo Fuks, Heloisa Moura, Debora Cardador, Katia
Vega, Wallace Ugulino, and Marcos Barbato. 2012.
Collaborative museums: an approach to co-design. In
Proceedings of the ACM 2012 conference on
Computer Supported Cooperative Work. 681–684.
[7] Giuliano Gaia, Stefania Boiano, and Ann Borda. 2019.
Engaging Museum Visitors with AI: The Case of
Chatbots. In Museums and Digital Culture. Springer,
309–329.
[8] Xuan Nhat Lam, Thuc Vu, Trong Duc Le, and Anh Duc
Duong. 2008. Addressing cold-start problem in
recommendation systems. In Proceedings of the 2nd
international conference on Ubiquitous information
management and communication. 208–211.
[9] Susan McKenney and Thomas C Reeves. 2014.
Educational design research. In Handbook of research
on educational communications and technology.
Springer, 131–140.
[10] Kristian T Simsarian. 2003. Take it to the next stage:
the roles of role playing in the design process. In
CHI’03 extended abstracts on Human factors in
computing systems. 1012–1013.
[11] David Touretzky, Christina Gardner-McCune, Fred
Martin, and Deborah Seehorn. 2019. Envisioning AI for
K-12: What Should Every Child Know about AI?. In
Proceedings of the AAAI Conference on Artificial
Intelligence, Vol. 33. 9795–9799.