Little Bear – A Gaze Aware Learning Companion for
Early Childhood Learners
Deepak Akkil 1, Prasenjit Dey 2, Nitendra Rajput 3 *
1 University of Tampere, Finland
deepak.akkil@uta.fi
2 IBM Research, Bangalore, India.
prasenjit.dey@in.ibm.com
3 InfoEdge India Limited
nitendra@acm.org
* This work was done when Nitendra Rajput was working for IBM Research.
Abstract. Computing devices such as mobile phones and tablet computers are
increasingly used to support early childhood learning. Currently, touching the
screen is the most common interaction technique on such devices. To augment
the current interaction experience, overcome posture-related issues with tablet
usage and promote novel ways of engagement, we propose gaze as an input
modality in educational applications for early learners. In this demonstration, we
present the Little Bear, a gaze-aware pedagogical agent that tailors its verbal and
non-verbal behaviour based on the visual attention of the child. We built an
application using the Little Bear to teach the names of everyday fruits and
vegetables to young children. Our demonstration system shows the potential of
gaze-based learning applications and the novel engagement possibilities provided
by gaze-aware pedagogical agents.
Keywords. Gaze, Touch, Pedagogical Agent, Early-childhood learning,
Vocabulary building, Games, Mobile devices, Engagement.
1 Introduction
Touchscreen devices such as Apple iPad and Microsoft Surface tablets are
increasingly applied in early childhood pedagogical environments, such as preschools
and kindergartens. The growing popularity of and access to mobile devices provide
exciting opportunities to design innovative, ubiquitous, and constructive learning
experiences for young children. While the immense potential of such devices for early
childhood learning is well accepted, there are also many concerns regarding
touch-based interaction with them in early childhood learning environments.
There are several challenges in designing engaging and intuitive touch-based
applications for young children. Plowman and McPake [12] note that if children do not
understand what they need to do, the interactivity offered by such devices may be
counter-productive to learning. This requires careful design of the stimuli and prompts
to make the interaction intuitive [5]. In addition, children's underdeveloped fine motor
skills and finger dexterity [6] lead to difficulty performing certain touch gestures [11],
slower and more error-prone interactions [15], and accidental or unintended touches
[10], all of which affect the overall interaction.
Another concern regarding the use of a mobile device is balancing the placement of
the device for optimal touch interaction against a neutral posture for the child.
Straker et al. [14] studied the posture and muscle activity of children while using
desktop computers and tablets and found that touch-based interaction on a tablet is
linked with asymmetric spinal and strained neck postures. Similar concerns regarding
posture and prolonged tablet use have also been raised by teachers and parents [4].
Researchers have proposed several recommendations to overcome the problem, such
as promoting task variations [14] and encouraging elevated placement of the device,
which promotes neutral viewing postures [16]. In turn, however, elevated placement of
the device may make touch interaction difficult.
A third challenge in designing learning applications for children is that children have
very limited attention spans and easily get distracted by environmental factors (e.g.,
noise from the hallway, or a colourful object on the tablet screen of a peer) [2]. Luna [9]
notes that the younger the children, the more easily they get distracted. It is hence
important that educational applications designed for early learners are aware of
children's attention and employ ways to reorient that attention when the child is
distracted, to facilitate learning.
We propose gaze as a viable and potentially beneficial input modality in learning
applications for children. Unlike explicit touch-based interaction, which inherently
requires the device to be placed within the child's reach, gaze allows the child to
interact with the device at a distance, enabling the device to be placed optimally to
promote better posture.
Applications that are aware of the visual attention of the child could implicitly adapt
themselves, integrating learning with the child's curious visual exploration and
providing a rich, embodied experience. In addition, gaze has a strong association with
attention, so gaze-aware learning applications can also keep track of the attention of
the child and employ means to reorient it when the child is distracted.
There are two distinct ways of using gaze information in learning applications. First,
by using gaze as the only interaction modality, which could be useful in simple
interaction tasks. Second, by using gaze in combination with conventional touch-based
interaction, which could be suitable when more complex interactions are required. In
this demonstration, we will focus on applications that use gaze as the only input
modality. To showcase the interaction and engagement possibilities offered by this
modality, we designed Little Bear, an animated pedagogical agent capable of spoken
communication that is also aware of the visual attention of the learner.
Animated pedagogical agents, virtual characters designed to teach or guide users,
provide engagement and motivational benefits in learning applications. However,
Krämer and Bente [7] note that the current generation of agents neither exhibits
sophisticated non-verbal communication nor exerts a social influence. They envision
that agents that are more aware of the emotional and cognitive state of the user and
show capacities for non-verbal communication may have more pedagogical value.
Gaze-aware agents have been studied in previous research for children with special
needs [8,13] and for adult users [3]. The novelty of our system is that the Little Bear
uses gaze information to adapt its verbal as well as non-verbal behaviour and exhibits
emotional states as a means to reorient the attention of the child when distracted from
the learning activity.
2 Demonstration application
Fig. 1. Agent-based learning application. The application is operated using gaze; the
red boxes indicate the gaze-reactive areas.
We designed an application with the Little Bear, a bear-like animated pedagogical
agent. The application was set in a garden-like 3D environment in which the agent
would take the child for a walk, and it was designed to teach children the names of
some everyday fruits and vegetables. Different fruits and vegetables would appear on
screen at pre-defined locations during the walk, which the child could interact with
using gaze. When the child glanced at a specific fruit, the bear spoke an interesting
detail about it. The speech was powered by the IBM Watson Text to Speech service
and further customized by tuning the speaking rate, pitch, and pauses between words
to make the speech feel natural, fitting to the character, and easy to understand. The
agent also exhibited realistic lip movement and blink behaviour to complement the
speech.
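The paper gives only these three parameters, but the kind of customization described
can be illustrated with the current ibm-watson Python SDK and SSML prosody markup.
In the minimal sketch below, the API key, service URL, voice name, prosody values
and example utterance are illustrative assumptions, not the values used in the demo.

# Sketch only (not the demo code): synthesising one of the agent's
# utterances with IBM Watson Text to Speech.
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

tts = TextToSpeechV1(authenticator=IAMAuthenticator('YOUR_API_KEY'))
tts.set_service_url('https://api.us-south.text-to-speech.watson.cloud.ibm.com')

# SSML controls the speaking rate, pitch and pauses between words --
# the three parameters the paper mentions tuning for the character.
ssml = ('<speak><prosody rate="slow" pitch="high">'
        'You are looking at an apple. <break time="400ms"/> '
        'An apple is red in colour!'
        '</prosody></speak>')

audio = tts.synthesize(ssml, voice='en-US_AllisonV3Voice',
                       accept='audio/wav').get_result().content
with open('utterance.wav', 'wb') as f:
    f.write(audio)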
For accurate gaze tracking, we used the Tobii EyeX, an off-the-shelf video-based gaze
tracker. The agent used the gaze information to adapt its verbal and non-verbal
behaviour. For example, when the child is distracted and does not look at the screen,
the character becomes sad (see Figure 2a) and uses speech to attract attention, saying,
"I become sad when you do not look at me."
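As a rough illustration of how such a reaction could be driven, the sketch below
watches whether each gaze sample falls on the screen and flags distraction after a
timeout. The agent interface and the three-second timeout are assumptions; the paper
specifies neither.

import time

AWAY_TIMEOUT_S = 3.0  # assumed threshold; the paper does not state one

class AttentionMonitor:
    """Triggers the sad reaction when no on-screen gaze is seen for a while."""

    def __init__(self, agent):
        self.agent = agent                # hypothetical agent API: set_emotion(), speak()
        self.last_on_screen = time.monotonic()
        self.distracted = False

    def on_gaze_sample(self, on_screen: bool) -> None:
        now = time.monotonic()
        if on_screen:
            self.last_on_screen = now
            self.distracted = False       # attention has returned
        elif (now - self.last_on_screen > AWAY_TIMEOUT_S) and not self.distracted:
            self.distracted = True
            self.agent.set_emotion('sad')  # sad animation, cf. Figure 2a
            self.agent.speak('I become sad when you do not look at me.')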
A fixation of more than 500 ms on a fruit resulted in its activation, and the agent spoke
an interesting detail about the fruit (e.g., "You are looking at an apple" or "An apple
is red in colour"). The fixation duration was selected based on previous work
suggesting that the normal median gaze fixation duration for children in an
image-viewing task is 300 ms and that children have difficulty fixating on a target for
longer durations. The speech after the activation of a fruit lasted roughly 3-6 seconds,
during which the application did not respond to any other gaze fixations. Choosing a
relatively short fixation duration for activation allowed our application to be implicitly
reactive to the interest and visual attention of the child, without requiring an explicit
gaze action. When the character was not speaking, its head would orient towards the
direction the user was looking, giving implicit feedback that gaze was being tracked
and helping establish joint attention. When the character was speaking, its head was
oriented directly ahead, in keeping with the established social conventions of eye
contact during face-to-face conversation.
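The dwell-based activation and the gaze-following head behaviour could be combined
along the following lines. Only the 500 ms threshold comes from the paper; the Fruit
regions and the agent methods are hypothetical names introduced for the sketch.

import time
from dataclasses import dataclass

@dataclass
class Fruit:
    fact: str      # spoken detail, e.g. "An apple is red in colour."
    x0: float      # gaze-reactive rectangle (cf. the red boxes in Fig. 1)
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

DWELL_S = 0.5      # 500 ms activation threshold from the paper

class FruitActivator:
    """Activates a fruit after a sustained fixation; steers the agent's head."""

    def __init__(self, agent, fruits):
        self.agent = agent    # hypothetical: is_speaking(), speak(), look_at(), look_ahead()
        self.fruits = fruits
        self.target = None
        self.dwell_start = 0.0

    def on_gaze_sample(self, x: float, y: float) -> None:
        if self.agent.is_speaking():
            self.agent.look_ahead()   # face forward while talking (eye contact)
            self.target = None        # ignore fixations during the 3-6 s of speech
            return
        self.agent.look_at(x, y)      # head follows gaze: implicit tracking feedback
        fruit = next((f for f in self.fruits if f.contains(x, y)), None)
        now = time.monotonic()
        if fruit is not self.target:
            self.target, self.dwell_start = fruit, now   # fixation moved to a new target
        elif fruit is not None and now - self.dwell_start >= DWELL_S:
            self.agent.speak(fruit.fact)
            self.target = None        # require a fresh fixation for the next activation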
3 Summary
In this paper, we described the challenges of using touch-based interaction on mobile
devices for children and how gaze could serve as a beneficial input modality in
educational applications for children. We further presented the Little Bear, a
gaze-aware pedagogical agent that tailors its verbal and non-verbal behaviour in
response to the visual attention of the user. Our demonstration system will allow others
to experience the potential of gaze-based interaction. The demonstration and the
related user study [1] show the novel engagement possibilities of a gaze-aware
pedagogical agent.
4 Requirements for the demonstration setup
The requirements for the demonstration setup are a desk, access to power sockets for
the tablet, and preferably a demonstration area with no direct sunlight or other intense
sources of infrared light immediately in front of or behind the user.
Fig. 2. Agent non-verbal behaviour in response to the visual attention of the child. (a) A
frame from the sad animation. (b)-(e) The head orientation of the bear changes based on
the visual attention of the child (the agent looks where the child is looking).
References
1. Akkil, D., Dey, P., Salian, D., Rajput, N.: Gaze awareness in agent-based early-childhood learning application. In: Proceedings of Human-Computer Interaction (2017)
2. Bruckman, A., Bandlow, A., Forte, A.: HCI for kids. In: Jacko, J., Sears, A. (eds.) Handbook of Human-Computer Interaction, pp. 793-809. Lawrence Erlbaum Associates (2008)
3. D'Mello, S., Olney, A., Williams, C., Hays, P.: Gaze tutor: a gaze-reactive intelligent tutoring system. International Journal of Human-Computer Studies 70(5), 377-398 (2012)
4. Fawcett, L.: Tablets in schools: how useful are they? (2016)
5. Hiniker, A., Sobel, K., Hong, S.R., Suh, H., Kim, D., Kientz, J.A.: Touchscreen prompts for preschoolers: designing developmentally appropriate techniques for teaching young children to perform gestures. In: Proceedings of Interaction Design and Children, pp. 109-118 (2015)
6. Hourcade, J.P.: Interaction design and children. Foundations and Trends in Human-Computer Interaction 1(4), 277-392 (2008)
7. Krämer, N.C., Bente, G.: Personalizing e-learning. The social effects of pedagogical agents. Educational Psychology Review 22(1), 71-87 (2010)
8. Lahiri, U., Warren, Z., Sarkar, N.: Design of a gaze-sensitive virtual social interactive system for children with autism. IEEE Transactions on Neural Systems and Rehabilitation Engineering 19(4), 443-452 (2011)
9. Luna, B.: Developmental changes in cognitive control through adolescence. Advances in Child Development and Behavior 37, 233-278 (2009)
10. McKnight, L., Fitton, D.: Touch-screen technology for children: giving the right instructions and getting the right responses. In: Proceedings of Interaction Design and Children, pp. 238-241 (2010)
11. Nacher, V., Jaen, J., Navarro, E., Catala, A., González, P.: Multi-touch gestures for pre-kindergarten children. International Journal of Human-Computer Studies 73, 37-51 (2015)
12. Plowman, L., McPake, J.: Seven myths about young children and technology. Childhood Education 89(1), 27-33 (2013)
13. Ramloll, R., Trepagnier, C., Sebrechts, M., Finkelmeyer, A.: A gaze contingent environment for fostering social attention in autistic children. In: Proceedings of Eye Tracking Research & Applications, pp. 19-26 (2004)
14. Straker, L.M., Coleman, J., Skoss, R., Maslen, B.A., Burgess-Limerick, R., Pollock, C.M.: A comparison of posture and muscle activity during tablet computer, desktop computer and paper use by young children. Ergonomics 51(4), 540-555 (2008)
15. Vatavu, R.D., Cramariuc, G., Schipor, D.M.: Touch interaction for children aged 3 to 6 years: experimental findings and relationship to motor skills. International Journal of Human-Computer Studies 74, 54-76 (2015)
16. Young, J.G., Trudeau, M., Odell, D., Marinelli, K., Dennerlein, J.T.: Touch-screen tablet user configurations and case-supported tilt affect head and neck flexion angles. Work 41(1), 81-91 (2012)
... Children's gaze has been studied as an input modality in a very specific group of children (no functional use of their arms and legs), for a low-interactivity tasks (typing, reading and drawing) [16] and examined as a gaze-aware agent beneficial for early childhood learners [17]. Gaze provides the potential of a promising interaction modality for children, offering interesting and engaging gameplay experiences. ...
Article
Full-text available
Gaze interaction has become an affordable option in the development of innovative interaction methods for user input. Gaze holds great promise as an input modality, offering increased immersion and opportunities for combined interactions (e.g., gaze and mouse, touch). However, the use of gaze as an input modality to support children’s gameplay has not been examined to unveil those opportunities. To investigate the potential of gaze interaction to support children’s gameplay, we designed and developed a game that enables children to utilize gaze interaction as an input modality. Then, we performed a between subjects research design study with 28 children using mouse as an input mechanism and 29 children using their gaze (8-14 years old). During the study, we collected children’s attitudes (via self-reported questionnaire) and actual usage behavior (using facial video, physiological data and computer logs). The results show no significant difference on children’s attitudes regarding the ease of use and enjoyment of the two conditions, as well as on the scores achieved and number of sessions played. Usage data from children’s facial video and physiological data show that sadness and stress are significantly higher in the mouse condition, while joy, surprise, physiological arousal and emotional arousal are significantly higher in the gaze condition. In addition, our findings highlight the benefits of using multimodal data to reveal children’s behavior while playing the game, by complementing self-reported measures. As well, we uncover a need for more studies to examine gaze as an input mechanism.
Article
Full-text available
This paper presents a systematic review of the literature on Tangible User Interfaces (TUIs) and interactions in young children’s education by identifying 155 studies published between 2001 and 2019. The review was based on a set of clear research questions addressing application domains, forms of tangible objects, TUI design and assessment. The results indicate that (i) the form of tangible object is closely related to the application domain, (ii) the manipulatives are the most dominant form of tangible object, (iii) the majority of studies addressed all three stages of TUI development (design, implementation and evaluation) and declared a small sample of young children as a major shortcoming, and (iv) additional empirical research is required to collect evidence that TUIs are truly beneficial for children’s acquisition of knowledge. This review also identifies gaps in the current work, thus providing suggestions for future research in TUIs application in educational context expected to be beneficial for researchers, curriculum designers and practitioners in early years’ education. To the authors’ knowledge, this is the first systematic review specific to TUIs’ studies in early years’ education and is an asset to the scientific community.
Conference Paper
Full-text available
Use of technological devices for early childhood learning is increasing. Now, kindergarten and primary school children use interactive applications on mobile phones and tablet computers to support and complement classroom learning. With increase in cognitive technologies, there is further potential to make such applications more engaging by understanding the user context. In this paper, we present the Little Bear, a gaze aware pedagogical agent, that tailors its verbal and non-verbal behavior based on the visual attention of the child and employs means to reorient the attention of the child, when distracted from the learning activity. We used the Little Bear agent in a learning application to enable teaching the vocabulary of everyday fruits and vegetables. Our user-study (n = 12) with preschoolers shows that children interacted longer and showed improved short-term retention of the vocabulary using the gaze aware agent compared to a baseline touch-based application. Our results demonstrate the potential of gaze aware application design for early childhood learning.
Conference Paper
Full-text available
Though toddlers and preschoolers are regular touchscreen users, relatively little is known about how they learn to perform unfamiliar gestures. In this paper we assess the responses of 34 children, aged 2 to 5, to the most common in-app prompting techniques for eliciting specific gestures. By reviewing 100 touchscreen apps for preschoolers, we determined the types of prompts that children are likely to encounter. We then evaluated their relative effectiveness in teaching children to perform simple gestures. We found that children under 3 were only able to interpret instructions when they came from an adult model, but that children made rapid gains between age 3 and 3-and-a-half, at which point they were able to follow in-app audio instructions and on-screen demonstrations. The common technique of using visual state changes to prompt gestures was ineffective across this age range. Given that prior work in this space has primarily focused on children's fine motor control, our findings point to a need for increased attention to the design of prompts that accommodate children's cognitive development as well.
Article
Full-text available
The direct manipulation interaction style of multi-touch technology makes it the ideal mechanism for learning activities from pre-kindergarteners to adolescents. However, most commercial pre-kindergarten applications only support tap and drag operations. This paper investigates pre-kindergarteners’ (2-3 years of age) ability to perform other gestures on multi-touch surfaces. We found that these infants could effectively perform additional gestures, such as one-finger rotation and two-finger scale up and down, just as well as basic gestures, despite gender and age differences. We also identified cognitive and precision issues that may have an impact on the performance and feasibility of several types of interaction (double tap, long press, scale down and two-finger rotation) and propose a set of design guidelines to mitigate the associated problems and help designers envision effective interaction mechanisms for this challenging age range.
Conference Paper
Use of technological devices for early childhood learning is increasing. Now, kindergarten and primary school children use interactive applications on mobile phones and tablet computers to support and complement classroom learning. With increase in cognitive technologies, there is further potential to make such applications more engaging by understanding the user context. In this paper, we present the Little Bear, a gaze aware pedagogical agent, that tailors its verbal and non-verbal behavior based on the visual attention of the child and employs means to reorient the attention of the child, when distracted from the learning activity. We used the Little Bear agent in a learning application to enable teaching the vocabulary of everyday fruits and vegetables. Our user-study (n=12) with preschoolers shows that children interacted longer and showed improved short-term retention of the vocabulary using the gaze aware agent compared to a base-line touch-based application. Our results demonstrate the potential of gaze aware application design for early childhood learning.
Article
Parents and educators tend to have many questions about young children's play with computers and other technologies at home. They can find it difficult to know what is best for children because these toys and products were not around when they were young. Some will tell you that children have an affinity for technology that will be valuable in their future lives. Others think that children should not be playing with technology when they could be playing outside or reading a book.
Article
Our present understanding of young children׳s touch-screen performance is still limited, as only few studies have considered analyzing children׳s touch interaction patterns so far. In this work, we address children aged between 3 and 6 years old during their preoperational stage according to Piaget׳s cognitive developmental theory, and we report their touch-screen performance with standard tap and drag and drop interactions on smart phones and tablets. We show significant improvements in children׳s touch performance as they grow from 3 to 6 years, and point to performance differences between children and adults. We correlate children׳s touch performance expressed with task completion times and target acquisition accuracy with sensorimotor evaluations that characterize children׳s finger dexterity and graphomotor and visuospatial processing abilities, and report significant correlations. Our observations are drawn from the largest children touch dataset available in the literature, consisting in data collected from 89 children and an additional 30 young adults to serve as comparison. We use our findings to recommend guidelines for designing touch-screen interfaces for children by adopting the new perspective of sensorimotor abilities. We release our large dataset into the interested community for further studies on children׳s touch input behavior. It is our hope that our findings on the little-studied age group of 3- to 6-year-olds together with the companion dataset will contribute toward a better understanding of children׳s touch interaction behavior and toward improved touch interface designs for small-age children.
Article
We developed an intelligent tutoring system (ITS) that aims to promote engagement and learning by dynamically detecting and responding to students' boredom and disengagement. The tutor uses a commercial eye tracker to monitor a student's gaze patterns and identify when the student is bored, disengaged, or is zoning out. The tutor then attempts to reengage the student with dialog moves that direct the student to reorient his or her attentional patterns towards the animated pedagogical agent embodying the tutor. We evaluated the efficacy of the gaze-reactive tutor in promoting learning, motivation, and engagement in a controlled experiment where 48 students were tutored on four biology topics with both gaze-reactive and non-gaze-reactive (control condition) versions of the tutor. The results indicated that: (a) gaze-sensitive dialogs were successful in dynamically reorienting students’ attentional patterns to the important areas of the interface, (b) gaze-reactivity was effective in promoting learning gains for questions that required deep reasoning, (c) gaze-reactivity had minimal impact on students’ state motivation and on self-reported engagement, and (d) individual differences in scholastic aptitude moderated the impact of gaze-reactivity on overall learning gains. We discuss the implications of our findings, limitations, future work, and consider the possibility of using gaze-reactive ITSs in classrooms.
Article
An abstract is not available.