Fig. 4. (a) Average time (in seconds) spent per phase for the two conditions. (b) Average number of activations per fruit for the two conditions.
Fig. 5. Number of items retained after the interaction.


Source publication
Conference Paper
Full-text available
The use of technological devices for early childhood learning is increasing. Kindergarten and primary school children now use interactive applications on mobile phones and tablet computers to support and complement classroom learning. With the increase in cognitive technologies, there is further potential to make such applications more engaging by underst...

Contexts in source publication

Context 1
... activations per fruit. Figure 4(b) shows the average number of activations per fruit for the two conditions. The median value indicates that participants activated each fruit almost twice as often in the LittleBear condition as in the baseline condition, and hence engaged in more learning activity. The difference was found to be statistically significant using the pairwise randomisation test (p=0.001). Figure 5 shows the boxplot for the number of correct answers in the paper-based evaluation following each condition. This evaluation was used as a measure of short-term retention of the vocabulary following the interaction. The median values indicate that our participants retained about 23% more names of fruits and vegetables after using the gaze-based learning application than after touch-based interaction. The difference was found to be statistically significant using the pairwise randomisation test ...
Context 2
... time spent per phase. Figure 4(a) shows the time spent per phase interacting with the application for the two conditions. This study was about free exploration, and the application placed no restriction on how many times the user had to activate each fruit. The time spent on each phase was hence completely controlled by the child in both conditions. Therefore, we can assume that the time spent in each of the phases in the different conditions reflects overall engagement. The median value indicates that our participants spent almost twice as much time interacting with the application in the LittleBear condition as in the baseline. The difference was found to be statistically significant using the pairwise randomisation test ...
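Both contexts above report significance from a pairwise randomisation test, but the excerpts give no implementation details. The following is a minimal sketch of a standard paired randomisation (sign-flipping permutation) test in Python; the function name, the number of permutations and the per-child sample values are illustrative assumptions, not data from the study.

```python
import numpy as np

def paired_randomisation_test(a, b, n_permutations=10_000, seed=0):
    """Two-sided paired randomisation (permutation) test.

    Under the null hypothesis the two conditions are exchangeable
    within each pair, so each paired difference is equally likely to
    carry either sign; the null distribution is built by flipping
    signs at random.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(diffs.mean())
    # One random sign per difference per permutation.
    signs = rng.choice([-1.0, 1.0], size=(n_permutations, diffs.size))
    null = np.abs((signs * diffs).mean(axis=1))
    # +1 in numerator and denominator so the p-value is never exactly 0.
    return (np.count_nonzero(null >= observed) + 1) / (n_permutations + 1)

# Hypothetical per-child activation counts, for illustration only:
little_bear = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 3.6]
baseline    = [2.0, 2.3, 1.8, 2.5, 2.1, 1.9, 2.4, 2.2]
print(f"p = {paired_randomisation_test(little_bear, baseline):.4f}")
```

Sign-flipping is the appropriate permutation scheme here because each child experienced both conditions, so under the null hypothesis the two measurements within a child are exchangeable.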

Similar publications

Conference Paper
Full-text available
Computing devices such as mobile phones and tablet computers are increasingly used to support early childhood learning. Currently, touching the screen is the most common interaction technique on such devices. To augment the current interaction experience, overcome posture-related issues with tablet usage and promote novel ways of engagement, we pro...
Article
Full-text available
Background: Pain is a common condition with a significant physical, psychosocial, and economic impact. Due to enormous progress in mobile device technology as well as the increase in smartphone ownership in the general population, mobile apps can be used to monitor patients with pain and support them in pain management. Objective: The aim of thi...
Article
Full-text available
Assessments of Life-space Mobility (LSM) evaluate the locations of movement and their frequency over a period of time to understand mobility patterns. Advancements in and miniaturization of GPS sensors in mobile devices like smartwatches could facilitate objective and high-resolution assessment of life-space mobility. The purpose of this study was...
Article
Full-text available
This study aimed to examine the relationship between cumulative use of electronic devices and musculoskeletal symptoms. Smartphones and tablet computers are very popular and people may own or operate several devices at the same time. High prevalence rates of musculoskeletal symptoms associated with intensive computer use have been reported. However...
Article
Full-text available
Objectives To estimate the efficacy of app-based interventions designed to support medication adherence and investigate which behaviour change techniques (BCTs) used by the apps are associated with efficacy. Design Systematic review of randomised controlled trials (RCTs), with meta-analysis. Setting Medline/PubMed, PsycINFO, Cumulative Index to N...

Citations

... Akkil et al. 2017 [17] explored the potential of gaze-based interaction for educational applications for children. One of their studies [34] proposes a gaze-aware adaptive agent that shows emotional responses to young learners while teaching them the names of fruits and vegetables during gameplay. ...
... Their study illustrates the potential of gaze and positions it as an acceptable interaction technique compared to touchscreens in mobile games. [34] recognized the challenges of common touch-based interaction when designing applications for children (e.g. accidental touches [40] and the need for careful positioning of the screen [41]), and explored the value of a gaze-aware agent named "Little Bear" in a learning application teaching vocabulary to children. Their results showed that children interacted with the game for longer and showed improved vocabulary in the gaze-aware condition compared to touch. ...
... Our choice to use mouse rather than touch-based interaction in our game for the comparison was driven by several reasons. First, touch is popular mostly on mobile devices for children's applications [34], whereas we used a desktop computer. Second, one difficulty children face with touch interaction is moving their fingers across the screen at a constant speed [56], which in our case could cause problems and tire the children, since movement is an essential action in the Extreme Yoga game. ...
Article
Full-text available
Gaze interaction has become an affordable option in the development of innovative interaction methods for user input. Gaze holds great promise as an input modality, offering increased immersion and opportunities for combined interactions (e.g., gaze and mouse, or gaze and touch). However, the use of gaze as an input modality to support children's gameplay has not been examined to unveil those opportunities. To investigate the potential of gaze interaction to support children's gameplay, we designed and developed a game that enables children to use gaze interaction as an input modality. We then performed a between-subjects study with children aged 8-14: 28 used a mouse as the input mechanism and 29 used their gaze. During the study, we collected children's attitudes (via a self-reported questionnaire) and actual usage behavior (using facial video, physiological data and computer logs). The results show no significant difference in children's attitudes regarding the ease of use and enjoyment of the two conditions, nor in the scores achieved or the number of sessions played. Usage data from children's facial video and physiological data show that sadness and stress are significantly higher in the mouse condition, while joy, surprise, physiological arousal and emotional arousal are significantly higher in the gaze condition. In addition, our findings highlight the benefits of using multimodal data to reveal children's behavior while playing the game, complementing self-reported measures. We also uncover a need for more studies examining gaze as an input mechanism.
... Gaze is an important nonverbal communication signal in everyday human-human interaction [4], and has become a popular research topic for technology-mediated interaction [17,43,60]. The ability to tell what someone is looking at-'gaze awareness'-is a useful way to gauge the attention of others [1,2,14,63]. Gaze observed over time is an effective predictor of human intention [26,27,50,56]. ...
Chapter
Full-text available
As it becomes more common for humans to work alongside artificial agents on everyday tasks, it is increasingly important to design artificial agents that can understand and interact with their human counterparts naturally. We posit that an effective way to do this is to harness nonverbal cues used in human-human interaction. We therefore leverage knowledge from existing work on gaze-based intention recognition, where awareness of gaze can provide insights into the future actions of an observed human subject. In this paper, we design and evaluate a proactive, intention-aware, gaze-enabled artificial agent that assists a human player engaged in an online strategy game. The agent assists by recognising and communicating the intentions of a human opponent in real time, potentially improving situation awareness. Our first study identifies the language requirements for the artificial agent to communicate the opponent's intentions to the assisted player, using an inverted Wizard of Oz approach. Our second study compares the experience of playing an online strategy game with and without the assistance of the agent. Specifically, we conducted a within-subjects study with 30 participants to compare their experience of playing with (1) detailed AI predictions, (2) abstract AI predictions, and (3) no AI predictions but with a live visualisation of their opponent's gaze. Our results show that the agent can facilitate awareness of another user's intentions without adding visual distraction to the interface; however, cognitive workload was similar across all three conditions, suggesting that the manner in which the agent communicates its predictions requires further exploration. Overall, our work contributes to the understanding of how to support human-agent teams in a dynamic collaboration scenario. We provide a positive account of humans interacting with an intention-aware artificial agent afforded by gaze input, which presents immediate opportunities for improving interactions between human and agent counterparts.
... Our demonstration system will allow others to experience the potential of gaze-based interaction. The demonstration and the related user study [1] show the novel engagement possibilities of a gaze-aware pedagogical agent. ...
Conference Paper
Full-text available
Computing devices such as mobile phones and tablet computers are increasingly used to support early childhood learning. Currently, touching the screen is the most common interaction technique on such devices. To augment the current interaction experience, overcome posture-related issues with tablet usage and promote novel ways of engagement, we propose gaze as an input modality in educational applications for early learners. In this demonstration, we present the Little Bear, a gaze-aware pedagogical agent that tailors its verbal and non-verbal behaviour based on the visual attention of the child. We built an application using the Little Bear to teach the names of everyday fruits and vegetables to young children. Our demonstration system shows the potential of gaze-based learning applications and the novel engagement possibilities provided by gaze-aware pedagogical agents.
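The abstract above says only that the agent tailors its behaviour to the child's visual attention; it does not describe the triggering logic. A common way to implement such behaviour is a dwell-time threshold on the currently gazed-at item, sketched below. The GazeAwareAgent class, the on_gaze_sample() callback and the 0.8 s threshold are all illustrative assumptions, not the published design.

```python
import time

DWELL_THRESHOLD_S = 0.8  # assumed dwell time before the agent responds

class GazeAwareAgent:
    """Minimal sketch of a dwell-based gaze-aware agent loop.

    Assumes an eye tracker that periodically reports which on-screen
    item (if any) the child is looking at; feeding those samples into
    on_gaze_sample() stands in for the real tracker API.
    """

    def __init__(self):
        self.current_item = None  # item currently under the child's gaze
        self.dwell_start = None   # time the current item was first fixated

    def on_gaze_sample(self, item, now):
        """Handle one gaze sample: `item` is a label or None, `now` is seconds."""
        if item != self.current_item:
            # Gaze moved to a new item (or away): restart the dwell timer.
            self.current_item = item
            self.dwell_start = now
        elif item is not None and now - self.dwell_start >= DWELL_THRESHOLD_S:
            self.respond_to(item)
            self.dwell_start = now  # reset so the response does not re-fire every sample

    def respond_to(self, item):
        # The published agent combines verbal and non-verbal behaviour;
        # printing the item's name stands in for that response here.
        print(f"Little Bear turns towards the {item} and names it aloud.")

# Example: the child fixates the apple until the dwell threshold is crossed.
agent = GazeAwareAgent()
agent.on_gaze_sample("apple", 0.00)
agent.on_gaze_sample("apple", 0.45)
agent.on_gaze_sample("apple", 0.90)  # threshold crossed -> agent responds
```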
Chapter
According to a statistical analysis conducted in 2018, more than 40% of the population of Pakistan has no reading or writing skills, especially in rural areas. In contrast, the number of mobile phone users has grown at a very steep rate despite the stagnant literacy rate. We took a user-driven approach to research, develop and test a prototype mobile application that can teach illiterate users basic reading, writing and counting skills without traditional schooling techniques. This first-of-its-kind application gives users the ability to customize their own learning plan. Focusing on the native language Urdu, the application teaches the skills needed for daily-life activities, such as writing one's own name, performing scenario-based calculations and identifying commonly used words.