Agents providing assistance to humans face the challenge of automatically adjusting the level of assistance to ensure optimal performance. In this work, we argue that identifying the right level of assistance consists of balancing positive assistance outcomes against some (domain-dependent) measure of cost associated with assistive actions. Towards this goal, we contribute a general mathematical framework for structured tasks in which an agent playing the role of a ‘provider’ (e.g., therapist, teacher) assists a human ‘receiver’ (e.g., patient, student). We specifically consider tasks where the provider agent needs to plan a sequence of actions over a fixed time horizon, where actions are organized along a hierarchy with increasing success probabilities and associated costs. The goal of the provider is to achieve a success at the lowest possible expected cost. We present OAssistMe, an algorithm that generates cost-optimal action sequences given the action parameters, and investigate several extensions of it motivated by different potential application domains. We provide an analysis of the algorithms, including proofs for a number of properties of optimal solutions that, we show, align with typical human provider strategies. Finally, we instantiate our theoretical framework in the context of robot-assisted therapy tasks for children with Autism Spectrum Disorder (ASD). In this context, we present methods for determining action parameters based on a survey of domain experts and real child-robot interaction data. Our contributions unlock increased levels of flexibility for agents introduced in a variety of assistive contexts.
... More information on how this method was adapted to the CARAS can be found in the Materials section. Further information about the general algorithm can be found in [62]. ... immediate execution of the task: a comparatively simple ITS model that has already been successfully tested with people with disabilities was used. ...
... The model from [62] provides a mathematical solution based on the human's abilities and the costs of the individual assistance options. The system is implemented using dynamic programming and was tested with children with autism and a humanoid robot. ...
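The dynamic program mentioned above admits a compact sketch. The following is an illustrative reconstruction, not the authors' actual implementation: it assumes a fixed horizon, per-action success probabilities and costs, and a terminal penalty for ending without a success; the names (`plan_sequence`, `failure_penalty`) are hypothetical.

```python
def plan_sequence(probs, costs, horizon, failure_penalty):
    """Cost-optimal assistance planning via dynamic programming (sketch).

    probs[i] -- success probability of action i
    costs[i] -- cost incurred each time action i is taken
    V[t] is the minimal expected cost with t attempts remaining:
        V[0] = failure_penalty
        V[t] = min_i (costs[i] + (1 - probs[i]) * V[t-1])
    Runs in O(horizon * n_actions), i.e., linear in the horizon.
    """
    V = [failure_penalty]
    best = []
    for t in range(1, horizon + 1):
        value, action = min(
            (c + (1.0 - p) * V[t - 1], i)
            for i, (p, c) in enumerate(zip(probs, costs))
        )
        V.append(value)
        best.append(action)
    # best[t-1] is the optimal action with t steps left, so the executed
    # sequence reads the list back-to-front.
    return V[horizon], list(reversed(best))
```

On a toy instance with a cheap weak prompt (success probability 0.5, cost 1) and a guaranteed strong prompt (probability 1.0, cost 3), the planner escalates, trying the weak prompt first and saving the strong one for last, which mirrors the graded-prompting strategies of human providers noted in the abstracts above.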
People with disabilities are severely underrepresented in the open labor market. Yet, pursuing a job has a positive impact on many aspects of life. This paper presents a possible approach to improving inclusion by integrating a robotic manipulator into context-aware Assistive Systems. This expands the assistance possibilities tremendously by adding gesture-based feedback and aid. The system presented is based on the intelligent control scheme of behavior trees, which, together with a depth camera, specifically designed policies, and a collaborative industrial robotic manipulator, can assist workers with disabilities in the workplace. A developed assistance node generates personalized action sequences. These include different robotic pointing gestures, from simple waving to precisely indicating the target position of the workpiece during assembly tasks. This paper describes the design challenges and technical implementation of the first Context-Aware Robotic Assistive System. Moreover, an in-field user study in a Sheltered Workshop was performed to verify the concept and the developed algorithms. In the assembly task under consideration, almost three times as many parts could be assembled with the developed system as with the baseline condition. In addition, the reactions and statements of the participants showed that the robot was considered and accepted as a tutor.
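As context for the behavior-tree control scheme mentioned above, here is a minimal generic sketch of the core node types; the node and context names are illustrative and not taken from the CARAS implementation.

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Fallback node: ticks children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    """Leaf that checks a predicate on the shared context."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, ctx):
        return SUCCESS if self.predicate(ctx) else FAILURE

class Action:
    """Leaf that performs a side effect and reports success."""
    def __init__(self, effect):
        self.effect = effect
    def tick(self, ctx):
        self.effect(ctx)
        return SUCCESS
```

A Selector whose first branch is "the worker is doing fine, stay idle" and whose fallback is a pointing gesture yields exactly the kind of escalation behavior described in the abstract: assistance is offered only when the cheaper branch fails.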
... Regarding ASD, social robots have fostered verbal and non-verbal communication skills [67][68][69], enhanced joint attention [70], collaborative skills [71], visual perspective-taking [72], and social/emotional engagement [73], which are specific areas of difficulty for children with ASD [74]. Social robots have also been used for long-term interventions outside controlled lab settings and without extensive technical supervision [46]. ...
... Social robots interacting with people in healthcare settings. (a) The social robot Kaspar, designed for interactions with children with special needs, engaging a child in an activity with educational and therapeutic objectives [84]; (b) a Nao robot playing a self-management educational game with a child with diabetes [64]; (c) a robot assisting an autism therapist in an ASD diagnosis training session [68]; (d) a robot engaging in therapy tasks with a child with special needs [69]; (e) a robot acting as a collaborator promoting engagement and performance during a gait rehabilitation therapy session for a patient with a neurological disorder; (f) a training assistant robot providing encouragement and motivation to a patient in a cardiac rehabilitation training session. ...
This comprehensive survey and review presents stuttering treatment approaches reported in the past 20 years in order to highlight the distinct characteristics of each intervention. The survey was conducted according to the PRISMA guidelines to extract articles on stuttering interventions published between 01/01/2000 and 01/08/2020. Eleven formal programs, nine fluency induction techniques, and seven adjunct therapy approaches were identified and summarized. The most commonly reported approaches were the Lidcombe Program and altered auditory feedback techniques. This review strives to provide knowledge that can help researchers in other areas, such as Human-Robot Interaction (HRI), acquire a preliminary understanding of stuttering interventions and further the field of stuttering interventions through the introduction of technological advancements.
The inclusion of technologies such as telepractice and virtual reality in the field of communication disorders has transformed the approach to providing healthcare. This research article proposes the employment of a similar advanced technology, social robots, by providing context and scenarios for their potential implementation as supplements to stuttering intervention. The use of social robots has shown potential benefits for all age groups in healthcare. However, such robots have not yet been leveraged to aid people who stutter. We offer eight scenarios involving social robots that can be adapted for stuttering intervention with children and adults. The scenarios in this article were designed by human-robot interaction (HRI) and stuttering researchers and revised according to feedback from speech-language pathologists (SLPs). The scenarios specify extensive details that are amenable to clinical research. A general overview of stuttering, technologies used in stuttering therapy, and social robots in healthcare is provided as context for treatment scenarios supported by social robots. We propose that existing stuttering interventions can be enhanced by placing state-of-the-art social robots as tools in the hands of practitioners, caregivers, and clinical scientists.
... The decision units are often delegated to a Wizard of Oz technique [28,29], in which participants believe they are working with an autonomous system when in fact the researcher is controlling system responses. Content customization [30] and easy-to-follow tutoring strategies [31] are also commonly used in these domains. In general ITS research, the focus is often on symbolic interaction, such as logical proofs or language training with students without disabilities. ...
Inclusion of people with disabilities in the open labor market using robotic assistance is a promising new and important field of research, albeit a challenging one. People with disabilities are severely underrepresented in the open labor market, although inclusion adds significant value on both financial and social levels. Here, collaborative industrial robots offer great potential for support. This work conducted a month-long, in-field user study in a workshop for people with disabilities to improve learning progress through collaboration with an innovative intelligent robotic tutoring system. Seven workers with a wide variety of disabilities solved assembly tasks while being supervised by the system. In case of errors or hesitations, different modes of assistance were automatically offered, including robotic pointing gestures, speech prompts, and calling a supervisor. The assistance offered to each participant during the study was personalized by a shared policy using reinforcement learning. New non-stationary Contextual Multi-Armed Bandit algorithms were developed during prior simulation-based study planning to incorporate the workers' contextual information. Pioneering results were obtained in three main areas. The participants significantly improved their skills in terms of time required per task. The algorithm learned, within only one session per participant, which modes of assistance were preferred. Finally, a comparison between simulation and re-simulation, including the study results, revealed the underlying basic assumptions to be correct, although individual variation led to strong performance differences in the real-world setting. Looking ahead, the innovative system developed could pave the way for many people with disabilities to enter the open labor market.
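A non-stationary contextual bandit of the kind described above can be sketched in a few lines. This is a generic illustration (discounted epsilon-greedy with per-context value estimates), not the algorithm developed in the study; all names are hypothetical.

```python
import random
from collections import defaultdict

class DiscountedBandit:
    """Contextual multi-armed bandit sketch for non-stationary rewards.

    Value estimates are discounted running averages, so recent outcomes
    dominate and the policy can track a worker whose preferred mode of
    assistance changes over time.
    """
    def __init__(self, n_arms, epsilon=0.1, discount=0.9):
        self.n_arms = n_arms
        self.epsilon = epsilon      # exploration rate
        self.discount = discount    # down-weighting of older rewards
        self.value = defaultdict(lambda: [0.0] * n_arms)   # per-context means
        self.weight = defaultdict(lambda: [0.0] * n_arms)  # effective counts

    def select(self, context, rng=random):
        """Epsilon-greedy arm choice for the given context."""
        if rng.random() < self.epsilon:
            return rng.randrange(self.n_arms)
        vals = self.value[context]
        return max(range(self.n_arms), key=lambda a: vals[a])

    def update(self, context, arm, reward):
        """Discounted running-average update of the chosen arm's value."""
        w = self.weight[context][arm] * self.discount + 1.0
        v = self.value[context][arm]
        self.value[context][arm] = v + (reward - v) / w
        self.weight[context][arm] = w
```

Because the effective count `w` saturates at 1/(1 - discount), the estimate never becomes too sluggish to follow a change in the worker's responses, which is the essential property a non-stationary setting requires.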
... They usually have simpler expressions than humans, which can ease work with ASD subjects (Pennisi et al., 2016). NAO is the most used humanoid robot (Fig. 4) (Alnajjar et al., 2020; Baraka et al., 2020; Amanatiadis et al., 2020; Petric and Kovacic, 2020; Qidwai et al., 2020; Billing et al., 2020; Chung, 2020; Korte et al., 2020; Barnes et al., 2021; So et al., 2020), probably because it is a commercial robot and thus more accessible. It has 25 degrees of freedom across the full body, along with sensors (touch sensors, microphones, and two cameras), which makes it well suited for therapy sessions. ...
Robotic therapies are receiving growing interest in the autism field, especially for improving children's social skills and enhancing traditional human interventions. In this work, we conduct a scoping review of the literature on robotics for autism, providing the largest review of this field from the last five years. Our work underlines the need to better characterize participants and to increase sample sizes. It is also important to develop homogeneous training protocols to analyse and compare results. Nevertheless, 7 of the 10 randomized controlled trials reported a significant impact of robotic therapy. Overall, robot autonomy, adaptability, and personalization, as well as more standardized outcome measures, were identified as the most critical issues to address in future research.
... To follow up on this question, the data collected in this work has been used to automatically generate action sequences following a decision-theoretic formalism (Baraka et al., 2020). These action sequences are aimed at providing a "just-right challenge" for an optimal learning experience. ...
Social robots have been shown to be promising tools for delivering therapeutic tasks for children with Autism Spectrum Disorder (ASD). However, their efficacy is currently limited by a lack of flexibility of the robot’s social behavior to successfully meet therapeutic and interaction goals. Robot-assisted interventions are often based on structured tasks where the robot sequentially guides the child towards the task goal. Motivated by a need for personalization to accommodate a diverse set of child profiles, this paper investigates the effect of different robot action sequences in structured socially interactive tasks targeting attention skills in children with different ASD profiles. Based on an autism diagnostic tool, we devised a robotic prompting scheme on a NAO humanoid robot, aimed at eliciting goal behaviors from the child, and integrated it in a novel interactive storytelling scenario involving screens. We programmed the robot to operate in three different modes: diagnostic-inspired (Assess), personalized therapy-inspired (Therapy), and random (Explore). Our exploratory study with 11 young children with ASD highlights the usefulness and limitations of each mode according to different possible interaction goals, and paves the way towards more complex methods for balancing short-term and long-term goals in personalized robot-assisted therapy.
... Baraka et al. [40] presented OAssistMe, an algorithm that generates cost-optimal action sequences given the action parameters. The authors instantiated their theoretical framework in the context of robot-assisted therapy tasks for children with ASD by determining action parameters based on a survey of domain experts and real child-robot interaction data. ...
Recent studies have shown that children with autism may be interested in playing with an interactive robot. Moreover, the robot can engage these children in ways that demonstrate essential aspects of human interaction, guiding them in therapeutic sessions to practice more complex forms of interaction found in social human-to-human interactions. We review published articles on robot-assisted autism therapy (RAAT) to understand the trends in research on this type of therapy for children with autism and to provide practitioners and researchers with insights and possible future directions in the field. Specifically, we analyze 38 articles, all of which are refereed journal articles, that were indexed on Web of Science from 2009 onward, and discuss the distribution of the articles by publication year, article type, database and journal, research field, robot type, participant age range, and target behaviors. Overall, the results show considerable growth in the number of journal publications on RAAT, reflecting increased interest in the use of robot technology in autism therapy as a salient and legitimate research area. Factors, such as new advances in artificial intelligence techniques and machine learning, have spurred this growth.
... In our solution to this problem, we devise OAssistMe, an algorithm that finds an optimal action sequence given a set of action success probabilities and costs, and rigorously analyze its mathematical properties [11]. We then present several extensions of the basic algorithm, relaxing the assumption that the action parameters are fixed [12]. In a second step, we instantiate our framework in the robot-assisted ASD therapy tasks from Chapter 2. We first estimate action costs using expert data collected through an online survey with psychologists. ...
Robots have the tremendous potential of assisting people in their lives, allowing them to achieve goals that they would not be able to achieve by themselves. In particular, socially assistive robots provide assistance primarily through social interaction, in healthcare, therapy, and education contexts. Despite their potential, current socially assistive robots still lack robust interactive capabilities to allow them to carry out assistive tasks flexibly and autonomously. Some challenges for these robots include responding to and engaging in multi-modal behavior, operating with minimal expert intervention, and accommodating different user needs.

Motivated by these challenges, this thesis aims at augmenting the algorithmic capabilities of such robots by leveraging the structure of existing standardized human-human interactions in assistive domains. Using therapy for Autism Spectrum Disorder (ASD) as a domain of focus, we explore two roles for a socially assistive robot: ‘provider’ and ‘receiver’.

In the provider role, the robot proactively engages in assistive tasks with a human receiver (namely a child with ASD), following standardized interactive tasks. We contribute a family of algorithms for automated action selection, whose goal is to build cost-optimal robot action sequences that account for a range of receiver profiles. We further estimate the action parameters needed to run these algorithms through empirical studies with children with ASD and psychology experts, and show that the algorithms are able to generate personalized action sequences according to different child profiles.

In the receiver role, the robot simulates common behavioral responses of children with ASD to the standardized actions, acting as an aid for providers in training. By reversing the standardized diagnosis pipeline, we first develop a simulation method that generates behaviors consistent with user-controllable receiver profiles. In a second step, we develop an interactive robot capable of responding to a therapist’s actions in an embodied fashion. Our evaluation studies conducted with therapists validate the designed robot behaviors and show promising results for the integration of such robots in clinical training.

These contributions allow for a richer set of interactions with robots in assistive contexts, and are expected to increase their autonomy, flexibility, and effectiveness when dealing with diverse user populations.
Computational modeling of behavior has revolutionized psychology and neuroscience. By fitting models to experimental data we can probe the algorithms underlying behavior, find neural correlates of computational variables and better understand the effects of drugs, illness and interventions. But with great power comes great responsibility. Here, we offer ten simple rules to ensure that computational modeling is used with care and yields meaningful insights. In particular, we present a beginner-friendly, pragmatic and details-oriented introduction on how to relate models to data. What, exactly, can a model tell us about the mind? To answer this, we apply our rules to the simplest modeling techniques most accessible to beginning modelers and illustrate them with examples and code available online. However, most rules apply to more advanced techniques. Our hope is that by following our guidelines, researchers will avoid many pitfalls and unleash the power of computational modeling on their own data.
This work contributes an optimization framework in the context of structured interactions between an agent playing the role of a 'provider' and a human 'receiver'. Examples of provider/receiver interactions of interest include ones between occupational therapist and patient, or teacher and student. We specifically consider tasks where the provider agent needs to plan a sequence of actions with a fixed horizon, where actions are organized along a hierarchy with increasing probabilities of success and associated costs. The goal of the provider is to achieve a success with the lowest expected cost possible. In our application domains, a success may be for instance eliciting a desired behavior or a correct response from the receiver. We present a linear-time optimal planning algorithm that generates cost-optimal sequences for given action parameters. We also provide proofs for a number of properties of optimal solutions that align with typical human provider strategies. Finally, we instantiate our general formulation in the context of robot-assisted therapy tasks for children with Autism Spectrum Disorders (ASD). In this context, we present methods for determining action parameters, namely (1) an online survey with experts for determining action costs, and (2) a probabilistic model of child response based on data collected in a real child-robot interaction scenario. Our contributions may unlock increased levels of adaptivity for agents introduced in a variety of assistive contexts.
[Purpose] The aim of this study was to systematically investigate the effects of robot-assisted therapy on the upper extremity in acute and subacute stroke patients. [Subjects and Methods] The papers retrieved were evaluated based on the following inclusion criteria: 1) design: randomized controlled trials; 2) population: stroke patients; 3) intervention: robot-assisted therapy; and 4) year of publication: May 2012 to April 2016. Databases searched were EMBASE, PubMed, and Cochrane. The Physiotherapy Evidence Database (PEDro) scale was used to assess the methodological quality of the included studies. [Results] Of the 637 articles searched, six studies were included in this systematic review. The PEDro scores ranged from 7 to 9 points. [Conclusion] This review confirmed that robot-assisted therapy with three-dimensional movement and a high degree of freedom had positive effects on the recovery of upper extremity motor function in patients with early-stage stroke. We think that robot-assisted therapy could be used to improve upper extremity function for early-stage stroke patients in clinical settings.
Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ setup mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
In the past decade the field of cognitive sciences has seen an exponential growth in the number of computational modeling studies. Previous work has indicated why and how candidate models of cognition should be compared by trading off their ability to predict the observed data as a function of their complexity. However, the importance of falsifying candidate models in light of the observed data has been largely underestimated, leading to important drawbacks and unjustified conclusions. We argue here that the simulation of candidate models is necessary to falsify models and therefore support the specific claims about cognitive function made by the vast majority of model-based studies. We propose practical guidelines for future research that combine model comparison and falsification. Highlights: Computational modeling has grown exponentially in cognitive sciences in the past decade. Model selection most often relies on evaluating the ability of candidate models to predict the observed data. The ability of a candidate model to generate a behavioral effect of interest is rarely assessed, but can be used as an absolute falsification criterion. Recommended guidelines for model selection should combine the evaluation of both the predictive and generative performance of candidate models.
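The generative (falsification) test advocated above can be summarized in a short sketch: simulate a candidate model many times and check how often it reproduces the behavioral effect of interest. This is a generic illustration under assumed conventions (a scalar, signed effect statistic), not the authors' code.

```python
import random

def generative_check(simulate, effect, observed_effect, n_sims=200, rng=None):
    """Fraction of simulated datasets whose effect statistic has the same
    sign as the observed effect. A model scoring near zero cannot generate
    the effect and is falsified, however well it predicts the data.

    simulate(rng) -> synthetic dataset
    effect(data)  -> scalar summary of the behavioral effect
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    hits = sum(
        1 for _ in range(n_sims)
        if (effect(simulate(rng)) > 0) == (observed_effect > 0)
    )
    return hits / n_sims
```

Run alongside a predictive criterion (likelihood, AIC, cross-validation), this kind of check implements the combined comparison-and-falsification procedure the abstract recommends.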
The proposed research is aimed towards developing models for control and assessment of child-robot interaction within a robot-assisted autism spectrum disorder diagnostic protocol. The robot-assisted protocol contains several tasks which consist of robot actions to elicit the interaction and observations through which the robot perceives the behaviour of the child. Tasks of the protocol are modelled using the partially observable Markov decision process (POMDP) model, which enables the robot to autonomously choose actions and assess the interaction. Assessment is based on the robot belief state related to the unobservable state of the child. The robot-assisted diagnostic protocol is modelled using hierarchical POMDP model enabling the robot to autonomously select the sequence of tasks within the protocol. The effectiveness of the developed models will be experimentally evaluated through examinations with multiple children in clinical settings.
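The belief state at the core of the POMDP models described above follows the standard Bayesian recursion. The sketch below is the generic textbook update, not the protocol's actual model; the state, action, and observation spaces are stand-ins.

```python
def belief_update(belief, action, obs, T, O):
    """Bayesian POMDP belief update:
        b'(s2) proportional to O[a][s2][o] * sum_s T[a][s][s2] * b(s)

    belief       -- distribution over hidden states (e.g., the child's
                    unobservable engagement state)
    T[a][s][s2]  -- transition model, O[a][s2][o] -- observation model
    """
    n = len(belief)
    unnorm = [
        O[action][s2][obs] * sum(T[action][s][s2] * belief[s] for s in range(n))
        for s2 in range(n)
    ]
    z = sum(unnorm)  # probability of the observation; normalizer
    return [x / z for x in unnorm] if z > 0 else list(belief)
```

The robot's assessment in the protocol is read off exactly such a belief: after each action-observation pair, the updated distribution over the child's hidden state drives both the next action choice and the interaction score.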
Social robots are being used to create better educational scenarios, boosting children's motivation and engagement. The focus of this research is to explore new ways to support children in the acquisition of handwriting skills with the help of a social robot. With this perspective, three studies are discussed that investigate aspects related to the learning modes of child-robot interaction, children's impressions of a social robot, and the classification of children's common handwriting difficulties.
Shared autonomy integrates user input with robot autonomy in order to control a robot and help the user to complete a task. Our work aims to improve the performance of such a human-robot team: the robot tries to guide the human towards an effective strategy, sometimes against the human's own preference, while still retaining the human's trust. We achieve this through a principled human-robot mutual adaptation formalism. We integrate a bounded-memory adaptation model of the human into a partially observable stochastic decision model, which enables the robot to adapt to an adaptable human. When the human is adaptable, the robot guides the human towards a good strategy, possibly one unknown to the human in advance. When the human is stubborn and not adaptable, the robot complies with the human's preference in order to retain their trust. In the shared autonomy setting, unlike many other common human-robot collaboration settings, only the robot actions can change the physical state of the world, and the human and robot goals are not fully observable. We address these challenges and show in a human subject experiment that the proposed mutual adaptation formalism improves human-robot team performance, while retaining a high level of user trust in the robot, compared to the common approach of having the robot strictly follow participants' preferences.