Conference Paper

Task Detection of ASD Children by Analyzing Robotic Enhanced and Standard Human Therapy


Abstract

Social interaction is an indispensable part of life, and it can undoubtedly help in coping with Autism Spectrum Disorder (ASD). ASD is defined as an impairment in social communication; it is not a disease. Facing this disorder therefore requires involvement in social communication, which can be fostered through therapy sessions. In this modern era, not only a human but also a robot can play the role of the interaction partner during these sessions. Examining the behavior of a child with ASD requires data, which is not always available. The DREAM dataset addresses this need: it was collected to evaluate Robot Enhanced Therapy and records data from 61 children with ASD. The dataset provides skeleton-based and gaze-based features recorded by RGB-D cameras, along with characteristics of the therapy and each child's age, gender, and ID number. In this paper, we propose a method to classify the tasks (Imitation, Joint Attention, and Turn-Taking) accomplished by the children in the dataset. We also analyze whether a robot can substitute for a human in conducting the therapy session. Skeleton joint positions are used to derive joint angles and inter-joint distances, while gaze-based vectors yield the coordinate direction angles and the Direction Gaze Zone (DGZ). From the skeleton-based approach we obtain statistical features (mean, median, standard deviation, minimum, and maximum), and from the gaze-based approach the mean frequency and the number of peaks of the frequency-domain signal. The entire analysis is thus evaluated from two perspectives. Ensemble methods, namely the Random Forest, XGBoost, and Extra Trees classifiers, are deployed to obtain predictions on the test data. The results are satisfactory considering the challenge and complexity of the ASD domain.
We have explored how robots can be an alternative to humans for improving social communication among children with ASD. https://ieeexplore.ieee.org/abstract/document/9638874
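As a rough illustration of the skeleton-based pipeline described in the abstract, the sketch below computes a joint angle from three 3D joint positions and then the five statistical features over a window of such angles. The function names and window handling are our assumptions for illustration, not the authors' code:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by the 3D points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def window_stats(angles):
    """The five statistical features named in the abstract, for one window."""
    angles = np.asarray(angles, dtype=float)
    return np.array([angles.mean(), np.median(angles), angles.std(),
                     angles.min(), angles.max()])

# Example: the elbow angle of an arm bent at a right angle.
shoulder = np.array([0.0, 1.0, 0.0])
elbow = np.array([0.0, 0.0, 0.0])
wrist = np.array([1.0, 0.0, 0.0])
angle = joint_angle(shoulder, elbow, wrist)  # 90 degrees for this configuration
```

A per-task feature vector would then concatenate such statistics over many joints and windows before being fed to the ensemble classifiers.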


... However, they only provided a classification of good and bad therapy states, treating the standard human-based therapy sessions as the good ones and the robot-based therapy sessions as the bad ones. Similarly, Saha et al. [41] have proposed machine learning-based models for the analysis of autism therapy tasks, mainly Imitation, Joint Attention, and Turn-Taking, using Random Forest, Extra Trees, and XGBoost classifiers. Skeleton-based data from the DREAM dataset was used after extracting statistical features with a window method, including the mean, standard deviation, average, skewness, and kurtosis. ...
... They have achieved a maximum accuracy of 82.1% on the test dataset. Their work can analyze the therapy tasks performed during both human-based and robot-based therapy sessions, and it concludes that robots can replace humans in traditional therapy settings [41]. ...
... The term h_{t-1} is the output from the previous block, whereas x_t is the input sequence given to the memory cell. The term W_{hf} is the weight matrix for the output vector of the previous cell, whereas W_{xf} represents the weight matrix for the input vector of the current memory cell in the forget gate [41]. Finally, sigma denotes the sigmoid activation function and can be represented by the given equation. ...
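Under the standard LSTM conventions (adding a bias term b_f, which the excerpt does not name), the forget-gate computation referred to above reads:

```latex
f_t = \sigma\!\left(W_{hf}\, h_{t-1} + W_{xf}\, x_t + b_f\right),
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}}
```

The sigmoid squashes the gate activation into (0, 1), so f_t acts as a per-dimension fraction of the previous cell state to retain.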
Article
Full-text available
Autism Spectrum Disorder affects the overall growth and development of children by limiting their social and cognitive skills, where a child can have low-, medium-, or high-functioning autism. Depending upon the level of autism, different applied behavioral and cognitive therapies are performed by therapists, which may continue for months or years depending upon the severity of autism in a child. Applied Behavior Analysis (ABA) therapy is provided to improve a child's social, communicational, and behavioral skills. Nevertheless, locating highly skilled therapists for an extended duration proves challenging in the realm of ABA therapy. Moreover, the progress of the child needs to be monitored at every stage of the therapy, which further increases the overall cost and time of the therapy. During the ABA sessions, various tasks, e.g., imitation, joint attention, and turn-taking, are key measures for analyzing a child's progress. It is very challenging for the therapist to monitor these tasks manually and to predict the child's overall progress. To overcome this problem, the paper presents a novel deep learning framework based on a Long Short-Term Memory network for the classification of ABA therapy tasks (ALATT-Network), i.e., Imitation, Joint Attention, and Turn-Taking. The proposed framework is trained on a large-scale skeleton dataset consisting of spatial and temporal information on autism therapy sessions. The DREAM dataset, the largest publicly available dataset for autism therapy sessions, is employed for the experiments. The framework assists caregivers, doctors, and therapists in monitoring ongoing therapy sessions and predicting the child's progress. The proposed ALATT network employs five different optimizers and analyzes the performance of the network.
In the experiments, the results show that the proposed ALATT-Network with the Adam optimizer effectively learns the temporal and spatial features of skeleton movements and provides an accuracy of 79.3% on the classification tasks.
... In the literature, there are few works which adopt machine learning to detect different categories of activity using skeleton-based data and gaze tracking. In Saha et al.'s [15] study, features were extracted individually from both skeleton and gaze data by computing the angles of the bone joints and estimating the coordinate direction angles of the head and eye gaze vectors. At the classification level, the article adopts the Extra Trees classifier, eXtreme Gradient Boosting classifier, and Random Forest classifier, which are commonly used algorithms in machine learning. ...
... This means that the number of features utilized for the gaze data source is 6. (Fig. 2 in [15]: distances between skeletal joints.) In this case, non-overlapping windows of size 32 frames are used. For each window, statistical metrics including the mean, standard deviation, sum, minimum, maximum, skewness, and kurtosis are calculated. ...
... Overall, training with RET tends to outperform training with SHT, and the SMOTE technique is generally more effective than under-sampling in improving classification accuracy. To compare our results with the state-of-the-art (SOTA), we used the results of [15], who adopted the concatenation of both RET and SHT in one experiment and a cross-data-source setup by training the model with RET. Table III presents a comparison summary of the accuracy and F1-score metrics achieved by the proposed method in detecting tasks on the test set, utilizing both RET and SHT data sources. ...
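A minimal numpy sketch of the windowing scheme described in these excerpts: non-overlapping 32-frame windows with the seven statistics computed per window. The moment-based skewness and kurtosis estimators are our choice and not necessarily the exact estimators used in [15]:

```python
import numpy as np

def windowed_features(signal, win=32):
    """Split a 1-D series into non-overlapping windows of `win` frames and
    compute mean, std, sum, min, max, skewness, and excess kurtosis per window."""
    signal = np.asarray(signal, dtype=float)
    n_windows = len(signal) // win
    feats = []
    for w in signal[:n_windows * win].reshape(n_windows, win):
        mu, sd = w.mean(), w.std()
        z = (w - mu) / sd if sd > 0 else np.zeros_like(w)
        feats.append([mu, sd, w.sum(), w.min(), w.max(),
                      (z ** 3).mean(),          # skewness (biased moment estimator)
                      (z ** 4).mean() - 3.0])   # excess kurtosis
    return np.array(feats)
```

Each angle or distance time series would yield one such feature matrix, flattened and concatenated across joints before classification.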
Conference Paper
Children with autism spectrum disorder (ASD) require long-term care, support, and empathy from experienced therapists. However, there is a shortage of highly experienced therapists available to provide consistent and high-quality care for all ASD children and teach them effectively. Robotic-based assessment and treatment offer promising advantages for ASD children. However, many robotic and virtual therapies rely on pre-programmed behaviors that may not consider individual needs and cultural differences. This study aims to develop and test the hypothesis that robots can effectively assist in conducting therapy sessions for children with ASD. A Support Vector Machine (SVM) classifier has been used to predict outcomes from skeleton- and gaze-based features of data generated during both robot-child and therapist-child interactions. A comprehensive evaluation across five distinct scenarios, incorporating various combinations of Robot-Enhanced Therapy (RET), Standard Human Treatment (SHT), and their concatenation, has been conducted. The results for both skeleton-based and gaze-based features reveal a significant performance advantage over existing state-of-the-art methodologies.
... Abnormal gait [29] and behavioral [28] patterns can be easily identified from motion data. Purnata et al. [30] initially extracted linear and angular joint characteristics from skeleton data, and various machine learning techniques were then utilized for analysis. ...
Article
Full-text available
Navigating the complexities of Autism Spectrum Disorder (ASD) diagnosis and intervention requires a nuanced approach that addresses both the inherent variability in therapeutic practices and the imperative for scalable solutions. This paper presents a transformative Robot-Enhanced Therapy (RET) framework, leveraging an intricate amalgamation of an Adaptive Boosted 3D biomarker approach and Saliency Maps generated through Kernel Density Estimation. By seamlessly integrating these methodologies through majority voting, the framework pioneers a new frontier in automating the assessment of ASD levels and Autism Diagnostic Observation Schedule (ADOS) scores, offering unprecedented precision and efficiency. Drawing upon the rich tapestry of the DREAM Dataset, encompassing data from 61 children, this study meticulously crafts novel features derived from diverse modalities including body skeleton, head movement, and eye gaze data. Our 3D bio-marker approach achieves a remarkable predictive prowess, boasting a staggering 95.59% accuracy and an F1 score of 92.75% for ASD level prediction, alongside an RMSE of 1.78 and an R-squared value of 0.74 for ADOS score prediction. Furthermore, the introduction of a pioneering saliency map generation method, harnessing gaze data, further enhances predictive models, elevating ASD level prediction accuracy to an impressive 97.36%, with a corresponding F1 score of 95.56%. Beyond technical achievements, this study underscores RET’s transformative potential in reshaping ASD intervention paradigms, offering a promising alternative to Standard Human Therapy (SHT) by mitigating therapist variability and providing scalable therapeutic approaches. While acknowledging limitations in the research, such as sample constraints and model generalizability, our findings underscore RET’s capacity to revolutionize ASD management.
... Or assist with any therapy-related support? Along this line, MAR Ahad and his team [31] have proposed a system based on another real dataset on robot-enhanced therapy. The dataset covers 61 children across Robot Enhanced Therapy (RET) sessions and Standard Human Therapy (SHT) sessions. ...
Chapter
Nowadays, signal processing is ubiquitous. This broad electrical engineering discipline is concerned with extracting, manipulating, and storing information embedded in complex signals and images. From the early days of the FFT to today's machine/computer vision industry, signal processing has driven many of the products and devices that have benefited society. This chapter will inform readers about the current strength of the department in terms of curriculum and research activities in this field, the contribution of the department to society, global trends and future research directions in this field, and finally, the measures that need to be taken to meet the upcoming goals and challenges.
Conference Paper
In our research, we are attempting to predict Autism Spectrum Disorder (ASD) and the associated Autism Diagnostic Observation Schedule (ADOS) scores using data from the body skeleton, head movement, and eye gaze. To the best of our knowledge, no such prior work has been completed. ASD is a neurological and developmental disorder that affects how people interact with others, communicate, learn, and behave. Scores from the Autism Diagnostic Observation Schedule (ADOS) are regarded as a standard tool for making an early diagnosis of autism. Successful treatment of ASD requires proper diagnosis and methodical therapy plans. Conventional treatments of ASD usually involve diverse intervention techniques designed by professional therapists. Unfortunately, highly trained therapists are not always readily available. Accessible therapists may sometimes lack experience and observational skills, making it difficult to assist ASD children effectively. So the question is, can we find an alternative to Standard Human Therapy (SHT) in the form of Robot Assisted or Robot Enhanced Therapy (RET)? Our work contributes by proposing a RET system based on 3D body joints and gaze information. We investigated the publicly available "DREAM" dataset, which has bio-marker information on 61 children diagnosed with ASD. We propose a feature vector that is based on traditional directly connected body joints as well as some unconventional non-attached body joints with close association. We attempted to predict the severity of the disorder based on our predicted ASD levels and ADOS scores. The goal of our developed system is to effectively assist RET in ASD diagnosis and therapy.
Article
Full-text available
Background/Introduction: Autism Spectrum Disorder (ASD) is a neuro-developmental disorder that limits social and cognitive abilities. ASD has no cure, so early diagnosis is important for reducing its impact. The current behavioural observation-based subjective-diagnosis systems (e.g., DSM-5 or ICD-10) frequently misdiagnose subjects. Therefore, researchers are attempting to develop automated diagnosis systems with minimal human intervention, quicker screening time, and better outreach. Method: This paper is a PRISMA-based systematic review examining the potential of automated autism detection systems with Human Activity Analysis (HAA) to look for distinctive ASD characteristics such as repetitive behaviour, abnormal gait, and visual saliency. The literature from 2011 onward is qualitatively and quantitatively analysed to investigate whether HAA can identify the features of ASD, the level of its classification accuracy, the degree of human intervention, and screening time. Based on these findings we discuss the approaches, challenges, resources, and future directions in this area. Result: According to our quantitative assessment of dataset [1], Inception v3 and LSTM [1] give the highest accuracy (89%) for repetitive behaviour. For the abnormal gait-based approach, the Multilayer Perceptron gives 98% accuracy based on 18 features from dataset [2]. For gaze pattern, a saliency-metric feature-based learning [3] gives 99% accuracy on dataset [4], while an algorithm involving statistical features and Decision Trees yields an accuracy of 76% on dataset [5]. Conclusion: In terms of the state-of-the-art, fully automated HAA systems for ASD diagnosis show promise but are still in developmental stages. However, this is an active research field, and HAA has good prospects for helping to diagnose ASD objectively in less time with better accuracy.
Article
Full-text available
Action recognition is a very widely explored research area in computer vision and related fields. We propose Kinematics Posture Feature (KPF) extraction from 3D joint positions based on skeleton data for improving the performance of action recognition. In this approach, we consider the skeleton 3D joints as kinematics sensors. We propose the Linear Joint Position Feature (LJPF) and Angular Joint Position Feature (AJPF) based on 3D linear joint positions and angles between bone segments. We then combine these two kinematics features for each video frame of each action to create the KPF feature sets. These feature sets encode the variation of motion in the temporal domain as if each body joint represented kinematics position and orientation sensors. In the next stage, we process the extracted KPF feature descriptor with a low-pass filter and segment it using sliding windows of optimized length. This concept resembles the approach of processing kinematics sensor data. From the segmented windows, we compute the Position-based Statistical Feature (PSF). These features consist of temporal-domain statistical features (e.g., mean, standard deviation, variance, etc.). These statistical features encode the variation of postures (i.e., joint positions and angles) across the video frames. For classification, we explore Support Vector Machine (Linear), RNN, CNNRNN, and ConvRNN models. The proposed PSF feature sets demonstrate prominent performance in both statistical machine learning- and deep learning-based models. For evaluation, we explore five benchmark datasets, namely UTKinect-Action3D, Kinect Activity Recognition Dataset (KARD), MSR 3D Action Pairs, Florence 3D, and Office Activity Dataset (OAD). To prevent overfitting, we adopt the leave-one-subject-out framework as the experimental setup and perform 10-fold cross-validation.
Our approach outperforms several existing methods in these benchmark datasets and achieves very promising classification performance.
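The filter-then-segment step this abstract describes might be sketched as follows. The moving-average filter and the window parameters are illustrative assumptions; the paper optimizes its own window length:

```python
import numpy as np

def low_pass(x, k=5):
    """Moving-average low-pass filter (one simple smoothing choice)."""
    return np.convolve(np.asarray(x, dtype=float), np.ones(k) / k, mode="same")

def sliding_windows(x, length, step):
    """Overlapping sliding windows over a 1-D feature series."""
    starts = range(0, len(x) - length + 1, step)
    return np.array([x[s:s + length] for s in starts])
```

Per-window statistics (mean, standard deviation, variance) over the smoothed series would then form the Position-based Statistical Feature vectors.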
Article
Full-text available
We present a dataset of behavioral data recorded from 61 children diagnosed with Autism Spectrum Disorder (ASD). The data was collected during a large-scale evaluation of Robot Enhanced Therapy (RET). The dataset covers over 3000 therapy sessions and more than 300 hours of therapy. Half of the children interacted with the social robot NAO supervised by a therapist. The other half, constituting a control group, interacted directly with a therapist. Both groups followed the Applied Behavior Analysis (ABA) protocol. Each session was recorded with three RGB cameras and two RGBD (Kinect) cameras, providing detailed information of children’s behavior during therapy. This public release of the dataset comprises body motion, head position and orientation, and eye gaze variables, all specified as 3D data in a joint frame of reference. In addition, metadata including participant age, gender, and autism diagnosis (ADOS) variables are included. We release this data with the hope of supporting further data-driven studies towards improved therapy methods as well as a better understanding of ASD in general.
Article
Full-text available
Robot-assisted therapy (RAT) offers potential advantages for improving the social skills of children with autism spectrum disorders (ASDs). This article provides an overview of the developed technology and clinical results of the EC-FP7-funded Development of Robot-Enhanced therapy for children with AutisM spectrum disorders (DREAM) project, which aims to develop the next level of RAT in both clinical and technological perspectives, commonly referred to as robot-enhanced therapy (RET). Within this project, a supervised autonomous robotic system is collaboratively developed by an interdisciplinary consortium including psychotherapists, cognitive scientists, roboticists, computer scientists, and ethicists, which allows robot control to exceed classical remote control methods, e.g., Wizard of Oz (WoZ), while ensuring safe and ethical robot behavior. Rigorous clinical studies are conducted to validate the efficacy of RET. Current results indicate that RET can obtain an equivalent performance compared to that of human standard therapy for children with ASDs. We also discuss the next steps of developing RET robotic systems.
Article
Full-text available
Autism spectrum disorder is a developmental disorder that describes certain challenges associated with communication (verbal and non-verbal), social skills, and repetitive behaviors. Typically, autism spectrum disorder is diagnosed in a clinical environment by licensed specialists using procedures which can be lengthy and cost-ineffective. Therefore, scholars in the medical, psychology, and applied behavioral science fields have in recent decades developed screening methods such as the Autism Spectrum Quotient and Modified Checklist for Autism in Toddlers for diagnosing autism and other pervasive developmental disorders. The accuracy and efficiency of these screening methods rely primarily on the experience and knowledge of the user, as well as the items designed in the screening method. One promising direction to improve the accuracy and efficiency of autism spectrum disorder detection is to build classification systems using intelligent technologies such as machine learning. Machine learning offers advanced techniques that construct automated classifiers that can be exploited by users and clinicians to significantly improve sensitivity, specificity, accuracy, and efficiency in diagnostic discovery. This article proposes a new machine learning method called Rules-Machine Learning that not only detects autistic traits of cases and controls but also offers users knowledge bases (rules) that can be utilized by domain experts in understanding the reasons behind the classification. Empirical results on three data sets related to children, adolescents, and adults show that Rules-Machine Learning offers classifiers with higher predictive accuracy, sensitivity, harmonic mean, and specificity than those of other machine learning approaches such as Boosting, Bagging, decision trees, and rule induction.
Conference Paper
Full-text available
Autism is a neuro-developmental condition that is characterized by a number of unconventional behaviors such as restricted and repetitive activities. It is often largely attributed to deficiency in communication and social interaction. Therefore, it is difficult to get autistic individuals, especially children, to comply with research that aims at comprehending this condition. However, with the availability of non-invasive eye-tracking technology, this problem has become easier to deal with. The following research probes into the visual face scanning patterns and emotion recognition of 21 autistic and 21 control or TD (typically developing) children when shown pictures of 6 basic emotions (happy, sad, angry, disgusted, fearful, and surprised). The Tobii EyeX Controller was used to attain the gaze data, and the data was processed and analyzed in MATLAB. The results revealed that children with autism look less at the core features of the face (eyes, nose, and mouth) while scanning faces and have more difficulty in perceiving the correct emotion compared to the typically developing children. This atypical face scanning and lack of preference for the core features of the face can be the reason why autistic individuals have trouble understanding others' emotions and an overall incompetency in communication and social interaction. To delve more into this, further eye-tracking, neuroimaging, and behavioral studies should be done in an integrated manner.
Article
Full-text available
Due to the imbalanced distribution of business data, missing user features, and many other reasons, directly applying big data techniques to realistic business data tends to deviate from the business goals. It is difficult to model insurance business data with classification algorithms like Logistic Regression, SVM, etc. In this paper, we exploit a heuristic bootstrap sampling approach combined with ensemble learning for large-scale insurance business data mining, and propose an ensemble random forest algorithm that uses the parallel computing capability and memory-cache mechanism optimized by Spark. We collected insurance business data from China Life Insurance Company to analyze potential customers using the proposed algorithm. We use F-Measure and G-mean to evaluate the performance of the algorithm. Experimental results show that the ensemble random forest algorithm outperformed SVM and other classification algorithms in both performance and accuracy on the imbalanced data, and it is useful for improving the accuracy of product marketing compared to the traditional manual approach.
Article
Full-text available
Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ setup mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.
Article
Full-text available
Autism spectrum disorder (ASD) is a permanent neurodevelopmental disorder that can be recognised during the first few years of life and is further supported by the existence of gait impairments. Automated classification of ASD gait could provide assistance in diagnosis and ensure rapid quantitative clinical judgement. This study proposes an automated classification of ASD gait patterns based on kinematic and kinetic gait features with the application of machine learning approaches. The gait of 24 ASD and 24 typical healthy children was recorded using a state-of-the-art three-dimensional (3D) motion analysis system and two force platforms during barefoot self-selected normal walking. Nine kinematic and sixteen kinetic gait features were statistically selected using independent t-tests and Mann-Whitney U tests, and were grouped into two types of datasets. Linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) were employed to perform the recognition task. Overall, the results of the proposed study suggest that the LDA classifier with kinetic gait features as input predictors produces better classification performance, with 82.50% accuracy and a lower misclassification rate.
Article
Full-text available
We propose a 3D gaze-tracking method that combines accurate 3D eye- and facial-gaze vectors estimated from a Kinect v2 high-definition face model. Using accurate 3D facial and ocular feature positions, gaze positions can be calculated more accurately than with previous methods. Considering the image resolution of the face and eye regions, two gaze vectors are combined as a weighted sum, allocating more weight to facial-gaze vectors. Hence, the facial orientation mainly determines the gaze position, and eye-gaze vectors then perform minor manipulations. The 3D facial-gaze vector is first defined, and the 3D rotational center of the eyeball is then estimated; together, these define the 3D eye-gaze vector. Finally, the intersection point between the 3D gaze vector and the physical display plane is calculated as the gaze position. Experimental results show that the average gaze estimation root-mean-square error was approximately 23 pixels from the desired position at a resolution of 1920×1080.
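The weighted combination described above can be sketched in a few lines. The 0.7/0.3 split is a placeholder; the paper derives its weights from the image resolution of the face and eye regions:

```python
import numpy as np

def combined_gaze(face_vec, eye_vec, w_face=0.7):
    """Weighted sum of facial- and eye-gaze vectors, with more weight on the
    facial-gaze direction, renormalized to a unit gaze vector."""
    f = face_vec / np.linalg.norm(face_vec)
    e = eye_vec / np.linalg.norm(eye_vec)
    g = w_face * f + (1.0 - w_face) * e
    return g / np.linalg.norm(g)
```

Intersecting the resulting unit vector with the display plane then gives the on-screen gaze position.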
Article
Full-text available
The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.
Chapter
Full-text available
Mean frequency (MNF) and median frequency (MDF) are two useful and popular frequency-domain features for electromyography analysis in both clinical and engineering applications. MNF and MDF are frequently used as the gold-standard tool to detect fatigue in the target muscles using EMG signals. The effectiveness of MNF and MDF under many experimental conditions is presented and confirmed in this chapter, although the effects of muscle force and muscle geometry on MNF and MDF are inconclusive. However, the possible reasons for the conflicting results for both effects are described and discussed in detail, together with possible techniques to make the MNF and MDF results consistent with both effects, as follows. For the effect of muscle force, time-dependent MNF and MDF should be applied to the raw EMG data; as a result, MNF and MDF should increase as the muscle force or load increases. For the effect of muscle geometry or joint angle, a normalization technique should be applied to the raw EMG data; as a result, MNF and MDF should increase as the muscle length or joint angle (degrees of extension) decreases. However, the question remains whether conflicting, i.e., subject-dependent, results are found for the effects of both muscle force and muscle geometry on MNF and MDF. To address this question, two further works should be investigated: (1) finding the correlation between related anthropometric variables obtained from the subjects and MNF (or MDF), and (2) requesting all relevant information to complete all components in Tables 2 and 3, and finding the possible reasons from the complete experimental conditions. In total, MNF and MDF features extracted from the EMG signal are the optimal variables to identify muscle fatigue, particularly for static muscle contraction. However, for dynamic muscle contraction, applying instantaneous MNF and MDF is recommended.
The recommendations above can be useful for most electromyography applications, such as human-computer interaction (HCI), ergonomics, occupational therapy, and sport science. In addition, applying both techniques can make the MNF and MDF features universal indices that can identify all factors, including muscle force, muscle geometry, and muscle fatigue.
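For a concrete reading of the two features, here is a numpy sketch of MNF (the power-weighted mean frequency) and MDF (the frequency that splits the spectral power in half), using a plain FFT power spectrum as the (assumed) spectral estimator:

```python
import numpy as np

def mnf_mdf(signal, fs):
    """Mean (MNF) and median (MDF) frequency of a signal's power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2           # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)      # frequency bins in Hz
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)     # power-weighted mean
    cumulative = np.cumsum(spectrum)
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]
    return mnf, mdf

# Example: a pure 50 Hz tone should give MNF and MDF near 50 Hz.
fs = 1000
t = np.arange(0, 1, 1.0 / fs)
mnf, mdf = mnf_mdf(np.sin(2 * np.pi * 50 * t), fs)
```

During fatigue the EMG spectrum shifts toward lower frequencies, so both indices decrease over time.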
Chapter
Full-text available
Random Decision Forest-based approaches have previously shown promising performance in the domain of brain tumor segmentation. We extend this idea by using an ExtraTree classifier. Several features are calculated based on normalized T1, T2, T1 with contrast agent, and T2-FLAIR MR images. With these features, an ExtraTree classifier is trained and used to predict different tissue classes at the voxel level. The results are compared to other state-of-the-art approaches by participating in the BraTS 2013 challenge.
Article
Full-text available
Through this meta-analysis we aimed to provide an estimation of the overall effect of robot-enhanced therapy on psychological outcomes for different populations, to provide average effect sizes for different outcomes, such as cognitive, behavioral, and subjective, and to test possible moderators of effect size. From a total of 861 studies considered for this meta-analysis, only 12 were included, because of the lack of studies reporting quantitative data in this area and because of their primary focus on describing the process of robotic development rather than measuring psychological outcomes. We calculated Cohen's d effect sizes for every outcome measure for which sufficient data were reported. The results show that robot-enhanced therapy yielded a medium effect size overall and specifically on the behavioral level, indicating that 69% of patients in the control groups did worse than the average participant in the intervention group. More studies are needed with regard to specific outcomes to prove the efficacy of robot-enhanced therapy, but the overall results clearly support the use of robot-enhanced therapy for different populations.
Article
Full-text available
Eye tracking has the potential to characterize autism at a unique intermediate level, with links 'down' to underlying neurocognitive networks, as well as 'up' to everyday function and dysfunction. Because it is non-invasive and does not require advanced motor responses or language, eye tracking is particularly important for the study of young children and infants. In this article, we review eye tracking studies of young children with autism spectrum disorder (ASD) and children at risk for ASD. Reduced looking time at people and faces, as well as problems with disengagement of attention, appear to be among the earliest signs of ASD, emerging during the first year of life. In toddlers with ASD, altered looking patterns across facial parts such as the eyes and mouth have been found, together with limited orienting to biological motion. We provide a detailed discussion of these and other key findings and highlight methodological opportunities and challenges for eye tracking research of young children with ASD. We conclude that eye tracking can reveal important features of the complex picture of autism.
Conference Paper
Full-text available
This article presents the mechatronic design of the autonomous humanoid robot called NAO, built by the French company Aldebaran-Robotics. With a height of 0.57 m and a weight of about 4.5 kg, this innovative robot is lightweight and compact. It distinguishes itself from existing humanoids thanks to its pelvis kinematics design, its proprietary actuation system based on brushed DC motors, and its electronic, computer, and distributed software architectures. This robot has been designed to be affordable without sacrificing quality and performance. It is an open and easy-to-handle platform. The comprehensive and functional design is one of the reasons that helped select NAO to replace the AIBO quadrupeds in the 2008 RoboCup standard league.
Article
Full-text available
An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
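The core of the over-sampling method described above can be sketched in a few lines: each synthetic minority sample is created by interpolating between a minority sample and one of its k nearest minority-class neighbours. This is a simplified NumPy illustration, not the authors' reference implementation:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize minority-class samples by
    interpolating between a sample and a random one of its k nearest
    minority-class neighbours."""
    rng = rng or np.random.default_rng(0)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Euclidean distances from sample i to every minority sample.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # random position along the segment
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synth)

minority = np.random.default_rng(1).normal(size=(20, 3))
new_pts = smote(minority, n_new=40)
```

Because every synthetic point lies on a segment between two existing minority samples, the new points stay inside the minority class's region of feature space rather than merely duplicating examples.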
Article
Among the social skills that are core symptoms of autism spectrum disorder, turn-taking plays a fundamental role in regulating social interaction and communication. Our main focus in this study is to investigate the effectiveness of a robot-enhanced intervention on turn-taking abilities. We aim to identify to what degree social robots can improve turn-taking skills and whether this type of intervention provides similar or better gains than standard intervention. This study presents a series of 5 single-subject experiments with children with autism spectrum disorder aged between 3 and 5 years. Each child receives 20 intervention sessions (8 robot-enhanced sessions—robot-enhanced treatment (RET), 8 standard human sessions—standard human treatment, and 4 sessions with whichever intervention was more efficient). Our findings show that most children reach similar levels of performance on turn-taking skills across standard human treatment and RET, meaning that children benefit to a similar extent from both interventions. However, in the RET condition, children seemed to find their robotic partner more interesting than their human partner, as they looked more at the robotic partner than at the human partner.
Article
Noninvasive behavior observation techniques allow more natural human behavior assessment experiments with higher ecological validity. We propose the use of gaze ethograms in the context of user interaction with a computer display to characterize the user's behavioral activity. A gaze ethogram is a time sequence of the screen regions the user is looking at, and it can be used for behavioral modeling of the user. Given a rough partition of the display space, we are able to extract gaze ethograms that allow discrimination of three common user behavioral activities: reading a text, viewing a video clip, and writing a text. A gaze tracking system is used to build the gaze ethogram. User behavioral activity is modeled by a classifier of gaze ethograms able to recognize the user activity after training. Conventional commercial gaze trackers used for research in neuroscience and psychology are expensive and intrusive, and sometimes require wearing uncomfortable appliances. For the purposes of our behavioral research, we have developed an open-source gaze tracking system that runs on conventional laptop computers using their low-quality cameras. Some of the gaze tracking pipeline elements have been borrowed from the open-source community. However, we have developed innovative solutions to some of the key issues that arise in the gaze tracker. Specifically, we have proposed texture-based eye features that are quite robust to low-quality images. These features are the input to a classifier predicting the screen target area the user is looking at. We report comparative results for several classifier architectures, evaluated in order to select the classifier used to extract the gaze ethograms for our behavioral research. We perform another classifier selection at the level of ethogram classification. Finally, we report encouraging results from user behavioral activity recognition experiments carried out on an in-house dataset.
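The ethogram-extraction step described above reduces to mapping each raw gaze coordinate to a region label under a fixed partition of the display. A minimal sketch, assuming a uniform rows × cols grid (the paper's actual partition may differ):

```python
import numpy as np

def gaze_ethogram(points, width, height, rows=2, cols=2):
    """Map raw gaze coordinates to a sequence of screen-region labels
    (a 'gaze ethogram') using a rows x cols partition of the display."""
    pts = np.asarray(points, dtype=float)
    col = np.clip((pts[:, 0] / width * cols).astype(int), 0, cols - 1)
    row = np.clip((pts[:, 1] / height * rows).astype(int), 0, rows - 1)
    return row * cols + col   # one region index per gaze sample

# Four samples, one in each quadrant of a 1920x1080 display.
seq = gaze_ethogram([(100, 100), (1800, 100), (100, 1000), (1800, 1000)],
                    width=1920, height=1080)
# → region sequence [0, 1, 2, 3]
```

The resulting symbol sequence is what an ethogram classifier (e.g. one trained to distinguish reading, video viewing, and writing) would consume as input.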
Article
It is evident that recently reported robot-assisted therapy systems for assessment of children with autism spectrum disorder (ASD) lack autonomous interaction abilities and require significant human resources. This paper proposes a sensing system that automatically extracts and fuses sensory features such as body motion features, facial expressions, and gaze features, further assessing the children's behaviours by mapping them to therapist-specified behavioural classes. Experimental results show that the developed system is capable of interpreting characteristic data of children with ASD, and thus has the potential to increase the autonomy of robots under the supervision of a therapist and enhance the quality of the digital description of children with ASD. The research outcomes pave the way to a feasible machine-assisted system for their behaviour assessment.
Conference Paper
The classification of imbalanced data is a common problem in the context of medical imaging intelligence. The synthetic minority oversampling technique (SMOTE) is a powerful approach to tackling this problem. This paper presents a novel approach to improving the conventional SMOTE algorithm by incorporating the locally linear embedding (LLE) algorithm. The LLE algorithm is first applied to map the high-dimensional data into a low-dimensional space, where the input data is more separable and thus can be oversampled by SMOTE. Then the synthetic data points generated by SMOTE are mapped back to the original input space, also through the LLE. Experimental results demonstrate that the proposed approach attains performance superior to that of the traditional SMOTE.
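The embed-then-oversample pipeline can be sketched as follows, assuming scikit-learn. Since scikit-learn's LLE offers no inverse transform, the final back-mapping step here is a crude neighbour-averaging stand-in for the paper's reconstruction, not the authors' actual method:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_min = rng.normal(size=(30, 10))          # stand-in minority-class samples

# 1) Map the minority class into a low-dimensional space with LLE.
lle = LocallyLinearEmbedding(n_neighbors=5, n_components=2, random_state=0)
Z = lle.fit_transform(X_min)

# 2) SMOTE-style interpolation in the embedded space.
nn = NearestNeighbors(n_neighbors=6).fit(Z)
_, idx = nn.kneighbors(Z)
i = rng.integers(len(Z), size=20)
j = idx[i, rng.integers(1, 6, size=20)]    # random neighbour (index 0 is self)
gap = rng.random((20, 1))
Z_new = Z[i] + gap * (Z[j] - Z[i])

# 3) Map synthetic points back to input space by averaging the input-space
#    coordinates of their nearest embedded neighbours (hypothetical stand-in
#    for the paper's LLE-based back-mapping).
_, back = nn.kneighbors(Z_new, n_neighbors=5)
X_new = X_min[back].mean(axis=1)
```

The key design point is that interpolation happens where the classes are most separable, so synthetic samples are less likely to bleed into majority-class territory than with plain input-space SMOTE.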
Article
The need for intelligent HCI has been reinforced by the increasing numbers of human-centered applications in our daily life. However, in order to respond adequately, intelligent applications must first interpret users' actions. Identifying the context in which users' interactions occur is an important step toward automatic interpretation of behavior. In order to address a part of this context-sensing problem, we propose a generic and application-independent framework for activity recognition of users interacting with a computer interface. Our approach uses Layered Hidden Markov Models (LHMM) and is based on eye-gaze movements along with keyboard and mouse interactions. The main contribution of the proposed framework is the ability to relate users' interactions to a task model across different applications and for different monitoring purposes. Experimental results from two user studies show that our activity recognition technique is able to achieve good predictive accuracy with a relatively small amount of training data.
Article
The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections.
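The section-window-average procedure described above (Welch's method) can be sketched directly in NumPy; this is an illustrative re-implementation with a Hann window and 50% overlap, not the paper's original code:

```python
import numpy as np

def welch_psd(x, fs, seg_len=256):
    """Welch-style PSD estimate: section the record, window each section
    (Hann), and average the resulting modified periodograms."""
    window = np.hanning(seg_len)
    scale = fs * np.sum(window ** 2)           # normalization for the window
    periodograms = []
    for start in range(0, len(x) - seg_len + 1, seg_len // 2):  # 50% overlap
        seg = x[start:start + seg_len] * window
        periodograms.append(np.abs(np.fft.rfft(seg)) ** 2 / scale)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, np.mean(periodograms, axis=0)

# A 64 Hz sine in mild noise: the averaged spectrum should peak near 64 Hz.
fs = 1024
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 64 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
freqs, psd = welch_psd(x, fs)
peak = freqs[np.argmax(psd)]
```

Averaging the sectioned periodograms trades frequency resolution (fs / seg_len per bin) for a large reduction in the variance of the spectral estimate, which is the method's central point.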
R. J. Homewood and dream2020, "dream2020/dream: Dream ret system," Dec. 2019. [Online]. Available: https://doi.org/10.5281/zenodo.3571992