Conference Paper · PDF available

Recognizing Mimicked Autistic Self-Stimulatory Behaviors Using HMMs.

Abstract

Children with autism often exhibit self-stimulatory (or "stimming") behaviors. We present an on-body sensing system for continuous recognition of stimming activity. By creating a system to recognize and monitor stimming behaviors, we hope to provide autism researchers with detailed, quantitative data. In this paper, we compare isolated and continuous recognition rates of emulated autistic stimming behaviors using hidden Markov models (HMMs). We achieved an overall system accuracy of 68.57% in continuous recognition tests. However, the occurrence of stimming events can be detected with 100% accuracy by allowing minor frame-level insertion errors.
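The sketch below illustrates the general recipe the abstract describes: one HMM per behavior class trained on accelerometer sequences, with a sliding window over the continuous stream labeled by the class whose model scores it highest. The original work appears to predate modern Python tooling (an HTK-based toolkit, GT2k, is among the references below), so hmmlearn is used here only as a stand-in, and the class list, window size, and data layout are assumptions rather than the paper's exact setup.

```python
# Minimal sketch of isolated-vs-continuous HMM recognition of stimming gestures.
# hmmlearn stands in for the original HTK-era tooling; the data loader, window
# size, and class list are hypothetical placeholders.
import numpy as np
from hmmlearn.hmm import GaussianHMM

CLASSES = ["hand_flapping", "rocking", "drumming"]   # assumed label set

def train_models(train_data, n_states=5):
    """train_data: dict mapping class name -> list of (T_i, 3) accelerometer sequences."""
    models = {}
    for label, sequences in train_data.items():
        X = np.vstack(sequences)                      # concatenate all training sequences
        lengths = [len(s) for s in sequences]         # per-sequence lengths for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_window(models, window):
    """Pick the class whose HMM assigns the highest log-likelihood to the window."""
    scores = {label: m.score(window) for label, m in models.items()}
    return max(scores, key=scores.get)

def continuous_recognition(models, stream, win=64, hop=32):
    """Slide a fixed window over a continuous (T, 3) stream and label each block."""
    return [classify_window(models, stream[i:i + win])
            for i in range(0, len(stream) - win + 1, hop)]
```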
... Research on computer-aided screening of ASD has focused primarily on evaluating visual attention [7,13,65,70,71], vocalization [25,60,69], stereotyped movements [2,30,42,55,68], and motor coordination [5,23,64,67], the last being the least explored in the machine learning literature despite its importance, in part due to the lack of appropriate technology to collect such data. ...
... Research on computer-aided screening of ASD has shown that it is possible to use small datasets of voice, gaze, and stereotypical movements to classify children with ASD and reach a precision above 90% [2,5,7,13,23,25,30,42,55,60,64,65,67-71]. Data in the above research comes from fewer than 30 children with ASD and 30 NT children. ...
... The resulting model classifies five out of six children with ASD (83%) with confidence in the results of up to 95%. Small datasets were used for detecting stereotyped movements [2,30,42,55,68], reaching a precision of 90%. To identify hand flapping and rocking in one study [2], researchers proposed placing an accelerometer on each wrist and on the participant's back to collect data on stereotyped movements. ...
Article
Full-text available
Health data collection of children with autism spectrum disorder (ASD) is challenging, time-consuming, and expensive; thus, working with small datasets is inevitable in this area. The diagnosis rate in ASD is low, leading to several challenges, including imbalanced classes, potential overfitting, and sampling bias, which make it difficult to show its potential in real-life situations. This paper presents a data analytics pilot-case study using a small dataset and leveraging domain-specific knowledge to uncover differences between the gestural patterns of children with ASD and neurotypicals. We collected data from 59 children using an elastic display we developed during a sensing campaign, and from 9 children using the elastic display as part of a therapeutic program. We extracted strength-related features and selected the most relevant ones based on how the motor atypicality of children with ASD influences their interactions: children with ASD make smaller and narrower gestures and experience variations in the use of strength. The proposed machine learning models can correctly classify children with ASD with 97.3% precision and recall even if the classes are unbalanced. Increasing the size of the dataset via synthetic data improved the model precision to 99%. We finish by discussing the importance of leveraging domain-specific knowledge in the learning process to successfully cope with some of the challenges faced when working with small datasets in a concrete, real-life scenario.
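For readers unfamiliar with this small-dataset workflow, the sketch below shows one common way to combine class-imbalance handling with synthetic oversampling. The feature names, the use of SMOTE, and the random-forest classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch: strength-related features, imbalanced classes, and synthetic
# oversampling. All feature values and class sizes are invented for demonstration.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
# Hypothetical features: gesture area, gesture width, mean strength, strength variance.
X_asd = rng.normal([0.4, 0.3, 0.6, 0.25], 0.1, size=(9, 4))    # minority class
X_nt  = rng.normal([0.7, 0.6, 0.5, 0.10], 0.1, size=(59, 4))   # majority class
X = np.vstack([X_asd, X_nt])
y = np.array([1] * len(X_asd) + [0] * len(X_nt))

# Generate synthetic minority samples to balance the classes.
X_res, y_res = SMOTE(k_neighbors=3, random_state=0).fit_resample(X, y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X_res, y_res, cv=5, scoring=["precision", "recall"])
print(scores["test_precision"].mean(), scores["test_recall"].mean())
```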
... The authors' investigation revealed that the repetitive patterns reflected signs of tension or emotion. They demonstrated that both the wrist-based accelerometer and the sound sensor can capture self-stimulatory behaviors [20]. The remainder of the related work concerns approaches from the machine learning literature: ...
Article
Full-text available
Background and Objectives: Autism is a well-known disorder that occurs in people of any age. There is increasing interest in applying machine learning techniques to diagnose these incurable conditions. However, the poor quality of most datasets constrains the production of efficient models for predicting autism, and the lack of suitable pre-processing methods leads to inaccurate and unstable results. For diagnosing the disorder, techniques aimed at improving classification performance have yielded better results, and other computerized technologies have been applied. Methods: An effective, high-performance model was introduced to address pre-processing problems such as missing values and outliers. Several base classifiers were applied to a well-known autism dataset in the classification stage. Among many alternatives, we found that combining mean-value replacement with feature selection based on Random Forest and Decision Tree techniques produced our highest results. Results: The best accuracy, precision, recall, and F-measure values obtained by the proposed MVO-Autism model were all equal to 100%, outperforming its counterparts. Conclusion: The results show that the proposed model can increase classification performance in terms of evaluation metrics and that the MVO-Autism model outperforms its counterparts, because it overcomes both pre-processing problems.
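The pre-processing recipe described above (mean replacement for missing values plus Random Forest / Decision Tree based selection and classification) can be approximated with standard scikit-learn components. The sketch below is an assumption-laden outline, not the MVO-Autism implementation; the CSV path and column names are placeholders.

```python
# Rough sketch of the idea above: replace missing values with the mean, select
# features with a random forest, then classify with a decision tree.
# "autism_screening.csv" and the "Class" column are hypothetical placeholders.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("autism_screening.csv")                # hypothetical dataset path
X = df.drop(columns=["Class"]).select_dtypes("number")  # numeric features only
y = df["Class"]

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),         # mean replacement of missing values
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))),
    ("clf", DecisionTreeClassifier(random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean())
```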
... Smartwatch-based systems and sensors have been used to detect repetitive behaviors to aid intervention for those with autism. Westeyn et al. used hidden Markov models to detect stimming from accelerometer data covering 7 different types of self-stimulatory behaviors [52]. They reached 69% accuracy with this approach. ...
Preprint
Full-text available
A formal autism diagnosis is an inefficient and lengthy process. Families often have to wait years before receiving a diagnosis for their child; some may not receive one at all due to this delay. One approach to this problem is to use digital technologies to detect the presence of behaviors related to autism, which in aggregate may lead to remote and automated diagnostics. One of the strongest indicators of autism is stimming, which is a set of repetitive, self-stimulatory behaviors such as hand flapping, headbanging, and spinning. Using computer vision to detect hand flapping is especially difficult due to the sparsity of public training data in this space and excessive shakiness and motion in such data. Our work demonstrates a novel method that overcomes these issues: we use hand landmark detection over time as a feature representation which is then fed into a Long Short-Term Memory (LSTM) model. We achieve a validation accuracy and F1 Score of about 72% on detecting whether videos from the Self-Stimulatory Behaviour Dataset (SSBD) contain hand flapping or not. Our best model also predicts accurately on external videos we recorded of ourselves outside of the dataset it was trained on. This model uses less than 26,000 parameters, providing promise for fast deployment into ubiquitous and wearable digital settings for a remote autism diagnosis.
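As an illustration of the landmark-plus-LSTM idea described above, the sketch below builds a small Keras LSTM over per-frame hand-landmark coordinates. The clip length, landmark count, and training data are placeholders, and the authors' exact architecture may differ.

```python
# Minimal Keras sketch: a small LSTM over per-frame hand-landmark coordinates
# predicting hand flapping vs. not. Shapes and the data pipeline are assumptions.
import numpy as np
import tensorflow as tf

FRAMES, LANDMARKS = 90, 21          # e.g. ~3 s at 30 fps, 21 hand landmarks per frame
FEATURES = LANDMARKS * 2            # (x, y) per landmark

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES, FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),         # tolerate shorter, zero-padded clips
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of hand flapping
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder training data: real inputs would come from hand-landmark detection per frame.
X = np.random.rand(16, FRAMES, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(16,))
model.fit(X, y, epochs=2, batch_size=4, verbose=0)
```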
... To address this, several automatic systems have been proposed (as below) that can save cost as well as human effort. [Westeyn et al. 2005] presents an on-body sensing system for monitoring stimming (repetitive body movement) activities. An accuracy of 68.57% is achieved using HMMs that distinguish between stimming and non-stimming behavior. ...
Preprint
Pervasive healthcare is an emerging technology that aims to provide round-the-clock monitoring of several vital signs of patients using various health sensors, specialized communication protocols, and intelligent context-aware applications. Pervasive healthcare applications proactively contact the caregiver whenever any abnormality arises in the health condition of a monitored patient. It has been a boon to patients suffering from different diseases and requiring continuous monitoring and care, such as disabled individuals, elderly and weak persons living alone, children of different ages, and adults who are susceptible to near-fatal falls or sudden increases in blood pressure, heart rate, stress level, etc. Existing surveys on pervasive healthcare cover generic techniques or a particular application, like fall detection. In this paper, we provide comprehensive coverage of several common disorders addressed by pervasive healthcare in recent years. We roughly classify different diseases by patient age group and then discuss various hardware and software tools and techniques to detect or treat them. We also include a detailed tabular classification of a large selection of significant research articles in pervasive healthcare.
... In the last 20 years, authors have raised the problem of being able to categorize and recognize motor abnormalities in autism, taking advantage of new technologies and new methods of machine learning. In particular, they focused on the recognition and anticipation of stereotypical motor movements (SMM) (Westeyn et al., 2005; Albinali et al., 2009, 2012; Min and Tewfik, 2010; Goodwin et al., 2011, 2014; Goncalves et al., 2012; Rodrigues et al., 2013; Großekathöfer et al., 2017; Milano et al., 2019). Using a variety of different features and semi-supervised classification approaches (orthogonal matching pursuit, linear predictive coding, all-pole autoregressive models, higher-order statistics, ordinary least squares, and the K-SVD algorithm), recognition rates of 86-95% for SMM and no-SMM have been documented. ...
Article
Full-text available
Autism is a neurodevelopmental disorder typically assessed and diagnosed through observational analysis of behavior. Assessment based exclusively on behavioral observation sessions requires a lot of time for diagnosis. In recent years, there has been a growing need to make assessment processes more motivating and capable of providing objective measures of the disorder. New evidence shows that motor abnormalities may underpin the disorder and provide a computational marker to enhance assessment and diagnostic processes. Thus, a measure of motor patterns could provide a means to assess young children with autism and a new starting point for rehabilitation treatments. In this study, we propose to use a software tool that, through a smart tablet device and touch-screen sensor technologies, can capture detailed information about children’s motor patterns. We compared movement trajectories of autistic children and typically developing children with the aim of identifying autism motor signatures by analyzing their movement coordinates. We used a smart tablet device to record the coordinates of dragging movements carried out by 60 children (30 autistic children and 30 typically developing children) during a cognitive task. Machine learning analysis of the children’s motor patterns identified autism with 93% accuracy, demonstrating that autism can be computationally identified. The analysis of the features that most affect the prediction reveals and describes the differences between the groups, confirming that motor abnormalities are a core feature of autism.
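To make the trajectory-analysis idea concrete, the sketch below computes a few plausible motor features (path length, mean speed, straightness) from timestamped drag coordinates. These specific features and the classifier are assumptions rather than the study's published feature set.

```python
# Sketch of simple motor features from tablet drag trajectories; the feature set and
# classifier are illustrative assumptions, not the study's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def drag_features(points):
    """points: (N, 3) array of (x, y, t) samples from one dragging gesture."""
    xy, t = points[:, :2], points[:, 2]
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    path_len = steps.sum()
    duration = t[-1] - t[0]
    straight = np.linalg.norm(xy[-1] - xy[0])
    return np.array([
        path_len,
        path_len / max(duration, 1e-6),   # mean speed
        straight / max(path_len, 1e-6),   # straightness (1.0 = perfectly straight)
    ])

# Hypothetical usage: one feature vector per gesture, labels 1 = ASD group, 0 = TD group.
# X = np.stack([drag_features(g) for g in gestures])
# clf = RandomForestClassifier(random_state=0).fit(X, y)
```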
Article
Background A formal autism diagnosis can be an inefficient and lengthy process. Families may wait several months or longer before receiving a diagnosis for their child despite evidence that earlier intervention leads to better treatment outcomes. Digital technologies that detect the presence of behaviors related to autism can scale access to pediatric diagnoses. A strong indicator of the presence of autism is self-stimulatory behaviors such as hand flapping. Objective This study aims to demonstrate the feasibility of deep learning technologies for the detection of hand flapping from unstructured home videos as a first step toward validation of whether statistical models coupled with digital technologies can be leveraged to aid in the automatic behavioral analysis of autism. To support the widespread sharing of such home videos, we explored privacy-preserving modifications to the input space via conversion of each video to hand landmark coordinates and measured the performance of corresponding time series classifiers. Methods We used the Self-Stimulatory Behavior Dataset (SSBD) that contains 75 videos of hand flapping, head banging, and spinning exhibited by children. From this data set, we extracted 100 hand flapping videos and 100 control videos, each between 2 and 5 seconds in duration. We evaluated five separate feature representations: four privacy-preserved subsets of hand landmarks detected by MediaPipe and one feature representation obtained from the output of the penultimate layer of a MobileNetV2 model fine-tuned on the SSBD. We fed these feature vectors into a long short-term memory network that predicted the presence of hand flapping in each video clip. Results The highest-performing model used MobileNetV2 to extract features and achieved a test F1 score of 84 (SD 3.7; precision 89.6, SD 4.3, and recall 80.4, SD 6) using 5-fold cross-validation for 100 random seeds on the SSBD data (500 total distinct folds). Of the models we trained on privacy-preserved data, the model trained with all hand landmarks reached an F1 score of 66.6 (SD 3.35). Another such model trained with 6 selected landmarks reached an F1 score of 68.3 (SD 3.6). A privacy-preserved model trained using a single landmark at the base of the hands and a model trained with the average of the locations of all the hand landmarks reached an F1 score of 64.9 (SD 6.5) and 64.2 (SD 6.8), respectively. Conclusions We created five lightweight neural networks that can detect hand flapping from unstructured videos. Training a long short-term memory network with convolutional feature vectors outperformed training with feature vectors of hand coordinates and used almost 900,000 fewer model parameters. This study provides the first step toward developing precise deep learning methods for activity detection of autism-related behaviors.
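The privacy-preserving feature representation described above (per-frame MediaPipe hand landmarks) can be sketched as follows; the clip path, padding scheme, and fixed sequence length are assumptions for illustration, not the study's exact preprocessing.

```python
# Hedged sketch: convert a video clip into a time series of MediaPipe hand-landmark
# (x, y) coordinates, zero-padded so clips can be batched for a downstream LSTM.
import cv2
import numpy as np
import mediapipe as mp

def video_to_landmarks(path, max_frames=150):
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            frames.append([c for p in lm for c in (p.x, p.y)])   # 21 * (x, y) = 42 values
        else:
            frames.append([0.0] * 42)                            # no hand detected this frame
    cap.release()
    hands.close()
    out = np.zeros((max_frames, 42), dtype=np.float32)
    if frames:
        out[:len(frames)] = frames                               # zero-pad to fixed length
    return out

landmark_seq = video_to_landmarks("hand_flapping_clip.mp4")      # hypothetical file
```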
Chapter
Assistive technology can be defined as any device or equipment that assists in teaching new skills, augments existing skills, or reduces the impact of disability on daily functioning. Assistive technology is the technology used by people with disabilities to achieve functions that can be difficult or impossible without it. Some examples of using assistive technologies to assist children with disabilities include using robot therapists in intervention and the use of laminated picture cards for communication purposes.
Article
The analysis and evaluation of manual assembly processes involves high effort and expenditure. Traditionally, assessments use visual and empirical methods with a low level of digitalization, which often do not cover their costs for small production quantities. This paper presents an approach to recognizing assembly steps from individually detected sensory events. The approach can be integrated into a system for the automatic analysis of manual assembly processes and is applicable when little training data is available. It is based on a hidden Markov model combined with decision logic. The methodology is tested on an exemplary use case.
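A toy version of the "HMM plus decision logic" idea can be written in a few lines: hidden states stand for assembly steps, observations are discrete sensor events, and Viterbi decoding recovers the most likely step sequence, after which a decision rule can reject low-likelihood decodings. All states, events, and probabilities below are invented for illustration.

```python
# Toy Viterbi decoding of assembly steps from discrete sensor events.
import numpy as np

states = ["pick_part", "fasten_screw", "place_cover"]
events = ["gripper_close", "screwdriver_on", "bin_sensor"]

start = np.array([0.8, 0.1, 0.1])
trans = np.array([[0.6, 0.3, 0.1],     # P(next step | current step)
                  [0.1, 0.6, 0.3],
                  [0.2, 0.1, 0.7]])
emit = np.array([[0.7, 0.1, 0.2],      # P(event | step)
                 [0.1, 0.8, 0.1],
                 [0.2, 0.1, 0.7]])

def viterbi(obs):
    """Return the most likely state path for a list of observation indices."""
    T, N = len(obs), len(states)
    logp = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        for j in range(N):
            cand = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = cand.argmax()
            logp[t, j] = cand.max() + np.log(emit[j, obs[t]])
    path = [int(logp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[i] for i in reversed(path)]

obs = [events.index(e) for e in ["gripper_close", "screwdriver_on", "bin_sensor"]]
print(viterbi(obs))    # a decision rule could then reject low-likelihood decodings
```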
Conference Paper
Full-text available
Gesture recognition is becoming a more common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can be a cumbersome task. Hidden Markov models (HMMs), a pattern recognition technique commonly used in speech recognition, can be used for recognizing certain classes of gestures. Existing HMM toolkits for speech recognition can be adapted to perform gesture recognition, but doing so requires significant knowledge of the speech recognition literature and its relation to gesture recognition. This paper introduces the Georgia Tech Gesture Toolkit (GT2k), which leverages Cambridge University's speech recognition toolkit, HTK, to provide tools that support gesture recognition research. GT2k provides capabilities for training models and allows for both real-time and off-line recognition. This paper presents four ongoing projects that utilize the toolkit in a variety of domains.
Conference Paper
Full-text available
We explore the social and technical design issues involved in tracking the effectiveness of educational and therapeutic interventions for children with autism (CWA). Automated capture can be applied in a variety of settings to provide a means of keeping valuable records of interventions. We present the findings from qualitative studies and the designs of capture prototypes. These experiences lead to conclusions about specific considerations for building technologies to assist in the treatment of CWA, as well as other fragile demographics. Our work also reflects back on the automated capture problem itself, informing us as computer scientists how that class of applications must be reconsidered when the analysis of data in the access phase continually influences the capture needs and when social and practical constraints conflict with data collection needs.
Conference Paper
In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data was calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.
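The window features named above (mean, energy, frequency-domain entropy, and correlation of acceleration) map directly onto a short feature-extraction routine. The sketch below is a plausible reading of those features feeding a decision-tree classifier, with window length and axis count left as assumptions.

```python
# Sketch of per-window accelerometer features (mean, energy, frequency-domain entropy,
# pairwise axis correlation) for a decision-tree activity classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(window):
    """window: (T, n_axes) array of acceleration samples for one fixed-length window."""
    feats = [window.mean(axis=0)]
    spectrum = np.abs(np.fft.rfft(window, axis=0))[1:]       # drop the DC component
    feats.append((spectrum ** 2).sum(axis=0) / len(window))  # energy per axis
    p = spectrum / (spectrum.sum(axis=0, keepdims=True) + 1e-12)
    feats.append(-(p * np.log2(p + 1e-12)).sum(axis=0))      # frequency-domain entropy
    corr = np.corrcoef(window.T)
    feats.append(corr[np.triu_indices_from(corr, k=1)])      # pairwise axis correlations
    return np.concatenate(feats)

# Hypothetical usage with pre-segmented, labeled windows:
# X = np.stack([window_features(w) for w in windows])
# clf = DecisionTreeClassifier(max_depth=8).fit(X, labels)
```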
Telemetric assessment of stress in individuals with autism
  • M Goodwin
M. Goodwin. Telemetric assessment of stress in individuals with autism. In Spectrum Disorders. American Psychological Association, 2004.