Conference Paper (PDF available)

CTRL-Labs: Hand Activity Estimation and Real-time Control from Neuromuscular Signals

Abstract

CTRL-Labs has developed algorithms for determining hand movements and forces, and for real-time control, from neuromuscular signals. This technology enables users to create their own control schemes at run-time -- dynamically mapping neuromuscular activity to continuous (real-valued) and discrete (categorical/integer-valued) machine-input signals. To demonstrate the potential of this approach to enable novel interactions, we have built three example applications. One displays an ongoing visualization of the current posture/rotation of the hand and each finger as determined from neuromuscular signals. The other two showcase dynamic mapping of neuromuscular signals to continuous and discrete input controls for a two-player competitive target-acquisition game and a single-player space-shooter game.
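The dynamic mapping described above -- neuromuscular activity driving both continuous and discrete machine inputs -- can be illustrated with a deliberately simplified sketch. This is not CTRL-Labs' algorithm; it assumes a single raw EMG channel, a rectify-and-smooth envelope, min-max normalization, and a fixed threshold, all of which are illustrative choices.

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Rectify a raw EMG channel and smooth it with a moving average."""
    kernel = np.ones(window) / window
    return np.convolve(np.abs(raw), kernel, mode="same")

def map_controls(envelope, threshold=0.5):
    """Map an envelope to a continuous control in [0, 1] and a
    thresholded discrete (binary) control."""
    lo, hi = envelope.min(), envelope.max()
    continuous = (envelope - lo) / (hi - lo + 1e-9)   # real-valued signal
    discrete = (continuous > threshold).astype(int)   # categorical signal
    return continuous, discrete

# Synthetic recording: baseline noise plus a muscle-activity burst in the middle.
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 0.1, 1000)
raw[400:600] += rng.normal(0.0, 1.0, 200)
cont, disc = map_controls(emg_envelope(raw))
print(disc[:100].max(), disc[450:550].mean())
```

In a real system the normalization would be calibrated online rather than computed over the whole recording, and the discrete mapping would typically be a trained classifier rather than a single threshold.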
... Recently, high-performance myoelectric systems have been sought for accurate identification of finger movements and forces [6-9]. Such systems would enable more intuitive and streamlined human-computer interaction (HCI) in modern applications such as surgical robotics [10], tele-operated robotics [11,12], and virtual reality environments [13,14]. However, because the sEMG signal is measured at the skin surface, the source signals produced by motor neurons not only undergo low-pass filtering by muscle, fat, and subcutaneous tissue, but are also contaminated by elusive artefacts such as those associated with electromagnetic interference and electrode displacement [15]. ...
Article
Full-text available
Surface electromyography (sEMG) is commonly used to observe the motor neuronal activity within muscle fibers. However, decoding dexterous body movements from sEMG signals is still quite challenging. In this paper, we present a high-density sEMG (HD-sEMG) signal database that comprises simultaneously recorded sEMG signals of intrinsic and extrinsic hand muscles. Specifically, twenty able-bodied participants performed 12 finger movements under two paces and three arm postures. HD-sEMG signals were recorded with a 64-channel high-density grid placed on the back of the hand and an 8-channel armband around the forearm. Also, a data-glove was used to record the finger joint angles. Synchronisation and reproducibility of the data collection from the HD-sEMG and glove sensors were ensured. The collected data samples were further employed for automated recognition of dexterous finger movements. The introduced dataset offers a new perspective to study the synergy between the intrinsic and extrinsic hand muscles during dynamic finger movements. As this dataset was collected from multiple participants, it also provides a resource for exploring generalized models for finger movement decoding.
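As a hedged illustration of the kind of automated recognition pipeline such a dataset supports, the sketch below computes per-channel RMS features over sliding windows of a multichannel sEMG array. The 72-channel layout (64-channel grid plus 8-channel armband) matches the abstract; the 1 kHz sampling rate, window length, and step size are assumptions.

```python
import numpy as np

def sliding_rms(emg, win=200, step=100):
    """Per-channel RMS over sliding windows.
    emg: (n_samples, n_channels); returns (n_windows, n_channels)."""
    starts = range(0, emg.shape[0] - win + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2, axis=0)) for s in starts])

# 64-channel grid + 8-channel armband = 72 channels; 2 s at an assumed 1 kHz.
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 1.0, (2000, 72))
features = sliding_rms(emg)
print(features.shape)  # → (19, 72)
```

Each feature row could then be paired with the synchronized glove angles for movement decoding.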
Preprint
Full-text available
The performance of medical devices that record electrophysiological activity to diagnose epilepsy and cardiac arrhythmia depends on consistently low-impedance interfaces between conductors and the skin. Clinically standard devices, wet electrodes, use hydrogels and skin abrasion to improve the interface and the recorded signal quality. These electrodes and their required preparation are challenging to self-administer, which reduces the frequency of disease monitoring and impedes in-home care. Wearable dry electrodes are more practical; however, they show higher impedance relative to wet electrodes and are costly to customize. In this work, a fabrication method is presented for producing anatomically fit dry electrodes that can be optimized for individuals and/or specific recording locations on the body. Electroless gold plating is used in combination with 3D printing to enable anatomically fit, high-performance 3D dry electrodes that do not require any skin preparation and are comfortable to wear. The performance of the example 3D dry electrodes is compared to clinically standard devices. The resulting electrodes exhibited an average electrode-skin impedance of 71.2 kΩ at 50 Hz and a DC offset of -20 mV, which is within the range achieved by clinical wet electrodes.
Conference Paper
Full-text available
We present TongueBoard, a retainer form-factor device for recognizing non-vocalized speech. TongueBoard enables absolute position tracking of the tongue by placing capacitive touch sensors on the roof of the mouth. We collect a dataset of 21 common words from four user study participants (two native speakers of American English and two non-native speakers with severe hearing loss). We train a classifier that is able to recognize the words with 91.01% accuracy for the native speakers and 77.76% accuracy for the non-native speakers in a user-dependent, offline setting. The native English speakers then participate in a user study involving operating a calculator application with 15 non-vocalized words and two tongue gestures at a desktop and with a mobile phone while walking. TongueBoard consistently maintains an information transfer rate of 3.78 bits per decision (number of choices = 17, accuracy = 97.1%) and 2.18 bits per second across stationary and mobile contexts, which is comparable to our control conditions of mouse (desktop) and touchpad (mobile) input.
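The quoted 3.78 bits per decision is consistent with the standard Wolpaw information-transfer-rate formula for N equally likely choices at accuracy P (an assumption on our part; the paper may use a different formulation):

```python
import math

def wolpaw_bits_per_decision(n, p):
    """Wolpaw ITR: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

# Number of choices = 17, accuracy = 97.1%, as reported in the abstract.
print(round(wolpaw_bits_per_decision(17, 0.971), 2))  # → 3.78
```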
Conference Paper
Full-text available
We propose a novel sensing technique called proactive sensing. Proactive sensing continually repositions a camera-based sensor as a way to improve hand pose estimation. Our core contribution is a scheme that effectively learns how to move the sensor to improve pose estimation confidence while requiring no ground truth hand poses. We demonstrate this concept using a low-cost rapid swing arm system built around the state-of-the-art commercial sensing system Leap Motion. The results from our user study show that proactive sensing helps estimate users' hand poses with higher confidence compared to both static and random sensing. We further present an online model update to improve performance for each user.
Conference Paper
Full-text available
We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.
Conference Paper
Full-text available
This paper investigates an emerging input method enabled by progress in hand tracking: input by free motion of fingers. The method is expressive, potentially fast, and usable across many settings as it does not insist on physical contact or visual feedback. Our goal is to inform the design of high-performance input methods by providing detailed analysis of the performance and anatomical characteristics of finger motion. We conducted an experiment using a commercially available sensor to report on the speed, accuracy, individuation, movement ranges, and individual differences of each finger. Findings show differences of up to 50% in movement times and provide indices quantifying the individuation of single fingers. We apply our findings to text entry by computational optimization of multi-finger gestures in mid-air. To this end, we define a novel objective function that considers performance, anatomical factors, and learnability. First investigations of one optimization case show entry rates of 22 words per minute (WPM). We conclude with a critical discussion of the limitations posed by human factors and performance characteristics of existing markerless hand trackers.
Article
Full-text available
Mechanomyography (MMG) has been extensively applied in clinical and experimental practice to examine muscle characteristics including muscle function (MF), prosthesis and/or switch control, signal processing, physiological exercise, and medical rehabilitation. Despite several existing MMG studies of MF, there has not yet been a review of these. This study aimed to determine the current status of the use of MMG in measuring the conditions of MF. Five electronic databases were extensively searched for potentially eligible studies published between 2003 and 2012. Two authors independently assessed selected articles using an MS Word-based form created for this review. Several domains (name of muscle, study type, sensor type, subject type, muscle contraction, measured parameters, frequency range, hardware and software, signal processing and statistical analysis, results, applications, authors' conclusions, and recommendations for future work) were extracted for further analysis. From a total of 2184 citations, 119 were selected for full-text evaluation, and 36 studies of MF were identified. The results provide sufficient evidence that MMG may be used for assessing muscle fatigue, strength, and balance. This review also provides reason to believe that MMG may be used to examine muscle actions during movements and for monitoring muscle activity under various types of exercise paradigms. Overall, judging from the increasing number of articles in recent years, this review finds sufficient evidence that MMG is increasingly being used in different aspects of MF. Thus, MMG may be applied as a useful tool to examine diverse conditions of muscle activity. However, the existing studies that examined MMG for MF were confined to small samples of healthy populations. Therefore, future work is needed to investigate MMG in examining MF in a sufficient number of healthy subjects and neuromuscular patients.
Conference Paper
Recent improvements in ultrasound imaging enable new opportunities for hand pose detection using wearable devices. Ultrasound imaging has remained under-explored in the HCI community despite being non-invasive, harmless and capable of imaging internal body parts, with applications including smart-watch interaction, prosthesis control and instrument tuition. In this paper, we compare the performance of different forearm mounting positions for a wearable ultrasonographic device. Location plays a fundamental role in ergonomics and performance since the anatomical features differ among positions. We also investigate the performance decrease due to cross-session position shifts and develop a technique to compensate for this misalignment. Our gesture recognition algorithm combines image processing and neural networks to classify the flexion and extension of 10 discrete hand gestures with an accuracy above 98%. Furthermore, this approach can continuously track individual digit flexion with less than 5% NRMSE, and also differentiate between digit flexion at different joints.
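The continuous-tracking error above is reported as NRMSE; a common definition normalizes the root-mean-square error by the range of the ground-truth trace. The sketch below applies that definition to a synthetic flexion-angle signal (the data and noise level are invented for illustration, not taken from the paper):

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMS error normalized by the range of the ground truth."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# Synthetic 0-90 degree flexion cycle and a noisy estimate of it.
t = np.linspace(0.0, 2.0 * np.pi, 500)
true_angle = 45.0 * (1.0 + np.sin(t))
est_angle = true_angle + np.random.default_rng(1).normal(0.0, 2.0, t.size)
print(f"NRMSE: {nrmse(true_angle, est_angle):.1%}")
```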
Conference Paper
Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. We show that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. A pilot study with unintended EMG electrode pressure variability led to exploration of the approach in greater depth. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures, with an overall 10-fold cross validation classification accuracy of 96%. We show that EMG is especially suited to sensing finger movements, that pressure is suited to sensing wrist and forearm rotations, and their combination is significantly more accurate for a range of gestures than either technique alone. The technique is well suited to existing wearable device forms such as smart watches that are already mounted on the wrist.
Conference Paper
In this paper we present our results on using electromyographic (EMG) sensor arrays for finger gesture recognition. Sensing muscle activity makes it possible to capture finger motion without placing sensors directly on the hand or fingers, and thus may be used to build unobtrusive body-worn interfaces. We use an electrode array with 192 electrodes to record a high-density EMG of the upper forearm muscles. We present in detail a baseline system for gesture recognition on our dataset, using a naive Bayes classifier to discriminate the 27 gestures. We recorded 25 sessions from 5 subjects. We report an average accuracy of 90% for the within-session scenario, showing the feasibility of the EMG approach to discriminate a large number of subtle gestures. We analyze the effect of the number of used electrodes on the recognition performance and show the benefit of using large numbers of electrodes. Cross-session recognition typically suffers from electrode position changes from session to session. We present two methods to estimate the electrode shift between sessions based on a small amount of calibration data and compare them to a baseline system with no shift compensation. The presented methods raise the accuracy from a 59% baseline to 75% after shift compensation. The dataset is publicly available.
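As a rough sketch of the baseline approach described above -- not the paper's implementation -- the following trains a minimal Gaussian naive Bayes classifier on synthetic per-channel features for a 192-electrode array. The three-gesture setup, the feature choice, and the class separability are all assumptions made for illustration.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes, a stand-in for the paper's classifier."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # Log-likelihood under per-feature independence (the "naive" assumption).
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

# Synthetic per-channel amplitude features for 3 gestures on 192 electrodes.
rng = np.random.default_rng(0)
n_ch, n_per = 192, 40
centers = rng.normal(0.0, 1.0, (3, n_ch))
X = np.vstack([c + 0.3 * rng.normal(0.0, 1.0, (n_per, n_ch)) for c in centers])
y = np.repeat([0, 1, 2], n_per)
acc = (GaussianNB().fit(X, y).predict(X) == y).mean()
print(acc)
```

Real within-session EMG data is far less cleanly separable than this synthetic example, which is why the paper's 27-gesture accuracy sits near 90% rather than at ceiling.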
Conference Paper
We present a machine learning technique to recognize gestures and estimate metric depth of hands for 3D interaction, relying only on monocular RGB video input. We aim to enable spatial interaction with small, body-worn devices where rich 3D input is desired but the usage of conventional depth sensors is prohibitive due to their power consumption and size. We propose a hybrid classification-regression approach to learn and predict a mapping of RGB colors to absolute, metric depth in real time. We also classify distinct hand gestures, allowing for a variety of 3D interactions. We demonstrate our technique with three mobile interaction scenarios and evaluate the method quantitatively and qualitatively.
Article
The Myo band turns electrical activity in the muscles of a user's forearm into gestures for controlling computers and other devices.
Article
Upper-limb prostheses are increasingly resembling the limbs they seek to replace in both form and functionality, including the design and development of multifingered hands and wrists. Hence, it becomes necessary to control the large numbers of degrees of freedom (DOFs) required for individuated finger movements, preferably using noninvasive signals. While existing control paradigms are typically used to drive single-DOF hook-based configurations, dexterous tasks such as individual finger movements require more elaborate control schemes. We show that it is possible to decode individual flexion and extension movements of each finger (ten movements) with greater than 90% accuracy in a transradial amputee using only noninvasive surface myoelectric signals. Further, a comparison of decoding accuracy between a transradial amputee and able-bodied subjects shows no statistically significant difference at the 0.05 level. These results are encouraging for the development of real-time control strategies based on the surface myoelectric signal to control dexterous prosthetic hands.