Conference Paper · PDF available

CTRL-Labs: Hand Activity Estimation and Real-time Control from Neuromuscular Signals

Abstract

CTRL-Labs has developed algorithms for determination of hand movements and forces and real-time control from neuromuscular signals. This technology enables users to create their own control schemes at run-time -- dynamically mapping neuromuscular activity to continuous (real-valued) and discrete (categorical/integer-valued) machine-input signals. To demonstrate the potential of this approach to enable novel interactions, we have built three example applications. One displays an ongoing visualization of the current posture/rotation of the hand and each finger as determined from neuromuscular signals. The other two showcase dynamic mapping of neuromuscular signals to continuous and discrete input controls for a two-player competitive target acquisition game and a single-player space shooter game.
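The run-time mapping described in the abstract can be pictured with a toy sketch (hypothetical Python; `DynamicMapper`, its nearest-template rule, and the clamped normalization are illustrative assumptions, not CTRL-Labs' implementation): a user binds a recorded neuromuscular feature vector to a discrete label on the fly, while a continuous control normalizes one channel's activation into [0, 1].

```python
import math

class DynamicMapper:
    """Hypothetical sketch of run-time mapping from neuromuscular
    features (e.g. smoothed per-channel EMG envelopes) to machine inputs."""

    def __init__(self):
        self.templates = {}  # discrete label -> bound feature vector

    def bind_discrete(self, label, template):
        # The user performs a gesture once; its feature vector becomes the template.
        self.templates[label] = template

    def continuous(self, features, channel, lo, hi):
        # Map one channel's activation into [0, 1] for a real-valued control.
        x = (features[channel] - lo) / (hi - lo)
        return min(1.0, max(0.0, x))

    def discrete(self, features):
        # Categorical control: the nearest bound template wins.
        return min(self.templates,
                   key=lambda lab: math.dist(features, self.templates[lab]))
```

In a game like the two-player target acquisition demo, a continuous channel could drive cursor speed while a bound discrete label triggers selection; the binding step is what makes the scheme user-defined at run-time.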
... The field of Neurorobotics, for example, is making advances by merging neural mechanisms with robotic systems to produce robots capable of more complex and human-like behavior (Hwu & Krichmar, 2022). Innovations such as the muscle-signal-reading wristband from CTRL-Labs, now a project of Meta Platforms (Melcer et al., 2018), exemplify breakthroughs in robot controllability and communication. Additionally, Soft Robotics introduces a new approach with robots made from flexible materials that can navigate and adapt to their surroundings more effectively than traditional robots. ...
Conference Paper
Full-text available
In the new industrial era, characterized by transformative technological shifts, robots have become integral to manufacturing. This paper delves into human-robot interaction (HRI), with particular emphasis on human-robot collaboration (HRC), through a comprehensive bibliometric analysis. As robots play pivotal roles in sectors like manufacturing, understanding collaboration dynamics becomes imperative. HRC involves humans and robots jointly pursuing shared objectives, emphasizing the development of cognitive models for enhanced performance. Simultaneously, the field of cognitive robotics aims to create intelligent robots with adaptive behaviors, integrating insights from AI, cognitive science, and robotics. This paper provides an overview of current trends, challenges, and research directions at the intersection of HRI and cognitive robotics. The analysis uses bibliometric tools to map the thematic landscape and identify key areas of focus. The findings reveal research trends, outline various challenges, and propose future directions for advancing HRC in industrial contexts, thereby offering insights into the transformative potential of this dynamic field.
... In 2019, the largest investment in the history of myoelectric control was made when Meta (i.e., Facebook) acquired Ctrl Labs (the company then owning the intellectual property of the Myo Armband) for a reported $500 million to $1 billion (Rodriguez, 2019; Statt, 2019). This came around the same time that Ctrl Labs first revealed their technology to the research community, enabling neuromuscular control (i.e., myoelectric control) (Melcer et al., 2018). Recently, after nearly 5 years of closed-door development, Ctrl Labs (now Meta) has claimed that zero-shot cross-user myoelectric control (i.e., an EMG control system that does not require any training data from the end user) is possible for a variety of applications, including (1) 1D cursor control, (2) discrete control through thumb swipes, and (3) continuous handwriting (Labs et al., 2024). ...
Article
Full-text available
Myoelectric control, the use of electromyogram (EMG) signals generated during muscle contractions to control a system or device, is a promising input modality, enabling always-available control for emerging ubiquitous computing applications. However, its widespread use has historically been limited by the need for user-specific machine learning models because of behavioural and physiological differences between users. Leveraging the publicly available 612-user EMG-EPN612 dataset, this work dispels this notion, showing that true zero-shot cross-user myoelectric control is achievable without user-specific training. By taking a discrete approach to classification (i.e., recognizing the entire dynamic gesture as a single event), a classification accuracy of 93.0% for six gestures was achieved on a set of 306 unseen users, showing that big data approaches can enable robust cross-user myoelectric control. By organizing the results into a series of mini-studies, this work provides an in-depth analysis of discrete cross-user models to answer open questions and uncover new research directions. In particular, this work explores the number of participants required to build cross-user models, the impact of transfer learning for fine-tuning these models, and the effects of under-represented end-user demographics in the training data, among other issues. Additionally, to further evaluate the performance of the developed cross-user models, a completely new dataset was created (using the same recording device) that includes known covariate factors such as cross-day use and limb-position variability. The results show that the large-data models can effectively generalize to new datasets and mitigate the impact of common confounding factors that have historically limited the adoption of EMG-based inputs.
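The "discrete" framing above (treating the entire dynamic gesture as one classified event, rather than a stream of per-window decisions) can be illustrated with a toy sketch, assuming simple per-channel RMS features and a nearest-centroid rule instead of the learned model the work actually uses:

```python
import numpy as np

def gesture_features(emg):
    """emg: (channels, samples) array spanning the whole dynamic gesture.
    One RMS value per channel -- the gesture is summarized as a single event."""
    return np.sqrt(np.mean(emg ** 2, axis=1))

def fit_centroids(gestures, labels):
    # Pool training gestures (here they could come from many users) per label.
    feats = np.array([gesture_features(g) for g in gestures])
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels.tolist())}

def classify(centroids, emg):
    # Assign the whole gesture to the nearest label centroid.
    f = gesture_features(emg)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))
```

The cross-user claim in the abstract corresponds to fitting the centroids (or, in the actual work, a deep model) on hundreds of users and classifying gestures from users who contributed no training data.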
... Wrist-worn sensing approaches have been explored to recognize hand gestures and reconstruct hand posture by utilizing Surface Electromyography (sEMG) [32], electrical impedance tomography (EIT) [64], computer vision [60], pressure [9], capacitive [41], IMU [31,56], and acoustic sensing [12,19]. ...
... By comparison, the SNR from surface EMG is typically in the range of 2-20 and varies depending on tissue thickness and the use of dry vs. gelled adhesive electrodes [67,68,91]. There are large efforts already underway to better interpret surface EMG [92][93][94]. Algorithms developed using surface EMG will likely perform better, or at minimum more reliably, when used with implantable EMG. Simultaneously, the development of more advanced prosthetic hands with added hardware features such as additional DoFs at the thumb and wrist [95,96] has the potential to reduce compensatory movements and prevent overuse injuries. ...
Article
Full-text available
Objective. Extracting signals directly from the motor system poses challenges in obtaining both high-amplitude and sustainable signals for upper-limb neuroprosthetic control. To translate neural interfaces into the clinical space, these interfaces must provide consistent signals and prosthetic performance. Approach. Previously, we have demonstrated that the Regenerative Peripheral Nerve Interface (RPNI) is a biologically stable bioamplifier of efferent motor action potentials. Here, we assessed the signal reliability of electrodes surgically implanted in RPNIs and residual innervated muscles in humans for long-term prosthetic control. Main results. RPNI signal quality, measured as signal-to-noise ratio, remained greater than 15 for up to 276 and 1054 days in participant 1 (P1) and participant 2 (P2), respectively. Electromyography from both RPNIs and residual muscles was used to decode finger and grasp movements. Though signal amplitude varied between sessions, P2 maintained real-time prosthetic performance above 94% accuracy for 604 days without recalibration. Additionally, P2 completed a real-world multi-sequence coffee task with 99% accuracy for 611 days without recalibration. Significance. This study demonstrates the potential of RPNIs and implanted EMG electrodes as a long-term interface for enhanced prosthetic control.
... Muscles are natural amplifiers of the neural drive (Ruff et al., 2010). Thus, with advanced signal analysis methods, e.g., motor unit decomposition and machine learning, muscle signals can be used as a control source in various MMIs, e.g., prostheses (Naeem et al., 2012; Bergmeister et al., 2017), wheelchairs (Jang et al., 2016), exoskeletons (Singh et al., 2012; Kawase et al., 2017; Lyu et al., 2019), and human-robot collaboration (Melcer et al., 2018). Compared with brain-machine interfaces, an MMI can obtain cleaner motor- and intention-related signals in terms of signal-to-noise ratio (SNR) (Grush, 2016). ...
Article
Full-text available
Muscles are the actuators of all human actions, from daily work and life to communication and expression of emotions. Myography records the signals from muscle activities as an interface between machine hardware and human wetware, granting direct and natural control of our electronic peripherals. Despite significant recent progress, conventional myographic sensors remain incapable of achieving the desired high-resolution, non-invasive recording. This paper presents a critical review of state-of-the-art wearable sensing technologies that measure deeper muscle activity with high spatial resolution, so-called super-resolution. It classifies these myographic sensors according to the different signal types (i.e., biomechanical, biochemical, and bioelectrical) they record during muscle activity. By describing the characteristics and current developments, with advantages and limitations, of each myographic sensor, their capabilities are assessed as super-resolution myography techniques, including: (i) non-invasive and high-density designs of the sensing units and their vulnerability to interference, and (ii) the limit of detection needed to register the activity of deep muscles. Finally, the paper concludes with new opportunities in this fast-growing super-resolution myography field and proposes promising future research directions. These advances will enable next-generation muscle-machine interfaces to meet practical, real-life design needs for healthcare technologies, assistive/rehabilitation robotics, and human augmentation with extended reality.
... Another method to detect finger movement based on neuromuscular signals is electromyography (EMG) (see, e.g., Melcer et al. 2018). Despite the numerous alternatives that have been introduced, QWERTY has not been replaced. ...
Article
Full-text available
According to the Collingridge dilemma, technology is easy to control while its consequences are not yet manifest; once they appear, the technology is difficult to control. This article examines the development of keyboard layout design from the perspective of the Collingridge dilemma. For this purpose, unlike related studies that focus on a limited period of time, the history of keyboard development is explored from the invention of the typewriter and QWERTY to brain-computer interfaces. Today, the mechanical problem of the typewriter for which QWERTY was designed no longer exists. On the other hand, better layouts have since been designed for various situations that can be easily implemented, especially on virtual keyboards, yet QWERTY has not been replaced. The present study shows how various factors, acting as heterogeneous engineering, have shaped QWERTY, prevented the prevalence of superior layouts, and led to lock-in. Then, unlike other studies related to the Collingridge dilemma, which provide a qualitative description of it, a quantitative description is proposed that helps to better understand the Collingridge dilemma and lock-in. Finally, the case study of the QWERTY keyboard illustrates that the theory of human-technology co-construction can provide a more comprehensive explanation of technology development, while the Collingridge dilemma can better capture some details of technology development.
Article
Full-text available
In the relentless pursuit of precision medicine, the intersection of cutting-edge technology and healthcare has given rise to a transformative era. At the forefront of this revolution stands the burgeoning field of wearable and implantable biosensors, promising a paradigm shift in how we monitor, analyze, and tailor medical interventions. As these miniature marvels seamlessly integrate with the human body, they weave a tapestry of real-time health data, offering unprecedented insights into individual physiological landscapes. This article embarks on a journey into the realm of wearable and implantable biosensors, where the convergence of biology and technology heralds a new dawn in personalized healthcare. Here, we explore the intricate web of innovations, challenges, and the immense potential these bioelectronic sentinels hold in sculpting the future of precision medicine.
Article
Full-text available
The journey of a prosthetic user is characterized by the opportunities and the limitations of a device that should enable activities of daily living (ADL). In particular, experiencing a bionic hand as a functional (and, advantageously, embodied) limb constitutes the premise for promoting practice in using the device, mitigating the risk of its abandonment. To achieve such a result, different aspects need to be considered for making the artificial limb an effective solution for accomplishing ADL. From this perspective, this review aims to present current issues and envision upcoming breakthroughs in upper-limb prosthetic devices. We first define the sources of input and feedback involved in system control (at user level and device level), alongside the related algorithms used in signal analysis. Moreover, the paper focuses on the user-centered design challenges and strategies that guide the implementation of novel solutions in this area in terms of technology acceptance, embodiment, and, in general, human-machine integration based on co-adaptive processes. We provide readers (belonging to the target communities of researchers, designers, developers, clinicians, industrial stakeholders, and end-users) with an overview of the state of the art and potential innovations in bionic hand features, hopefully promoting interdisciplinary efforts to solve current issues of upper-limb prostheses. The integration of different perspectives should be the premise for a transdisciplinary intertwining leading to a truly holistic comprehension and improvement of bionic hand design. Overall, this paper aims to move the boundaries of prosthetic innovation beyond the development of a tool and toward the engineering of human-centered artificial limbs.
Conference Paper
Full-text available
We propose a novel sensing technique called proactive sensing. Proactive sensing continually repositions a camera-based sensor as a way to improve hand pose estimation. Our core contribution is a scheme that effectively learns how to move the sensor to improve pose estimation confidence while requiring no ground truth hand poses. We demonstrate this concept using a low-cost rapid swing arm system built around the state-of-the-art commercial sensing system Leap Motion. The results from our user study show that proactive sensing helps estimate users' hand poses with higher confidence compared to both static and random sensing. We further present an online model update to improve performance for each user.
Conference Paper
Full-text available
We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.
Conference Paper
Full-text available
This paper investigates an emerging input method enabled by progress in hand tracking: input by free motion of fingers. The method is expressive, potentially fast, and usable across many settings as it does not insist on physical contact or visual feedback. Our goal is to inform the design of high-performance input methods by providing detailed analysis of the performance and anatomical characteristics of finger motion. We conducted an experiment using a commercially available sensor to report on the speed, accuracy, individuation, movement ranges, and individual differences of each finger. Findings show differences of up to 50% in movement times and provide indices quantifying the individuation of single fingers. We apply our findings to text entry by computational optimization of multi-finger gestures in mid-air. To this end, we define a novel objective function that considers performance, anatomical factors, and learnability. First investigations of one optimization case show entry rates of 22 words per minute (WPM). We conclude with a critical discussion of the limitations posed by human factors and performance characteristics of existing markerless hand trackers.
Article
Full-text available
Mechanomyography (MMG) has been extensively applied in clinical and experimental practice to examine muscle characteristics including muscle function (MF), prosthesis and/or switch control, signal processing, physiological exercise, and medical rehabilitation. Despite several existing MMG studies of MF, there has not yet been a review of these. This study aimed to determine the current status of the use of MMG in measuring the conditions of MF. Five electronic databases were extensively searched for potentially eligible studies published between 2003 and 2012. Two authors independently assessed selected articles using an MS Word-based form created for this review. Several domains (name of muscle, study type, sensor type, subject type, muscle contraction, measured parameters, frequency range, hardware and software, signal processing and statistical analysis, results, applications, authors' conclusions, and recommendations for future work) were extracted for further analysis. From a total of 2184 citations, 119 were selected for full-text evaluation and 36 studies of MF were identified. The results provide sufficient evidence that MMG may be used for assessing muscle fatigue, strength, and balance. This review also provides reason to believe that MMG may be used to examine muscle actions during movements and to monitor muscle activities under various types of exercise paradigms. Overall, judging from the increasing number of articles in recent years, this review finds sufficient evidence that MMG is increasingly being used in different aspects of MF. Thus, MMG may be applied as a useful tool to examine diverse conditions of muscle activity. However, the existing studies that examined MMG for MF were confined to small samples of healthy participants. Therefore, future work is needed to investigate MMG in examining MF in a sufficient number of healthy subjects and neuromuscular patients.
Conference Paper
Recent improvements in ultrasound imaging enable new opportunities for hand pose detection using wearable devices. Ultrasound imaging has remained under-explored in the HCI community despite being non-invasive, harmless and capable of imaging internal body parts, with applications including smart-watch interaction, prosthesis control and instrument tuition. In this paper, we compare the performance of different forearm mounting positions for a wearable ultrasonographic device. Location plays a fundamental role in ergonomics and performance since the anatomical features differ among positions. We also investigate the performance decrease due to cross-session position shifts and develop a technique to compensate for this misalignment. Our gesture recognition algorithm combines image processing and neural networks to classify the flexion and extension of 10 discrete hand gestures with an accuracy above 98%. Furthermore, this approach can continuously track individual digit flexion with less than 5% NRMSE, and also differentiate between digit flexion at different joints.
Conference Paper
Practical wearable gesture tracking requires that sensors align with existing ergonomic device forms. We show that combining EMG and pressure data sensed only at the wrist can support accurate classification of hand gestures. A pilot study with unintended EMG electrode pressure variability led to exploration of the approach in greater depth. The EMPress technique senses both finger movements and rotations around the wrist and forearm, covering a wide range of gestures, with an overall 10-fold cross validation classification accuracy of 96%. We show that EMG is especially suited to sensing finger movements, that pressure is suited to sensing wrist and forearm rotations, and their combination is significantly more accurate for a range of gestures than either technique alone. The technique is well suited to existing wearable device forms such as smart watches that are already mounted on the wrist.
Conference Paper
In this paper we present our results on using electromyographic (EMG) sensor arrays for finger gesture recognition. Sensing muscle activity allows capturing finger motion without placing sensors directly on the hand or fingers, and thus may be used to build unobtrusive body-worn interfaces. We use an electrode array with 192 electrodes to record high-density EMG of the upper forearm muscles. We present in detail a baseline system for gesture recognition on our dataset, using a naive Bayes classifier to discriminate the 27 gestures. We recorded 25 sessions from 5 subjects. We report an average accuracy of 90% for the within-session scenario, showing the feasibility of the EMG approach for discriminating a large number of subtle gestures. We analyze the effect of the number of electrodes used on recognition performance and show the benefit of high electrode counts. Cross-session recognition typically suffers from electrode position changes from session to session. We present two methods that estimate the electrode shift between sessions from a small amount of calibration data and compare them to a baseline system with no shift compensation. The presented methods raise the accuracy from a 59% baseline to 75% after shift compensation. The dataset is publicly available.
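The baseline's classifier family can be sketched from scratch in a few lines (a minimal Gaussian naive Bayes; this is an illustration of the technique, not the paper's implementation, and real inputs would be feature vectors such as per-electrode amplitudes rather than raw EMG):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class, per-feature independent
    Gaussians plus a log-prior, argmax over classes."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        # Small floor on the variance avoids division by zero.
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class's diagonal Gaussian.
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]
```

With 192 channels the independence assumption is crude, which is one reason such a model serves as a baseline rather than a final system.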
Conference Paper
We present a machine learning technique to recognize gestures and estimate metric depth of hands for 3D interaction, relying only on monocular RGB video input. We aim to enable spatial interaction with small, body-worn devices where rich 3D input is desired but the usage of conventional depth sensors is prohibitive due to their power consumption and size. We propose a hybrid classification-regression approach to learn and predict a mapping of RGB colors to absolute, metric depth in real time. We also classify distinct hand gestures, allowing for a variety of 3D interactions. We demonstrate our technique with three mobile interaction scenarios and evaluate the method quantitatively and qualitatively.
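The hybrid classification-regression idea can be sketched as follows (hypothetical Python with NumPy; the actual system learns the color-to-depth mapping with a trained model, so the coarse-bin/linear-residual structure here is purely an illustrative assumption): first classify a color into a coarse depth bin, then regress metric depth with a per-bin linear model.

```python
import numpy as np

def fit_hybrid(colors, depths, n_bins=4):
    """Classify RGB samples into coarse depth bins, then fit a per-bin
    linear regressor (with bias) for metric depth."""
    edges = np.quantile(depths, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, depths, side="right") - 1, 0, n_bins - 1)
    models = {}
    for b in range(n_bins):
        X, y = colors[bins == b], depths[bins == b]
        A = np.c_[X, np.ones(len(X))]                 # append bias column
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        models[b] = w
    centroids = np.array([colors[bins == b].mean(axis=0) for b in range(n_bins)])
    return centroids, models

def predict_depth(centroids, models, color):
    b = int(np.argmin(np.linalg.norm(centroids - color, axis=1)))  # classify bin
    return float(np.r_[color, 1.0] @ models[b])                    # regress depth
```

The classification step keeps each regressor local, which is the appeal of the hybrid scheme when a single global regression from color to depth would be too coarse.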
Article
The Myo band turns electrical activity in the muscles of a user's forearm into gestures for controlling computers and other devices
Article
Upper limb prostheses are increasingly resembling the limbs they seek to replace in both form and functionality, including the design and development of multifingered hands and wrists. Hence, it becomes necessary to control large numbers of degrees of freedom (DOFs), required for individuated finger movements, preferably using noninvasive signals. While existing control paradigms are typically used to drive single-DOF hook-based configurations, dexterous tasks such as individual finger movements require more elaborate control schemes. We show that it is possible to decode individual flexion and extension movements of each finger (ten movements) with greater than 90% accuracy in a transradial amputee using only noninvasive surface myoelectric signals. Further, comparison of decoding accuracy between a transradial amputee and able-bodied subjects shows no statistically significant difference (p > 0.05) between these subjects. These results are encouraging for the development of real-time control strategies based on the surface myoelectric signal to control dexterous prosthetic hands.