Article

What's Next For Wearable Sensing?

Authors:
  • HRV4Training

Abstract

The commercial explosion of wearable sensing devices in the early 2010s forever changed the landscape of wearable computing. In a few short years, wrist-mounted devices such as wristbands and smart watches dominated the market.1 In 2017, this department featured an article titled “What will we wear after smartphones?” highlighting potential pathways for wearable computing as the early enthusiasm for commercial wearable sensors began to wane and new form factors such as on-skin devices gained traction in the research community.2 In the past few years, we have witnessed substantial changes in many of the domains discussed in that article. Sensor validation and comparison against state-of-the-art or reference systems has become of paramount importance in a saturated wearables market. Similarly, FDA approval or CE marking of smartphone- or sensor-based medical applications is now a priority for many of the players targeting healthcare applications. For traditional form factors such as wristbands and other accessories, large improvements have also been made in hardware, thanks to further miniaturization and improved design (see Figure 1).

Figure 1. Phone cameras, watches, and rings have become widespread sensing modalities for accurate monitoring of biometric data.

Figure 2. Graphs show mean deviation from baseline (lines) with 95% CIs (shaded areas) for daily resting heart rate (RHR), sleep quantity, and step count during −7 to 133 days after symptom onset for COVID-19–positive versus COVID-19–negative participants (panels (a), (c), and (e)) and for COVID-19–positive participants grouped by mean change in RHR during days 28 to 56 after symptom onset (panels (b), (d), and (f)). Acquired with permission from Radin et al.8


Article
Full-text available
Wearable health devices (WHDs) are rapidly gaining ground in the biomedical field due to their ability to monitor the individual physiological state in everyday life scenarios, while providing a comfortable wear experience. This study introduces a novel wearable biomedical device capable of synchronously acquiring electrocardiographic (ECG), photoplethysmographic (PPG), galvanic skin response (GSR) and motion signals. The device has been specifically designed to be worn on a finger, enabling the acquisition of all biosignals directly on the fingertips and offering the significant advantage of being comfortable and easy for users to employ. The simultaneous acquisition of different biosignals allows the extraction of important physiological indices, such as heart rate (HR) and its variability (HRV), pulse arrival time (PAT), GSR level, blood oxygenation level (SpO2), and respiratory rate, as well as motion detection, enabling the assessment of physiological states together with the detection of potential physical and mental stress conditions. Preliminary measurements have been conducted on healthy subjects using a measurement protocol consisting of resting states (i.e., SUPINE and SIT) alternated with physiological stress conditions (i.e., STAND and WALK). Statistical analyses have been carried out among the distributions of the physiological indices extracted in the time, frequency, and information domains, evaluated under different physiological conditions. The results of our analyses demonstrate the capability of the device to detect changes between rest and stress conditions, thereby encouraging its use for assessing individuals’ physiological state. Furthermore, the possibility of performing synchronous acquisitions of PPG and ECG signals has allowed us to compare HRV and pulse rate variability (PRV) indices, so as to corroborate the reliability of PRV analysis under stationary physical conditions. Finally, the study confirms the well-known limitations of wearable devices during physical activity, suggesting the use of motion artifact correction algorithms.
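As a point of reference for the HRV/PRV comparison described above, the following is a minimal sketch (not the authors' code) of a common time-domain index, RMSSD, computed from interbeat intervals; the same function can be applied to ECG-derived RR intervals (HRV) or PPG-derived pulse-to-pulse intervals (PRV). All values below are hypothetical.

import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences of interbeat intervals (ms)."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diffs = np.diff(ibi)  # successive differences between adjacent intervals
    return float(np.sqrt(np.mean(diffs ** 2)))

# Example: compare HRV (from ECG) and PRV (from PPG) on synchronized segments.
rr_ecg = [812, 798, 805, 790, 820, 815]   # hypothetical RR intervals, ms
pp_ppg = [810, 800, 803, 792, 818, 817]   # hypothetical pulse-to-pulse intervals, ms
print(rmssd(rr_ecg), rmssd(pp_ppg))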
Article
Respiratory rate (RR) is an important vital sign to monitor outside the clinic, particularly during physiological challenges such as exercise; unfortunately, ambulatory measurement devices for RR are typically obtrusive and inaccurate. The objective of this work is to allow for accurate and robust RR monitoring with a convenient and small chest-worn wearable patch during walking and exercise recovery periods. Methods: To estimate RR from the wearable patch, respiratory signals were first extracted from electrocardiogram (ECG), photoplethysmogram (PPG), and seismocardiogram (SCG) signals. The optimal channel in each signal was adaptively selected using the respiratory quality index based on fast Fourier transform (RQI-FFT). Next, we proposed modality attentive (MA) fusion—which merged spectral–temporal information from different modalities—to address motion artifacts during walking. The fused output was subsequently denoised using a U-Net-based deep learning model and used for final estimation. A dataset of N = 17 subjects was collected to validate the RR estimated during three types of activities: stationary activities, walking (including 6-minute walk test), and running. Major results: Combining and denoising ECG and PPG data using MA fusion and the U-Net achieved the lowest mean absolute error (MAE) (2.21 breaths per minute [brpm]) during walking. After rejecting a small portion of the data (coverage = 84.43%) using RQI-FFT, this error was further reduced to 1.59 brpm, which was comparable to the state-of-the-art methods. Conclusion: Applying adaptive channel selection, MA fusion, and U-Net denoising achieved accurate RR estimation from a small chest-worn wearable patch. Significance: This work can enable cardiopulmonary monitoring applications in less controlled settings.
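To make the adaptive channel-selection step concrete, here is a minimal sketch (an assumed formulation, not the paper's implementation) of an FFT-based respiratory quality index: each candidate respiratory channel is scored by how much of its in-band spectral power is concentrated around its dominant peak, and the best-scoring channel is selected. The band limits, peak half-width, and function names are assumptions.

import numpy as np

def rqi_fft(signal, fs, band=(0.1, 0.7), peak_halfwidth=0.05):
    """Ratio of power near the dominant respiratory peak to total in-band power."""
    x = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not np.any(in_band):
        return 0.0
    f_peak = freqs[in_band][np.argmax(power[in_band])]          # dominant respiratory frequency
    near_peak = in_band & (np.abs(freqs - f_peak) <= peak_halfwidth)
    return float(power[near_peak].sum() / power[in_band].sum())

def select_channel(channels, fs):
    """Return the name of the candidate respiratory channel with the highest quality index."""
    return max(channels, key=lambda name: rqi_fft(channels[name], fs))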
Article
The COVID-19 pandemic has already ravaged the world for two years and infected more than 600 million people, having an irreparable impact on the health, economic, and political dimensions of human society. Many papers have proposed Internet of Things (IoT) solutions to stop the spread of COVID-19, such as virus tracking, infection control, medical treatment, and health management. Due to the high popularity of IoT devices, many studies have systematically investigated related technologies, evaluating the development and application of IoT solutions from different aspects such as electronic security, personal privacy, disaster prediction, and artificial intelligence. However, the usefulness of IoT in combating COVID-19 has not been thoroughly evaluated. Therefore, this paper systematically analyzes high-quality articles published in well-known international journals and conferences from 2019 to 2022 and divides them, according to the application field of each solution, into technical developments for prevention and for treatment, cross-comparing the research results. It provides a comprehensive assessment of the maturity of current IoT solutions using the standard Technology Readiness Level (TRL) scale. Finally, through a technical efficacy matrix, patent data are used to analyze the application status and development direction of IoT technology internationally. Our results reveal the maturity and limitations of existing technologies to fight COVID-19. Possible future research directions are also discussed.
Article
Full-text available
This cohort study examines the duration and variation of recovery among COVID-19–positive versus COVID-19–negative individuals.
Article
Full-text available
Consumer-grade sleep trackers represent a promising tool for large-scale studies and health management. However, the potential and limitations of these devices remain less well quantified. Addressing this issue, we aim to provide a comprehensive analysis of the impact of accelerometer, autonomic nervous system (ANS)-mediated peripheral signals, and circadian features for sleep stage detection on a large dataset. Four hundred and forty nights from 106 individuals, for a total of 3444 h of combined polysomnography (PSG) and physiological data from a wearable ring, were acquired. Features were extracted to investigate the relative impact of different data streams on 2-stage (sleep and wake) and 4-stage classification accuracy (light NREM sleep, deep NREM sleep, REM sleep, and wake). Machine learning models were evaluated using 5-fold cross-validation and a standardized framework for sleep stage classification assessment. Accuracy for 2-stage detection (sleep, wake) was 94% for a simple accelerometer-based model and 96% for a full model that included ANS-derived and circadian features. Accuracy for 4-stage detection was 57% for the accelerometer-based model and 79% when including ANS-derived and circadian features. Combining the compact form factor of a finger ring, multidimensional biometric sensory streams, and machine learning, high-accuracy wake–sleep detection and sleep staging can be accomplished.
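To illustrate the kind of comparison described above, the following is a minimal sketch (illustrative only, not the study's pipeline): 5-fold cross-validation of a classifier on per-epoch features, comparing an accelerometer-only feature set with a full set that adds ANS-derived and circadian features. The feature counts, random data, and choice of model are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs = 1000
X_accel = rng.normal(size=(n_epochs, 5))              # accelerometer features per 30-s epoch
X_full = np.hstack([X_accel,
                    rng.normal(size=(n_epochs, 6)),   # ANS-derived features (HR, HRV, ...)
                    rng.normal(size=(n_epochs, 2))])  # circadian features (e.g., time since sleep onset)
y = rng.integers(0, 4, size=n_epochs)                 # 4 stages: wake, light, deep, REM (toy labels)

for name, X in [("accelerometer-only", X_accel), ("full model", X_full)]:
    scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
    print(name, scores.mean())

In practice, folds would be split by participant rather than by epoch so that data from the same night never appear in both training and test sets.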
Article
Full-text available
Introduction Diabetes prevalence continues to grow and there remains a significant diagnostic gap in one-third of the US population that has pre-diabetes. Innovative, practical strategies to improve monitoring of glycemic health are desperately needed. In this proof-of-concept study, we explore the relationship between non-invasive wearables and glycemic metrics and demonstrate the feasibility of using non-invasive wearables to estimate glycemic metrics, including hemoglobin A1c (HbA1c) and glucose variability metrics. Research design and methods We recorded over 25 000 measurements from a continuous glucose monitor (CGM) with simultaneous wrist-worn wearable (skin temperature, electrodermal activity, heart rate, and accelerometry sensors) data over 8–10 days in 16 participants with normal glycemic state and pre-diabetes (HbA1c 5.2%–6.4%). We used data from the wearable to develop machine learning models to predict HbA1c recorded on day 0 and glucose variability calculated from the CGM. We tested the accuracy of the HbA1c model on a retrospective, external validation cohort of 10 additional participants and compared results against CGM-based HbA1c estimation models. Results A total of 250 days of data from 26 participants were collected. Out of the 27 models of glucose variability metrics that we developed using non-invasive wearables, 11 of the models achieved high accuracy (<10% mean absolute percentage error, MAPE). Our HbA1c estimation model using non-invasive wearables data achieved a MAPE of 5.1% on an external validation cohort. The ranking of wearable sensors’ importance in estimating HbA1c was skin temperature (33%), electrodermal activity (28%), accelerometry (25%), and heart rate (14%). Conclusions This study demonstrates the feasibility of using non-invasive wearables to estimate glucose variability metrics and HbA1c for glycemic monitoring and investigates the relationship between non-invasive wearables and the glycemic metrics of glucose variability and HbA1c. The methods used in this study can be used to inform future studies confirming the results of this proof-of-concept study.
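For readers unfamiliar with the accuracy metric used above, here is a minimal sketch (not the study's code) of mean absolute percentage error (MAPE) for predicted versus reference values, such as wearable-estimated versus laboratory-measured HbA1c. The numbers in the example are hypothetical.

import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0)

# Hypothetical example: laboratory HbA1c (%) vs. values estimated from wearable data.
print(mape([5.4, 6.1, 5.8], [5.6, 5.9, 6.0]))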
Article
Full-text available
Advancements in e-textiles, sensors, and actuators have propelled wearable technologies toward widespread market use; however, the physical interface between these technologies and the human body has remained a functional challenge. Prior research has found that system–body interface challenges produce wearing variability, or variation in system placement, orientation, and tightness in relation to a body both between use trials and between users, resulting in large variation and deterioration in system performance. We break down the mechanics of common system–body interface challenges through a summary of design principles critical to any system interfacing with the human body. Additionally, we present an active interface based on shape memory materials that dimensionally adapts to its user’s dimensions. An experimental investigation of these active system interfaces considers the impact of design variables often overlooked in the design process. Recommendations are provided to optimize interfaces for the requirements of a given wearable technology. Additionally, we illuminate methods to reduce wearing variability for a range of users to produce consistent system–body interaction across a user population. Through these active interfaces, we advance a broad range of wearable technologies, including wearable sensing, motion tracking, haptics, and wearable robotic devices.
Article
Full-text available
Background Monitoring heart rate variability (HRV) as an indicator of daily variations in the functioning of the autonomic nervous system (ANS) may assist in individualizing endurance training to produce more pronounced physiological adaptations in performance. Aims To systematically perform a meta-analysis of the scientific literature to determine whether the outcomes of endurance training based on HRV are more favourable than those of predefined training. Methods PubMed and Web of Science were searched systematically in March of 2020 using keywords related to endurance, the ANS, and training. To compare the outcomes of HRV-guided and predefined training, Hedges’ g effect size and associated 95% confidence intervals were calculated. Results A total of 8 studies (198 participants) were identified encompassing 9 interventions involving a variety of approaches. Compared to predefined training, most HRV-guided interventions included fewer moderate- and/or high-intensity training sessions. Fixed-effects meta-analysis revealed a significant medium-sized positive effect of HRV-guided training on submaximal physiological parameters (g = 0.296, 95% CI 0.031 to 0.562, p = 0.028), but its effects on performance (g = 0.079, 95% CI −0.050 to 0.393, p = 0.597) and V̇O2peak (g = 0.171, 95% CI −0.213 to 0.371, p = 0.130) are small and not statistically significant. Moreover, with regard to performance, HRV-guided training is associated with fewer non-responders and more positive responders. Conclusions In comparison to predefined training, HRV-guided endurance training has a medium-sized effect on submaximal physiological parameters, but only a small and non-significant influence on performance and V̇O2peak. With regard to performance, there are fewer non-responders with HRV-based training.
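The pooled effect sizes reported above are Hedges' g values. As a reminder of what that statistic measures, here is a minimal sketch of the standard two-group formulation (a textbook version, not the review's analysis code): a standardized mean difference scaled by the pooled standard deviation, with a small-sample correction factor.

import numpy as np

def hedges_g(x_treat, x_ctrl):
    """Hedges' g for two independent samples (e.g., HRV-guided vs. predefined training)."""
    x1, x2 = np.asarray(x_treat, float), np.asarray(x_ctrl, float)
    n1, n2 = len(x1), len(x2)
    # Pooled standard deviation across the two groups
    s_pooled = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))
    d = (x1.mean() - x2.mean()) / s_pooled          # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)         # small-sample correction factor
    return float(j * d)

# Hypothetical example: change scores in a submaximal physiological parameter.
print(hedges_g([2.1, 2.4, 1.9, 2.6], [1.8, 2.0, 1.7, 2.2]))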
Article
Full-text available
Traditional screening for COVID-19 typically includes survey questions about symptoms and travel history, as well as temperature measurements. Here, we explore whether personal sensor data collected over time may help identify subtle changes indicating an infection, such as in patients with COVID-19. We have developed a smartphone app that collects smartwatch and activity tracker data, as well as self-reported symptoms and diagnostic testing results, from individuals in the United States, and have assessed whether symptom and sensor data can differentiate COVID-19 positive versus negative cases in symptomatic individuals. We enrolled 30,529 participants between 25 March and 7 June 2020, of whom 3,811 reported symptoms. Of these symptomatic individuals, 54 reported testing positive and 279 negative for COVID-19. We found that a combination of symptom and sensor data resulted in an area under the curve (AUC) of 0.80 (interquartile range (IQR): 0.73–0.86) for discriminating between symptomatic individuals who were positive or negative for COVID-19, a performance that is significantly better (P < 0.01) than a model¹ that considers symptoms alone (AUC = 0.71; IQR: 0.63–0.79). Such continuous, passively captured data may be complementary to virus testing, which is generally a one-off or infrequent sampling assay.
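The comparison above is expressed as ROC AUC for symptoms-only versus symptoms-plus-sensor models. The following is a minimal sketch of that kind of comparison under stated assumptions (toy data, assumed feature names, and a logistic-regression model), not the study's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
symptoms = rng.integers(0, 2, size=(n, 4))   # e.g., fever, cough, fatigue, anosmia (toy)
sensors = rng.normal(size=(n, 3))            # e.g., changes in resting HR, sleep, step count (toy)
y = rng.integers(0, 2, size=n)               # COVID-19 test result (toy labels)

for name, X in [("symptoms only", symptoms),
                ("symptoms + sensors", np.hstack([symptoms, sensors]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))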
Conference Paper
Full-text available
In this work, we use data acquired longitudinally, in free-living conditions, to provide accurate estimates of running performance. In particular, we used the HRV4Training app and integrated APIs (e.g., Strava and TrainingPeaks) to acquire different sets of parameters, either via user input, morning measurements of resting physiology, or running workouts, to estimate 10 km running time. Our unique dataset comprises data on 2113 individuals, from world-class triathletes to individuals just getting started with running, and it spans over 2 years. Analyzed predictors of running performance include anthropometrics, resting heart rate (HR) and heart rate variability (HRV), training physiology (heart rate during exercise), training volume, training patterns (training intensity distribution over multiple workouts, or training polarization), and previous performance. We build multiple linear regression models and highlight the relative impact of different predictors as well as trade-offs between the amount of data required for feature extraction and the models’ accuracy in estimating running performance (10 km time). Cross-validated root mean square error (RMSE) for 10 km running time estimation was 2.6 minutes (4% mean absolute error, MAE; R^2 = 0.87), an improvement of 58% with respect to estimation models using anthropometrics data only as predictors. Finally, we provide insights on the relationship between training and performance, including further evidence of the importance of training volume and a polarized training approach to improve performance.
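As a rough illustration of the modeling approach described above, here is a minimal sketch (an assumed setup, not the authors' code) of a cross-validated multiple linear regression estimating 10 km running time from predictor groups such as anthropometrics, resting physiology, and training volume. The feature names, coefficients, and data are placeholders.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([
    rng.normal(35, 10, n),   # age (years)
    rng.normal(70, 10, n),   # body mass (kg)
    rng.normal(55, 8, n),    # resting HR (bpm)
    rng.normal(60, 20, n),   # rMSSD (ms), a resting HRV index
    rng.normal(40, 15, n),   # weekly training volume (km)
])
# Toy 10 km time (minutes) loosely tied to training volume and resting HR
y = 60 - 0.2 * X[:, 4] + 0.1 * X[:, 2] + rng.normal(0, 3, n)

rmse = np.sqrt(-cross_val_score(LinearRegression(), X, y,
                                scoring="neg_mean_squared_error", cv=5))
print("cross-validated RMSE (minutes):", rmse.mean())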
Article
With wearable computing research recently passing the 20-year mark, this survey looks back at how the field developed and explores where it’s headed. According to the authors, wearable computing is entering its most exciting phase yet, as it transitions from demonstrations to the creation of sustained markets and industries, which in turn should drive future research and innovation.
Conference Paper
The field of wearable technology has undergone significant growth in the last few years. Along with this growth come trends in consumer application development, which may be indicative of structural influences on the design and development of products as well as social influences among consumers. Here, we present a two-phase survey of the application space of wearable technology, by assessing applications observed in research or industrial activities from two time periods: initially with a historical focus (mapping the space up to 2014) and subsequently through a 1-year snapshot by mapping developments between 2014 and 2015. We evaluate distribution in application instances over the body surface, over a range of product types, and within a set of application categories, as well as differences in product price and gender target. The implications of changes in product focus areas over these two time frames are discussed.