Article

SOSW: Stress Sensing With Off-the-Shelf Smartwatches in the Wild

Abstract

Recent advances in wearable technology have led to the development of various methods for stress sensing in both controlled laboratory and real-life environments. However, existing methods often rely on specialized or expensive sensors that may not be easily accessible to the general population. In this study, we investigate the feasibility of using off-the-shelf smartwatches for stress detection in real-life scenarios. To achieve this, we propose SOSW, a comprehensive methodology for robust sensor data processing that considers both physiological and contextual data. SOSW employs a two-layer machine learning (ML) architecture. The first-layer ML model is trained and validated using carefully collected data under controlled laboratory conditions. The second-layer ML model is trained and validated using data collected in real-life settings. We conducted evaluations with 26 and 18 participants in controlled laboratory and real-life conditions, respectively. The results indicate that our methodology can successfully detect stressful events with an F1 score of up to 0.84 in laboratory conditions and 0.71 in real-life scenarios using off-the-shelf smartwatches. These results are comparable to those achieved by state-of-the-art methods that rely on dedicated wearables.
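As an illustration of the two-layer arrangement described in the abstract, the sketch below shows how such a cascade could be wired with scikit-learn; the estimators, the feature layout, and the way the first layer's probability is passed to the second layer are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a two-layer stress classifier: layer 1 is trained on
# laboratory data, and its output probability becomes one input (alongside
# contextual features) to layer 2, which is trained on real-life data.
# Estimator choices and feature layout are assumptions, not the paper's design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: physiological features (e.g., HR/HRV statistics) and labels.
X_lab_physio, y_lab = rng.normal(size=(500, 8)), rng.integers(0, 2, 500)
X_field_physio, y_field = rng.normal(size=(300, 8)), rng.integers(0, 2, 300)
X_field_context = rng.normal(size=(300, 4))          # e.g., activity level, time of day

# Layer 1: trained and validated on controlled laboratory data only.
layer1 = RandomForestClassifier(n_estimators=200, random_state=0)
layer1.fit(X_lab_physio, y_lab)

# Layer 2: consumes layer 1's stress probability plus contextual features,
# and is trained on data collected in the wild.
p_stress_field = layer1.predict_proba(X_field_physio)[:, 1].reshape(-1, 1)
X_layer2 = np.hstack([p_stress_field, X_field_context])
layer2 = LogisticRegression(max_iter=1000)
layer2.fit(X_layer2, y_field)

# Inference on a new real-life window follows the same two-step path.
new_physio, new_context = rng.normal(size=(1, 8)), rng.normal(size=(1, 4))
p1 = layer1.predict_proba(new_physio)[:, 1].reshape(-1, 1)
print(layer2.predict(np.hstack([p1, new_context])))
```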


Article
Recent efforts to predict stress in the wild using mobile technology have increased; however, the field lacks a common pipeline for assessing the impact of factors such as label encoding and feature selection on prediction performance. This gap hinders replication, especially given the lack of common guidelines for reporting results and the privacy concerns that limit access to open code and datasets. Our study introduces a common pipeline based on a comprehensive literature review and offers thorough evaluations of key pipeline factors, promoting independent reproducibility. Our systematic evaluation aimed to validate the findings of previous studies. We identified overfitting and distribution shifts across users as the major reasons for performance limitations. We used K-EmoPhone, a public dataset, for experimentation and a new public dataset, DeepStress, to validate the findings. Furthermore, our results suggest that researchers should carefully consider temporal order in cross-validation settings. Additionally, self-report labels for target users are key to enhancing performance in user-independent scenarios.
Article
Full-text available
Wearable medical technology has become increasingly popular in recent years. One function of wearable health devices is stress detection, which relies on sensor inputs to determine a patient’s mental state. This continuous, real-time monitoring can provide healthcare professionals with vital physiological data and enhance the quality of patient care. Current methods of stress detection lack: (i) robustness—wearable health sensors contain high levels of measurement noise that degrades performance, and (ii) adaptation—static architectures fail to adapt to changing contexts in sensing conditions. We propose to address these deficiencies with SELF-CARE, a generalized selective sensor fusion method of stress detection that employs novel techniques of context identification and ensemble machine learning. SELF-CARE uses a learning-based classifier to process sensor features and model the environmental variations in sensing conditions known as the noise context. SELF-CARE uses noise context to selectively fuse different sensor combinations across an ensemble of models to perform robust stress classification. Our findings suggest that for wrist-worn devices, sensors that measure motion are most suitable to understand noise context, while for chest-worn devices, the most suitable sensors are those that detect muscle contraction. We demonstrate SELF-CARE’s state-of-the-art performance on the WESAD dataset. Using wrist-based sensors, SELF-CARE achieves 86.34% and 94.12% accuracy for the 3-class and 2-class stress classification problems, respectively. For chest-based wearable sensors, SELF-CARE achieves 86.19% (3-class) and 93.68% (2-class) classification accuracy. This work demonstrates the benefits of utilizing selective, context-aware sensor fusion in mobile health sensing that can be applied broadly to Internet of Things applications.
Article
Full-text available
Data from consumer smartwatches can improve the detection of COVID-19 when combined with symptom self-reporting, and can also detect the disease in pre-symptomatic individuals.
Article
Full-text available
The ubiquitous deployment of smart wearable devices brings promises for an effective implementation of various healthcare applications in our everyday living environments. However, given that these applications ask for accurate and reliable sensing results of vital signs, there is a need to understand the accuracy of commercial-off-the-shelf wearable devices' healthcare sensing components (e.g., heart rate sensors). This work presents a thorough investigation on the accuracy of heart rate sensors equipped on three different widely used smartwatch platforms. We show that heart rate readings can easily diverge from the ground truth when users are actively moving. Moreover, we show that the accelerometer is not an effective secondary sensing modality of predicting the accuracy of such smartwatch-embedded sensors. Instead, we show that the photoplethysmography (PPG) sensor's light intensity readings are an plausible indicator for determining the accuracy of optical sensor-based heart rate readings. Based on such observations, this work presents a light-weight Viterbi-algorithm-based Hidden Markov Model to design a filter that identifies reliable heart rate measurements using only the limited computational resources available on smartwatches. Our evaluations with data collected from four participants show that the accuracy of our proposed scheme can be as high as 98%. By enabling the smartwatch to self-filter misleading measurements from being healthcare application inputs, we see this work as an essential module for catalyzing novel ubiquitous healthcare applications.
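To make the filtering idea concrete, the sketch below decodes a two-state hidden Markov model with the Viterbi algorithm over binned PPG light-intensity observations; the transition and emission probabilities and the discretization are illustrative placeholders, not the parameters reported in the paper.

```python
# Hypothetical two-state HMM (reliable / unreliable HR reading) decoded with
# the Viterbi algorithm over discretized PPG light-intensity observations.
# All probabilities below are illustrative placeholders, not the paper's values.
import numpy as np

states = ("reliable", "unreliable")
start_p = np.log([0.8, 0.2])                      # initial state probabilities
trans_p = np.log([[0.9, 0.1],                     # reliable -> reliable/unreliable
                  [0.3, 0.7]])                    # unreliable -> reliable/unreliable
# Observations: PPG light intensity binned into low / medium / high (0, 1, 2).
emit_p = np.log([[0.1, 0.3, 0.6],                 # reliable readings favor high intensity
                 [0.6, 0.3, 0.1]])                # unreliable readings favor low intensity

def viterbi(obs):
    """Return the most likely state sequence for a list of observation bins."""
    n_states, T = len(states), len(obs)
    dp = np.full((T, n_states), -np.inf)          # best log-probability so far
    back = np.zeros((T, n_states), dtype=int)     # backpointers
    dp[0] = start_p + emit_p[:, obs[0]]
    for t in range(1, T):
        for s in range(n_states):
            scores = dp[t - 1] + trans_p[:, s]
            back[t, s] = int(np.argmax(scores))
            dp[t, s] = scores[back[t, s]] + emit_p[s, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

# Example: intensity bins over ten consecutive HR readings (2 = high, 0 = low).
print(viterbi([2, 2, 1, 0, 0, 1, 2, 2, 2, 0]))
```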
Article
Full-text available
An automatic stress detection system that uses unobtrusive smart bands will contribute to human health and wellbeing by alleviating the effects of high stress levels. However, there are a number of challenges for detecting stress in unrestricted daily life, which results in lower performance of such systems when compared to semi-restricted and laboratory environment studies. The addition of contextual information such as physical activity level, activity type and weather to the physiological signals can improve the classification accuracies of these systems. We developed an automatic stress detection system that employs smart bands for physiological data collection. In this study, we monitored the stress levels of 16 participants of an EU project training every day throughout the eight-day event by using our system. We collected 1440 hours of physiological data and 2780 self-report questions from the participants, who came from diverse countries. The project midterm presentations in front of a jury at the end of the event were the source of significant real stress. Different types of contextual information, along with the physiological data, were recorded to determine the perceived stress levels of individuals. We further analyzed the physiological signals from this event to infer long-term perceived stress levels, which we obtained from baseline PSS-14 questionnaires. Session-based, daily and long-term perceived stress levels could be identified successfully by using the proposed system.
Article
Full-text available
Researchers strive to develop effective ways to detect and cope with enduring high-level daily stress as early as possible to prevent serious health consequences. Although research has traditionally been conducted in laboratory settings, a set of new studies has recently begun to be conducted in ecological environments with unobtrusive wearable devices. Since patterns of stress are idiographic, person-independent models generally have lower accuracies. In contrast, person-specific models have higher accuracies but require a long-term data collection period. In this study, we developed a hybrid approach of personal-level stress clustering that uses baseline stress self-reports to increase the success of person-independent models without requiring a substantial amount of personal data. We further added decision-level smoothing to our unobtrusive smartwatch-based stress level differentiation system to increase performance by correcting false labels assigned by the machine learning algorithm. In order to test and evaluate our system, we collected physiological data from 32 participants of a summer school with wrist-worn unobtrusive wearable devices. This event comprised baseline, lecture, exam and recovery sessions. In the recovery session, a stress management method was applied to alleviate the stress of the participants. Perceived stress, collected from the users as NASA-TLX self-reports, and physiological stress levels, extracted using wearable sensors, were examined separately. Using our system, we were able to differentiate the three levels of stress successfully. We further substantially increased performance through personal stress level clustering and by applying high-level accuracy calculation and decision-level smoothing methods. We also demonstrated the success of the stress reduction methods by analyzing physiological signals and self-reports.
Article
Full-text available
As wearable technologies are being increasingly used for clinical research and healthcare, it is critical to understand their accuracy and determine how measurement errors may affect research conclusions and impact healthcare decision-making. Accuracy of wearable technologies has been a hotly debated topic in both the research and popular science literature. Currently, wearable technology companies are responsible for assessing and reporting the accuracy of their products, but little information about the evaluation method is made publicly available. Heart rate measurements from wearables are derived from photoplethysmography (PPG), an optical method for measuring changes in blood volume under the skin. Potential inaccuracies in PPG stem from three major areas: (1) diverse skin types, (2) motion artifacts, and (3) signal crossover. To date, no study has systematically explored the accuracy of wearables across the full range of skin tones. Here, we explored heart rate and PPG data from consumer- and research-grade wearables under multiple circumstances to test whether and to what extent these inaccuracies exist. We saw no statistically significant difference in accuracy across skin tones, but we saw significant differences between devices and between activity types; notably, absolute error during activity was, on average, 30% higher than during rest. Our conclusions indicate that different wearables are all reasonably accurate at resting and prolonged elevated heart rate, but that differences exist between devices in responding to changes in activity. This has implications for researchers, clinicians, and consumers in drawing study conclusions, combining study results, and making health-related decisions using these devices.
Conference Paper
Full-text available
Stress detection is becoming a popular field in machine learning, and this study focuses on recognizing stress using the sensors of commercially available smartwatches. In most previous studies, stress detection is based partly or fully on the electrodermal activity (EDA) sensor. However, if the final aim of the study is to build a smartwatch application, using the EDA signal is problematic, as the smartwatches currently on the market do not include a sensor to measure it. Therefore, this study surveys what sensors the smartwatches currently on the market include and which of them third-party developers have access to. Moreover, it studies how accurately stress can be detected user-independently using different sensor combinations. In addition, it examines how detection rates vary between study subjects and what effect window size has on the recognition rates. All of the experiments are based on the publicly available WESAD dataset. The results show that, indeed, the EDA signal is not necessary when detecting stress user-independently, and therefore commercial smartwatches can be used for recognizing stress when the window length used is large enough. However, it is also noted that the recognition rate varies considerably between study subjects.
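The sketch below shows the kind of sliding-window feature extraction such sensor-combination and window-size comparisons imply; the window length, overlap, and feature set are assumptions, not the study's configuration.

```python
# Hypothetical sliding-window feature extraction over smartwatch-accessible
# signals (e.g., BVP/heart rate, accelerometer, skin temperature). Window
# length, overlap, and the feature set are assumptions for illustration.
import numpy as np

def window_features(signal, fs, win_s=60.0, overlap=0.5):
    """Slide a window over a 1-D signal and return simple per-window features."""
    step = int(win_s * fs * (1.0 - overlap))
    size = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - size + 1, step):
        w = signal[start:start + size]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.asarray(feats)

# Example with synthetic 64 Hz data; features from different sensors would be
# concatenated column-wise before training a user-independent classifier.
bvp = np.random.default_rng(0).normal(size=64 * 600)       # 10 minutes of signal
print(window_features(bvp, fs=64).shape)
```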
Article
Full-text available
There is a rich repertoire of methods for stress detection using various physiological signals and algorithms. However, there is still a gap in research efforts moving from laboratory studies to real-world settings. Only a few studies have verified whether a physiological response is a reaction to an extrinsic stimulus in the participant's environment in real-world settings. Typically, physiological signals are correlated with the spatial characteristics of the physical environment, supported by video records or interviews. The present research aims to bridge the gap between laboratory settings and real-world field studies by introducing a new algorithm that leverages the capabilities of wearable physiological sensors to detect moments of stress (MOS). We propose a rule-based algorithm based on galvanic skin response and skin temperature, combining empirical findings with expert knowledge to ensure transferability between laboratory settings and real-world field studies. To verify our algorithm, we carried out a laboratory experiment to create a "gold standard" of physiological responses to stressors. We validated the algorithm in real-world field studies using a mixed-method approach by spatially correlating the participant's perceived stress, geo-located questionnaires, and the corresponding real-world situation from the video. Results show that the algorithm detects MOS with 84% accuracy, showing high correlations between measured (by wearable sensors), reported (by questionnaires and eDiary entries), and recorded (by video) stress events. The urban stressors that were identified in the real-world studies originate from traffic congestion, dangerous driving situations, and crowded areas such as tourist attractions. The presented research can enhance stress detection in real life and may thus foster a better understanding of circumstances that bring about physiological stress in humans.
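To illustrate the flavor of such a rule, the sketch below flags a moment of stress when skin conductance rises while skin temperature falls within a short look-back window; the thresholds and window length are illustrative assumptions rather than the published rule set.

```python
# Hypothetical rule-based moment-of-stress (MOS) detector: flag a sample when
# galvanic skin response rises while skin temperature falls within a short
# look-back window. Thresholds and window length are illustrative assumptions.
import numpy as np

def detect_mos(gsr, skin_temp, fs=4, window_s=10, gsr_rise=0.05, temp_drop=0.1):
    """Return a boolean array marking candidate moments of stress."""
    w = int(window_s * fs)
    mos = np.zeros(len(gsr), dtype=bool)
    for i in range(w, len(gsr)):
        gsr_delta = gsr[i] - gsr[i - w]               # microsiemens over the window
        temp_delta = skin_temp[i] - skin_temp[i - w]  # degrees Celsius over the window
        mos[i] = (gsr_delta > gsr_rise) and (temp_delta < -temp_drop)
    return mos

# Example with synthetic 4 Hz signals.
rng = np.random.default_rng(1)
gsr = np.cumsum(rng.normal(0, 0.01, 4 * 300))          # slowly drifting GSR
temp = 33.0 + np.cumsum(rng.normal(0, 0.005, 4 * 300))
print(detect_mos(gsr, temp).sum(), "candidate MOS samples")
```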
Article
Full-text available
Endurance athletes, particularly competitive runners, are using wrist-worn devices with the heart rate (HR) feature to guide their training. However, few studies have assessed the effectiveness of these devices at high levels of exertion. The purpose of this study was to measure the accuracy of the HR monitor feature in four watches at six different treadmill speeds. This prospective study recruited 50 healthy, athletic adults (68% male, mean age of 29, and mean BMI of 23 kg/m²). All subjects wore a three-lead ECG and Polar H7 chest strap monitor and two different randomly assigned wrist-worn HR monitors. These included the Apple Watch III, Fitbit Ionic, Garmin Vivosmart HR, and TomTom Spark 3. Once all devices were on, subjects were asked to run at the following speeds on a treadmill (in mph): 4, 5, 6, 7, 8, and 9 for two minutes each. HR was assessed on all devices and agreement among measurements determined with Lin's concordance correlation coefficient (CCC) (rc). The Polar H7 chest strap had the greatest agreement with the ECG (rc=0.98). This was followed by the Apple Watch III (rc=0.96). The Fitbit Ionic, Garmin Vivosmart HR, and TomTom Spark 3 all had the same level of agreement (rc=0.89). The Polar H7 chest strap was the most accurate, and the Apple Watch was superior among watches. For endurance athletes and their coaches, a chest strap device or Apple Watch may be the best choice for guiding workouts and performance.
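For reference, Lin's concordance correlation coefficient (rc) used in this study can be computed directly from paired device and reference readings; a minimal sketch with illustrative values:

```python
# Lin's concordance correlation coefficient (CCC) between paired heart rate
# readings, e.g., a wrist device versus the ECG reference.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative paired readings (bpm), not data from the study.
ecg = [150, 155, 160, 165, 170, 175]
watch = [149, 154, 162, 163, 171, 174]
print(round(lins_ccc(ecg, watch), 3))
```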
Article
Full-text available
A critical aspect of mobile just-in-time (JIT) health intervention is proper delivery timing, which correlates with successfully promoting target behaviors. Despite extensive prior studies on interruptibility, however, our understanding of the receptivity of mobile JIT health intervention is limited. This work extends prior interruptibility models to capture the JIT intervention process by including multiple stages of conscious and subconscious decisions. We built BeActive, a mobile intervention system for preventing prolonged sedentary behaviors, and we collected users' responses to a given JIT support and relevant contextual factors and cognitive/physical states for three weeks. Using a multi-stage model, we systematically analyzed the responses to deepen our understanding of receptivity using a mixed methodology. Herein, we identify the key factors relevant to each stage outcome and show that the receptivity of JIT intervention is nuanced and context-dependent. We propose several practical design implications for mobile JIT health intervention and context-aware computing.
Article
Full-text available
Background: To assess the accuracy of four wearable heart rate (HR) monitors in patients with established cardiovascular disease enrolled in phase II or III cardiac rehabilitation (CR). Methods: Eighty adult patients enrolled in phase II or III CR were monitored during a CR session that included exercise on a treadmill and/or stationary cycle. Participants underwent HR monitoring with standard ECG limb leads, an electrocardiographic (ECG) chest strap monitor (Polar H7), and two randomly assigned wrist-worn HR monitors (Apple Watch, Fitbit Blaze, Garmin Forerunner 235, TomTom Spark Cardio), one on each wrist. HR was recorded at rest and at 3, 5, and 7 minutes of steady-state exercise on the treadmill and stationary cycle. Results: Across all exercise conditions, the chest strap monitor (Polar H7) had the best agreement with ECG (rc=0.99) followed by the Apple Watch (rc=0.80), Fitbit Blaze (rc=0.78), TomTom Spark (rc=0.76) and Garmin Forerunner (rc=0.52). There was variability in accuracy under different exercise conditions. On the treadmill, only the Fitbit Blaze performed well (rc=0.76), while on the stationary cycle, Apple Watch (rc=0.89) and TomTom Spark (rc=0.85) were most accurate. Conclusions: In cardiac patients, the accuracy of wearable, optically based HR monitors varies, and none of those tested was as accurate as an electrode-containing chest monitor. This observation has implications for in-home CR, as electrode-containing chest monitors should be used when accurate HR measurement is imperative.
Article
Full-text available
The negative effects of mental stress on human health have been known for decades. High-level stress must be detected at early stages to prevent these negative effects. After the emergence of wearable devices that could be part of our lives, researchers have started using them to detect extreme stress in individuals during daily routines. Initial experiments were performed in laboratory environments, and recently a number of works have taken a step outside the laboratory environment into real life. We developed an automatic stress detection system using physiological signals obtained from unobtrusive smart wearable devices which can be carried during the daily life routines of individuals. This system has modality-specific artifact removal and feature extraction methods for real-life conditions. We further tested our system in a real-life setting with physiological data collected from 21 participants of an algorithmic programming contest over nine days. This event included lectures, contests, and free time. Using heart activity, skin conductance and accelerometer signals, we successfully discriminated contest stress, relatively higher cognitive load (lectures) and relaxed-time activities with different machine learning methods.
Article
Full-text available
Active adaptation to acute stress is essential for coping with daily life challenges. The stress hormone cortisol, as well as large scale re-allocations of brain resources have been implicated in this adaptation. Stress-induced shifts between large-scale brain networks, including salience (SN), central executive (CEN) and default mode networks (DMN), have however been demonstrated mainly under task-conditions. It remains unclear whether such network shifts also occur in the absence of ongoing task-demands, and most critically, whether these network shifts are predictive of individual variation in the magnitude of cortisol stress-responses. In a sample of 335 healthy participants, we investigated stress-induced functional connectivity changes (delta-FC) of the SN, CEN and DMN, using resting-state fMRI data acquired before and after a socially evaluated cold-pressor test and a mental arithmetic task. To investigate which network changes are associated with acute stress, we evaluated the association between cortisol increase and delta-FC of each network. Stress-induced cortisol increase was associated with increased connectivity within the SN, but with decreased coupling of DMN at both local (within network) and global (synchronization with brain regions also outside the network) levels. These findings indicate that acute stress prompts immediate connectivity changes in large-scale resting-state networks, including the SN and DMN in the absence of explicit ongoing task-demands. Most interestingly, this brain reorganization is coupled with individuals’ cortisol stress-responsiveness. These results suggest that the observed stress-induced network reorganization might function as a neural mechanism determining individual stress reactivity and, therefore, it could serve as a promising marker for future studies on stress resilience and vulnerability.
Article
Full-text available
Background Wrist-worn activity monitors are often used to monitor heart rate (HR) and energy expenditure (EE) in a variety of settings including more recently in medical applications. The use of real-time physiological signals to inform medical systems including drug delivery systems and decision support systems will depend on the accuracy of the signals being measured, including accuracy of HR and EE. Prior studies assessed accuracy of wearables only during steady-state aerobic exercise. Objective The objective of this study was to validate the accuracy of both HR and EE for 2 common wrist-worn devices during a variety of dynamic activities that represent various physical activities associated with daily living including structured exercise. Methods We assessed the accuracy of both HR and EE for two common wrist-worn devices (Fitbit Charge 2 and Garmin vívosmart HR+) during dynamic activities. Over a 2-day period, 20 healthy adults (age: mean 27.5 [SD 6.0] years; body mass index: mean 22.5 [SD 2.3] kg/m²; 11 females) performed a maximal oxygen uptake test, free-weight resistance circuit, interval training session, and activities of daily living. Validity was assessed using an HR chest strap (Polar) and portable indirect calorimetry (Cosmed). Accuracy of the commercial wearables versus research-grade standards was determined using Bland-Altman analysis, correlational analysis, and error bias. Results Fitbit and Garmin were reasonably accurate at measuring HR but with an overall negative bias. There was more error observed during high-intensity activities when there was a lack of repetitive wrist motion and when the exercise mode indicator was not used. The Garmin estimated HR with a mean relative error (RE, %) of −3.3% (SD 16.7), whereas Fitbit estimated HR with an RE of −4.7% (SD 19.6) across all activities. The highest error was observed during high-intensity intervals on bike (Fitbit: −11.4% [SD 35.7]; Garmin: −14.3% [SD 20.5]) and lowest error during high-intensity intervals on treadmill (Fitbit: −1.7% [SD 11.5]; Garmin: −0.5% [SD 9.4]). Fitbit and Garmin EE estimates differed significantly, with Garmin having less negative bias (Fitbit: −19.3% [SD 28.9], Garmin: −1.6% [SD 30.6], P<.001) across all activities, and with both correlating poorly with indirect calorimetry measures. Conclusions Two common wrist-worn devices (Fitbit Charge 2 and Garmin vívosmart HR+) show good HR accuracy, with a small negative bias, and reasonable EE estimates during low to moderate-intensity exercise and during a variety of common daily activities and exercise. Accuracy was compromised markedly when the activity indicator was not used on the watch or when activities involving less wrist motion such as cycle ergometry were done.
Article
Full-text available
In modern life, the nonstop and pervasive stress tends to keep us on long-lasting high alert, which over time, could lead to a broad range of health problems from depression, metabolic disorders to heart diseases. However, there is a stunning lack of practical tools for effective stress management that can help people navigate through their daily stress. This paper presents the feasibility evaluation of StressHacker, a smartwatch-based system designed to continuously and passively monitor one's stress level using bio-signals obtained from the on-board sensors. With the proliferation of smartwatches, StressHacker is highly accessible and suited for daily use. Our preliminary evaluation is based on 300 hours of data collected in a real-life setting (12 subjects, 29 days). The result suggests that StressHacker is capable of reliably capturing daily stress dynamics (precision = 86.1%, recall = 91.2%), thus with great potential to enable seamless and personalized stress management.
Article
Full-text available
Objective: Physical or mental imbalance caused by harmful stimuli can induce stress to maintain homeostasis. During chronic stress, the sympathetic nervous system is hyperactivated, causing physical, psychological, and behavioral abnormalities. At present, there is no accepted standard for stress evaluation. This review aimed to survey studies providing a rationale for selecting heart rate variability (HRV) as a psychological stress indicator. Methods: Term searches in the Web of Science®, National Library of Medicine (PubMed), and Google Scholar databases yielded 37 publications meeting our criteria. The inclusion criteria were involvement of human participants, HRV as an objective psychological stress measure, and measured HRV reactivity. Results: In most studies, HRV variables changed in response to stress induced by various methods. The most frequently reported factor associated with variation in HRV variables was low parasympathetic activity, which is characterized by a decrease in the high-frequency band and an increase in the low-frequency band. Neuroimaging studies suggested that HRV may be linked to cortical regions (e.g., the ventromedial prefrontal cortex) that are involved in stressful situation appraisal. Conclusion: In conclusion, the current neurobiological evidence suggests that HRV is impacted by stress and supports its use for the objective assessment of psychological health and stress.
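As a point of reference, the low-frequency (LF, 0.04–0.15 Hz) and high-frequency (HF, 0.15–0.4 Hz) bands mentioned above are commonly computed from RR intervals as sketched below; the resampling rate and Welch settings are illustrative choices, not those of the reviewed studies.

```python
# Sketch of frequency-domain HRV: resample RR intervals to an evenly spaced
# series, estimate the power spectrum with Welch's method, and integrate the
# standard LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands. Settings are illustrative.
import numpy as np
from scipy.signal import welch

def lf_hf(rr_ms, fs_resample=4.0):
    rr_s = np.asarray(rr_ms, float) / 1000.0
    t = np.cumsum(rr_s)                              # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = np.interp(t_even, t, rr_s)             # evenly resampled RR series
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample,
                   nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf, hf, lf / hf

# Synthetic RR series in milliseconds, for illustration only.
rr = 800 + 50 * np.sin(np.linspace(0, 20 * np.pi, 300))
print(lf_hf(rr))
```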
Article
Full-text available
Introduction: The use of wearable activity monitors has seen rapid growth; however, the mode and intensity of exercise could affect validity of heart rate (HR) and caloric (energy) expenditure (EE) readings. There is a lack of data regarding the validity of wearable activity monitors during graded cycling regimen and a standard resistance exercise. The present study determined the validity of eight monitors for HR compared to an ECG and seven monitors for EE compared to a metabolic analyzer during graded cycling and resistance exercise. Methods: Fifty subjects (28 women, 22 men) completed separate trials of graded cycling and three sets of four resistance exercises at a 10-repetition maximum (RM) load. Monitors included: Apple Watch Series 2 (AWS2), Fitbit Blaze, Fitbit Charge 2, Polar H7 (PH7), Polar A360, Garmin Vivosmart HR, TomTom Touch, and Bose SoundSport Headphones (BSP). HR was recorded after each cycling intensity and following each resistance exercise set. EE was recorded following both protocols. Validity was established as having a mean absolute percent error (MAPE) value of <10%. Results: The PH7 and BSP were valid during both exercise modes (Cycling: MAPE=6.87%, R=0.79; Resistance Exercise: MAPE=6.31%, R=0.83). During cycling, the AWS2 revealed the greatest HR validity (MAPE=4.14%, R=0.80). The BSP revealed the greatest HR accuracy during resistance exercise (MAPE=6.24%, R=0.86). Across all devices, as exercise intensity increased, there was greater underestimation of HR. No device was valid for EE during cycling or resistance exercise. Conclusion: HR from wearable devices differed at different exercise intensities; EE estimates from wearable devices were inaccurate. Wearable devices are not medical devices and users should employ caution when utilizing these devices for monitoring physiological responses to exercise.
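For reference, the validity criterion quoted above corresponds to a mean absolute percent error (MAPE) below 10%, computed as:

```python
# Mean absolute percent error (MAPE) between device readings and a reference;
# the validity criterion quoted above corresponds to MAPE < 10%.
import numpy as np

def mape(device, reference):
    device, reference = np.asarray(device, float), np.asarray(reference, float)
    return float(np.mean(np.abs(device - reference) / reference) * 100.0)

# Illustrative heart rate readings (bpm), not data from the study.
print(mape([148, 152, 160], [150, 150, 158]))   # ~1.3%
```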
Article
Full-text available
Recent advances in mobile health have produced several new models for inferring stress from wearable sensors. However, the lack of a gold standard is a major hurdle in making clinical use of continuous stress measurements derived from wearable sensors. In this paper, we present a stress model (called cStress) that has been carefully developed with attention to every step of computational modeling including data collection, screening, cleaning, filtering, feature computation, normalization, and model training. More importantly, cStress was trained using data collected from a rigorous lab study with 21 participants and validated on two independently collected data sets: a lab study with 26 participants and a week-long field study with 20 participants. In testing, the model obtains a recall of 89% and a false positive rate of 5% on lab data. On field data, the model is able to predict each instantaneous self-report with an accuracy of 72%.
Conference Paper
Full-text available
Smartphone usage has tremendously increased and most users keep their smartphones close throughout the day. Smartphones have a broad variety of sensors, that could automatically map and track the user's life and behaviour. In this work we investigate whether automatically collected smartphone usage and sensor data can be employed to predict the experienced stress levels of a user using a customized brief version of the Perceived Stress Scale (PSS). To that end we have conducted a user study in which smartphone data and stress (as measured by the PSS seven times a day) were recorded for two weeks. We found significant correlations between stress scores and smartphone usage as well as sensor data, pointing to innovative ways for automatic stress measurements via smartphone technology. Stress is a prevalent risk factor for multiple diseases. Thus accurate and efficient prediction of stress levels could provide means for targeted prevention and intervention.
Article
Full-text available
The function of the heart is to contract and pump oxygenated blood to the body and deoxygenated blood to the lungs. To achieve this goal, a normal human heart must beat regularly and continuously for one's entire life. Heartbeats originate from the rhythmic pacing discharge from the sinoatrial (SA) node within the heart itself. In the absence of extrinsic neural or hormonal influences, the SA node pacing rate would be about 100 beats per minute. Heart rate and cardiac output, however, must vary in response to the needs of the body's cells for oxygen and nutrients under varying conditions. In order to respond rapidly to the changing requirements of the body's tissues, the heart rate and contractility are regulated by the nervous system, hormones, and other factors. Here we review how the cardiovascular system is controlled and influenced by not only a unique intrinsic system, but is also heavily influenced by the autonomic nervous system as well as the endocrine system.
Article
Full-text available
Stress can lead to headaches and fatigue, precipitate addictive behaviors (e.g., smoking, alcohol and drug use), and lead to cardiovascular diseases and cancer. Continuous assessment of stress from sensors can be used for timely delivery of a variety of interventions to reduce or avoid stress. We investigate the feasibility of continuous stress measurement via two field studies using wireless physiological sensors — a four-week study with illicit drug users (n = 40), and a one-week study with daily smokers and social drinkers (n = 30). We find that 11+ hours/day of usable data can be obtained in a 4-week study. A significant learning effect is observed after the first week and data yield is seen to be increasing over time even in the fourth week. We propose a framework to analyze sensor data yield and find that losses in the wireless channel are negligible; the main hurdle in further improving data yield is the attachment constraint. We show the feasibility of measuring stress minutes preceding events of interest and observe the sensor-derived stress to be rising prior to self-reported stress and smoking events.
Article
Full-text available
Background: For the last decade, mHealth has constantly expanded as a part of eHealth. Mobile applications for health have the potential to target heterogeneous audiences and address specific needs in different situations, with diverse outcomes, and to complement highly developed health care technologies. The market is rapidly evolving, making countless new mobile technologies potentially available to the health care system; however, systematic research on the impact of these technologies on health outcomes remains scarce. Objective: To provide a comprehensive view of the field of mHealth research to date and to understand whether and how the new generation of smartphones has triggered research, since their introduction 5 years ago. Specifically, we focused on studies aiming to evaluate the impact of mobile phones on health, and we sought to identify the main areas of health care delivery where mobile technologies can have an impact. Methods: A systematic literature review was conducted on the impact of mobile phones and smartphones in health care. Abstracts and articles were categorized using typologies that were partly adapted from existing literature and partly created inductively from publications included in the review. Results: The final sample consisted of 117 articles published between 2002 and 2012. The majority of them were published in the second half of our observation period, with a clear upsurge between 2007 and 2008, when the number of articles almost doubled. The articles were published in 77 different journals, mostly from the field of medicine or technology and medicine. Although the range of health conditions addressed was very wide, a clear focus on chronic conditions was noted. The research methodology of these studies was mostly clinical trials and pilot studies, but new designs were introduced in the second half of our observation period. The size of the samples drawn to test mobile health applications also increased over time. The majority of the studies tested basic mobile phone features (e.g., text messaging), while only a few assessed the impact of smartphone apps. Regarding the investigated outcomes, we observed a shift from assessment of the technology itself to assessment of its impact. The outcome measures used in the studies were mostly clinical, including both self-reported and objective measures. Conclusions: Research interest in mHealth is growing, together with an increasing complexity in research designs and aim specifications, as well as a diversification of the impact areas. However, new opportunities offered by new mobile technologies do not seem to have been explored thus far. Mapping the evolution of the field allows a better understanding of its strengths and weaknesses and can inform future developments.
Conference Paper
Full-text available
We present a new integrated device for monitoring heart rate at the wrist using an optical measurement. Motion robustness is obtained by using accurate motion reference signals of 3D low-noise accelerometers together with dual-channel optical sensing. Nonlinear modelling makes it possible to remove the motion contributions from the optical signals, and the spatial diversity of the sensors is used to remove reciprocal contributions in the two channels. Finally, a statistical estimation, based on physiological properties of the heart, gives a robust estimation of the heart rate. Qualitative and quantitative evaluation on real signals clearly shows that the proposed system gives an accurate estimation of the heart rate, even under intense physical activity.
Conference Paper
Full-text available
Repeated exposures to psychological stress can lead to or worsen diseases of slow accumulation such as heart diseases and cancer. The main challenge in addressing the growing epidemic of stress is a lack of robust methods to measure a person's exposure to stress in the natural environment. Periodic self-reports collect only subjective aspects, often miss stress episodes, and impose significant burden on subjects. Physiological sensors provide objective and continuous measures of stress response, but exhibit wide between-person differences and are easily confounded by daily activities (e.g., speaking, physical movements, coffee intake, etc.). In this paper, we propose, train, and test two models for continuous prediction of stress from physiological measurements captured by unobtrusive, wearable sensors. The first model is a physiological classifier that predicts whether changes in physiology represent stress. Since the effect of stress may persist in the mind longer than its acute effect on physiology, we propose a perceived stress model to predict perception of stress. It uses the output of the physiological classifier to model the accumulation and gradual decay of stress in the mind. To account for wide between-person differences, both models self-calibrate to each subject. Both models were trained using data collected from 21 subjects in a lab study, where they were exposed to cognitive, physical, and social stressors representative of that experienced in the natural environment. Our physiological classifier achieves 90% accuracy and our perceived stress model achieves a median correlation of 0.72 with self-reported rating. We also evaluate the perceived stress model on data collected from 17 participants in a two-day field study, and find that the average rating of stress obtained from our model has a correlation of 0.71 with that obtained from periodic self-reports.
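The accumulation-and-decay idea can be illustrated with a simple leaky integration of the physiological classifier's per-minute output, as sketched below; the decay constant and the mapping are assumptions for illustration, not the published perceived-stress model.

```python
# Illustrative accumulation-and-decay of perceived stress: each minute the
# physiological classifier emits a stress probability, which is folded into a
# leaky running score. The decay constant is an assumption for illustration,
# not the published perceived-stress model.
import numpy as np

def perceived_stress(p_physio, decay_per_min=0.9):
    """Exponentially smooth per-minute physiological stress probabilities."""
    score, scores = 0.0, []
    for p in p_physio:
        score = decay_per_min * score + (1.0 - decay_per_min) * p
        scores.append(score)
    return np.asarray(scores)

# Example: a 10-minute stress episode within a 60-minute trace.
p = np.r_[np.full(20, 0.1), np.full(10, 0.9), np.full(30, 0.1)]
print(perceived_stress(p).round(2))
```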
Article
Background and Objective: Acquiring accurate and reliable health information using a PPG signal in wearable devices requires suppressing motion artifacts. This paper presents a method based on the Fractional Fourier transform (FrFT) to effectively suppress the motion artifacts in a Photoplethysmogram (PPG) signal for an accurate estimation of heart rate (HR). Methods: By analyzing various PPG signals recorded under various physiological conditions and sampling frequencies, the proposed work determines an optimal value of the fractional order of the proposed FrFT. The proposed FrFT-based algorithm separates the motion artifacts component from the acquired PPG signal. Finally, the HR estimation accuracy during the strong motion artifact-affected windows is improved using a post-processing technique. The efficacy of the proposed method is evaluated by computing the root mean square error (RMSE). Results: The performance of the proposed algorithm is compared with methods in recent studies using test and training datasets from the IEEE Signal Processing Cup (SPC). The proposed method provides the mean absolute error of 1.88 beats per minute (BPM) on all twenty-three recordings. Conclusions: The proposed method uses the Fourier method in the fractional domain. A noisy signal is rotated into an intermediate plane between the time and frequency domains to separate the signal from the noise. The algorithm incorporates FrFT analysis to suppress motion artifacts from PPG signals to estimate HR accurately. Further, a post-processing step is used to track the HR for accurate and reliable HR estimation. The proposed FrFT-based algorithm doesn't require additional reference accelerometers or hardware to estimate HR in real-time. The noise and signal separation is optimum for a fractional order (a) value in the vicinity of 0.6. The optimized value of fractional order is constant irrespective of the physical activity and sampling frequency.
Article
The process of monitoring mental health has relied on methods such as invasive sensing and self-reporting. The use of these methods has been limited because of the invasiveness of sensing devices, or the subjective nature of patients' responses. Recent research focuses on contactless sensing methods used to objectively monitor mental health issues. These methods allow continuous collection of real-time data in a non-disruptive manner. Machine learning methods are then applied to the sensed data to predict information such as physical activity, gestures, and heart rate. This information can then be used to assess mental health issues such as depression, stress, and anxiety, among others. This paper presents a comprehensive review of contactless sensing methods for mental health monitoring. It investigates the published research that focuses on contactless sensing methods to predict mental health conditions. Moreover, this review categorizes the applications of contactless sensing methods into detection, recognition, and monitoring of vital signs. Furthermore, a comparison of recent studies on contactless sensing methods is presented, which shows the effectiveness and reliability of these methods. This study also highlights the existing challenges in contactless sensing methods and provides future research directions to mitigate these challenges.
Article
Recent advances in wearable technology have facilitated the non-obtrusive monitoring of physiological signals, creating opportunities to monitor and predict stress. Researchers have utilized machine learning methods using these physiological signals to develop stress prediction models. Many of these prediction models have utilized objective stressor tasks (e.g., a public speaking task or solving math problems). Alternatively, the subjective user responses with self-reports have also been used for measuring stress. In this paper, we describe a methodological approach (a) to compare the prediction performance of models developed using objective markers of stress with participant-reported subjective markers of stress from self-reports; and (b) to develop personalized stress models by accounting for inter-individual differences. Towards this end, we conducted a laboratory-based study with 32 healthy volunteers. Participants completed a series of stressor tasks—social, cognitive and physical—wearing an instrumented commercial smartwatch that collected physiological signals and participant responses using timed self-reports. After extensive data preprocessing using a combination of signal processing techniques, we developed two types of models: objective stress models using the stressor tasks as labels; and subjective stress models using participant responses to each task as the label for that stress task. We trained and tested several machine learning algorithms—support vector machine (SVM), random forest (RF), gradient boosted trees (GBT), AdaBoost, and Logistic Regression (LR)—and evaluated their performance. SVM had the best performance for the models using the objective stressor (i.e., stressor tasks) with an AUROC of 0.790 and an F-1 score of 0.623. SVM also had the highest performance for the models using the subjective stress (i.e., participant self-reports) with an AUROC of 0.726 and an F-1 score of 0.520. Model performance improved with a personalized threshold model to an AUROC of 0.775 and an F-1 score of 0.599. The performance of the stress models using an instrumented commercial smartwatch was comparable to similar models from other state-of-the-art laboratory-based studies. However, the subjective stress models had a lower performance, indicating the need for further research on the use of self-reports for stress-related studies. The improvement in performance with the personalized threshold-based models provide new directions for building stress prediction models.
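A minimal sketch of how such a classifier comparison can be run with scikit-learn, scoring AUROC and F1 by cross-validation, is shown below; the synthetic feature matrix and labels stand in for the windowed smartwatch features and stressor/self-report labels described above.

```python
# Hypothetical comparison of the classifier families named above on windowed
# smartwatch features, scored by AUROC and F1 with cross-validation. Feature
# matrix and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              AdaBoostClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))                  # per-window physiological features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

models = {
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBT": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=["roc_auc", "f1"])
    print(f"{name:8s} AUROC={scores['test_roc_auc'].mean():.3f} "
          f"F1={scores['test_f1'].mean():.3f}")
```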
Article
Recent advances in wearable sensor technologies have led to a variety of approaches for detecting physiological stress. Even with over a decade of research in the domain, there still exist many significant challenges, including a near-total lack of reproducibility across studies. Researchers often use some physiological sensors (custom-made or off-the-shelf), conduct a study to collect data, and build machine-learning models to detect stress. There is little effort to test the applicability of the model with similar physiological data collected from different devices, or the efficacy of the model on data collected from different studies, populations, or demographics. This paper takes the first step towards testing reproducibility and validity of methods and machine-learning models for stress detection. To this end, we analyzed data from 90 participants, from four independent controlled studies, using two different types of sensors, with different study protocols and research goals. We started by evaluating the performance of models built using data from one study and tested on data from other studies. Next, we evaluated new methods to improve the performance of stress-detection models and found that our methods led to a consistent increase in performance across all studies, irrespective of the device type, sensor type, or the type of stressor. Finally, we developed and evaluated a clustering approach to determine the stressed/not-stressed classification when applying models on data from different studies, and found that our approach performed better than selecting a threshold based on training data. This paper's thorough exploration of reproducibility in a controlled environment provides a critical foundation for deeper study of such methods, and is a prerequisite for tackling reproducibility in free-living conditions.
Article
Stress has become a major health concern, and there is a need to study and develop new digital means for real-time stress detection. Currently, the majority of stress detection research uses population-based approaches that lack the capability to adapt to individual differences. They also use supervised learning methods, requiring extensive labeling of training data, and they are typically tested on data collected in a laboratory and thus do not generalize to field conditions. To address these issues, we present multiple personalized models based on an unsupervised algorithm, the Self-Organizing Map (SOM), and we propose an algorithmic pipeline to apply the method to both laboratory and field data. The performance is evaluated on a dataset of physiological measurements from a laboratory test and on a field dataset consisting of four weeks of physiological and smartphone usage data. In these tests, the performance on the field data was steady across the different personalization levels (accuracy around 60%) and a fully personalized model performed the best on the laboratory data, achieving an accuracy of 92%, which is comparable to state-of-the-art supervised classifiers. These results demonstrate the feasibility of SOM in personalized mental stress detection in both constrained and free-living environments.
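The sketch below shows fitting a Self-Organizing Map to windowed physiological features, here using the third-party MiniSom library; the grid size and training parameters are illustrative, and the subsequent labeling of map nodes as stressed or not-stressed (the personalization step) is not reproduced here.

```python
# Sketch of unsupervised SOM training on windowed physiological features using
# the MiniSom library, then mapping each window to its best-matching unit (BMU).
# Grid size, learning parameters, and features are illustrative assumptions.
import numpy as np
from minisom import MiniSom   # pip install minisom

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # e.g., HR/HRV/EDA window features
X = (X - X.mean(axis=0)) / X.std(axis=0)         # z-score per feature

som = MiniSom(x=8, y=8, input_len=X.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Each window is assigned to a BMU; labeling BMUs as stressed / not-stressed
# (e.g., with a small amount of per-user calibration data) would follow separately.
bmus = np.array([som.winner(x) for x in X])
print(np.unique(bmus, axis=0).shape[0], "occupied SOM nodes")
```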
Article
Timely detection of an individual's stress level has the potential to improve stress management, thereby reducing the risk of adverse health consequences that may arise due to mismanagement of stress. Recent advances in wearable sensing have resulted in multiple approaches to detect and monitor stress with varying levels of accuracy. The most accurate methods, however, rely on clinical-grade sensors to measure physiological signals; they are often bulky, custom made, and expensive, hence limiting their adoption by researchers and the general public. In this article, we explore the viability of commercially available off-the-shelf sensors for stress monitoring. The idea is to be able to use cheap, nonclinical sensors to capture physiological signals and make inferences about the wearer's stress level based on that data. We describe a system involving a popular off-the-shelf heart rate monitor, the Polar H7; we evaluated our system with 26 participants in both a controlled lab setting with three well-validated stress-inducing stimuli and in free-living field conditions. Our analysis shows that using the off-the-shelf sensor alone, we were able to detect stressful events with an F1-score of up to 0.87 in the lab and 0.66 in the field, on par with clinical-grade sensors.
Article
High levels of stress during pregnancy increase the chances of having a premature or low-birthweight baby. Perceived self-reported stress does not often capture or align with the physiological and behavioral response. But what if there was a self-report measure that could better capture the physiological response? Current perceived stress self-report assessments require users to answer multi-item scales at different time points of the day. Reducing it to one question, using microinteraction-based ecological momentary assessment (micro-EMA, collecting a single in situ self-report to assess behaviors) allows us to identify smaller or more subtle changes in physiology. It also allows for more frequent responses to capture perceived stress while at the same time reducing burden on the participant. We propose a framework for selecting the optimal micro-EMA that combines unbiased feature selection and unsupervised Agglomerative clustering. We test our framework in 18 women performing 16 activities in-lab wearing a Biostamp, a NeuLog, and a Polar chest strap. We validated our results in 17 pregnant women in real-world settings. Our framework shows that the question "How worried were you?" results in the highest accuracy when using a physiological model. Our results provide further in-depth exposure to the challenges of evaluating stress models in real-world situations.
Conference Paper
In recent years, large-scale data collection has become crucial in Human-Computer Interaction (HCI) research. With a sharp climb in the amount of data being gathered due to an increasing number of mobile and wearable devices, real-time maintenance of Data Quality (DQ) of data-collection campaigns has already become an overwhelming task, especially in large-scale experiments. This paper proposes EasyTrack, a platform that collects large-scale data in an automated manner. We describe how our proposed solution detects and tackles issues in data collection campaigns automatically.
Article
Photoplethysmography (PPG) is a low-cost, non-invasive, optical technique used to detect blood volume changes in the microvascular tissue bed, measured from the skin surface. It has traditionally been used in commercial medical devices for oxygen saturation, blood pressure monitoring and cardiac activity for assessing peripheral vascular disease and autonomic function. There has been growing interest in incorporating PPG sensors into daily life, capable of use in ambulatory settings. However, inferring cardiac information (e.g. heart rate) from PPG traces in such situations is extremely challenging, because of interference caused by motion. Following the IEEE Signal Processing Cup in 2015, numerous methods have been proposed for estimating, in particular, the average heart rate using wrist-worn PPG during physical activity. Details on PPG technology, sensor development and applications have been well documented in the literature. Hence, in this paper, we have presented a comprehensive review of state-of-the-art research on heart rate estimation from wrist-worn photoplethysmography (PPG) signals. Our review also encompasses brief theoretical details about PPG sensing and other potential applications – biometric identification, disease diagnosis using wrist PPG. This article will set a platform for future research on pervasive monitoring using wrist PPG.
Article
Stress has become a significant cause of many diseases in modern society. Recently, smartphones, smartwatches and smart wrist-bands have become an integral part of our lives and have reached widespread usage. This raised the question of whether we can detect and prevent stress with smartphones and wearable sensors. In this survey, we examine recent works on stress detection in daily life that use smartphones and wearable devices. Although there are a number of works related to stress detection in controlled laboratory conditions, the number of studies examining stress detection in daily life is limited. We divide and investigate the works according to the physiological modality used and their targeted environment, such as office, campus, car and unrestricted daily life conditions. We also discuss promising techniques, alleviation methods and research challenges.
Conference Paper
The advances in mobile and wearable sensing have led to a myriad of approaches for stress detection in both laboratory and free-living settings. Most of these methods, however, rely on the usage of some combination of physiological signals measured by the sensors to detect stress. While these solutions work great in a lab or a controlled environment, the performance in free-living situations leaves much to be desired. In this work, we explore the role of context of the user in free-living conditions, and how that affects users' perceived stress levels. To this end, we conducted an 'in-the-wild' study with 23 participants, where we collected physiological data from the users, along with 'high-level' contextual labels, and perceived stress levels. Our analysis shows that context plays a significant role in the users' perceived stress levels, and when used in conjunction with physiological signals leads to much higher stress detection results, as compared to relying on just physiological data.
Article
Wearable commercial-off-the-shelf (COTS) devices have become popular during the last years to monitor sports activities, primarily among young people. These devices include sensors to gather data on physiological signals such as heart rate, skin temperature or galvanic skin response. By applying data analytics techniques to these kinds of signals, it is possible to obtain estimations of higher-level aspects of human behavior. In the literature, there are several works describing the use of physiological data collected using clinical devices to obtain information on sleep patterns or stress. However, it is still an open question whether data captured using COTS wrist wearables is sufficient to characterize the learners' psychological state in educational settings. This paper discusses a protocol to evaluate stress estimation from data obtained using COTS wrist wearables. The protocol is carried out in two phases. The first stage consists of a controlled laboratory experiment, where a mobile app is used to induce different stress levels in a student by means of a relaxing video, a Stroop Color and Word test, a Paced Auditory Serial Addition test, and a hyperventilation test. The second phase is carried out in the classroom, where stress is analyzed while performing several academic activities, namely attending to theoretical lectures, doing exercises and other individual activities, and taking short tests and exams. In both cases, both quantitative data obtained from COTS wrist wearables and qualitative data gathered by means of questionnaires are considered. This protocol involves a simple and consistent method with a stress induction app and questionnaires, requiring a limited participation of support staff.
Article
Being able to detect stress as it occurs can greatly contribute to dealing with its negative health and economic consequences. However, detecting stress in real life with an unobtrusive wrist device is a challenging task. The objective of this study is to develop a method for stress detection that can accurately, continuously and unobtrusively monitor psychological stress in real life. First, we explore the problem of stress detection using machine learning and signal processing techniques in laboratory conditions, and then we apply the extracted laboratory knowledge to real-life data. We propose a novel context-based stress-detection method. The method consists of three machine-learning components: a laboratory stress detector that is trained on laboratory data and detects short-term stress every 2 minutes; an activity recognizer that continuously recognizes the user's activity and thus provides context information; and a context-based stress detector that uses the outputs of the laboratory stress detector, activity recognizer and other contexts, in order to provide the final decision on 20-minute intervals. Experiments on 55 days of real-life data showed that the method detects (recalls) 70% of the stress events with a precision of 95%.
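The sketch below schematically chains the three components: per-2-minute outputs of a lab-trained stress detector and a recognized activity stream are aggregated into features for a final decision over each 20-minute interval; the aggregation features and the final classifier are assumptions, not the paper's exact design.

```python
# Schematic of the three-component pipeline: per-2-minute lab stress outputs and
# recognized activities are aggregated over 20-minute intervals and fed to a
# final context-based classifier. Aggregation features and the final model are
# illustrative assumptions, not the paper's exact design.
import numpy as np
from sklearn.linear_model import LogisticRegression

def aggregate_interval(stress_probs_2min, activities_2min):
    """Build one 20-minute feature vector from ten 2-minute outputs."""
    active_fraction = np.mean([a != "rest" for a in activities_2min])
    return [np.mean(stress_probs_2min), np.max(stress_probs_2min), active_fraction]

# Placeholder training data: each row is one 20-minute interval.
rng = np.random.default_rng(0)
intervals, labels = [], []
for _ in range(200):
    probs = rng.uniform(size=10)                           # lab detector, every 2 min
    acts = rng.choice(["rest", "walk", "other"], size=10)  # activity recognizer output
    intervals.append(aggregate_interval(probs, acts))
    labels.append(int(probs.mean() > 0.5))                 # synthetic label

clf = LogisticRegression().fit(intervals, labels)
print(clf.predict([aggregate_interval(rng.uniform(size=10), ["rest"] * 10)]))
```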
Article
Purpose: The purpose of this study was to evaluate the accuracy of the Polar M600 optical heart rate (OHR) sensor compared with ECG heart rate (HR) measurement during various physical activities. Methods: Thirty-six subjects participated in a continuous 76-min testing session, which included rest, cycling warm-up, cycling intervals, circuit weight training, treadmill intervals, and recovery. HR was measured using a three-lead ECG configuration and a Polar M600 Sport Watch on the left wrist. Statistical analyses included OHR percent accuracy, mean difference, mean absolute error, Bland-Altman plots, and a repeated-measures generalized estimating equation design. OHR percent accuracy was calculated as the percentage of occurrences where OHR measurement was within and including ±5 bpm from the ECG HR value. Results: Of the four exercise phases performed, the highest OHR percent accuracy was found during cycle intervals (91.8%), and the lowest OHR percent accuracy occurred during circuit weight training (34.5%). OHR percent accuracy improved steadily within exercise transitions during cycle intervals to a maximum of 98.5% and during treadmill intervals to a maximum of 89.0%. Lags in HR calculated by the Polar M600 OHR sensor existed in comparison to ECG HR, when exercise intensity changed until steady state occurred. There was a tendency for OHR underestimation during intensity increases and overestimation during intensity decreases. No statistically significant interaction effect with device was found in this sample on the basis of sex, body mass index, V̇O2max, skin type, or wrist size. Conclusions: The Polar M600 was accurate during periods of steady-state cycling, walking, jogging, and running, but less accurate during some exercise intensity changes, which may be attributed to factors related to total peripheral resistance changes and pulse pressure.
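A minimal sketch of the accuracy metrics named above: percent accuracy as the share of samples where the optical HR falls within ±5 bpm of the ECG HR, alongside mean difference and mean absolute error. The data values below are made up.

```python
import numpy as np

ecg_hr = np.array([88, 120, 150, 145, 100, 95])   # reference ECG HR (bpm)
ohr    = np.array([86, 118, 141, 149, 101, 96])   # optical HR from the watch (bpm)

diff = ohr - ecg_hr
percent_accuracy = np.mean(np.abs(diff) <= 5) * 100   # within and including ±5 bpm
mean_difference = diff.mean()
mean_absolute_error = np.abs(diff).mean()

print(f"percent accuracy: {percent_accuracy:.1f}%")
print(f"mean difference: {mean_difference:+.2f} bpm, MAE: {mean_absolute_error:.2f} bpm")
```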
Article
In this article, we offer a brief history summarizing the last century of neuroscientific study of emotion, highlighting dominant themes that run through various schools of thought. We then summarize the current state of the field, followed by six key points for scientific progress that are inspired by a multi-level constructivist theory of emotion.
Article
Background and objectives: In spite of the existence of a multitude of techniques that allow the estimation of stress from physiological indexes, its fine-grained assessment is still a challenge for biomedical engineering. Short-term assessment of the stress condition overcomes the limitations of characterizing stress over long blocks of time and makes it possible to evaluate behaviour changes in real-world settings as well as stress-level dynamics. The aim of the present study was to evaluate time-domain, frequency-domain and nonlinear heart rate variability (HRV) metrics for stress level assessment using a short time window. Methods: The electrocardiogram (ECG) signal from 14 volunteers was monitored using the Vital Jacket™ while they performed the Trier Social Stress Test (TSST), a standardized stress-inducing protocol. Window lengths from 220 s down to 50 s for HRV analysis were tested in order to evaluate which metrics could be used to monitor stress levels in an almost continuous way. Results: A subset of HRV metrics (AVNN, rMSSD, SDNN and pNN20) showed consistent differences between stress and non-stress phases and proved to be reliable parameters for the assessment of stress levels in short-term analysis. Conclusions: The AVNN metric, using a 50 s analysis window, proved to be the most reliable metric for recognizing stress level across the four phases of the TSST; it allows a fine-grained analysis of the stress effect as an index of psychological stress and provides insight into the reaction of the autonomic nervous system to stress.
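A minimal sketch of the short-window HRV metrics named above (AVNN, SDNN, rMSSD, pNN20), computed from normal-to-normal (NN) intervals in milliseconds; the NN series here is synthetic.

```python
import numpy as np

def hrv_metrics(nn_ms):
    nn = np.asarray(nn_ms, dtype=float)
    diffs = np.diff(nn)
    return {
        "AVNN":  nn.mean(),                          # mean NN interval
        "SDNN":  nn.std(ddof=1),                     # SD of NN intervals
        "rMSSD": np.sqrt(np.mean(diffs ** 2)),       # RMS of successive differences
        "pNN20": np.mean(np.abs(diffs) > 20) * 100,  # % of successive diffs > 20 ms
    }

# Roughly 50 s of synthetic NN intervals around 800 ms.
rng = np.random.default_rng(1)
nn_window = 800 + rng.normal(0, 30, size=60)
print(hrv_metrics(nn_window))
```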
Conference Paper
With the wide distribution of smart wearables, it seems as though ubiquitous healthcare can finally permeate our everyday lives, opening the possibility of realizing clinical-grade applications. However, given that clinical applications require reliable sensing, there is a need to understand how accurate healthcare sensors on wearable devices (e.g., heart rate sensors) are. To answer this question, this work starts with a thorough investigation of the accuracy of widely used wearable devices' heart rate sensors. Specifically, we show that when the user is actively moving, heart rate readings can diverge far from the ground truth, and that such inaccuracies cannot be easily correlated with, nor predicted from, accelerometer and gyroscope measurements. Rather, we point out that the light intensity readings at the photoplethysmography (PPG) sensor can be an effective indicator of heart rate accuracy. Using a Viterbi algorithm-based hidden Markov model, we show that it is possible to design a filter that allows smartwatches to self-classify measurement quality with ~98% accuracy. Given that such capabilities allow the smartwatch to internally filter misleading values from being used as application input, we foresee this as an essential step in catalyzing novel clinical-grade wearable applications.
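A minimal sketch of the idea of self-classifying measurement quality with a two-state hidden Markov model decoded by the Viterbi algorithm. The states, observation bins and probabilities below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

states = ["reliable", "unreliable"]
start_log = np.log([0.8, 0.2])
trans_log = np.log([[0.9, 0.1],     # reliable   -> reliable / unreliable
                    [0.2, 0.8]])    # unreliable -> reliable / unreliable
# Emission: probability of observing a 'low' or 'high' PPG light-intensity bin.
emit_log = np.log([[0.7, 0.3],      # reliable readings mostly in the low bin
                   [0.2, 0.8]])     # unreliable readings mostly in the high bin

def viterbi(obs):
    """Return the most likely state sequence for a sequence of observation bins."""
    v = start_log + emit_log[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans_log          # scores[i, j]: reach state j via i
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) + emit_log[:, o]
    path = [int(v.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

# 0 = low light-intensity bin, 1 = high light-intensity bin.
print(viterbi([0, 0, 1, 1, 1, 0]))
```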
Article
Despite their enhanced marketplace visibility, evidence for the validity of wearable photoplethysmographic heart rate monitors is scarce. Forty-seven healthy participants performed seven 6-min exercise bouts and completed a validated skin type scale. Participants wore an Omron HR500U (OHR) and a Mio Alpha (MA), two commercial wearable photoplethysmographic heart rate monitors. Data were compared to a Polar RS800CX (PRS). Means and error were calculated between devices using minutes 2-5. Compared to the PRS, MA data were significantly different during walking and biking (2.41 ± 3.99 bpm and 3.26 ± 11.38 bpm, p < 0.05) and weight lifting (23.30 ± 31.94 bpm, p < 0.01). The OHR differed from the PRS during walking (4.95 ± 7.53 bpm, p < 0.05) and weight lifting (4.67 ± 8.95 bpm, p < 0.05). The MA during the elliptical, stair climbing and biking conditions demonstrated a strong correlation between jogging speed and error (r = 0.55, p < 0.0001), and showed differences in participants with less photosensitive skin.
Article
Wearable sensors can provide continuous biosignal measurements, which systems can use to infer psychological stress arousal. The authors deploy such sensors to monitor a public speaker, an on-stage musician, an Olympic ski jumper, and people during everyday life, quantifying stress arousal in varying contexts. Stress-arousal monitoring can use various human biosignals. Researchers have used sensors attached to or in close proximity to the body. Stress-arousal assistant systems can help assess and counteract the downsides of stress arousal. Speaking in front of an audience is a stressful situation for many people and can impair oral fluency and information recall. Options for assessing stress arousal during an on-stage performance have been limited to retrospective self-report and expert observation. Using sensor-based measurements to complement subjective impressions can help researchers better understand how stage fright can affect performance quality so they can develop coping strategies.
Article
A survey revealed that researchers still seem to encounter difficulties in coping with outliers. Detecting outliers by determining an interval spanning the mean plus/minus three standard deviations remains a common practice. However, since both the mean and the standard deviation are particularly sensitive to outliers, this method is problematic. We highlight the disadvantages of this method and present the median absolute deviation, an alternative and more robust measure of dispersion that is easy to implement. We also explain the procedures for calculating this indicator in SPSS and R software.
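A minimal sketch contrasting the mean ± 3 SD rule with the median absolute deviation (MAD) criterion recommended above; the data values are made up.

```python
import numpy as np

x = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 35.0])  # one obvious outlier

# Classical rule: mean ± 3 standard deviations. Both statistics are inflated
# by the outlier itself, so the outlier can escape detection.
classic_outliers = np.abs(x - x.mean()) > 3 * x.std(ddof=1)

# Robust rule: |x - median| > 3 * MAD, with MAD scaled by 1.4826 so it is
# consistent with the standard deviation under normality.
mad = 1.4826 * np.median(np.abs(x - np.median(x)))
robust_outliers = np.abs(x - np.median(x)) > 3 * mad

print("mean ± 3 SD flags:  ", x[classic_outliers])   # misses the outlier here
print("median ± 3 MAD flags:", x[robust_outliers])   # catches the outlier
```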
Article
This study presents heart rate (HR) estimation using a wrist-type photoplethysmography (PPG) sensor while the subject is running. We propose an algorithm to estimate heart rate from the wrist-type PPG sensor. Since body motion artifacts easily affect the arm, our method also uses the accelerometer built into the wrist-type sensor to improve the accuracy of heart rate estimation. Our method has two components. One rejects artifacts using the difference between the PPG and acceleration power spectra obtained by frequency analysis. The other assesses the reliability of the heart rate estimate, defined from the acceleration. Experimental results while our test subjects were running closely matched the Holter electrocardiogram (ECG) reference (r = 0.98, SD = 8.7 bpm). We therefore report a heart rate estimation method with a higher degree of usability than existing ECG-based methods.
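A minimal sketch of the spectral idea described above: frequency bins that are strong in the accelerometer spectrum are treated as motion artifacts and masked out of the PPG spectrum before picking the heart-rate peak. The signals, sampling rate and thresholds are synthetic illustrations, not the paper's parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 50.0                       # Hz, assumed sensor sampling rate
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.8 * np.sin(2 * np.pi * 2.5 * t)  # 90 bpm HR + cadence artifact
acc = np.sin(2 * np.pi * 2.5 * t)                                       # arm swing at 2.5 Hz

f, p_ppg = welch(ppg, fs=fs, nperseg=1024)
_, p_acc = welch(acc, fs=fs, nperseg=1024)

band = (f >= 0.7) & (f <= 3.5)          # plausible HR range: 42-210 bpm
motion = p_acc > 0.2 * p_acc.max()      # acceleration-dominated bins
candidate = band & ~motion              # keep PPG bins not explained by motion

hr_hz = f[candidate][np.argmax(p_ppg[candidate])]
print(f"estimated heart rate: {hr_hz * 60:.0f} bpm")
```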
Article
Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with a low sampling frequency are proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. The results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with a lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improved the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, a higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data.
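A minimal sketch of upsampling a low-sampling-frequency ECG segment with the two interpolation schemes named above, PCHIP and cubic spline, using SciPy; the ECG samples and sampling rates are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

fs_low, fs_high = 128, 512                     # assumed original and target rates (Hz)
t_low = np.arange(0, 1, 1 / fs_low)
ecg_low = np.sin(2 * np.pi * 1.2 * t_low) + 0.1 * np.sin(2 * np.pi * 15 * t_low)

t_high = np.arange(0, t_low[-1], 1 / fs_high)  # stay inside the original time range
ecg_pchip = PchipInterpolator(t_low, ecg_low)(t_high)
ecg_spline = CubicSpline(t_low, ecg_low)(t_high)

print(ecg_pchip.shape, ecg_spline.shape)       # upsampled segments, ready for matching
```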
Article
Concerns regarding certain fMRI data analysis practices have recently evoked lively debate. The principal concern regards the issue of non-independence, in which an initial statistical test is followed by further non-independent statistical tests. In this report, we propose a simple, practical solution to reduce bias in secondary tests due to non-independence using a leave-one-subject-out (LOSO) approach. We provide examples of this method, show how it reduces effect size inflation, and suggest that it can serve as a functional localizer when within-subject methods are impractical.
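A minimal sketch of leave-one-subject-out (LOSO) evaluation, here expressed with scikit-learn's LeaveOneGroupOut where each group is one subject; the data and model are synthetic illustrations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, per_subject = 6, 20
X = rng.normal(size=(n_subjects * per_subject, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=len(X)) > 0).astype(int)
groups = np.repeat(np.arange(n_subjects), per_subject)   # subject ID per sample

# Each fold trains on all but one subject and tests on the held-out subject.
scores = cross_val_score(LogisticRegression(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("per-held-out-subject accuracy:", np.round(scores, 2))
```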