Conference Paper

RF-Care: Device-Free Posture Recognition for Elderly People Using A Passive RFID Tag Array


Abstract

Activity recognition is a fundamental research topic for a wide range of important applications such as fall detection for elderly people. Existing techniques mainly rely on wearable sensors, which may not be reliable and practical in real-world situations since people often forget to wear them. For this reason, device-free activity recognition has gained popularity in recent years. In this paper, we propose an RFID (radio frequency identification) based, device-free posture recognition system. More specifically, we analyze Received Signal Strength Indicator (RSSI) signal patterns from an RFID tag array, and systematically examine the impact of tag configuration on system performance. On top of the selected optimal subset of tags, we study the challenges of posture recognition. Apart from exploring posture classification, we further propose to infer posture transitions via a Dirichlet Process Gaussian Mixture Model (DPGMM) based Hidden Markov Model (HMM), which effectively captures the uncertainty caused by signal strength variations during posture transitions. We run a pilot study to evaluate our system with 12 orientation-sensitive postures and a series of posture change sequences, and conduct extensive experiments in both lab and real-life home environments. The results demonstrate that our system achieves high accuracy in both environments and holds the potential to support assisted living of elderly people.
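The DPGMM-based HMM described in the abstract ultimately decodes a most-likely posture sequence from noisy RSSI observations. As a rough, self-contained illustration of just the HMM decoding step (not the authors' implementation; the discrete emission table below is a made-up stand-in for DPGMM likelihoods), a log-space Viterbi pass looks like:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence under an HMM (log-space)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at time t.
            prev, lp = max(((p, V[t - 1][p] + math.log(trans_p[p][s]))
                            for p in states), key=lambda x: x[1])
            V[t][s] = lp + math.log(emit_p[s][obs[t]])
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy two-posture model: sticky transitions smooth noisy observations.
states = ("sit", "stand")
start = {"sit": 0.5, "stand": 0.5}
trans = {"sit": {"sit": 0.8, "stand": 0.2}, "stand": {"sit": 0.2, "stand": 0.8}}
emit = {"sit": {"low": 0.9, "high": 0.1}, "stand": {"low": 0.1, "high": 0.9}}
decoded = viterbi(["low", "low", "high", "high"], states, start, trans, emit)
```

In the paper's setting, the discrete `emit_p` table would be replaced by continuous DPGMM densities over RSSI vectors; everything above that substitution is hypothetical.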


... It is often severely affected by the propagation environment, the tagged object properties, or human movements in the signal coverage area. Moreover, the signal strength of a passive RFID tag is uncertain and non-linear [18], [30]. As shown in Figure 2 (a), the RSSI variations cannot be easily fitted using generic linear and polynomial regressions since the fitting residuals are quite large. ...
... In this way, we may perform more robust activity recognition with the collected full spectrum of RSSI variations. More technical details can be found in our previous work [30]. ...
... To the best of our knowledge, our work is among the first to investigate dictionary-based sparse representation in human activity recognition by learning from signal strength streams. Compared to our previous work in [30], [49], we further develop the dictionary-based sparse learning algorithm for constructing activity dictionaries, and explore multiple strategies for using the learned sparse coefficients of dictionaries under the person-independent scenario. Moreover, we have conducted extensive and thorough evaluations in both person-independent and person-dependent scenarios. ...
Article
Full-text available
Understanding and recognizing human activities is a fundamental research topic for a wide range of important applications such as fall detection and remote health monitoring and intervention. Despite active research in human activity recognition over the past years, existing approaches based on computer vision or wearable sensor technologies present several significant issues such as privacy (e.g., using video cameras to monitor the elderly at home) and practicality (e.g., not possible for an older person with dementia to remember wearing devices). In this paper, we present a low-cost, unobtrusive and robust system that supports independent living of older people. The system interprets what a person is doing by deciphering signal fluctuations using radio-frequency identification (RFID) technology and machine learning algorithms. To deal with noisy, streaming, and unstable RFID signals, we develop a compressive sensing dictionary-based approach that can learn a set of compact and informative dictionaries of activities using an unsupervised subspace decomposition. In particular, we devise a number of approaches to explore the properties of the sparse coefficients of the learned dictionaries to fully utilize the embodied discriminative information for the activity recognition task. Our approach achieves efficient and robust activity recognition via a more compact and robust representation of activities. Extensive experiments conducted in a real-life residential environment demonstrate that our proposed system offers a good overall performance and shows the promising practical potential to underpin the applications for the independent living of the elderly.
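The dictionary-based idea above, representing an activity window as a sparse combination of learned atoms, can be sketched with a toy greedy matching-pursuit coder (pure Python, unit-norm atoms assumed; the paper's unsupervised subspace decomposition and the downstream classifier are not reproduced here):

```python
def matching_pursuit(x, atoms, n_nonzero=2):
    """Greedy sparse coding over a dictionary of unit-norm atoms (rows):
    repeatedly pick the atom most correlated with the residual,
    accumulate its coefficient, and subtract its contribution."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    residual = list(x)
    coef = [0.0] * len(atoms)
    for _ in range(n_nonzero):
        scores = [dot(a, residual) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coef[k] += scores[k]
        residual = [r - scores[k] * a_i for r, a_i in zip(residual, atoms[k])]
    return coef
```

A window would then be labeled by whichever activity's atoms carry the most coefficient energy; that voting step is one of several strategies a sparse-coefficient classifier might use.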
... We compare our work to the RF-Care system [41], which uses RSSI to recognize human activities. In RF-Care system, the user performs activities in front of an RFID tag array deployed as a square. ...
... In our experiment, we use the collected RSSIs and implement the RF-Care algorithms. All parameters in RF-Care are configured and optimized according to [41] to achieve its best performance in our experiments. In the TACT system, we apply all the techniques in this paper. ...
... In contrast, our TACT system does not need extra devices and can monitor human activities within multiple meters sensing range. RF-Care [41] is a pioneering work which aims to build a device-free posture recognition for elderly people using passive RFID arrays. This work detects human activities by analyzing the RSSI patterns from an RFID tag array. ...
Article
Wireless sensing techniques for tracking human activities have been vigorously developed in recent years. Yet current RFID based human activity recognition techniques need either direct contact with the human body (e.g., attaching RFIDs to users) or specialized hardware (e.g., software defined radios, antenna arrays). How to wirelessly track human activities using commodity RFID systems without attaching tags to users (i.e., a contact-free scenario) still faces many technical challenges. In this paper, we quantify the correlation between RF phase values and human activities by modeling intrinsic characteristics of signal reflection in contact-free scenarios. Based on the signal reflection model, we introduce TACT, which can recognize human activities using commodity RFIDs without attaching any RFID tags to users. TACT first reliably detects the presence of human activities and segments phase values. Then, candidate phase segments are classified according to their coarse-grained features (e.g., moving speed, moving distance, activity duration) as well as their fine-grained phase-waveform feature. We deploy and leverage multiple tags to increase the coverage and enhance the robustness of the system. We implement TACT with commodity RFID systems. We invite 12 participants to evaluate our system in various scenarios. The experiment results show that TACT can recognize eight types of human activities with 93.5% precision under different and challenging experiment settings.
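TACT first detects activity presence and segments phase values. A minimal stand-in for that detection step is a sliding-window variance test over (already unwrapped) phase readings; the window length and threshold below are arbitrary illustrative values, not TACT's:

```python
def segment_activity(phases, win=4, var_thresh=0.05):
    """Flag sliding windows whose phase variance exceeds a threshold,
    indicating motion near the tag (phase values assumed unwrapped)."""
    flags = []
    for i in range(0, len(phases) - win + 1):
        w = phases[i:i + win]
        mean = sum(w) / win
        var = sum((v - mean) ** 2 for v in w) / win
        flags.append(var > var_thresh)
    return flags
```

Contiguous runs of flagged windows would then form the candidate segments that the coarse- and fine-grained classifiers consume.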
... [3][4][5][6][7] The environmental variable analysis method uses one or more sensors to detect environmental changes in a certain space to collect information about the body, so as to determine whether a fall has occurred. Commonly used sensors include infrared sensors,8 audio sensors,9,10 vibration sensors,11,12 radio frequency (RF) signals,13,14 and so on. However, there are some problems with the previous methods, such as a limited monitoring region, privacy exposure, cost inefficiency, and vulnerability to the environment. ...
... When the system sends an alarm message, the monitoring mobile phone receives a short message including the fall posture and the location. After the experiments are completed, the test results are presented in Tables 2 and 3. Among ADLs, walking is easily confused with fall postures and is widely considered in other literature [3]-[20] when proposing fall detection systems. The accuracy for detecting walking in this study is 98%. ...
Article
Full-text available
Accidental falls are a major risk for the elderly, especially when unsupervised. It is necessary to monitor fall postures of the elderly in real time. This paper proposes a fall posture identification scheme with wearable sensors, including an MPU6050 and flexible graphene/rubber sensors. The MPU6050 is located at the waist to monitor the attitude of the body with a triaxial accelerometer and gyroscope. The graphene/rubber sensors are located at the knees to monitor the moving actions of the legs. A real-time fall posture identification algorithm is proposed by integrating the triaxial accelerometer, tilt angles, and the bending angles from the graphene/rubber sensors. A volunteer is engaged to emulate elderly physical behaviors in performing four activities of daily living and six fall postures. Four basic fall postures can be identified with the MPU6050 alone; integrated with the graphene/rubber sensors, two more fall postures are correctly identified by the proposed scheme. Test results show that the accuracy for activities of daily living detection is 93.5% and that for fall posture identification is 90%. After a fall posture is identified, the proposed system transmits it to the smart phone carried by the elderly person via Bluetooth. Finally, the posture and location are transmitted to the specified mobile phone by short message.
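The scheme's core decision, an impact spike from the accelerometer combined with a lying tilt angle, can be sketched as a simple two-threshold rule. The thresholds below are illustrative placeholders, not the paper's calibrated values:

```python
import math

def detect_fall(ax, ay, az, tilt_deg, impact_g=2.5, tilt_thresh=60.0):
    """Flag a fall when the acceleration magnitude (in g) shows an
    impact spike AND the trunk tilt angle indicates a lying orientation."""
    magnitude = math.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    return magnitude > impact_g and tilt_deg > tilt_thresh
```

A full system like the one above would additionally fuse the knee-bend angles from the graphene/rubber sensors to separate the harder-to-distinguish fall postures.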
... The RSSIs of multipath signals are depicted in Figure 5. In [13], an RFID-based device-free posture recognition system is proposed. For recognizing human activities, it deploys passive RFID tags as an array in the environment. ...
... For instance, if two tags are put at a certain distance, significant signal loss or fading can occur. For this reason, and to achieve the best performance, an investigation is performed in [13] for tag placement and selection to determine an optimal tag array. ...
Article
Full-text available
Human Activity Recognition (HAR) has attracted much attention in the last two decades with applications such as remote health monitoring, security and surveillance, and smart environments. Specifically, for well-being assessment, HAR systems give us the possibility of recognizing important physical activities in the patient's daily living. For instance, using motion sensors to monitor and record the physical situations and postures of patients with chronic conditions such as arthritis and cardiovascular disease which cause limitations in mobility can be useful in behavior assessment [1]. These physical records, especially for people with disabilities or elderly people, provide caregivers with useful information for treatment. Another example is fall detection that could notify caregivers instantly when a person falls. Today, many types of sensors are used for human activity recognition, including vision-based, wearable, object-tagged and device-free. In this article, we focus on device-free sensors and give an overview of their applications in HAR for well-being assessment in smart homes, including examples from the existing literature and our own test results for simple activity recognition with some of these sensors. We will see that device-free sensors are used predominantly for fall detection, although other well-being applications such as cognitive assessment, respiration monitoring, and dementia detection have emerged as well. Let us begin by looking at various types of HAR sensors and identifying the characteristics of each of them.
... This is because high-resolution camera-based systems are considered privacy-invasive and wearable devices are not practical for long-term application [16]. The most preferable option for indoor posture/fall recognition is to use device-free and non-privacy-invasive systems that employ infrared sensors, radio signal strength, etc.; however, there is little research [17]-[19] on posture or fall recognition using device-free, non-privacy-invasive sensors. Indeed, device-free non-privacy-invasive sensing technology is quite immature and needs to be developed. ...
... Yao et al. [19] have proposed a device-free unobtrusive posture recognition system using a passive RFID tag array and RFID antenna. The system recognizes the postures by analyzing RSSI signal patterns of nine RFID tags in the array that are received at RFID antenna while the person is between the RFID tag array and antenna. ...
... Different from the above work, HeadSee realizes passive head gesture recognition through commercial RFID, which does not require special SDR hardware. There is also much existing sensing and recognition work based on commercial RFID equipment [11,17,20,30,31,34,37,38,42,43,47]. RF-ECG [34] deploys a tag array on the human chest to perform HRV monitoring on human subjects. ...
... This work focuses on tracking hands without discussing the influence of other disturbance sources. RF-Care [43] analyzes Received Signal Strength Indicator (RSSI) signal patterns from an RFID tag array and proposes a Dirichlet Process Gaussian Mixture Model (DPGMM) to detect postures. Different from HeadSee, RF-Care performs body posture recognition, whereas HeadSee targets head gestures; thus, for HeadSee, reflection signals from other parts of the body also interfere with recognition. ...
Article
Full-text available
Research shows that head posture not only contains important interpersonal information but is also an external manifestation of human psychological activity. Head posture plays an important role in automotive safety, smart homes, and other intelligent environments. RF-based posture recognition provides a non-contact, privacy-protecting method to detect and monitor human activities. However, separating weak activity-state information from reflected signals has been a big challenge for this kind of method. This paper proposes HeadSee, a passive human head gesture sensing system built on a cheap commodity RFID device. Without attaching any device to the human body, HeadSee uses ICA to extract the weak RF signals reflected from the human body for gesture sensing. HeadSee then carefully models head movement by utilizing the signal's phase/RSS (received signal strength) changes and successfully quantifies the head gesture with continuous sequences of movement states. Extensive experiments show that even with interfering movements from other body parts, HeadSee can still achieve around 91% recognition accuracy for head gestures.
... Recently, researchers have also paid attention to employing infrared devices [9], radars [10] and RF-based devices [11], [12], [13]. But infrared equipment and radar require expensive hardware, leading to high cost. ...
... RSS Based: RSS-based recognition [11], [42], [43], [44], [45] mainly uses changes in wireless signals caused by people's activities. To some degree, it can only recognize small-scale activities like gestures; when an activity covers a broad range of motion, these methods become unavailable, especially for running and walking. ...
Article
Activity recognition is important for caring for patients and the elderly, especially in e-Health. Activity recognition systems that do not require the user to carry any wearable device are widely useful in daily life. Current methods using uneconomical or even dedicated devices lead to cost-inefficiency for large-scale deployments. This paper introduces R&P, a device-free activity recognition system using only cheap RFID tags. Based on the analysis of RFID signals, we extract RSS fingerprints and phase fingerprints for each activity and combine these two kinds of fingerprints to accurately recognize activities. Moreover, we modify the DTW algorithm and propose the T-DTW method to improve recognition efficiency. We use commercial passive RFID hardware and verify R&P in three different environments with different targets and six activities. The results demonstrate that our solution can recognize activities with an average accuracy of 87.9%.
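The T-DTW method builds on the classic dynamic time warping distance between two fingerprint sequences. The baseline DTW recurrence (without the paper's T-DTW speed-up) is:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two 1-D sequences, using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match.
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]
```

An unknown RSS/phase fingerprint would be labeled with the activity whose template minimizes this distance; T-DTW's modifications for efficiency are not shown.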
... In this paper, we design HOI-Loc, an RFID-based device-free localization system that achieves high accuracy in cluttered living environments using Human-Object Interaction (HOI) events. With the rapid expansion of smart devices, we can easily access, retrieve and monitor HOI events in our daily lives [9], [10], [11]. We also observe that such HOI events can serve as a coarse-grained location indicator. ...
... Non-intrusive sensors such as laser, pyroelectric infrared (PIR) and pressure sensors are the most preferable options for indoor human recognition systems [5], [7]. However, some challenges remain unsolved for distributed-sensor-based sensing systems, for example: 1) limited classification accuracy due to low sensing resolution; 2) less-effective feature extraction methods for raw sensor data. ...
Conference Paper
Due to the rapid development of sensing technology and the increasing proportion of the elderly population, many research activities have been performed to develop human motion detection and recognition systems. Various camera and wearable sensor-based human recognition systems have been developed; however, they are either not privacy-protective or not practical for long-term monitoring. In this paper, we present a non-intrusive indoor human recognition system using distributed sensors and a Genetic Algorithm (GA) based neural network. Pyroelectric infrared (PIR) sensors are chosen, using masks with random sampling windows, to sense human body thermal variations. Time-domain statistical features are extracted to train the classification algorithm in order to recognize human motion. A total of 200 samples are collected from volunteers performing two actions, i.e., walking normally and abnormally. A number of classification algorithms have been trained to recognize human motion. The outcome indicates that the QFAM-GA method outperforms other state-of-the-art methods, such as KNN, SVM, CART, NB and Fuzzy Min-Max.
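The time-domain statistical features mentioned above are simple per-window statistics; a minimal sketch is below (the actual feature set fed to QFAM-GA may differ from these three):

```python
def time_domain_features(window):
    """Mean, standard deviation, and peak-to-peak range of one sensor
    window -- the kind of statistics typically fed to a classifier."""
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    return {"mean": mean, "std": var ** 0.5, "range": max(window) - min(window)}
```

Each PIR channel would contribute one such feature vector per window, and the concatenation forms the classifier input.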
... By attaching a cheap RFID tag to the operator's finger, Wang et al. [30] designed an antenna array consisting of eight antennas, used the Angle of Arrival (AoA) of the RF signal to locate the finger's position, and recognized gestures via trajectory matching. In contrast, RF-Care [31] obtains gestures via a tag array. Alqaness et al. [10] designed a fast Dynamic Time Warping (DTW) algorithm to identify different gestures from received WiFi signals. ...
Article
As a universal interaction method, hand and finger gestures can express people's intentions directly and clearly in daily life, and have become one of the hotspots in the human-computer interaction community. In this paper, we present DMT, a device-free finger gesture tracking system that can track and recognize finger motion accurately. To achieve this, we transform a mobile device, such as a smart phone, into an active sonar system by establishing inaudible audio links between the built-in speakers and microphone. The finger motion has an effect (e.g., a Doppler shift) on the audio signal, which makes it possible to track finger motion from the received signal characteristics at the microphone. Due to the small reflection energy and slow moving speed of a finger, existing methods cannot detect the Doppler shift accurately. To this end, a Fourier-fitting based method is proposed in DMT to accurately detect the Doppler shift. With the detected Doppler shift, DMT can track finger motion with high accuracy. DMT supports all kinds of finger gesture interaction, including characters and shapes. Extensive experiments demonstrate the high accuracy and robustness of DMT in dynamic environments.
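Once a Doppler shift has been estimated, converting it to radial reflector speed for an active sonar is a one-line formula, since the round-trip reflection doubles the shift: v = f_d * c / (2 * f0). A sketch with an assumed 20 kHz carrier (DMT's actual carrier frequency is not stated in this snippet):

```python
def doppler_speed(freq_shift_hz, carrier_hz=20000.0, sound_speed=343.0):
    """Radial speed (m/s) of a reflector from the observed Doppler shift
    of an inaudible tone; the reflection path doubles the shift."""
    return freq_shift_hz * sound_speed / (2.0 * carrier_hz)
```

For example, a 40 Hz shift at a 20 kHz carrier corresponds to roughly 0.34 m/s, which is in the plausible range for a moving finger.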
... Feature extraction: Different from common activity recognition systems which need either to train the model [29] or to prestore the signal profiles [22], we come up with an audacious idea for Tag-Controller: Why not design a system with no need for extensive preliminary work? Fortunately, benefiting from meticulous designs of actions and tag placement, there are indeed some action-specific features which exclusively belong to the corresponding actions. ...
Conference Paper
Innovative Human Machine Interface technologies are fundamentally reshaping the way people live, entertain and work. Passive RFID tags, benefiting from its wireless, inexpensive and battery-free sensing ability, are gradually being applied in new-style interaction interfaces, ranging from virtual touch screen to 3D mouse. This paper presents TagController, a universal wireless and battery-free remote controller with two types of interactive actions. The key insight is that the fine-grained phase information extracted from RF signals is capable of perceiving various actions. TagController can recognize 10 actions without any training or prestored profiles by executing a sequence of functional components, i.e. preprocessor, action detector and action recognizer. We have implemented TagController with COTS RFID devices and conducted substantial experiments in different scenarios. The results demonstrate that TagController can achieve an average recognition accuracy of 95.8% and 94.3% in the scenarios of one and two remote controllers, respectively, which promises its feasibility and robustness.
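Phase reported by RFID readers wraps modulo 2π, so systems like TagController must unwrap it before extracting fine-grained features. A minimal unwrapping pass (a generic sketch, not TagController's actual pipeline) is:

```python
import math

def unwrap_phase(phases):
    """Remove 2*pi jumps from a reported phase sequence so that
    motion-induced phase change becomes a continuous curve."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        # Fold each step into (-pi, pi] by adding/removing full turns.
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        out.append(out[-1] + d)
    return out
```

After unwrapping, action-specific features such as the direction and span of the phase curve can be read off directly, which is what enables training-free recognition.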
... In the past decade, many solutions have been proposed for posture recognition using the device-free approach. RF-Care [47] proposed a device-free solution for posture recognition based on RFID technology. Passive RFID tag arrays are placed in the environment to capture activity information. ...
Preprint
Full-text available
Human activity recognition has gained importance in recent years due to its applications in various fields such as health, security and surveillance, entertainment, and intelligent environments. A significant amount of work has been done on human activity recognition and researchers have leveraged different approaches, such as wearable, object-tagged, and device-free, to recognize human activities. In this article, we present a comprehensive survey of the work conducted over the period 2010-2018 in various areas of human activity recognition, with a main focus on device-free solutions. The device-free approach is becoming very popular due to the fact that the subject is not required to carry anything; instead, the environment is tagged with devices to capture the required information. We propose a new taxonomy for categorizing the research work conducted in the field of activity recognition and divide the existing literature into three sub-areas: action-based, motion-based, and interaction-based. We further divide these areas into ten different sub-topics and present the latest research work in these sub-topics. Unlike previous surveys which focus only on one type of activity, to the best of our knowledge, we cover all the sub-areas in activity recognition and provide a comparison of the latest research work in these sub-areas. Specifically, we discuss the key attributes and design approaches for the work presented. Then we provide extensive analysis based on 10 important metrics, to give the reader a complete overview of the state-of-the-art techniques and trends in different sub-areas of human activity recognition. In the end, we discuss open research issues and provide future research directions in the field of human activity recognition.
... At present, cameras [9][10][11] and Kinect [12,13] are the two most popular types of visual devices used to detect falls; environmental sensors generally use vibration [14], audio [15,16], or radio frequency technologies (such as RFID [17]) for fall detection. For example, the work in [18,19] analyzes the signal strength of RFID devices to recognize activities of elderly people. ...
Article
Full-text available
Human fall detection has attracted broad attention as sensors and mobile devices are increasingly adopted in real-life scenarios such as smart homes. The complexity of activities in home environments poses severe challenges to fall detection research with respect to detection accuracy. We propose a collaborative detection platform that combines two subsystems: a threshold-based fall detection subsystem using mobile phones and a support vector machine (SVM)-based fall detection subsystem using Kinects. Both subsystems have their respective confidence models, and the platform detects falls by fusing the data of both subsystems using two methods: logical rules and D-S evidence fusion theory. We have validated the two confidence models based on the mobile phone and the Kinect, which achieve accuracies of 84.17% and 97.08%, respectively. Our collaborative fall detection approach achieves a best accuracy of 100%.
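D-S evidence fusion of the two subsystems' confidences can be illustrated with Dempster's rule for two mass functions over the two singleton hypotheses {fall, no_fall}. Real D-S models usually also assign mass to the universal set (ignorance); this sketch omits that and is not the paper's exact formulation:

```python
def ds_combine(m1, m2):
    """Dempster's rule for two mass functions over {'fall', 'no_fall'}.
    With singleton hypotheses only, conflict is the mass assigned to
    opposing labels; the agreeing mass is renormalized by 1 - conflict."""
    hyps = ("fall", "no_fall")
    raw = {h: m1[h] * m2[h] for h in hyps}
    conflict = m1["fall"] * m2["no_fall"] + m1["no_fall"] * m2["fall"]
    k = 1.0 - conflict  # assumes the sources do not fully contradict
    return {h: raw[h] / k for h in hyps}
```

When both detectors lean toward "fall", the combined belief exceeds either individual one, which is what makes the collaborative platform more accurate than its parts.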
... Most importantly, they all require regular maintenance such as battery replacement, thus hindering their practical deployment in the real world (Yang et al., 2013, 2015a). In this regard, device-free tracking systems built on COTS (commercial off-the-shelf) passive RFID tags are more promising in terms of deployment convenience (commercialized products without any hardware or firmware modification), maintenance effort (no batteries needed, purely harvesting the in-air backscattered energy) and cost efficiency (≈5 cents each, still dropping quickly) (Han et al., 2015; Ruan et al., 2014; Yao et al., 2015; Ruan, 2016). As a result, in this paper, we design a DfP system that can unobtrusively localize and track a subject to high accuracy based on pure passive RFID tags. ...
Article
Localizing and tracking human movement in a device-free and passive manner is promising in two respects: i) it does not require users to wear any sensors or devices, and ii) it does not need them to consciously cooperate during localization. Such indoor localization techniques underpin many real-world applications such as shopping navigation, intruder detection, and surveillance care of seniors. However, current passive localization techniques either need expensive or sophisticated hardware such as ultra-wideband radar or infrared sensors, raise privacy concerns as camera-based techniques do, or need regular maintenance such as battery replacement. In this paper, we build a novel data-driven localization and tracking system upon a set of commercial ultra-high-frequency passive radio-frequency identification tags in an indoor environment. Specifically, we formulate the human localization problem as finding the location with the maximum posterior probability given the received signal strength indicators observed from the passive radio-frequency identification tags. In this regard, we design a series of localization schemes to capture the posterior probability by taking advantage of supervised learning models including the Gaussian Mixture Model, k Nearest Neighbor and kernel-based learning. For tracking a moving target, we mathematically model the task as searching for the location sequence with the highest likelihood, in which we first augment the probabilistic estimation learned in localization to construct the emission matrix and propose two human mobility models to approximate the transition matrix in the Hidden Markov Model. The proposed tracking model not only transfers the patterns learned in localization into tracking but also reduces the location-state candidates at each transition iteration, which increases both computation efficiency and tracking accuracy.
The extensive experiments in two real-world scenarios reveal that our approach can achieve up to 94% localization accuracy and an average 0.64 m tracking error, outperforming other state-of-the-art radio-frequency identification based indoor localization systems.
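The maximum-posterior formulation above can be sketched with independent per-tag Gaussian RSSI likelihoods per candidate cell, a naive-Bayes-style stand-in for the paper's GMM/kNN/kernel schemes (the cell names and model values below are made up for illustration):

```python
import math

def locate(rssi, model, prior=None):
    """MAP location estimate: argmax over candidate cells of
    log prior + sum of per-tag Gaussian log-likelihoods.
    model: {cell: [(mu, sigma) per tag]}, rssi: one reading per tag."""
    best, best_lp = None, float("-inf")
    for cell, stats in model.items():
        lp = math.log(prior[cell]) if prior else 0.0
        for r, (mu, sigma) in zip(rssi, stats):
            lp += -0.5 * ((r - mu) / sigma) ** 2 - math.log(sigma)
        if lp > best_lp:
            best, best_lp = cell, lp
    return best
```

For tracking, these per-cell posteriors would feed the HMM's emission matrix, with a mobility model supplying the transition matrix.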
... RFID sensing has drawn increasing attention recently, and has been used for drone navigation [7], [8], gesture recognition [9], [10], breathing monitoring [11], [12], temperature sensing [13], and localization [14]. Since RFID is a near-field communication technique, interference from passengers or the surroundings of the vehicle can hardly affect the sensing performance. ...
... These technologies have the computing ability to support a wide range of smart home applications that may enable people with AD to live independently for a longer time, thus reducing the costs associated with institutional care, particularly in the early stages. Most currently available smart home applications focus on monitoring the elderly with cognitive disabilities by providing intelligent ambient environments that can detect accidents or symptoms [3,6,16,20,26,28]. However, less attention has been paid to developing smart applications that support Alzheimer's patients by recommending the sequences of tasks needed to complete activities of daily living. ...
Chapter
Full-text available
Alzheimer’s disease (AD) affects large numbers of elderly people worldwide and represents a significant social and economic burden on society, particularly in relation to the need for long term care facilities. These costs can be reduced by enabling people with AD to live independently at home for a longer time. The use of recommendation systems for the Internet of Things (IoT) in the context of smart homes can contribute to this goal. In this paper, we present the Reminder Care System (RCS), a research prototype of a recommendation system for the IoT for elderly people with cognitive disabilities. RCS exploits daily activities that are captured and learned from IoT devices to provide personalised recommendations. The experimental results indicate that RCS can inform the development of real-world IoT applications.
... Several studies have used DCNN to analyze non-vision sensor signals such as inertial sensors [15]-[18], PIR sensors [19]-[21], ECG [22]-[24], and Doppler effect sensors [25] after the non-vision signals are converted into binary or RGB images for single-person activity recognition. However, the wearable device-based approach may not be an ultimate solution for real-life monitoring due to its inherent weaknesses: the devices require regular maintenance, are uncomfortable to wear, and can easily be lost or forgotten [26]-[28]. Moreover, most of the studies were aimed at single-resident activity recognition [29] in a smart home. ...
Article
Full-text available
In the last decade, unobtrusive (device-free and non-privacy-invasive) recognition of activities of daily living (ADLs) for an individual in a smart home has been studied by many researchers. However, the unobtrusive recognition of multi-resident activities in a smart home is hardly studied. We propose a novel RGB activity image-based DCNN classifier for the unobtrusive recognition of multi-resident activities (Bed_to_Toilet, Bed, Breakfast, Lunch, Leave_home, Laundry, Dinner, Night_wandering, R2_work, and R1_medicine) using the Cairo open dataset provided by the CASAS project. The open dataset was collected by environmental sensors (PIR and temperature sensors) in the Cairo testbed while an adult couple with a dog lived there for 55 days. The dataset is preprocessed with activity segmentation, sliding window, and RGB activity image conversion steps. The experimental results demonstrate that our classifier has the highest total accuracy of 95.2% among the previously developed machine learning classifiers that employed the same dataset. Moreover, the proposed RGB activity image was proven to be helpful for increasing the recognition rate. Therefore, we conclude that the proposed DCNN classifier is a useful tool for the unobtrusive recognition of multi-resident activity in a home.
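The RGB activity-image conversion step can be caricatured as rasterizing a window of (time, sensor) events onto a 2-D grid, one row per sensor; the paper's actual RGB encoding is richer than this binary sketch, and the sensor names below are hypothetical:

```python
def to_activity_image(events, sensors, width):
    """Map a window of (time_step, sensor_id) events onto a 2-D grid
    (rows = sensors, cols = time steps): pixel 1 where a sensor fired.
    A toy, single-channel version of the RGB activity-image idea."""
    img = [[0] * width for _ in sensors]
    row = {s: i for i, s in enumerate(sensors)}
    for t, s in events:
        if 0 <= t < width and s in row:
            img[row[s]][t] = 1
    return img
```

Stacking such grids (e.g., one channel per sensor type or per resident) yields image-like tensors a DCNN can classify directly.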
... [19], rehabilitation [32], intelligent assisted living [1,31] and abnormal behavior identification [30] have been stimulating the demand for learning human activities and behaviors. For example, an assistant service can track how completely and consistently an elderly person's daily routines are performed, and determine when assistance is needed (e.g., a fall occurs) [43]. ...
Conference Paper
Full-text available
Despite active research into, and development of, human activity recognition over the decades, existing techniques still have several limitations, in particular poor performance due to insufficient ground-truth data and little support for intra-class variability of activities (i.e., the same activity may be performed in different ways by different individuals, or even by the same individual within different time frames). Aiming to tackle these two issues, in this paper we present a robust activity recognition approach that extracts the intrinsic shared structures of activities to handle intra-class variability, and embeds the approach into a semi-supervised learning framework that utilizes the learned correlations from both labeled and easily-obtained unlabeled data simultaneously. We use l2,1 minimization on both the loss function and the regularizations to effectively resist outliers in noisy sensor data and improve recognition accuracy by discerning underlying commonalities across activities. Extensive experimental evaluations on four community-contributed public datasets indicate that, with few training samples, our proposed approach outperforms a set of classical supervised learning methods as well as recently proposed semi-supervised approaches.
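The l2,1 norm mentioned above is simply the sum of the L2 norms of a matrix's rows: used as a loss it bounds the influence of outlier samples (each sample contributes linearly, not quadratically), and used as a regularizer it drives whole rows of the weight matrix to zero, effecting joint feature selection. A minimal sketch:

```python
import math

def l21_norm(W):
    """l2,1 norm of a matrix W given as a list of rows:
    the sum over rows of each row's Euclidean (L2) norm."""
    return sum(math.sqrt(sum(w * w for w in row)) for row in W)

# rows have L2 norms 5, 0 and 1, so the l2,1 norm is 6
W = [[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]]
value = l21_norm(W)
```

Note the middle row contributes nothing; minimizing this norm therefore favours solutions with entire rows zeroed out.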
... For example, WiVi [6,5] uses the ISAR technique to track the RF beam, enabling through-wall gesture recognition. RF-Care [34,35,27] proposes to recognize human gestures and activities in a device-free manner based on a passive RFID (radio-frequency identification) array. WiSee [25] exploits the Doppler shift in narrow bands of wide-band OFDM (Orthogonal Frequency Division Multiplexing) transmissions to recognize 9 different human gestures. ...
Conference Paper
Hand gesture is becoming an increasingly popular means of interacting with consumer electronic devices, such as mobile phones, tablets and laptops. In this paper, we present AudioGest, a device-free gesture recognition system that can accurately sense in-air hand movement around the user's devices. Compared to the state-of-the-art, AudioGest is superior in using only one pair of built-in speaker and microphone, without any extra hardware or infrastructure support and with no training, to achieve fine-grained hand detection. Our system is able to accurately recognize various hand gestures, estimate the hand's in-air time, as well as its average moving speed and waving range. We achieve this by transforming the device into an active sonar system that transmits an inaudible audio signal and decodes the echoes of the hand at its microphone. We address various challenges including cleaning the noisy reflected sound signal, interpreting the echo spectrogram into hand gestures, decoding the Doppler frequency shifts into the hand's waving speed and range, as well as being robust to environmental motion and signal drifting. We implement the proof-of-concept prototype on three different electronic devices and extensively evaluate the system in four real-world scenarios using 3,900 hand gestures collected by five users over more than two weeks. Our results show that AudioGest can detect six hand gestures with an accuracy of up to 96%, and by distinguishing the gesture attributes, it can provide up to 162 control commands for various applications.
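The Doppler-decoding step described in the AudioGest abstract follows the standard sonar relation: a tone of frequency f0 reflected off a hand moving at speed v returns shifted by Δf ≈ 2·v·f0/c. Inverting this gives the speed estimate below; the 19 kHz tone frequency and c = 343 m/s are illustrative assumptions, not values taken from the paper:

```python
def doppler_to_speed(delta_f_hz, f0_hz=19_000.0, c=343.0):
    """Hand speed estimated from the measured Doppler shift of a
    reflected inaudible tone: v = delta_f * c / (2 * f0).
    The factor of 2 arises because the echo path to and from the
    moving hand both shorten as it approaches."""
    return delta_f_hz * c / (2.0 * f0_hz)

# a 50 Hz shift on a 19 kHz tone corresponds to roughly 0.45 m/s
speed = doppler_to_speed(50.0)
```

Integrating such speed estimates over the gesture's duration is what lets the system recover the waving range as well.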
... Another variation of DfPL was implemented on the RSSI of RFID infrastructure, using monostatic or bistatic antennas connected to RFID readers that read the deployed passive or active RFID tags via backscatter communication [27][28][29]. RFID-based DfPL systems are among the early systems that proposed Artificial Neural Networks (ANN) as well as posture and fall detection [30,31]. Besides using RSSI, other variants of RFID-based systems were implemented using the coupling effect of passive RFID tags, placing the tags' antennas parallel and close to each other so that one of the tags becomes unreadable unless there is interference, or using Angle of Arrival (AoA) [32,33]. ...
... Many scholars have contributed to activity recognition across a wide range of applications such as posture recognition [20], fall detection [21,22], human tracking [23], and gesture-based movements [24]. Many different activity recognition methods for data analysis, such as digital signal processing [25], time- and frequency-domain feature extraction [26], as well as statistical, inclination-angle, and threshold-based methods [27][28][29], have been used in the classification of static, gait-related, and rehabilitation movement activities. All of these proposed methods are relevant to a hip fracture patient monitoring system. ...
Article
Full-text available
Hip fracture incidence is life-threatening and has an impact on the person's physical functionality and their ability to live independently. Proper rehabilitation with a set program can play a significant role in recovering the person's physical mobility, boosting their quality of life, reducing adverse clinical outcomes, and shortening hospital stays. The Internet of Things (IoT), with advancements in digital health, could be leveraged to enhance the backup intelligence used in the rehabilitation process and provide transparent coordination and information about movement during activities among relevant parties. This paper presents a post-operative hip fracture rehabilitation model that clarifies the rehabilitation process involved, its associated events, and the main physical movements of interest across all stages of care. To support this model, the paper proposes an IoT-enabled movement monitoring system architecture. The architecture reflects the key operational functionalities required to monitor patients in real time and throughout the rehabilitation process. The approach was tested incrementally on ten healthy subjects, particularly for factors relevant to the recognition and tracking of movements of interest. The analysis reflects the significance of personalization and of a one-minute history of data in monitoring real-time behavior. This paper also looks at the impact of edge computing, at the gateway and at a wearable sensor edge, on system performance. The approach provides a solution for an architecture that balances system performance with remote monitoring functional requirements.
... In device-free systems, the user does not need to carry any devices or sensors [11]. Device-free techniques have been used in many ubiquitous applications, such as indoor localisation [12]- [14], gesture recognition [15]- [17] and activity recognition [3], [18], [19]. Youssef outlines several challenges in the device-free area [20]. ...
Article
Full-text available
Recently, door access control with the Internet of Things (IoT) has become increasingly popular in the field of security. However, conventional approaches, such as video-based or biological-information-based ones, cannot satisfy the requirements of personal privacy protection in modern society. Hence, wireless-signal-based techniques that do not need users to carry any devices, called device-free techniques, have been introduced in recent years to detect and identify persons. In this paper, we present BLEDoorGuard, a wireless, invisible and robust door access system which leverages the received signal strength indicator (RSSI) from Bluetooth Low Energy (BLE) beacons to recognise a person who accesses a door. We evaluated BLEDoorGuard in two real-world scenarios: the first is an office with a key lock, and the second is a meeting room with swipe card access. We exploit the characteristics of BLE for person identification and propose a two-step algorithm with multiple classifiers. We demonstrate that BLEDoorGuard is capable of identifying the actual user during door access with an accuracy of 69% and 62% among groups of 6 and 10 people, respectively.
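The underlying idea of RSSI-based person identification (summarize each door-access RSSI trace into features, then match against per-person profiles learned during enrollment) can be sketched as below. The three-feature summary and the nearest-centroid matcher are deliberate simplifications of the paper's two-step, multiple-classifier algorithm:

```python
import math

def rssi_features(samples):
    """Summarize an RSSI time series (dBm) into a small feature vector:
    mean, standard deviation, and range. Illustrative feature set only."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, math.sqrt(var), max(samples) - min(samples))

def identify(features, profiles):
    """Nearest-centroid match of the feature vector against per-person
    profiles (hypothetical enrollment data), returning the best match."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(profiles, key=lambda person: dist(features, profiles[person]))

# hypothetical enrolled profiles: (mean dBm, std, range) per person
profiles = {"alice": (-60.0, 2.0, 6.0), "bob": (-72.0, 5.0, 14.0)}
who = identify(rssi_features([-59, -61, -62, -58]), profiles)
```

In practice a second classification stage and temporal smoothing over the whole door-access event are needed to reach usable accuracy, which is what the paper's two-step design addresses.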
Article
Hand gesture is becoming an increasingly popular means of interacting with consumer electronic devices, such as mobile phones, tablets and laptops. In this paper, we present AudioGest, a device-free gesture recognition system that can accurately sense in-air hand movement around the user's devices. Compared to the state-of-the-art, AudioGest is superior in using only one pair of built-in speaker and microphone, without any extra hardware or infrastructure support and with no training, to achieve multi-modal hand detection. Specifically, our system is not only able to accurately recognize various hand gestures, but also to reliably estimate the hand's in-air duration, average moving speed and waving range. We address various challenges including cleaning the noisy reflected sound signal, interpreting the echo spectrogram into hand gestures, decoding the Doppler frequency shifts into the hand's waving speed and range, as well as being robust to environmental motion and signal drifting. We extensively evaluate our system on three electronic devices under four real-world scenarios using overall 3,900 hand gestures collected by five users over more than two weeks. Our results show that AudioGest detects six hand gestures with an accuracy of up to 96%, and by distinguishing the gesture attributes, it can provide up to 162 control commands for various applications.
Article
Elderly care is one of the many applications supported by real-time activity recognition systems. Traditional approaches use cameras, body sensor networks, or radio patterns from various sources for activity recognition. However, these approaches are limited by ease-of-use, coverage, or privacy-preservation issues. In this paper, we present a novel wearable Radio Frequency Identification (RFID) system that aims at providing an easy-to-use solution with high detection coverage. Our system uses passive tags, which are maintenance-free and can be embedded into clothes to reduce the wearing and maintenance effort. A small RFID reader is also worn on the user's body to extend the detection coverage as the user moves. We exploit RFID radio patterns and extract both spatial and temporal features to characterize various activities. We also address the issues of false negatives in tag readings and tag/antenna calibration, and design a fast online recognition system. Antenna and tag selection is done automatically to explore the minimum number of devices required to achieve a target accuracy. We develop a prototype system which consists of a wearable RFID system and a smartphone to demonstrate the working principles, and conduct experimental studies with four subjects over two weeks. The results show that our system achieves a high recognition accuracy of 93.6 percent with a latency of 5 seconds. Additionally, we show that the system only requires two antennas and four tagged body parts to achieve a high recognition accuracy of 85 percent.
Article
Full-text available
Over the last two decades, metric-based instruments have garnered popularity in mental health. Self-administered surveys, such as the Patient Health Questionnaire 9 (PHQ-9), have been leveraged to inform treatment practice for Major Depressive Disorder (MDD). The aim of this study was to measure the reliability and usability of a novel voice-based delivery system for the PHQ-9 using Amazon Alexa within a patient population. Forty-one newly admitted patients to a behavioral medicine clinic completed the PHQ-9 at two separate time points (first appointment and one-month follow-up). Patients were randomly assigned to a version (voice vs. paper), completing the alternate format at the next appointment. Patients additionally completed a 26-item User Experience Questionnaire (UEQ) and an open-ended questionnaire at each session. Assessments between PHQ-9 total scores for the Alexa and paper versions showed a high degree of reliability (α = .86). Quantitative UEQ results showed significantly higher overall positive attitudes towards the Alexa format, with higher subscale scores on attractiveness, stimulation, and novelty. Qualitative responses further supported these findings, with 85.7% of participants indicating a willingness to use the device at home. With the benefit of user instruction in a clinical environment, the novel Alexa delivery system was shown to be consistent with the paper version, giving evidence of reliability between the two formats. User experience assessments further showed a preference for the novel version over the traditional format. It is our hope that future studies may examine the efficacy of the Alexa format in improving the at-home clinical treatment of depression.
Article
Breath monitoring helps assess general personal health and gives clues to chronic diseases. Yet current breath monitoring technologies are inconvenient and intrusive. For instance, typical breath monitoring devices need to attach nasal probes or chest bands to users. Wireless sensing technologies have been applied to monitor breathing using radio waves without physical contact. Those wireless sensing technologies, however, require customized radios which are not readily available. More importantly, due to interference, such technologies do not work well with multiple users: with multiple users present, the detection accuracy of existing systems decreases dramatically. In this paper, we propose to monitor users' breathing using commercial-off-the-shelf (COTS) RFID systems. In our system, passive lightweight RFID tags are attached to users' clothes and backscatter radio waves, and commodity RFID readers report low-level data (e.g., phase values). We detect effective human respiration by analyzing the low-level data reported by commodity readers. To enhance the measurement robustness, we synthesize data streams from an array of multiple tags to improve the monitoring accuracy. We implement a prototype of the breath monitoring system with commodity RFID systems. The experiment results show that our system can simultaneously monitor breathing with high accuracy even in the presence of multiple users.
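The core signal-processing idea above is that chest motion modulates the tag-to-reader path length, so the reported phase oscillates at the breathing frequency. A minimal single-tag sketch (the paper's pipeline is more robust and fuses an array of tags) estimates breaths per minute by detrending the phase stream and counting rising zero-crossings:

```python
import math

def respiration_rate(phase, fs):
    """Estimate breaths per minute from an RFID tag's phase stream
    sampled at fs Hz: remove the mean, then count rising zero-crossings.
    A simple sketch; real systems must also handle phase wrapping,
    reader channel hopping, and body movement artifacts."""
    mean = sum(phase) / len(phase)
    x = [p - mean for p in phase]
    crossings = sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)
    duration_min = len(phase) / fs / 60.0
    return crossings / duration_min

# synthetic 60 s trace: 0.25 Hz chest motion, i.e. ~15 breaths per minute
fs = 10.0
phase = [0.3 * math.sin(2 * math.pi * 0.25 * i / fs) for i in range(600)]
rate = respiration_rate(phase, fs)
```

On this clean synthetic trace the estimate lands near the true 15 breaths/min; on real phase data a band-pass filter before the crossing count is usually needed.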
Article
We develop CACE (Constraints And Correlations mining Engine), a framework that significantly improves the recognition accuracy of complex daily activities in multi-inhabitant smart homes. CACE views the implicit relationships between the activities of multiple people as an asset, and exploits such constraints and correlations in a hierarchical fashion, taking advantage of both person-specific sensor data (generated by wearable devices) and person-independent ambient sensor data (generated by ambient sensors). To effectively utilize such couplings, CACE first uses a multi-target particle filtering approach over movement data captured by ambient sensors to identify the number of distinct users and infer individual-specific movement trajectories. We then utilize a Hierarchical Dynamic Bayesian Network (HDBN)-based model for activity recognition. This model utilizes the inter- and intra-individual correlations and constraints, at both micro-activity and macro-activity levels, to recognize individual activities accurately. These constraints are learnt automatically using data-mining techniques, and help to dramatically reduce the computational complexity of HDBN-based inference. Empirical studies using a real-world testbed of 5 multi-inhabitant smart homes show that CACE is able to achieve an activity recognition accuracy of ≈95%, with a 16-fold reduction in computational overhead compared to traditional hybrid classification approaches.
Article
In combination with current sociological trends, the maturing development of IoT devices is projected to revolutionize healthcare. A network of body-worn sensors, each with a unique ID, can collect health data that is orders of magnitude richer than what is available today from sporadic observations in clinical/hospital environments. When stored in databases, analyzed, and compared against information from other individuals using data analytics, HIoT data enables the personalization and modernization of care with radical improvements in outcomes and reductions in cost. In this paper, we survey existing and emerging technologies that can enable this vision for the future of healthcare, particularly in the clinical practice of healthcare. Three main technology areas underlie the development of this field: (a) sensing, where there is an increased drive for miniaturization and power efficiency; (b) communications, where the enabling factors are ubiquitous connectivity, standardized protocols, and the wide availability of cloud infrastructure; and (c) data analytics and inference, where the availability of large amounts of data and computational resources is revolutionizing algorithms for individualizing inference and actions in health management. Throughout the paper, we use a case study to concretely illustrate the impact of these trends. We conclude our paper with a discussion of the emerging directions, open issues, and challenges.
Article
Recently, the Internet of Things (IoT) has arisen as an important research area that combines environmental sensing and machine learning capabilities to flourish the concept of smart spaces, in which intelligent and customized services can be provided to users in a smart manner. In smart spaces, one fundamental service that needs to be provided is accurate and unobtrusive user identification. In this work, to address this challenge, we propose a Gait Recognition as a Service (GRaaS) model, which is an instantiation of the traditional Sensing as a Service (S²aaS) model and is specially designed for user identification using gait in smart spaces. To illustrate the idea, a Radio Frequency Identification (RFID)-based gait recognition service is designed and implemented following the GRaaS concept. Novel tag selection algorithms and attention-based Long Short-Term Memory (At-LSTM) models are designed to realize the device layer and edge layer, achieving robust recognition with 96.3% accuracy. Extensive evaluations are provided, which show that the proposed service has accurate and robust performance and great potential to support future smart space applications.
Article
With the increasing number of vehicles and traffic accidents, driving safety has become an important factor that affects human daily life. As a primary cause of driving accidents, driving fatigue could be prevented by a sensing and alarm system built into the vehicle. In this paper, we propose an effective, low-cost driving fatigue detection system that recognizes the driver's nodding movements using commodity RFID devices. The system measures the phase difference between two RFID tags attached to the back of a hat worn by the driver. To accurately extract nodding features, we propose an effective approach to mitigate environmental noise, interference caused by surrounding movements, and the cumulative error caused by the frequency-hopping offset in FCC-compliant RFID systems. A long short-term memory (LSTM) autoencoder model is implemented to detect the nodding movement using the calibrated data. The highly accurate detection performance of the proposed system is validated by extensive experiments in various real driving scenarios.
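Using the phase *difference* between two co-located tags, as above, cancels noise common to both channels; the one subtlety is that reported phases wrap modulo 2π, so the raw subtraction must be re-wrapped into [-π, π). A minimal sketch of that step (the paper's full calibration pipeline also handles frequency-hopping offsets):

```python
import math

def phase_diff(p1, p2):
    """Wrapped phase difference between two tags, in radians in [-pi, pi).
    Nodding tilts the hat, changing the two tags' path lengths by
    different amounts, so this difference tracks head pitch."""
    d = (p1 - p2) % (2 * math.pi)
    return d - 2 * math.pi if d >= math.pi else d

# readings just across the 2*pi wrap still give a small difference
d = phase_diff(0.1, 2 * math.pi - 0.1)
```

A naive `p1 - p2` on the same inputs would return roughly -6.08 rad, a spurious jump that would corrupt the nodding features fed to the LSTM autoencoder.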
Article
Using radio-frequency (RF) sensing techniques for human posture recognition has attracted growing interest due to its advantages of pervasiveness, contact-free observation, and privacy protection. Conventional RF sensing techniques are constrained by their radio environments, which limit the number of transmission channels to carry multi-dimensional information about human postures. Instead of passively adapting to the environment, in this paper, we design an RF sensing system for posture recognition based on reconfigurable intelligent surfaces (RISs). The proposed system can actively customize the environments to provide desirable propagation properties and diverse transmission channels. However, achieving high recognition accuracy requires the optimization of RIS configuration, which is a challenging problem. To tackle this challenge, we formulate the optimization problem, decompose it into two subproblems, and propose algorithms to solve them. Based on the developed algorithms, we implement the system and carry out practical experiments. Both simulation and experimental results verify the effectiveness of the designed algorithms and system. Compared to the random configuration and non-configurable environment cases, the designed system can greatly improve the recognition accuracy.
Article
With the help of computer vision-based technology, current face recognition access control systems can effectively reduce the hassle of swiping cards. However, they are not suitable for industry areas where privacy is a concern, and they cannot be used in environments where light conditions are not satisfied. In this paper, we propose Tsarray, an RFID-based method that intelligently senses access control events by vertically deploying large-scale passive tags on the attachment plane. By fully studying the correlation between access control events and the received RF signals, we choose a tag array deployment strategy of 4 rows and 15 columns to eliminate environmental interference, and extract distinctive time-slot tag array maps for direction tracking. Besides, by setting the same direction conversion module, the signal feature differences caused by direction are eliminated. Moreover, we build a model to extract the spatiotemporal characteristics, and explore the model structure that is most suitable for the access control event. We evaluate the performance of Tsarray in a real environment, using a reserved unseen test sample as a system test. Results show that our system can achieve optimal performance in direction tracking: the accuracy of recognizing the 10 volunteers is 97.5%, the accuracy of height recognition is 95%, and the accuracy of weight recognition is 92.5%.
Chapter
In future cellular systems, localization and sensing will be built in to support specific applications as well as flexible and seamless connectivity. Driven by this trend, there is a need for fine-resolution sensing solutions and cm-level localization accuracy. Fortunately, with the recent development of new materials, reconfigurable intelligent surfaces (RIS) provide an opportunity to reshape and control the electromagnetic characteristics of the environment, which can be utilized to improve the performance of sensing and localization.
Article
Full-text available
Human action recognition is an important research field in artificial intelligence, and using continuous images of the key points of the human skeleton for deep-learning-based action training and identification is a current development focus. However, this approach is limited by the camera angle and by occlusion caused by changes in motion attitude, which lead to misjudged skeleton key points and reduced accuracy of motion training and identification. This paper puts forward a human motion recognition system that uses a skeleton key point correction method to improve recognition accuracy. The basic correction algorithm for candidate skeleton key points is based on the symmetry of the human skeleton's key points, and the advanced correction algorithm uses a human body mask map to limit the range of the skeleton key points. The paper compares the performance of this system against the related human action identification systems ST-GCN, 2S-AGCN and GCN-NAS, with and without a Gaussian filter for ST-GCN, GCN-NAS and 2S-AGCN. Human motion recognition accuracy is increased by at least 68%, 40%, 56%, 68%, 50% and 46%, respectively, on the experimental data.
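The symmetry-based correction idea above can be sketched as follows: when one key point is detected with low confidence, mirror its symmetric counterpart across the body's vertical axis (here estimated from the shoulders). The key-point names, confidence threshold, and axis estimate are illustrative assumptions, not the paper's exact algorithm:

```python
# symmetric counterparts (hypothetical key-point names)
SYMMETRIC = {"left_wrist": "right_wrist", "right_wrist": "left_wrist"}

def correct_keypoints(kps, conf, threshold=0.3):
    """Replace low-confidence key points with the mirror image of their
    symmetric counterpart. kps maps name -> (x, y); conf maps name ->
    detector confidence in [0, 1]. Sketch of the symmetry-based idea."""
    # vertical body axis estimated as the x-midpoint of the shoulders
    axis_x = (kps["left_shoulder"][0] + kps["right_shoulder"][0]) / 2.0
    fixed = dict(kps)
    for name, mate in SYMMETRIC.items():
        if conf.get(name, 0.0) < threshold and conf.get(mate, 0.0) >= threshold:
            mx, my = kps[mate]
            fixed[name] = (2.0 * axis_x - mx, my)  # mirror across the axis
    return fixed

kps = {"left_shoulder": (40.0, 50.0), "right_shoulder": (60.0, 50.0),
       "left_wrist": (0.0, 0.0), "right_wrist": (70.0, 80.0)}
conf = {"left_wrist": 0.1, "right_wrist": 0.9}
fixed = correct_keypoints(kps, conf)
```

The mask-map step described in the abstract would then reject any corrected point that falls outside the body silhouette.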
Article
In recent years, the number of yoga practitioners has drastically increased, and more men and older people practice yoga than ever before. An Internet of Things (IoT)-based yoga training system is needed for those who want to practice yoga at home. Some studies have proposed RGB/Kinect camera-based or wearable device-based yoga posture recognition methods with high accuracy; however, the former has a privacy issue and the latter is impractical for long-term application. Thus, this paper proposes an IoT-based privacy-preserving yoga posture recognition system employing a deep convolutional neural network (DCNN) and a low-resolution infrared sensor-based wireless sensor network (WSN). The WSN has three nodes (x, y, and z axes), each integrating an 8×8-pixel thermal sensor module and a Wi-Fi module for connecting to the deep learning server. We invited 18 volunteers to perform 26 yoga postures in two sessions, each lasting 20 s. Recorded sessions are saved as .csv files, then preprocessed and converted to grayscale posture images. In total, 93,200 posture images are employed for the validation of the proposed DCNN models. The tenfold cross-validation results revealed that the F1-scores of the models trained with xyz (all 3 axes) and y (only y-axis) posture images were 0.9989 and 0.9854, respectively. The average latency for a single posture image classification on the server was 107 ms. Thus, we conclude that the proposed IoT-based yoga posture recognition system has great potential as a privacy-preserving yoga training system.
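The "converted to grayscale posture images" preprocessing above amounts to normalizing each 8×8 thermal frame into pixel intensities. A minimal sketch using min-max normalization (the paper may use a different scaling):

```python
def to_grayscale(frame):
    """Min-max normalize an 8x8 thermal frame (temperatures in deg C,
    nested lists) to 0-255 grayscale values. The warmest pixel maps
    to 255, the coolest to 0."""
    lo = min(min(row) for row in frame)
    hi = max(max(row) for row in frame)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat frame
    return [[int(255 * (v - lo) / span) for v in row] for row in frame]

# a room at 22 deg C with one warm pixel where the body is
frame = [[22.0] * 8 for _ in range(8)]
frame[3][4] = 30.0
gray = to_grayscale(frame)
```

Stacking the per-axis frames then yields the xyz images the DCNN is trained on.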
Conference Paper
Full-text available
Much work has been done in activity recognition using wearable sensors organized in a body sensor network. The quality and communication reliability of the sensor data greatly affect system performance. Recent studies show the potential of using RFID radio information instead of sensor data for activity recognition. This approach has the advantages of low cost and high reliability. The radio-based recognition method is also robust to packet loss and has advantages including MAC-layer simplicity and a low transmission power level. In this paper, we present a novel wearable Radio Frequency Identification (RFID) system using passive tags, which are smaller and more cost-effective, to recognize human activities in real time. We exploit RFID radio patterns and extract both spatial and temporal features to characterize various activities. We also address two issues - the false negatives of tag readings and tag/antenna calibration - and design a fast online recognition system. We develop a prototype system which consists of a wearable RFID system and a smartphone to demonstrate the working principles, and conduct experimental studies with four subjects over two weeks. The results show that our system achieves a high recognition accuracy of 93.6% with a latency of 5 s.
Article
Full-text available
A key challenge for mobile health is to develop new technology that can assist individuals in maintaining a healthy lifestyle by keeping track of their everyday behaviors. Smartphones embedded with a wide variety of sensors are enabling a new generation of personal health applications that can actively monitor, model and promote wellbeing. Automated wellbeing tracking systems available so far have focused on physical fitness and sleep, and often require external, non-phone-based sensors. In this work, we take a step towards a more comprehensive smartphone-based system that can track activities that impact physical, social, and mental wellbeing, namely sleep, physical activity, and social interactions, and that provides intelligent feedback to promote better health. We present the design, implementation and evaluation of BeWell, an automated wellbeing app for Android smartphones, and demonstrate its feasibility in monitoring multi-dimensional wellbeing. By providing a more complete picture of health, BeWell has the potential to empower individuals to improve their overall wellbeing and identify any early signs of decline.
Article
Full-text available
Many real-world applications that focus on addressing the needs of a human require information about the activities being performed by the human in real time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either scripted or pre-segmented sequences of sensor events related to activities. In this paper we propose and evaluate a sliding-window-based approach to perform activity recognition in an online or streaming fashion, recognizing activities as and when new sensor events are recorded. To account for the fact that different activities are best characterized by different window lengths of sensor events, we incorporate time-decay and mutual-information-based weighting of sensor events within a window. Additional contextual information, in the form of the previous activity and the activity of the previous window, is also appended to the feature describing a sensor window. The experiments conducted to evaluate these techniques on real-world smart home datasets suggest that combining mutual-information-based weighting of sensor events and adding past contextual information into the feature leads to the best performance for streaming activity recognition.
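The time-decay weighting described above can be sketched as a per-sensor feature vector in which each event's contribution decays exponentially with its age, so recent events dominate the window. The half-life value and `(timestamp, sensor_id)` event format are illustrative assumptions; the mutual-information weighting is omitted for brevity:

```python
import math

def weighted_counts(events, now, n_sensors, half_life=30.0):
    """Per-sensor event counts for a sliding window, with each event
    weighted by exponential time decay: an event half_life seconds old
    contributes 0.5 instead of 1. events is a list of (ts, sensor_id)."""
    decay = math.log(2) / half_life
    feats = [0.0] * n_sensors
    for ts, sid in events:
        feats[sid] += math.exp(-decay * (now - ts))
    return feats

events = [(0.0, 1), (30.0, 1), (60.0, 2)]
feats = weighted_counts(events, now=60.0, n_sensors=3)
# sensor 1: 0.25 (60 s old) + 0.5 (30 s old) = 0.75; sensor 2: 1.0
```

This lets one fixed-length window serve both short activities (dominated by recent events) and long ones (old events still contribute, just less).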
Article
Full-text available
Without requiring objects to carry any transceiver, device-free object tracking provides a promising solution for many localization and tracking systems that monitor non-cooperative objects such as intruders. However, existing device-free solutions mainly use sensors and active RFID tags, which are much more expensive than passive tags. In this paper, we propose a novel motion detection and tracking method using passive RFID tags, named Twins. The method leverages a newly observed phenomenon called critical state, caused by interference among passive tags. We contribute to both the theory and practice of this phenomenon by presenting a new interference model that explains it and validating the model with extensive experiments. We design a practical Twins-based intrusion detection scheme and implement a real prototype with commercial off-the-shelf readers and tags. The results show that Twins is effective in detecting the moving object, with a low location error of 0.75 m on average.
Article
Full-text available
The widespread usage of wireless local area networks and mobile devices has fostered interest in localization systems for wireless environments. The majority of research on wireless-based localization systems has focused on device-based active localization, in which a device is attached to the tracked entities. Recently, device-free passive localization (DfP) has been proposed, in which the tracked entity is neither required to carry devices nor to participate actively in the localization process. DfP systems are based on the fact that RF signals are affected by the presence of people and objects in the environment. The DfP concept enables a wide range of applications including intrusion detection and tracking, border protection, and smart building automation. Previous studies have focused on small areas with direct line of sight and/or controlled environments. In this paper, we present the design, implementation and analysis of Nuzzer, a large-scale device-free passive localization system for real environments. Without any additional hardware, it makes use of the already-installed wireless data networks to monitor and process changes in the received signal strength (RSS) transmitted from access points at one or more monitoring points. We present probabilistic techniques for DfP localization and evaluate their performance in a typical office building, rich in multipath, with an area of 1500 square meters. Our results show that the Nuzzer system gives device-free location estimates with less than 2 meters median distance error using only two monitoring laptops and three access points. This indicates the suitability of Nuzzer to a large number of application domains.
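The probabilistic DfP technique described above can be sketched as fingerprint matching: for each candidate location, training data yields a per-stream RSS model (here an independent Gaussian per access-point/monitor pair), and the system reports the location maximizing the likelihood of the currently observed RSS vector. This is a deliberate simplification of Nuzzer's method, with hypothetical location names:

```python
import math

def locate(observed_rss, fingerprints):
    """Return the location whose (mean, std) RSS fingerprint maximizes
    the Gaussian log-likelihood of the observed RSS vector (dBm).
    fingerprints maps location -> list of (mean, std), one per stream."""
    def log_lik(obs, profile):
        total = 0.0
        for x, (mu, sigma) in zip(obs, profile):
            # per-stream Gaussian log-density, dropping the constant term
            total += -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)
        return total
    return max(fingerprints, key=lambda loc: log_lik(observed_rss, fingerprints[loc]))

# hypothetical fingerprints for two human positions, two RSS streams each
fingerprints = {
    "hallway": [(-48.0, 2.0), (-63.0, 3.0)],
    "office":  [(-55.0, 2.0), (-58.0, 3.0)],
}
loc = locate([-54.0, -57.0], fingerprints)
```

What makes this *device-free* is that the fingerprints describe how a person standing at each location perturbs the ambient RSS, rather than signals from a carried device.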
Article
Full-text available
We consider the detection of activities from non-cooperating individuals with features obtained on the Radio Frequency channel. Since environmental changes impact the transmission channel between devices, the detection of this alteration can be used to classify environmental situations. We identify relevant features to detect activities of non-actively transmitting subjects. In particular, we distinguish with high accuracy an empty environment or a walking, lying, crawling or standing person, in case studies of an active, device-free activity recognition system with software defined radios. We distinguish between two cases in which the transmitter is either under the control of the system or ambient. For activity detection the application of one-stage and two-stage classifiers is considered. Apart from the discrimination of the above activities, we can show that a detected activity can also be localised simultaneously within an area of less than 1 meter radius.
Conference Paper
Full-text available
The widespread usage of wireless local area networks and mobile devices has fostered the interest in localization systems for wireless environments. The majority of research in the context of wireless-based localization systems has focused on device-based active localization, in which a device is attached to tracked entities. Recently, device-free passive localization (DfP) has been proposed where the tracked entity is neither required to carry devices nor participate actively in the localization process. DfP systems are based on the fact that RF signals are affected by the presence of people and objects in the environment. Previous studies have focused on small areas with direct line of sight (LOS) and/or controlled environments. In this paper, we present the design, implementation and analysis of Nuzzer, a large-scale non-LOS DfP localization system, which tracks a single entity in real environments, rich in multipath. Without any additional hardware, Nuzzer makes use of the already-installed wireless data networks to monitor and process changes in the received signal strength (RSS) at one or more monitoring points transmitted from access points. The Nuzzer system enables many applications which support the elderly, including smart homes automation which can be used to assist the elderly, and intrusion detection which is used to protect the elderly's homes. We present deterministic techniques for DfP localization and evaluate their performance in a building, rich in multipath, with an area of 750 square meters. Our results show that the Nuzzer system gives device-free location estimates with less than 7 meters median distance error using only two monitoring laptops and three access points. This indicates the suitability of Nuzzer to a number of application domains.
Conference Paper
Full-text available
We present an approach to activity discovery, the unsupervised identification and modeling of human actions embedded in a larger sensor stream. Activity discovery can be seen as the inverse of the activity recognition problem. Rather than learn models from hand-labeled sequences, we attempt to discover motifs, sets of similar subsequences within the raw sensor stream, without the benefit of labels or manual segmentation. These motifs are statistically unlikely and thus typically correspond to important or characteristic actions within the activity. The problem of activity discovery differs from typical motif discovery, such as locating protein binding sites, because of the nature of time series data representing human activity. For example, in activity data, motifs will tend to be sparsely distributed, vary in length, and may only exhibit intra-motif similarity after appropriate time warping. In this paper, we motivate the activity discovery problem and present our approach for efficient discovery of meaningful actions from sensor data representing human activity. We empirically evaluate the approach on an exercise data set captured by a wrist-mounted, three-axis inertial sensor. Our algorithm successfully discovers motifs that correspond to the real exercises with a recall rate of 96.3% and overall accuracy of 86.7% over six exercises and 864 occurrences.
Conference Paper
Full-text available
Typical location determination systems require the presence of a physical device that is attached to the person that is being tracked. In addition, they usually require the tracked device to participate actively in the localization process. In this paper, we introduce the concept of Device-free Passive (DfP) localization. A DfP system is envisioned to be able to detect, track, and identify entities that do not carry any device, nor participate actively in the localization process. The system works by monitoring and processing changes in the received physical signals at one or more monitoring points to detect changes in the environment. Applications for DfP systems include intrusion detection and tracking, protecting outdoor assets, such as pipelines, railroad tracks, and perimeters. We describe the DfP system's architecture and the challenges that need to be addressed to materialize a DfP system. We show the feasibility of the system by describing algorithms for implementing different functionalities of a DfP system that works with nominal WiFi equipment. We present two techniques for intrusion detection and a technique for tracking a single intruder. Our results show that the system can achieve very high probability of detection and tracking with very few false positives. We also identify different research directions for addressing the challenges of realizing a DfP system.
Conference Paper
Full-text available
For wearable computing applications, human activity is a central part of the user's context. In order to avoid user annoyance it should be acquired automatically using body-worn sensors. We propose to use multiple acceleration sensors that are distributed over the body, because they are lightweight, small and cheap. Furthermore activity can best be measured where it occurs. We present a hardware platform that we developed for the investigation of this issue and results as to where to place the sensors and how to extract the context information.
Article
Full-text available
Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10- second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively---just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise.
Article
Full-text available
This article describes one of the major efforts in the sensor network community to build an integrated sensor network system for surveillance missions. The focus of this effort is to acquire and verify information about enemy capabilities and positions of hostile targets. Such missions often involve a high element of risk for human personnel and require a high degree of stealthiness. Hence, the ability to deploy unmanned surveillance missions, by using wireless sensor networks, is of great practical importance for the military. Because of the energy constraints of sensor devices, such systems necessitate an energy-aware design to ensure the longevity of surveillance missions. Solutions proposed recently for this type of system show promising results through simulations. However, the simplified assumptions they make about the system in the simulator often do not hold well in practice, and energy consumption is narrowly accounted for within a single protocol. In this article, we describe the design and implementation of a complete running system, called VigilNet, for energy-efficient surveillance. The VigilNet allows a group of cooperating sensor devices to detect and track the positions of moving vehicles in an energy-efficient and stealthy manner. We evaluate VigilNet middleware components and integrated system extensively on a network of 70 MICA2 motes. Our results show that our surveillance strategy is adaptable and achieves a significant extension of network lifetime. Finally, we share lessons learned in building such an integrated sensor system.
Article
Full-text available
Radio Frequency IDentification (RFID) has attracted considerable attention in recent years for its low cost, general availability, and location sensing functionality. Most existing schemes require the tracked persons to be labeled with RFID tags. This requirement may not be satisfied in some activity sensing applications due to privacy and security concerns and uncertainty about the objects to be monitored, e.g., group behavior monitoring in warehouses with privacy limitations, and abnormal customers in banks. In this paper, we propose TASA—Tag-free Activity Sensing using RFID tag Arrays for location sensing and frequent route detection. TASA relieves monitored objects of carrying RFID tags; it recovers and checks frequent trajectories online by capturing the Received Signal Strength Indicator (RSSI) series of passive RFID tag arrays deployed where objects traverse. To improve the accuracy of estimated trajectories and accelerate location sensing, TASA introduces reference tags with known positions. With the readings from reference tags, TASA can locate objects more accurately. Extensive experiments show that TASA is an effective approach for certain activity sensing applications. Index Terms—RFID, activity sensing, tag-free localization, object tracking, frequent trajectories.
Article
Full-text available
We propose to use wearable computers and sensor systems to generate personal contextual annotations in audio visual recordings of meetings. In this paper, we argue that such annotations are essential and effective to allow the retrieval of relevant information from large audio-visual databases. The paper proposes several useful annotations that can be derived from cheap and unobtrusive sensors. It also describes a hardware platform designed to implement accelerometric activity recognition, outlines approaches to extract annotations and presents first experimental results.
Article
Full-text available
Pervasive computing technology can provide valuable health monitoring and assistance technology to help individuals live independent lives in their own homes. As a critical part of this technology, our objective is to design software algorithms that recognize and assess the consistency of activities of daily living that individuals perform in their own homes. We have designed algorithms that automatically learn Markov models for each class of activity. These models are used to recognize activities that are performed in a smart home and to identify errors and inconsistencies in the performed activity. We validate our approach using data collected from 60 volunteers who performed a series of activities in our smart apartment testbed. The results indicate that the algorithms correctly label the activities and successfully assess the completeness and consistency of the performed task. Our results indicate that activity recognition and assessment can be automated using machine learning algorithms and smart home technology. These algorithms will be useful for automating remote health monitoring and interventions.
Article
Full-text available
In order to provide relevant information to mobile users, such as workers engaging in the manual tasks of maintenance and assembly, a wearable computer requires information about the user's specific activities. This work focuses on the recognition of activities that are characterized by a hand motion and an accompanying sound. Suitable activities can be found in assembly and maintenance work. Here, we provide an initial exploration into the problem domain of continuous activity recognition using on-body sensing. We use a mock "wood workshop" assembly task to ground our investigation. We describe a method for the continuous recognition of activities (sawing, hammering, filing, drilling, grinding, sanding, opening a drawer, tightening a vise, and turning a screwdriver) using microphones and three-axis accelerometers mounted at two positions on the user's arms. Potentially "interesting" activities are segmented from continuous streams of data using an analysis of the sound intensity detected at the two different locations. Activity classification is then performed on these detected segments using linear discriminant analysis (LDA) on the sound channel and hidden Markov models (HMMs) on the acceleration data. Four different methods at classifier fusion are compared for improving these classifications. Using user-dependent training, we obtain continuous average recall and precision rates (for positive activities) of 78 percent and 74 percent, respectively. Using user-independent training (leave-one-out across five users), we obtain recall rates of 66 percent and precision rates of 63 percent. In isolation, these activities were recognized with accuracies of 98 percent, 87 percent, and 95 percent for the user-dependent, user-independent, and user-adapted cases, respectively.
Conference Paper
Full-text available
The manual assessment of activities of daily living (ADLs) is a fundamental problem in elderly care. The use of miniature sensors placed in the environment or worn by a person has great potential in effective and unobtrusive long term monitoring and recognition of ADLs. This paper presents an effective and unobtrusive activity recognition system based on the combination of the data from two different types of sensors: RFID tag readers and accelerometers. We evaluate our algorithms on non-scripted datasets of 10 housekeeping activities performed by 12 subjects. The experimental results show that recognition accuracy can be significantly improved by fusing the two different types of sensors. We analyze different acceleration features and algorithms, and based on tag detections we suggest the best tags' placements and the key objects to be tagged for each activity.
Article
Full-text available
A key aspect of pervasive computing is using computers and sensor networks to effectively and unobtrusively infer users' behavior in their environment. This includes inferring which activity users are performing, how they're performing it, and its current stage. Recognizing and recording activities of daily living is a significant problem in elder care. A new paradigm for ADL inferencing leverages radio-frequency-identification technology, data mining, and a probabilistic inference engine to recognize ADLs, based on the objects people use. We propose an approach that addresses these challenges and shows promise in automating some types of ADL monitoring. Our key observation is that the sequence of objects a person uses while performing an ADL robustly characterizes both the ADL's identity and the quality of its execution. So, we have developed Proactive Activity Toolkit (PROACT).
Article
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
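A minimal sketch of the two-group setup described above, using scikit-learn's SVC with a polynomial kernel as the "polynomial input transformation". The toy dataset is illustrative, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Two-group toy problem: two well-separated clusters in the input space.
X = np.array([[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# Degree-2 polynomial kernel plays the role of the non-linear mapping to a
# high-dimensional feature space where a linear decision surface is built.
clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0).fit(X, y)

print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))  # points near each cluster
```

The margin-maximizing decision surface lives in the implicit feature space; only the support vectors (a subset of `X`) are retained at prediction time.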
Article
Dirichlet process (DP) mixture models are the cornerstone of non-parametric Bayesian statistics, and the development of Markov chain Monte Carlo (MCMC) sampling methods for DP mixtures has enabled the application of non-parametric Bayesian methods to a variety of practical data analysis problems. However, MCMC sampling can be prohibitively slow, and it is important to explore alternatives. One class of alternatives is provided by variational methods, a class of deterministic algorithms that convert inference problems into optimization problems (Opper and Saad 2001; Wainwright and Jordan 2003). Thus far, variational methods have mainly been explored in the parametric setting, in particular within the formalism of the exponential family (Attias 2000; Ghahramani and Beal 2001; Blei et al. 2003). In this paper, we present a variational inference algorithm for DP mixtures. We present experiments that compare the algorithm to Gibbs sampling algorithms for DP mixtures of Gaussians and present an application to a large-scale image analysis problem.
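A truncated variational approximation to a DP Gaussian mixture of this kind is available off the shelf in scikit-learn's `BayesianGaussianMixture`; the sketch below, on synthetic 1-D data, shows how superfluous components are pruned to near-zero weight automatically. All data values are illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: two well-separated Gaussian clusters.
data = np.concatenate([rng.normal(-5, 1, 200),
                       rng.normal(5, 1, 200)]).reshape(-1, 1)

# Variational inference over a truncation of the DP mixture: we allow up to
# 10 components, but the stick-breaking prior drives unused weights to ~0.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(data)

# Components with non-negligible posterior weight.
print(np.sum(dpgmm.weights_ > 0.01))
```

This is the deterministic counterpart to Gibbs sampling over the same model: instead of drawing cluster assignments, it optimizes a factorized approximation to the posterior.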
Conference Paper
In this paper we introduce a novel device-free, radio-based activity recognition and localization method with various applications, such as e-Healthcare and security. Our method uses the properties of the signal subspace, which are estimated using signal eigenvectors of the covariance matrix obtained from an antenna array (array sensor) at the receiver side. To classify human activities (e.g., standing and moving) and/or positions, we apply a machine learning method with support vector machines (SVM). We compare the classification accuracy of the proposed method with signal subspace features and received signal strength (RSS). We analyze the impact of antenna deployment on classification accuracy in non-line-of-sight (NLOS) environments to prove the effectiveness of the proposed method. In addition, we compare our classification method with k-Nearest Neighbor (KNN). The experimental results show that the proposed method with signal subspace features provides accuracy improvements over the RSS-based method.
Conference Paper
We propose to use wearable computers and sensor systems to generate personal contextual annotations in audio-visual recordings of meetings. In this paper we argue that such annotations are essential and effective to allow retrieval of relevant information from large audio-visual databases. The paper proposes several useful annotations that can be derived from cheap and unobtrusive sensors. It also describes a hardware platform designed to implement this concept and presents first experimental results.
Conference Paper
Indoor localization based on signal strength fingerprinting has received significant attention from the community. This method is attractive because it does not require complex hardware beyond a simple radio transmitter. However, its main limitation is the inaccuracy caused by the variability of the signal strength. When applied to the localization of people, the signal variability can be attributed to three main sources: environmental dynamics (movement of people or objects), movement of transceiver (changes in the position and/or orientation of the transceivers) and body effects (distortion of the wireless signal due to body absorption). Our work focuses on the impact of the last two sources and provides two important contributions. First, we present an analysis to quantify the effects of antenna disorientation and transmitter misplacement. For the RFID system used in our work, these effects can decrease the localization accuracy by up to 50%. Motivated by these results, we identify parts of the human body where tags are less affected by unintentional movements. Second, we describe how multiple transmitters can be used to overcome the absorption effects of the human body. Our results indicate that four transmitters provide a reasonable trade-off between accuracy and hardware cost. We validate our findings through an extensive set of measurements gathered in a home environment. Our tests indicate that by following the guidelines proposed in this paper, the localization accuracy can improve from around 20% up to 88%.
Conference Paper
Transceiver-free object tracking traces a moving object that carries no communication device, in an environment pre-deployed with monitoring nodes. Among all tracking technologies, RF-based technology is an emerging research field facing many challenges. Although we proposed the original idea, until now no method has achieved scalability without sacrificing latency and accuracy. In this paper, we put forward a real-time tracking system, RASS, which achieves this goal and is promising in applications such as safeguard systems. Our basic idea is to divide the tracking field into different areas, with adjacent areas using different communication channels, so that interference among different areas is prevented. For each area, three communicating nodes are deployed on the ceiling as a regular triangle to monitor the area. In each triangle area, we use a Support Vector Regression (SVR) model to locate the object. This model captures the relationship between the signal dynamics caused by the object and the object's position. It not only considers the ideal case of signal dynamics caused by the object, but also utilizes their irregular information. As a result, it reaches a tracking accuracy of around 1 m using just three nodes in a triangle area with 4 m sides. The experiments show that the tracking latency of the proposed RASS system is bounded by only about 0.26 s. Our system scales well to a large deployment field without sacrificing latency or accuracy.
Conference Paper
This paper describes a distributed, multi-sensor system architecture designed to provide a wearable computer with a wide range of complex context information. Starting from an analysis of useful high level context information we present a top down design that focuses on the peculiarities of wearable applications. Thus, our design devotes particular attention to sensor placement, system partitioning as well as resource requirements given by the power consumption, computational intensity and communication overhead. We describe an implementation of our architecture and initial experimental results obtained with the system.
Conference Paper
We explore a dense sensing approach that uses RFID sensor network technology to recognize human activities. In our setting, everyday objects are instrumented with UHF RFID tags called WISPs that are equipped with accelerometers. RFID readers detect when the objects are used by examining this sensor data, and daily activities are then inferred from the traces of object use via a Hidden Markov Model. In a study of 10 participants performing 14 activities in a model apartment, our approach yielded recognition rates with precision and recall both in the 90% range. This compares well to recognition with a more intrusive short-range RFID bracelet that detects objects in the proximity of the user; this approach saw roughly 95% precision and 60% recall in the same study. We conclude that RFID sensor networks are a promising approach for indoor activity monitoring.
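The inference step above — decoding a sequence of activities from a trace of object uses with an HMM — can be sketched with plain Viterbi decoding. The activities, objects, and probabilities below are hypothetical toy values, not parameters from the study.

```python
import numpy as np

# Toy HMM over object-use traces (all labels and probabilities illustrative).
states = ["cooking", "cleaning"]
objects = ["pan", "spoon", "sponge"]
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2],      # activities tend to persist
                  [0.3, 0.7]])
emit = np.array([[0.5, 0.4, 0.1],  # cooking: mostly pan/spoon
                 [0.1, 0.2, 0.7]]) # cleaning: mostly sponge

def viterbi(obs):
    """Most likely activity sequence for a trace of object-use observations."""
    idx = [objects.index(o) for o in obs]
    T, S = len(idx), len(states)
    logp = np.log(start) + np.log(emit[:, idx[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)   # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)
        logp = scores[back[t], range(S)] + np.log(emit[:, idx[t]])
    path = [int(np.argmax(logp))]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(back[t][path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi(["pan", "spoon", "sponge", "sponge"]))
```

In the real system the emission model is learned from labeled traces and the observations come from WISP accelerometer events rather than clean object labels.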
Conference Paper
In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data were calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance, recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.
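The window features named above (per-axis mean, energy, and frequency-domain entropy, plus inter-axis correlation) can be sketched as follows; the synthetic sinusoidal signals merely stand in for real accelerometer windows.

```python
import numpy as np

def window_features(ax, ay, az):
    """Per-axis mean, energy, spectral entropy, plus pairwise correlations."""
    feats = []
    for sig in (ax, ay, az):
        feats.append(np.mean(sig))
        # Energy: mean squared magnitude of the (DC-removed) spectrum.
        spec = np.abs(np.fft.rfft(sig - np.mean(sig))) ** 2
        feats.append(np.sum(spec) / len(sig))
        # Frequency-domain entropy of the normalized spectrum.
        p = spec / np.sum(spec)
        feats.append(-np.sum(p * np.log2(p + 1e-12)))
    for a, b in ((ax, ay), (ax, az), (ay, az)):
        feats.append(np.corrcoef(a, b)[0, 1])
    return np.array(feats)

t = np.linspace(0, 2, 100)
feats = window_features(np.sin(8 * np.pi * t),
                        np.cos(8 * np.pi * t),
                        0.1 * np.sin(8 * np.pi * t))
print(len(feats))  # 12 features: 3 per axis + 3 correlations
```

Feature vectors like this, computed over sliding windows, are what the decision tree (or any of the compared classifiers) is trained on.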
Conference Paper
Ubiquitous computing researchers are increasingly turning to sensor-enabled "living laboratories" for the study of people and technologies in settings more natural than a typical laboratory. We describe the design and operation of the PlaceLab, a new live-in laboratory for the study of ubiquitous technologies in home settings. Volunteer research participants individually live in the PlaceLab for days or weeks at a time, treating it as a temporary home. Meanwhile, sensing devices integrated into the fabric of the architecture record a detailed description of their activities. The facility generates sensor and observational datasets that can be used for research in ubiquitous computing and other fields where domestic contexts impact behavior. We describe some of our experiences constructing and operating the living laboratory, and we detail a recently generated sample dataset, available online to researchers.
Conference Paper
The advent of wearable sensors like accelerometers has opened a plethora of opportunities to recognize human activities from low resolution sensory streams. In this paper we formulate recognizing activities from accelerometer data as a classification problem. In addition to the statistical and spectral features extracted from the acceleration data, we propose to extract features that characterize the variations in the first order derivative of the acceleration signal. We evaluate the performance of different state-of-the-art discriminative classifiers, such as boosted decision stumps (AdaBoost), support vector machines (SVM), and regularized logistic regression (RLogReg), under three different evaluation scenarios (namely subject independent, subject adaptive, and subject dependent). We propose a novel, computationally inexpensive methodology for incorporating temporal smoothing into classification that can be coupled with any classifier with minimal training for classifying continuous sequences. While a 3% increase in classification accuracy was observed on adding the new features, the proposed technique for continuous recognition showed a 2.5–3% improvement in performance.
Analysis of low resolution accelerometer data for continuous human activity recognition
  • N C Krishnan
  • S Panchanathan
N. C. Krishnan and S. Panchanathan. Analysis of low resolution accelerometer data for continuous human activity recognition. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3337–3340. IEEE, 2008.
Activity recognition using cell phone accelerometers
  • J R Kwapisz
  • G M Weiss
  • S A Moore
J. R. Kwapisz, G. M. Weiss, and S. A. Moore. Activity recognition using cell phone accelerometers. ACM SIGKDD Explorations Newsletter, 12(2):74–82, 2011.
Twins: device-free object tracking using passive tags
  • J Han
  • C Qian
  • D Ma
  • X Wang
  • J Zhao
  • P Zhang
  • W Xi
  • Z Jiang
J. Han, C. Qian, D. Ma, X. Wang, J. Zhao, P. Zhang, W. Xi, and Z. Jiang. Twins: device-free object tracking using passive tags. In Proc. of IEEE Intl. Conference on Computer Communications (INFOCOM), 2014.
Bewell: A smartphone application to monitor, model and promote wellbeing
  • N D Lane
N. D. Lane et al. Bewell: A smartphone application to monitor, model and promote wellbeing. In Proc. of 5th Intl. ICST Conference on Pervasive Computing Technologies for Healthcare, pages 23-26, 2011.