Article

SpiderWalk: Circumstance-aware Transportation Activity Detection Using a Novel Contact Vibration Sensor


Abstract

This paper presents the design and implementation of the SpiderWalk system for circumstance-aware transportation activity detection using a novel contact vibration sensor. Unlike existing systems that report only the type of activity, our system detects not only the activity but also its circumstances (e.g., road surface, vehicle, and shoe types), providing better support for applications such as activity logging, location tracking, and smart persuasive applications. Inspired by, but distinct from, existing audio-based context detection approaches using microphones, SpiderWalk is built around an ultra-sensitive, flexible contact vibration sensor that mimics the sensory slit organs of spiders. By sensing vibration patterns from the soles of shoes, the system can accurately detect transportation activities with rich circumstance information while rejecting undesirable external signals, such as ambient sound or speech, that could cause data-association and privacy issues. Moreover, our system reuses existing audio interfaces and works with an unmodified smartphone, making it ready for large-scale deployment. Finally, a novel temporally and spatially correlated classification approach is proposed to accurately detect complex combinations of transportation activities and circumstances from the outputs of the individual classifiers. Experiments on a real-world data set suggest that our system detects different transportation activities and their circumstances with an average accuracy of 93.8%, at resource overheads comparable to existing audio- and GPS-based systems.
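The fusion step described in the abstract, combining the individual classifiers' per-window outputs by exploiting temporal correlation, can be illustrated with a deliberately simplified sketch. The function name and the majority-vote rule below are illustrative assumptions, not the paper's actual algorithm:

```python
from collections import Counter

def smooth_labels(window_labels, k=3):
    """Majority-vote smoothing over a sliding window of the k most
    recent frames. A toy stand-in for the paper's temporally correlated
    classification: per-window outputs are stabilized by exploiting the
    fact that transportation activities persist across adjacent windows.
    """
    smoothed = []
    for i, _ in enumerate(window_labels):
        ctx = window_labels[max(0, i - k + 1): i + 1]
        smoothed.append(Counter(ctx).most_common(1)[0][0])
    return smoothed

# A spurious single-frame "bus" inside a walking run is suppressed:
raw = ["walk", "walk", "bus", "walk", "walk"]
print(smooth_labels(raw))  # ['walk', 'walk', 'walk', 'walk', 'walk']
```

The same idea extends to circumstance labels (surface, shoe type), which change even more slowly than the activity itself.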


... This approach can help unveil information that is not obtainable from a single source; for instance, [13] can detect the public transport vehicle in addition to the transportation mode. In [14], the authors recognize not only the transportation modes but also the circumstances in which the users performed the activities (e.g., road surface and shoe types). However, although external data may increase the accuracy and the kinds of information we can discover, it must be collected repeatedly, since city information can change over time. ...
... In this context, random forest (RF) has given appealing results in this field [12,18,19], including on data collected at different frequencies [20] and on mobile phone signaling data [21]. It has additionally been applied to detect socioeconomic attributes [22] and travel circumstances [14]. As noted above, recognizing this additional information requires the use of external data. ...
Article
Full-text available
Analyzing people’s mobility and identifying the transportation mode is essential for cities to create travel diaries. It can help develop technologies that reduce traffic jams and travel times, thus improving citizens’ quality of life. Previous studies in this context extracted many specialized features, sometimes reaching hundreds of them; this approach requires domain knowledge. Other strategies focused on deep learning methods, which need intense computational power and more data than traditional methods to train their models. In this work, we propose using information theory quantifiers derived from the ordinal patterns (OPs) transformation for transportation mode identification. Our proposal has the advantage of requiring less data. OP is also computationally inexpensive and has low dimensionality, which is beneficial in scenarios where information is hard to collect, such as Internet-of-Things contexts. Our results demonstrate that OP features enhance the classification results of standard features in such scenarios.
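The ordinal-pattern features this abstract describes can be sketched in a few lines. The `permutation_entropy` function below is a standard Bandt-Pompe-style quantifier and stands in for the paper's exact feature set, which is not reproduced here:

```python
from itertools import permutations
from math import log

def permutation_entropy(series, order=3):
    """Normalized permutation entropy from ordinal patterns.

    Each length-`order` window is mapped to the permutation that sorts
    it; the entropy of the resulting pattern histogram is a cheap,
    low-dimensional feature of the kind fed to a classifier.
    """
    counts = {p: 0 for p in permutations(range(order))}
    n = len(series) - order + 1
    for i in range(n):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] += 1
    probs = [c / n for c in counts.values() if c > 0]
    h = abs(-sum(p * log(p) for p in probs))
    return h / log(len(counts))  # normalize to [0, 1]

# A strictly increasing series has a single ordinal pattern:
print(permutation_entropy([1, 2, 3, 4, 5, 6]))  # 0.0
```

Irregular accelerometer traces (e.g., walking on rough ground) yield values closer to 1, which is what makes the quantifier discriminative for transportation modes.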
... The novel SpiderWalk vibration sensor, worn inside the subject's shoes under the feet, was used to collect data over one month from six subjects, transmitted wirelessly via Bluetooth to their smartphones. Ref. [40] demonstrated a high detection accuracy of 93.8% for determining the kind of vehicle a subject was traveling in, or whether the participant was walking or sitting, and on what kind of surface. While the SpiderWalk method obtained a good accuracy of 93.8%, it required a specialized sensor in people's shoes that is not readily and passively available via people's existing smartphones, as our model is. ...
Article
Full-text available
This paper explores the utilization of smart device sensors for the purpose of vehicle recognition. Currently a ubiquitous aspect of people’s lives, smart devices can conveniently record details about walking, biking, jogging, and stepping, including physiological data, via often built-in phone activity recognition processes. This paper examines research on intelligent transportation systems to uncover how smart device sensor data may be used for vehicle recognition research, and fit within its growing body of literature. Here, we use the accelerometer and gyroscope, which can be commonly found in a smart phone, to detect the class of a vehicle. We collected data from cars, buses, trains, and bikes using a smartphone, and we designed a 1D CNN model leveraging the residual connection for vehicle recognition. The model achieved more than 98% accuracy in prediction. Moreover, we also provide future research directions based on our study.
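The residual connection this abstract mentions can be shown with a toy, pure-Python stand-in. The actual model is a trained 1D CNN; the kernel values and the centre-alignment rule below are illustrative assumptions:

```python
def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def residual_block(x, kernel):
    """y = f(x) + x: the skip connection adds the (centre-aligned)
    input back, so the layer only has to learn a residual correction.
    """
    y = conv1d(x, kernel)
    offset = (len(x) - len(y)) // 2  # align centres for the skip add
    return [yi + x[offset + i] for i, yi in enumerate(y)]

signal = [0.0, 1.0, 2.0, 3.0, 4.0]
identity = [0.0, 1.0, 0.0]  # kernel that passes the input through
print(residual_block(signal, identity))  # [2.0, 4.0, 6.0]
```

With the identity kernel, the output is exactly twice the aligned input, which makes the skip path easy to verify by hand.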
... Foot-mounted sensors can measure heel-strike time, toe-off time, stance time, swing time, cadence, foot clearance, and stride length [11,37], even amongst those who suffer neurological conditions [3,37]. Another shoe-mounted system is SpiderWalk, which uses vibration sensors to perform activity detection [38]. This system uses machine learning methods to determine not only the performed activity (such as walking) but also other relevant contexts (such as walking surface). ...
Article
We demonstrate a new foot-mounted sensor system for mobile gait analysis which is based on Ultra Wideband (UWB) technology. Our system is wireless, inexpensive, portable, and able to estimate clinical measurements that are not currently available in traditional Inertial Measurement Unit (IMU) based wearables such as step width and foot positioning. We collect a dataset of over 2000 steps across 21 people to test our system in comparison with the clinical gold-standard GAITRite, and other IMU-based algorithms. We propose methods to calculate gait metrics from the UWB data that our system collects. Our system is then validated against the GAITRite mat, measuring step width, step length, and step time with mean absolute errors of 0.033m, 0.032m, and 0.012s respectively. This system has the potential for use in many fields including sports medicine, neurological diagnostics, fall risk assessment, and monitoring of the elderly.
Article
Full-text available
Smart cities are one of the emerging domains for computational applications. Many of these applications may benefit from the ubiquitous computing paradigm to provide better services. An important aspect of these applications is how to obtain data about their users and understand them. Context-aware approaches have proven successful in understanding these data. These solutions obtain data from one or more sensors and apply context recognition techniques to infer higher-level information. Several works in the last decade have presented ubiquitous approaches for context recognition that can be applied in smart cities. Our work presents a systematic mapping that provides an overview of context recognition approaches applied in smart city domains. Several aspects of these approaches have been analyzed, such as reasoning techniques, sensor usage, context level, and applications. Of the 3627 papers returned in the search, 93 were analyzed after two filtering processes. The analysis of these papers shows that only a few recent works have explored situation recognition and the full potential of the sensing capabilities in smart cities. The main objective of this article is the identification of open challenges in context recognition, enabling the development of new solutions and research.
Conference Paper
Full-text available
Foot interfaces, such as pressure-sensitive insoles, still hold unused potential, for example for implicit interaction. In this paper, we introduce CapSoles, enabling smart insoles to implicitly identify who is walking on what kind of floor. Our insole prototype relies on capacitive sensing and is able to sense the plantar pressure distribution underneath the foot, plus a capacitive ground-coupling effect. Using machine learning algorithms, we evaluated the identification of 13 users while walking, with a confidence of ~95% after a recognition delay of ~1s. Once the user's gait is known, we can further detect irregularities in gait as well as a varying ground coupling. Since both effects in combination are usually unique to a given ground surface, we demonstrate the ability to distinguish six kinds of floor, namely sand, lawn, paving stone, carpet, linoleum, and tartan, with an average accuracy of ~82%. Moreover, we demonstrate the unique effects of wet and electrostatically charged surfaces.
Conference Paper
Full-text available
Transportation or travel mode recognition plays an important role in enabling us to derive transportation profiles, e.g., to assess how eco-friendly our travel is, and to adapt travel information services such as maps to the travel mode. However, current methods have two key limitations: low transportation mode recognition accuracy and coarse-grained recognition capability. In this paper, we propose a new method which leverages a set of wearable foot force sensors in combination with a mobile phone's GPS (FF+GPS) to address these limitations. The transportation modes recognised include walking, cycling, bus passenger, car passenger, and car driver. The novelty of our approach is that it provides a more fine-grained transportation mode recognition capability, reliably differentiating bus passenger, car passenger, and car driver for the first time. Results show that, compared to a typical accelerometer-based method with an average accuracy of 70%, the FF+GPS-based method achieves a substantial improvement, with an average accuracy of 95% when evaluated on ten individuals.
Conference Paper
Full-text available
Transportation mode detection (TMD) is a growing field of research, in which a variety of methods have been developed, primarily for outdoor travel. It has been employed in application areas such as public transportation and environmental footprint profiling. For indoor travel the problem of TMD has received comparatively little attention, even though diverse transportation modes, such as biking and electric vehicles, are used indoors. The potential applications are diverse, and include scheduling and progress tracking for mobile workers, and management of vehicular resources. However, for indoor TMD, the physical environment as well as the availability and reliability of sensing resources differ drastically from outdoor scenarios. Therefore, many of the methods developed for outdoor TMD cannot be easily and reliably applied indoors. In this paper, we explore indoor transportation scenarios to arrive at a conceptual model of indoor transportation modes, and then compare challenges for outdoor and indoor TMD. In addition, we explore methods for TMD we deem suitable in indoor settings, and we perform an extensive real-world evaluation of such methods at a large hospital complex. The evaluation utilizes Wi-Fi and accelerometer data collected through smartphones carried by hospital workers throughout four days of work routines. The results show that the methods can distinguish between six common modes of transportation used by the hospital workers with an F-score of 84.2%.
Article
Full-text available
Recently developed flexible mechanosensors based on inorganic silicon, organic semiconductors, carbon nanotubes, graphene platelets, pressure-sensitive rubber and self-powered devices are highly sensitive and can be applied to human skin. However, the development of a multifunctional sensor satisfying the requirements of ultrahigh mechanosensitivity, flexibility and durability remains a challenge. In nature, spiders sense extremely small variations in mechanical stress using crack-shaped slit organs near their leg joints. Here we demonstrate that sensors based on nanoscale crack junctions and inspired by the geometry of a spider's slit organ can attain ultrahigh sensitivity and serve multiple purposes. The sensors are sensitive to strain (with a gauge factor of over 2,000 in the 0-2 per cent strain range) and vibration (with the ability to detect amplitudes of approximately 10 nanometres). The device is reversible, reproducible, durable and mechanically flexible, and can thus be easily mounted on human skin as an electronic multipixel array. The ultrahigh mechanosensitivity is attributed to the disconnection-reconnection process undergone by the zip-like nanoscale crack junctions under strain or vibration. The proposed theoretical model is consistent with experimental data that we report here. We also demonstrate that sensors based on nanoscale crack junctions are applicable to highly selective speech pattern recognition and the detection of physiological signals. The nanoscale crack junction-based sensory system could be useful in diverse applications requiring ultrahigh displacement sensitivity.
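The reported gauge factor is defined as the fractional resistance change per unit strain, GF = (ΔR/R0)/ε. A quick arithmetic check of the headline figure:

```python
def gauge_factor(delta_r_over_r, strain):
    """GF = (dR/R0) / strain: the sensitivity of a resistive strain
    sensor, i.e., fractional resistance change per unit strain."""
    return delta_r_over_r / strain

# At the reported GF of over 2,000, a 2% strain implies the resistance
# changes by a factor of at least 40 relative to its baseline value:
print(gauge_factor(40.0, 0.02))  # 2000.0
```

For comparison, conventional metal-foil strain gauges have gauge factors around 2, which is what makes the crack-junction design's sensitivity notable.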
Article
Full-text available
In this paper, we propose BodyBeat, a novel mobile sensing system for capturing and recognizing a diverse range of non-speech body sounds in real-life scenarios. Non-speech body sounds, such as sounds of food intake, breath, laughter, and cough contain invaluable information about our dietary behavior, respiratory physiology, and affect. The BodyBeat mobile sensing system consists of a custom-built piezoelectric microphone and a distributed computational framework that utilizes an ARM microcontroller and an Android smartphone. The custom-built microphone is designed to capture subtle body vibrations directly from the body surface without being perturbed by external sounds. The microphone is attached to a 3D printed neckpiece with a suspension mechanism. The ARM embedded system and the Android smartphone process the acoustic signal from the microphone and identify non-speech body sounds. We have extensively evaluated the BodyBeat mobile sensing system. Our results show that BodyBeat outperforms other existing solutions in capturing and recognizing different types of important non-speech body sounds.
Article
Full-text available
The rapidly growing adoption of sensor-enabled smartphones has greatly fueled the proliferation of applications that use phone sensors to monitor user behavior. A central sensor among these is the microphone which enables, for instance, the detection of valence in speech, or the identification of speakers. Deploying multiple of these applications on a mobile device to continuously monitor the audio environment allows for the acquisition of a diverse range of sound-related contextual inferences. However, the cumulative processing burden critically impacts the phone battery. To address this problem, we propose DSP.Ear -- an integrated sensing system that takes advantage of the latest low-power DSP co-processor technology in commodity mobile devices to enable the continuous and simultaneous operation of multiple established algorithms that perform complex audio inferences. The system extracts emotions from voice, estimates the number of people in a room, identifies the speakers, and detects commonly found ambient sounds, while critically incurring little overhead to the device battery. This is achieved through a series of pipeline optimizations that allow the computation to remain largely on the DSP. Through detailed evaluation of our prototype implementation we show that, by exploiting a smartphone's co-processor, DSP.Ear achieves a 3 to 7 times increase in the battery lifetime compared to a solution that uses only the phone's main processor. In addition, DSP.Ear is 2 to 3 times more power efficient than a naïve DSP solution without optimizations. We further analyze a large-scale dataset from 1320 Android users to show that in about 80-90% of the daily usage instances DSP.Ear is able to sustain a full day of operation (even in the presence of other smartphone workloads) with a single battery charge.
Article
Full-text available
Data recorded by the accelerometer can be successfully utilised to determine the mode of transportation in use, which provides an alternative to conventional household travel surveys and makes it possible to implement customer-oriented advertising programmes. In this study, a comparison is made between changes in pre-processing, selection methods for generating training data, and classifiers, using accelerometer data collected from three cities in Japan. The classifiers used were support vector machines (SVM), adaptive boosting (AdaBoost), decision trees, and random forests. The results suggest that using a 125-point moving average during pre-processing and selecting training data proportionally for all modes maximises prediction accuracy. Moreover, random forests outperformed all other classifiers, yielding an overall prediction accuracy of 99.8% for all three cities.
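The moving-average pre-processing this study favours is a plain trailing mean over the most recent samples. A minimal sketch (the study used a 125-point window; width 3 is used below only for readability):

```python
from collections import deque

def moving_average(samples, width=125):
    """Trailing moving average: smooths raw accelerometer samples
    before feature extraction, damping transient spikes."""
    buf, total, out = deque(), 0.0, []
    for s in samples:
        buf.append(s)
        total += s
        if len(buf) > width:
            total -= buf.popleft()
        out.append(total / len(buf))
    return out

# A single spike is spread out and damped before classification:
print(moving_average([0, 0, 9, 0, 0], width=3))  # [0.0, 0.0, 3.0, 3.0, 3.0]
```

The running-total formulation keeps the cost at O(1) per sample, which matters when processing continuous sensor streams on a phone.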
Chapter
Full-text available
Four audio feature sets are evaluated in their ability to differentiate five audio classes: popular music, classical music, speech, noise and crowd noise. The feature sets include low-level signal properties, mel-frequency spectral coefficients, and two new sets based on perceptual models of hearing. The temporal behavior of the features is analyzed and parameterized and these parameters are included as additional features. Using a standard Gaussian framework for classification, results show that the temporal behavior of features is important for automatic audio classification. In addition, classification is better, on average, if based on features from models of auditory perception rather than on standard features.
Conference Paper
Full-text available
Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable—the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward–backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach's chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot.
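For orientation, the single-state HMM that the factorial model generalizes computes observation likelihoods with the forward algorithm. A minimal sketch with hypothetical two-state parameters (not the factorial variant itself, whose state space is a product of such chains):

```python
def forward(obs, pi, A, B):
    """Forward algorithm for a plain HMM: p(observation sequence) by
    summing over all hidden-state paths. pi = initial distribution,
    A[r][s] = transition probability, B[s][o] = emission probability."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(len(pi)))
                 for s in range(len(pi))]
    return sum(alpha)

# Two states, two symbols; a fully deterministic chain assigns its one
# possible observation sequence probability 1:
pi = [1.0, 0.0]
A = [[0.0, 1.0], [1.0, 0.0]]  # states alternate deterministically
B = [[1.0, 0.0], [0.0, 1.0]]  # state s always emits symbol s
print(forward([0, 1, 0], pi, A, B))  # 1.0
```

The factorial HMM replaces the single state index with a tuple of indices, which is exactly what makes the analogous exact recursion combinatorially expensive.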
Article
Full-text available
We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and clear Bayesian semantics. However, the Markovian framework makes strong restrictive assumptions about the system generating the signal: that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions.
Conference Paper
Full-text available
Recognition of everyday physical activities is difficult due to the challenges of building informative, yet unobtrusive sensors. The most widely deployed and used mobile computing device today is the mobile phone, which presents an obvious candidate for recognizing activities. This paper explores how coarse-grained GSM data from mobile phones can be used to recognize high-level properties of user mobility, and daily step count. We demonstrate that even without knowledge of observed cell tower locations, we can recognize mobility modes that are useful for several application domains. Our mobility detection system was evaluated with GSM traces from the everyday lives of three data collectors over a period of one month, yielding an overall average accuracy of 85%, and a daily step count that reasonably approximates the numbers determined by several commercial pedometers.
Conference Paper
Full-text available
The greatest contributor of CO2 emissions in the average American household is personal transportation. Because transportation is inherently a mobile activity, mobile devices are well suited to sense and provide feedback about these activities. In this paper, we explore the use of personal ambient displays on mobile phones to give users feedback about sensed and self-reported transportation behaviors. We first present results from a set of formative studies exploring our respondents' existing transportation routines, willingness to engage in and maintain green transportation behavior, and reactions to early mobile phone "green" application design concepts. We then describe the results of a 3-week field study (N=13) of the UbiGreen Transportation Display prototype, a mobile phone application that semi-automatically senses and reveals information about transportation behavior. Our contributions include a working system for semi-automatically tracking transit activity, a visual design capable of engaging users in the goal of increasing green transportation, and the results of our studies, which have implications for the design of future green applications.
Article
Full-text available
Spatial and temporal plantar pressure distributions are important and useful measures in footwear evaluation, athletic training, clinical gait analysis, and foot pathology diagnosis. However, existing plantar pressure measurement and analysis systems tend to be uncomfortable to wear and expensive. This paper presents an in-shoe plantar pressure measurement and analysis system based on a textile fabric sensor array, which is soft, light, and has high pressure sensitivity and a long service life. The sensors are connected to a soft polymeric board through conductive yarns and integrated into an insole. A stable data acquisition system interfaces with the insole and wirelessly transmits the acquired data to a remote receiver over Bluetooth. Three configuration modes provide connection to a desktop, laptop, or smartphone, so the system can be used in research laboratories, clinics, sports grounds, and other outdoor environments. Real-time display and analysis software calculates parameters such as mean pressure, peak pressure, center of pressure (COP), and shift speed of the COP. Experimental results show that the system performs stably in both static and dynamic measurements.
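The center of pressure (COP) the software reports is the pressure-weighted mean of the sensor positions. A minimal sketch with hypothetical sensor coordinates (the insole's real layout is not given here):

```python
def center_of_pressure(pressures, positions):
    """COP = pressure-weighted mean of 2-D sensor positions.

    pressures: per-sensor readings; positions: (x, y) coordinates of
    each sensor in the insole plane (hypothetical units here).
    """
    total = sum(pressures)
    cx = sum(p * x for p, (x, _) in zip(pressures, positions)) / total
    cy = sum(p * y for p, (_, y) in zip(pressures, positions)) / total
    return cx, cy

# Two sensors, heel at (0, 0) and toe at (0, 10), equally loaded:
# the COP sits midway along the foot axis.
print(center_of_pressure([5.0, 5.0], [(0.0, 0.0), (0.0, 10.0)]))  # (0.0, 5.0)
```

Tracking this point frame by frame gives the COP trajectory, and differencing consecutive frames gives the shift speed the paper mentions.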
Article
Full-text available
The aim of the study was to determine the effectiveness of new, individually fitted sports shoes against overuse injuries to the lower limb among newspaper carriers. Patients (N = 176) with lower-limb overuse injuries were randomly assigned to use new, individually adjusted footwear with good shock absorbing properties (test group = 86) or the subjects' own, used footwear (control group = 90). The main outcome measurements were lower-limb pain intensity during walking, as rated on a visual analogue scale (0-100), number of painful days, subjective assessment of global improvement, foot fatigue, number of hyperkeratotic skin lesions and diagnosed overuse injuries, and costs of foot care as compared between the treatment groups. At the 6-month follow-up there was a difference in favor of the test group with respect to lower-limb pain intensity and number of painful days, when compared with the control group. At 1 year, 53% and 33% of the test and control groups, respectively, thought they were better than at the time of the baseline examination (number needed to treat being 5 between the test and control groups). The test subjects had less foot fatigue and fewer hyperkeratotic skin lesions. There was no difference in the number of diagnosed overuse injuries between the groups. During the year of follow-up, the all-inclusive mean costs of foot care were USD 70 and USD 158 in the test and control groups, respectively. Individually adjusted shock-absorbing shoes offer slight health benefits for lower-limb overuse injuries. Proper shoes may decrease the need to use health care resources.
Article
Full-text available
Despite their importance for urban planning, traffic forecasting and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited owing to the lack of tools to monitor the time-resolved location of individuals. Here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six-month period. We find that, in contrast with the random trajectories predicted by the prevailing Lévy flight and random walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time-independent characteristic travel distance and a significant probability to return to a few highly frequented locations. After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse into a single spatial probability distribution, indicating that, despite the diversity of their travel history, humans follow simple reproducible patterns. This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning and agent-based modelling.
Article
Full-text available
We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
Article
In this paper, we introduce Mago, a novel system that can infer a person's mode of transport (MOT) using the Hall-effect magnetic sensor and accelerometer present in most smart devices. When a vehicle is moving, the motions of its mechanical components such as the wheels, transmission and the differential distort the earth's magnetic field. The magnetic field is distorted corresponding to the vehicle structure (e.g., bike chain or car transmission system), which manifests itself as a strong signal for sensing a person's transportation modality. We utilize this magnetic signal combined with the accelerometer and design a robust algorithm for the MOT detection. In particular, our system extracts frame-based features from the sensor data and can run in nearly real-time with only a few seconds of delay. We evaluated Mago using over 70 hours of daily commute data from 7 participants and the leave-one-out analysis of our cross-user, cross-device model reports an average accuracy of 94.4% among seven classes (stationary, bus, bike, car, train, light rail and scooter). Besides MOT, our system is able to reliably differentiate the phone's in-car position at an average accuracy of 92.9%. We believe Mago could potentially benefit many contextually-aware applications that require MOT detection such as a digital personal assistant or a life coaching application.
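The frame-based features this abstract mentions are not enumerated in the excerpt. As a generic sketch, per-frame magnitude statistics of the kind such mode-of-transport detectors typically compute over magnetometer or accelerometer frames (the specific feature names here are assumptions, not Mago's actual feature set):

```python
from math import sqrt

def frame_features(frame):
    """Summary statistics of a 3-axis sensor frame.

    frame: list of (x, y, z) samples from one time window. Returns the
    mean, standard deviation, and peak of the per-sample magnitude,
    which are orientation-independent and cheap to compute on-device.
    """
    mags = [sqrt(x * x + y * y + z * z) for x, y, z in frame]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return {"mean": mean, "std": sqrt(var), "peak": max(mags)}

frame = [(0.0, 0.0, 1.0), (0.0, 0.0, 3.0)]
print(frame_features(frame))  # {'mean': 2.0, 'std': 1.0, 'peak': 3.0}
```

A vehicle's rotating components modulate the magnetic-field magnitude periodically, so statistics like these (and their spectra) separate, say, a bike chain from a car transmission.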
Book
Data Mining: Practical Machine Learning Tools and Techniques, Fourth Edition, offers a thorough grounding in machine learning concepts, along with practical advice on applying these tools and techniques in real-world data mining situations. This fourth edition teaches readers everything they need to know to get going, from preparing inputs, interpreting outputs, and evaluating results to the algorithmic methods at the heart of successful data mining approaches. Extensive updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including substantial new chapters on probabilistic methods and on deep learning. Accompanying the book is a new version of the popular WEKA machine learning software from the University of Waikato. The authors, Witten, Frank, Hall, and Pal, combine today's techniques with methods at the leading edge of contemporary research. The book companion website at http://www.cs.waikato.ac.nz/ml/weka/book.html contains PowerPoint slides for Chapters 1-12, an online appendix on the WEKA workbench, the table of contents, reviews of the first edition, and errata.
The book provides concrete tips and techniques for performance improvement that work by transforming the input or output of machine learning methods, includes the downloadable WEKA software toolkit with a comprehensive collection of machine learning algorithms in an easy-to-use interactive interface, and is accompanied by open-access online courses that introduce practical applications of the material.
Article
Microphones are remarkably powerful sensors of human behavior and context. However, audio sensing is highly susceptible to wild fluctuations in accuracy when used in the diverse acoustic environments (such as bedrooms, vehicles, or cafes) that users encounter on a daily basis. Towards addressing this challenge, we turn to the field of deep learning, an area of machine learning that has radically changed related audio modeling domains like speech recognition. In this paper, we present DeepEar, the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks. We train DeepEar with a large-scale dataset including unlabeled data from 168 place visits. The resulting learned model, involving 2.3M parameters, enables DeepEar to significantly increase inference robustness to background noise beyond conventional approaches present in mobile devices. Finally, we show DeepEar is feasible for smartphones by building a cloud-free DSP-based prototype that runs continuously, using only 6% of the smartphone's battery daily.
Conference Paper
Motivated by safety challenges resulting from distracted pedestrians, this paper presents a sensing technology for fine-grained location classification in an urban environment. It seeks to detect the transitions from sidewalk locations to in-street locations, to enable applications such as alerting texting pedestrians when they step into the street. In this work, we use shoe-mounted inertial sensors for location classification based on surface gradient profile and step patterns. This approach is different from existing shoe sensing solutions that focus on dead reckoning and inertial navigation. The shoe sensors relay inertial sensor measurements to a smartphone, which extracts the step pattern and the inclination of the ground a pedestrian is walking on. This allows detecting transitions such as stepping over a curb or walking down sidewalk ramps that lead into the street. We carried out walking trials in metropolitan environments in the United States (Manhattan) and Europe (Turin). The results from these experiments show that we can accurately determine transitions between sidewalk and street locations to identify pedestrian risk.
Conference Paper
Determining the mode of transport of an individual is an important element of contextual information. In particular, we focus on differentiating between different forms of motorized transport such as car, bus, subway etc. Our approach uses location information and features derived from transit route information (schedule information, not real-time) published by transit agencies. This enables no up-front training or learning of routes and can be deployed instantly to a new place since most transit agencies publish this information. Combined with motion detection using phone accelerometers, we obtain a classification accuracy of around 90% on 50+ hours of car and transit data.
Conference Paper
Energy is a significant bottleneck of smartphone operation. Today's smartphone batteries can normally support less than two days' continuous use. It is therefore important to find out where the energy goes inside a smartphone. In this paper, we present a hardware-based method for Android smartphones. We conduct a comprehensive power evaluation under a predefined set of test cases, and identify a number of primary power-hungry modules, such as the screen display, GPS and WiFi modules. Finally, an energy model for these modules is established.
Article
The accelerometer is the predominant sensor used for low-power context detection on smartphones. Although low-power, the accelerometer is orientation- and position-dependent, requires a high sampling rate, and consequently complex processing and training to achieve good accuracy. We present an alternative approach for context detection using only the smartphone's barometer, a relatively new sensor now present in an increasing number of devices. The barometer is independent of phone position and orientation. Using a low sampling rate of 1 Hz, and simple processing based on intuitive logic, we demonstrate that it is possible to use the barometer for detecting the basic user activities of IDLE, WALKING, and VEHICLE at extremely low power. We evaluate our approach using 47 hours of real-world transportation traces from 3 countries and 13 individuals, as well as more than 900 km of elevation data pulled from Google Maps from 5 cities, comparing power and accuracy to Google's accelerometer-based Activity Recognition algorithm, and to the Future Urban Mobility Survey's (FMS) GPS-accelerometer server-based application. Our barometer-based approach uses 32 mW less power than Google's, and has comparable accuracy to both Google and FMS. This is the first paper that uses only the barometer for context detection.
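The kind of simple threshold logic described above can be sketched as follows; the pressure-span thresholds (0.05 and 0.5 hPa) and the 30-sample window are illustrative assumptions, not the paper's actual parameters:

```python
def classify_window(pressures_hpa):
    """Classify a window of 1 Hz barometer samples (hPa) as
    IDLE, WALKING, or VEHICLE using simple threshold logic.
    Thresholds here are illustrative, not the paper's values."""
    span = max(pressures_hpa) - min(pressures_hpa)
    # Pressure changes roughly 0.12 hPa per metre of elevation near sea level.
    if span < 0.05:      # essentially flat: no elevation change
        return "IDLE"
    elif span < 0.5:     # gentle elevation changes typical of walking
        return "WALKING"
    else:                # fast, large changes typical of motorised travel
        return "VEHICLE"
```

A flat trace maps to IDLE, a slow drift to WALKING, and a steep drift to VEHICLE; a real system would also smooth out weather-driven baseline shifts.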
Article
In everyday life, we are able to perform various activities simultaneously without consciously paying attention to them. For example, we can easily read a newspaper while drinking coffee. This latter activity takes place in our background or periphery of attention. By contrast, interactions with computing technology usually require focused attention. With interactive technologies becoming increasingly present in the everyday environment, it is essential to explore how these technologies could be developed such that people can interact with them both in the focus and in the periphery of attention. This upcoming field of Peripheral Interaction aims to fluently embed interactive technology into everyday life. This workshop brings together researchers and practitioners from different disciplines to share research and design work and to further shape the field of Peripheral Interaction.
Article
We present a prototype mobile phone application that implements a novel transportation mode detection algorithm. The application is designed to run in the background, and continuously collects data from built-in acceleration and network location sensors. The collected data is analyzed automatically and partitioned into activity segments. A key finding of our work is that walking activity can be robustly detected in the data stream, which, in turn, acts as a separator for partitioning the data stream into other activity segments. Each vehicle activity segment is then sub-classified according to the vehicle type. Our approach yields high accuracy despite the low sampling interval and does not require GPS data. As a result, device power consumption is effectively minimized. This is a very crucial point for large-scale real-world deployment. As part of an experiment performed in Zurich, Switzerland, the application was used to collect 495 samples, and our prototype achieves 82% accuracy in transportation mode classification. Incorporating location type information with this activity classification technology has the potential to impact many phenomena driven by human mobility and to enhance awareness of behavior, urban planning, and agent-based modeling.
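The walking-as-separator idea can be sketched as follows, assuming per-window activity labels already produced by some upstream classifier; the label names and the `min_walk_len` parameter are hypothetical:

```python
def segment_by_walking(labels, min_walk_len=3):
    """Split a stream of per-window activity labels into segments,
    using runs of 'walk' (at least min_walk_len windows long) as
    separators. Returns (start, end) index pairs of the segments
    between walks; short walk runs are treated as noise."""
    segments, start, i, n = [], 0, 0, len(labels)
    while i < n:
        if labels[i] == "walk":
            j = i
            while j < n and labels[j] == "walk":
                j += 1                       # extend the walking run
            if j - i >= min_walk_len:        # long enough to be a real walk
                if i > start:
                    segments.append((start, i))
                start = j                    # next segment begins after walk
            i = j
        else:
            i += 1
    if n > start:
        segments.append((start, n))
    return segments
```

Each returned segment could then be sub-classified by vehicle type, as the abstract describes.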
Conference Paper
Accurate activity recognition enables the development of a variety of ubiquitous computing applications, such as context-aware systems, lifelogging, and personal health systems. Wearable sensing technologies can be used to gather data for activity recognition without requiring sensors to be installed in the infrastructure. However, the user may need to wear multiple sensors for accurate recognition of a larger number of different activities. We developed a wearable acoustic sensor, called BodyScope, to record the sounds produced in the user's throat area and classify them into user activities, such as eating, drinking, speaking, laughing, and coughing. The F-measure of the Support Vector Machine classification of 12 activities using only our BodyScope sensor was 79.5%. We also conducted a small-scale in-the-wild study, and found that BodyScope was able to identify four activities (eating, drinking, speaking, and laughing) at 71.5% accuracy.
Conference Paper
Auditeur is a general-purpose, energy-efficient, and context-aware acoustic event detection platform for smartphones. It enables app developers to have their app register for and get notified on a wide variety of acoustic events. Auditeur is backed by a cloud service to store user contributed sound clips and to generate an energy-efficient and context-aware classification plan for the phone. When an acoustic event type has been registered, the smartphone instantiates the necessary acoustic processing modules and wires them together to execute the plan. The phone then captures, processes, and classifies acoustic events locally and efficiently. Our analysis on user-contributed empirical data shows that Auditeur's energy-aware acoustic feature selection algorithm is capable of increasing the device lifetime by 33.4%, sacrificing less than 2% of the maximum achievable accuracy. We implement seven apps with Auditeur, and deploy them in real-world scenarios to demonstrate that Auditeur is versatile, 11.04% - 441.42% less power hungry, and 10.71% - 13.86% more accurate in detecting acoustic events, compared to state-of-the-art techniques. We present a user study to demonstrate that novice programmers can implement the core logic of interesting apps with Auditeur in less than 30 minutes, using only 15 - 20 lines of Java code.
Conference Paper
We present novel accelerometer-based techniques for accurate and fine-grained detection of transportation modes on smartphones. The primary contributions of our work are an improved algorithm for estimating the gravity component of accelerometer measurements, a novel set of accelerometer features that are able to capture key characteristics of vehicular movement patterns, and a hierarchical decomposition of the detection task. We evaluate our approach using over 150 hours of transportation data, which has been collected from 4 different countries and 16 individuals. Results of the evaluation demonstrate that our approach is able to improve transportation mode detection by over 20% compared to current accelerometer-based systems, while at the same time improving generalization and robustness of the detection. The main performance improvements are obtained for motorised transportation modalities, which currently represent the main challenge for smartphone-based transportation mode detection.
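A common baseline for the gravity-estimation step mentioned above is a first-order low-pass filter; the sketch below shows that baseline (not the paper's improved estimator), with an assumed smoothing factor `alpha`:

```python
def estimate_gravity(samples, alpha=0.9):
    """Separate 3-axis accelerometer samples into gravity and linear
    acceleration with a first-order low-pass filter. The smoothing
    factor alpha is an assumed value, not the paper's estimator.
    Returns a list of (gravity, linear_acceleration) tuples."""
    g = list(samples[0])                       # seed with the first reading
    out = []
    for ax, ay, az in samples:
        # low-pass: gravity tracks the slow component of the signal
        g = [alpha * gi + (1 - alpha) * ai
             for gi, ai in zip(g, (ax, ay, az))]
        # the residual approximates linear (user-induced) acceleration
        linear = [ai - gi for ai, gi in zip((ax, ay, az), g)]
        out.append((tuple(g), tuple(linear)))
    return out
```

On a stationary trace the estimated gravity converges to the raw reading and the linear component goes to zero, which is the property the paper's features build on.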
Conference Paper
We propose a novel method for automatic detection of the transport mode of a person carrying a smartphone. Existing approaches assume idealized positioning data with no GPS signal losses, require information from additional external sources such as real-time bus locations, or only allow for a coarse distinction between very few categories (e.g. 'still', 'walk', 'motorized'). Our approach is designed to deal with cluttered real-world smartphone data and can distinguish between fine-grained transport mode categories. It is robust against GPS signal losses by including positioning data obtained from the cellular network and data from accelerometer readings. Mode detection is performed by a two-stage classification technique using a randomized ensemble of classifiers combined with a Hidden Markov Model. We report promising results of an experimental performance analysis with real-world data collected by 15 volunteers during their everyday routines over a period of two months.
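The second-stage HMM smoothing can be illustrated with a minimal Viterbi pass over per-window class probabilities; the "sticky" transition model below (a single `stay` probability) is an illustrative assumption, not the paper's learned model:

```python
def viterbi_smooth(obs_probs, stay=0.9):
    """Smooth per-window class probabilities with a first-order HMM
    whose transition model favours staying in the same state.
    obs_probs: list of {label: probability} dicts, one per window.
    Returns the most likely label sequence."""
    states = list(obs_probs[0])
    switch = (1 - stay) / (len(states) - 1)
    score = {s: obs_probs[0][s] for s in states}   # best path prob ending in s
    back = []                                      # backpointers per step
    for probs in obs_probs[1:]:
        new, ptr = {}, {}
        for s in states:
            prev, p = max(((r, score[r] * (stay if r == s else switch))
                           for r in states), key=lambda x: x[1])
            new[s], ptr[s] = p * probs[s], prev
        score = new
        back.append(ptr)
    path = [max(states, key=lambda s: score[s])]   # best final state
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

A single noisy window whose raw argmax disagrees with its neighbours gets overridden by the transition model, which is exactly the effect such smoothing is used for. (For long sequences one would work in log space to avoid underflow.)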
Article
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
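A minimal sketch of the two core ideas above (bootstrap sampling and random feature selection, combined by majority vote) is below; real random forests grow full decision trees rather than the one-feature stumps assumed here:

```python
import random

def train_forest(X, y, n_trees=25, seed=0):
    """Toy illustration of the random forest idea: each 'tree' is a
    one-feature decision stump trained on a bootstrap sample with a
    randomly chosen feature. A drastic simplification of real forests."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        f = rng.randrange(d)                         # random feature choice
        t = sum(X[i][f] for i in idx) / n            # threshold at sample mean
        left = [y[i] for i in idx if X[i][f] <= t]
        right = [y[i] for i in idx if X[i][f] > t]
        # majority label on each side of the threshold
        l_lab = max(set(left), key=left.count) if left else y[idx[0]]
        r_lab = max(set(right), key=right.count) if right else y[idx[0]]
        stumps.append((f, t, l_lab, r_lab))
    return stumps

def predict(stumps, x):
    """Majority vote over the ensemble."""
    votes = [l if x[f] <= t else r for f, t, l, r in stumps]
    return max(set(votes), key=votes.count)
```

Even though individual stumps are weak and trained on noisy resamples, the vote of many decorrelated stumps is stable, which is the intuition behind the strength/correlation trade-off described in the abstract.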
Conference Paper
Accurate, real-time measurement of energy expended during everyday activities would enable development of novel health monitoring and wellness technologies. A technique using three miniature wearable accelerometers is presented that improves upon state-of-the-art energy expenditure (EE) estimation. On a dataset acquired from 24 subjects performing gym and household activities, we demonstrate how knowledge of activity type, which can be automatically inferred from the accelerometer data, can improve EE estimates by more than 15% when compared to the best estimates from other methods.
Conference Paper
Top-end mobile phones include a number of specialized (e.g., accelerometer, compass, GPS) and general-purpose sensors (e.g., microphone, camera) that enable new people-centric sensing applications. Perhaps the most ubiquitous and unexploited sensor on mobile phones is the microphone - a powerful sensor that is capable of making sophisticated inferences about human activity, location, and social events from sound. In this paper, we exploit this untapped sensor not in the context of human communications but as an enabler of new sensing applications. We propose SoundSense, a scalable framework for modeling sound events on mobile phones. SoundSense is implemented on the Apple iPhone and represents the first general-purpose sound sensing system specifically designed to work on resource-limited phones. The architecture and algorithms are designed for scalability, and SoundSense uses a combination of supervised and unsupervised learning techniques to classify both general sound types (e.g., music, voice) and discover novel sound events specific to individual users. The system runs solely on the mobile phone with no back-end interactions. Through implementation and evaluation of two proof-of-concept people-centric sensing applications, we demonstrate that SoundSense is capable of recognizing meaningful sound events that occur in users' everyday lives.
Article
User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based post-processing to further improve the inference performance. This post-processing algorithm considers both the commonsense constraints of the real world and typical user behaviors.
Article
The advances of wireless networking and sensor technology open up an interesting opportunity to infer human activities in a smart home environment. Existing work in this paradigm focuses mainly on recognizing activities of single user. In this work, we focus on the fundamental problem of recognizing activities of multiple users using a wireless body sensor network, and propose a scalable pattern mining approach to recognize both single- and multiuser activities in a unified framework. We exploit Emerging Pattern—a discriminative knowledge pattern which describes significant changes among activity classes of data—for building activity models and design a scalable, noise-resistant, Emerging Pattern-based Multiuser Activity Recognizer (epMAR) to recognize both single- and multiuser activities. We develop a multimodal, wireless body sensor network for collecting real-world traces in a smart home environment, and conduct comprehensive empirical studies to evaluate our system. Results show that epMAR outperforms existing schemes in terms of accuracy, scalability, and robustness.
Article
As mobile phones advance in functionality and capability, they are being used for more than just communication. Increasingly, these devices are being employed as instruments for introspection into habits and situations of individuals and communities. Many of the applications enabled by this new use of mobile phones rely on contextual information. The focus of this work is on one dimension of context, the transportation mode of an individual when outside. We create a convenient (no specific position and orientation setting) classification system that uses a mobile phone with a built-in GPS receiver and an accelerometer. The transportation modes identified include whether an individual is stationary, walking, running, biking, or in motorized transport. The overall classification system consists of a decision tree followed by a first-order discrete Hidden Markov Model and achieves an accuracy level of 93.6% when tested on a dataset obtained from sixteen individuals.
Article
The advances of wearable sensors and wireless networks offer many opportunities to recognize human activities from sensor readings in pervasive computing. Existing work so far focuses mainly on recognizing activities of a single user in a home environment. However, there are typically multiple inhabitants in a real home and they often perform activities together. In this paper, we investigate the problem of recognizing multi-user activities using wearable sensors in a home setting. We develop a multi-modal, wearable sensor platform to collect sensor data for multiple users, and study two temporal probabilistic models—Coupled Hidden Markov Model (CHMM) and Factorial Conditional Random Field (FCRF)—to model interacting processes in a sensor-based, multi-user scenario. We conduct a real-world trace collection done by two subjects over two weeks, and evaluate these two models through our experimental studies. Our experimental results show that we achieve an accuracy of 96.41% with CHMM and an accuracy of 87.93% with FCRF, respectively, for recognizing multi-user activities.
Article
Recognizing human activities from sensor readings has recently attracted much research interest in pervasive computing due to its potential in many applications, such as assistive living and healthcare. This task is particularly challenging because human activities are often performed in not only a simple (i.e., sequential), but also a complex (i.e., interleaved or concurrent) manner in real life. Little work has been done in addressing complex issues in such a situation. The existing models of interleaved and concurrent activities are typically learning-based. Such models lack flexibility in real life because activities can be interleaved and performed concurrently in many different ways. In this paper, we propose a novel pattern mining approach to recognize sequential, interleaved, and concurrent activities in a unified framework. We exploit Emerging Pattern—a discriminative pattern that describes significant changes between classes of data—to identify sensor features for classifying activities. Different from existing learning-based approaches which require different training data sets for building activity models, our activity models are built upon the sequential activity trace only and can be applied to recognize both simple and complex activities. We conduct our empirical studies by collecting real-world traces, evaluating the performance of our algorithm, and comparing our algorithm with static and temporal models. Our results demonstrate that, with a time slice of 15 seconds, we achieve an accuracy of 90.96 percent for sequential activity, 88.1 percent for interleaved activity, and 82.53 percent for concurrent activity.
Article
Gastrointestinal (GI) problems are not uniformly assessed in intensive care unit (ICU) patients and respective data in available literature are insufficient. We aimed to describe the prevalence, risk factors and importance of different GI symptoms. We prospectively studied all patients admitted to the General ICU of Tartu University Hospital in 2004-2007. Of 1374 patients, 62 were excluded due to missing data. Seven hundred and seventy-five (59.1%) patients had at least one GI symptom during at least 1 day of their stay, while 475 (36.2%) suffered from more than one symptom. Absent or abnormal bowel sounds were documented in 542 patients (41.3%), vomiting/regurgitation in 501 (38.2%), high gastric aspirate volume in 298 (22.7%), diarrhoea in 184 (14.0%), bowel distension in 139 (10.6%) and GI bleeding in 97 (7.4%) patients during their ICU stay. Absent or abnormal bowel sounds and GI bleeding were associated with significantly higher mortality. The number of simultaneous GI symptoms was an independent risk factor for ICU mortality. The ICU length of stay and mortality of patients who had two or more GI symptoms simultaneously were significantly higher than in patients with a maximum of one GI symptom. GI symptoms occur frequently in ICU patients. Absence of bowel sounds and GI bleeding are associated with impaired outcome. Prevalence of GI symptoms on the first day in ICU predicts the mortality of the patients.
Article
Relatively little is known about the incidence of the risks facing those who exercise regularly. Clinical reports suggest a variety of musculoskeletal ailments, and several pathophysiologic conditions may result from the various aerobic activities most likely to be pursued by large parts of the U.S. population. But adequate epidemiologic data are scarce. Careful epidemiologic studies are needed to develop incidence information.
Mago: Mode of Transport Inference Using the Hall-Effect Magnetic Sensor and Accelerometer
  • Ke-Yu Chen
  • Rahul C. Shah
  • Jonathan Huang
  • Lama Nachman
Model, Framework, and Platform of Health Persuasive Social Network
  • Soleh Udin Al Ayubi
LookUp: Enabling Pedestrian Safety Services via Shoe Sensing
  • Shubham Jain
  • Carlo Borgiattino
  • Yanzhi Ren
  • Marco Gruteser
  • Yingying Chen
  • Carla Fabiana Chiasserini
Urban sensing: Using smartphones for transportation mode classification. Computers, Environment and Urban Systems (CEUS'15)
  • Dongyoun Shin
  • Daniel G. Aliaga
  • Bige Tunçer
  • Stefan Müller Arisona
  • Sungah Kim
  • Dani Zünd
  • Gerhard Schmitt
ShoeSoleSense for Peripheral Interaction
  • Bernhard Slawik
Classifying the mode of transportation on mobile phones using GIS information
  • Rahul C. Shah
  • Chieh-yih Wan
  • Hong Lu
  • Lama Nachman
In-shoe plantar pressure measurement and analysis system based on fabric pressure sensing array
  • L. Shu
  • T. Hua
  • Y. Wang
  • Q. Li
  • D. D. Feng
  • X. Tao
Ultrasensitive mechanical crack-based sensor inspired by the spider sensory system
  • Daeshik Kang
  • Peter V. Pikhitsa
  • Yong Whan Choi
  • Chanseok Lee
  • Sung Soo Shin
  • Linfeng Piao
  • Byeonghak Park
  • Kahp-Yang Suh
  • Tae-il Kim
  • Mansoo Choi