Conference Paper

11th International Workshop on Human Activity Sensing Corpus and Applications (HASCA)

Article
Transportation and locomotion mode recognition from multimodal smartphone sensors is useful to provide just-in-time context-aware assistance. However, the field is currently held back by the lack of standardized datasets, recognition tasks and evaluation criteria. Currently, recognition methods are often tested on ad-hoc datasets acquired for one-off recognition problems and with differing choices of sensors. This prevents a systematic comparative evaluation of methods within and across research groups. Our goal is to address these issues by: i) introducing a publicly available, large-scale dataset for transportation and locomotion mode recognition from multimodal smartphone sensors; ii) suggesting twelve reference recognition scenarios, which are a superset of the tasks we identified in related work; iii) suggesting relevant combinations of sensors to use based on energy considerations among accelerometer, gyroscope, magnetometer and GPS sensors; iv) defining precise evaluation criteria, including training and testing sets, evaluation measures, and user-independent and sensor-placement independent evaluations. Based on this, we report a systematic study of the relevance of statistical and frequency features based on information theoretical criteria to inform recognition systems. We then systematically report the reference performance obtained on all the identified recognition scenarios using a machine-learning recognition pipeline. The extent of this analysis and the clear definition of the recognition tasks enable future researchers to evaluate their own methods in a comparable manner, thus contributing to further advances in the field. The dataset and the code are available online.
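The abstract above refers to statistical and frequency features computed from smartphone sensor streams. As an illustration only (the exact feature set and parameters are not given here), a minimal sketch of extracting such features from one accelerometer window might look like this:

```python
import numpy as np

def extract_features(window, fs=100.0):
    """Compute simple statistical and frequency features from one
    accelerometer window (1-D array of samples).
    Feature choices are illustrative, not the paper's exact set."""
    feats = {
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "energy": float(np.sum(window ** 2) / len(window)),
    }
    # Frequency-domain feature: dominant frequency of the
    # mean-removed window, via the real FFT.
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    feats["dominant_freq"] = float(freqs[np.argmax(spectrum)])
    return feats

# Example: a 2 Hz sinusoid sampled at 100 Hz for 2 seconds.
t = np.arange(0, 2.0, 1.0 / 100.0)
feats = extract_features(np.sin(2 * np.pi * 2.0 * t), fs=100.0)
```

Such per-window feature vectors would then be ranked by an information-theoretical criterion and fed to a classifier, as the abstract describes.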
Article
Scientific advances build on reproducible research, which in turn needs publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets, but far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics. This paper presents a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users. The dataset comprises 7 months of measurements, collected from all sensors of 4 smartphones carried at typical body locations, including the images of a body-worn camera, while 3 participants used 8 different modes of transportation in the southeast of the United Kingdom, including in London. In total, 28 context labels were annotated, including transportation mode, participant’s posture, inside/outside location, road conditions, traffic conditions, presence in tunnels, social interactions, and having meals. The total amount of collected data exceeds 950 GB of sensor data, corresponding to 2812 hours of labelled data and 17562 km of travelled distance. We present how we set up the data collection, including the equipment used and the experimental protocol. We discuss the dataset, including the data curation process and the analysis of the annotations and of the sensor data. We discuss the challenges encountered and present the lessons learned and some of the best practices we developed to ensure high-quality data collection and annotation. We discuss the potential applications which can be developed using this large-scale dataset. In particular, we present how a machine-learning system can use this dataset to automatically recognize modes of transportation. Many other research questions related to transportation analytics, activity recognition, radio signal propagation and mobility modelling can be addressed through this dataset. The full dataset is being made available to the community, and a thorough preview has already been published.
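The abstract mentions using the dataset to automatically recognize transportation modes. A minimal sketch of the idea, using synthetic per-window feature vectors and a nearest-centroid classifier (a stand-in, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D feature vectors for two transport modes
# (synthetic data; the real dataset provides raw sensor streams
# from which such features would be computed).
walk = rng.normal(loc=[1.0, 5.0], scale=0.2, size=(50, 2))
bus = rng.normal(loc=[0.2, 1.0], scale=0.2, size=(50, 2))
X = np.vstack([walk, bus])
y = np.array([0] * 50 + [1] * 50)  # 0 = walk, 1 = bus

# Nearest-centroid classifier: one centroid per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign x to the class with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Resubstitution accuracy on this toy, well-separated data.
accuracy = np.mean([predict(x) == c for x, c in zip(X, y)])
```

A real evaluation on the dataset would of course use held-out, user-independent splits rather than resubstitution accuracy.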
Conference Paper
Human activity recognition through wearable sensors will enable next-generation, human-oriented ubiquitous computing. However, most research on human activity recognition so far has been based on a small number of subjects and on non-public data. To overcome this situation, we have gathered 4897 accelerometer recordings from 116 subjects and compiled them into the HASC2011corpus. In the field of pattern recognition, it is very important to evaluate and improve recognition methods using the same dataset as a common ground. We make the HASC2011corpus public so that the research community can use it as a common ground for human activity recognition. We also present several findings and results obtained from the corpus.
Book
Activity recognition has emerged as a challenging and high-impact research field, as over the past years smaller and more powerful sensors have been introduced in widespread consumer devices. Validation of techniques and algorithms requires large-scale human activity corpora and improved methods to recognize activities and the contexts in which they occur. This book deals with the challenges of designing valid and reproducible experiments, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating activity recognition systems in the real world with real users.
The University of Sussex-Huawei Locomotion (SHL) dataset and competition
  • D. Roggen
Collecting complex activity data sets in highly rich networked sensor environments
  • D. Roggen
  • A. Calatroni
  • M. Rossi
  • T. Holleczek
  • G. Tröster
  • P. Lukowicz
  • G. Pirkl
  • D. Bannach
  • A. Ferscha
  • J. Doppler
  • C. Holzmann
  • M. Kurz
  • G. Holl
  • R. Chavarriaga
  • H. Sagha
  • H. Bayati
  • J. del R. Millán