Mathias Ciliberto
  • MSc
  • PhD Student at University of Sussex

About

29 Publications
5,316 Reads
764 Citations
Current institution: University of Sussex
Current position: PhD Student

Publications (29)
Article
Full-text available
The Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenges aim to advance and capture the state-of-the-art in locomotion and transportation mode recognition from smartphone motion (inertial) sensors. The goal of this series of machine learning and data science challenges was to recognize eight locomotion and transportation activities...
Conference Paper
Full-text available
In this paper we summarize the contributions of participants to the fourth Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp/ISWC 2021. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car...
Conference Paper
Full-text available
In this paper we summarize the contributions of participants to the third Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp/ISWC 2020. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car,...
Chapter
We explore how Google Glass can be used to annotate cystoscopy findings in a hands-free and reproducible manner by surgeons during operations in the sterile environment, inspired by the current practice of hand-drawn sketches. We present three data entry variants involving head movements and speech input. In an experiment with eight surgeons and fou...
Chapter
In this chapter we present a case study on drinking gesture recognition from a dataset annotated by Experience Sampling (ES). The dataset contains 8825 “sensor events” and users reported 1808 “drink events” through experience sampling. We first show that the annotations obtained through ES do not reflect accurately true drinking events. We present...
Chapter
The Sussex-Huawei Transportation-Locomotion (SHL) Recognition Challenge 2018 aims to recognize eight transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Subway) from the inertial and pressure sensor data of a smartphone. In this chapter, we, as part of the competition organizing team, present reference recognition performance obtained b...
Conference Paper
The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpuses and much improved methods to recognize activities and the context in which they occur. This workshop deals w...
Conference Paper
Full-text available
In this paper we summarize the contributions of participants to the Sussex-Huawei Transportation-Locomotion (SHL) Recognition Challenge organized at the HASCA Workshop of UbiComp 2019. The goal of this machine learning/data science challenge is to recognize eight locomotion and transportation activities (Still, Walk, Run, Bike, Bus, Car, Train, Su...
Conference Paper
Template matching methods can benefit from multi-core architectures in order to parallelise and accelerate the matching of multiple templates. We present WLCSSCuda: a GPU-accelerated implementation of the Warping Longest Common Subsequence (WLCSS) pattern recognition algorithm. We evaluate our method on 4 NVIDIA GPUs and 4 multi-core CPUs. We obse...
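The dynamic-programming recurrence that WLCSS parallelises can be sketched as follows. This is a simplified, single-threaded illustration, not the paper's WLCSSCuda implementation; the `reward`, `penalty`, and `eps` parameters and their defaults are illustrative assumptions.

```python
def wlcss_score(template, stream, reward=1, penalty=1, eps=2):
    """Simplified Warping LCSS matching score (a sketch, not the
    authors' exact formulation). Scores `template` against `stream`,
    rewarding sample pairs within tolerance `eps` and penalising
    mismatches and gaps proportionally to their distance."""
    n, m = len(template), len(stream)
    prev = [0] * (m + 1)  # previous row of the score matrix
    for i in range(1, n + 1):
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            d = abs(template[i - 1] - stream[j - 1])
            if d <= eps:
                # match within tolerance: extend the diagonal
                curr[j] = prev[j - 1] + reward
            else:
                # mismatch or gap: best neighbour minus weighted penalty
                curr[j] = max(prev[j - 1], prev[j], curr[j - 1]) - penalty * d
        prev = curr
    # best score over all end positions of the stream
    return max(prev)
```

Because each template's score matrix is independent, matching many templates against the same stream is embarrassingly parallel, which is what makes a GPU implementation attractive.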
Article
Full-text available
Transportation and locomotion mode recognition from multimodal smartphone sensors is useful to provide just-in-time context-aware assistance. However, the field is currently held back by the lack of standardized datasets, recognition tasks and evaluation criteria. Currently, recognition methods are often tested on ad-hoc datasets acquired for one-o...
Conference Paper
Full-text available
In this paper we, as part of the Sussex-Huawei Locomotion-Transportation (SHL) Recognition Challenge organizing team, present reference recognition performance obtained by applying various classical and deep-learning classifiers to the testing dataset. We aim to recognize eight modes of transportation (Still, Walk, Run, Bike, Bus, Car, Train, Subwa...
Conference Paper
In this paper we present a case study on drinking gesture recognition from a dataset annotated by Experience Sampling (ES). The dataset contains 8825 "sensor events" and users reported 1808 "drink events" through experience sampling. We first show that the annotations obtained through ES do not reflect accurately true drinking events. We present th...
Article
Full-text available
Scientific advances build on reproducible research, which needs publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets. Far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics. This paper presents...
Conference Paper
We have completed the collection of one of the richest accurately annotated mobile datasets of modes of transportation and locomotion. To do this, we developed a highly reliable Android application called DataLogger capable of recording multisensor data from multiple synchronized smartphones simultaneously. The application allows real-time data anno...
Conference Paper
Full-text available
We explain how to obtain a highly versatile and precisely annotated dataset for the multimodal locomotion of mobile users. After presenting the experimental setup, data management challenges and potential applications, we conclude with the best practices for assuring data quality and reducing loss. The dataset currently comprises 7 months of measur...
Conference Paper
Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. via crowd-sourcing) has been suggested to speed this up. However, this requires preserving users' privacy and may preclude relying on video for annotation. We investigate to what extent using a 3D human model animated from the data of inertia...
