Conference Paper

Object-Based Activity Recognition with Heterogeneous Sensors on Wrist.

DOI: 10.1007/978-3-642-12654-3_15
Conference: Pervasive Computing, 8th International Conference, Pervasive 2010, Helsinki, Finland, May 17-20, 2010. Proceedings
Source: DBLP

ABSTRACT This paper describes how we recognize activities of daily living (ADLs) with our designed sensor device, which is equipped
with heterogeneous sensors such as a camera, a microphone, and an accelerometer and is attached to a user’s wrist. Specifically,
capturing the space around the user’s hand with the camera on the wrist-mounted device enables us to recognize ADLs
that involve the manual use of objects, such as making tea or coffee and watering plants. Existing wearable sensor devices equipped
only with a microphone and an accelerometer cannot recognize these ADLs without object-embedded sensors. We also propose an
ADL recognition method that takes privacy issues into account, because the camera and microphone can capture aspects of a user’s
private life. We confirmed experimentally that the incorporation of a camera could significantly improve the accuracy of ADL recognition.
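The core idea of the abstract — classifying ADLs from features fused across a wrist-worn camera, microphone, and accelerometer — can be illustrated with a minimal feature-level-fusion sketch. The per-modality feature extractors and the nearest-centroid classifier below are illustrative stand-ins and are not the authors' actual pipeline.

```python
# Illustrative stand-ins for per-modality feature extractors; a real system
# would compute e.g. colour histograms (camera), spectral features
# (microphone), and motion statistics (accelerometer).
def camera_features(frame):
    return list(frame)

def audio_features(clip):
    return list(clip)

def accel_features(window):
    return list(window)

def fuse(frame, clip, window):
    # Feature-level fusion: concatenate the per-modality feature vectors.
    return camera_features(frame) + audio_features(clip) + accel_features(window)

def train_centroids(labelled_vectors):
    # labelled_vectors: {activity: [fused_feature_vector, ...]}
    return {
        activity: [sum(col) / len(vecs) for col in zip(*vecs)]
        for activity, vecs in labelled_vectors.items()
    }

def classify(centroids, vector):
    # Assign the activity whose centroid is nearest in squared distance.
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda activity: sqdist(centroids[activity], vector))
```

The point of the sketch is the fusion step: a camera channel adds object-related evidence (what is near the hand) that microphone and accelerometer features alone cannot supply.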

    ABSTRACT: In this paper, we consider the design of a system in which Internet-connected mobile users contribute sensor data as training samples, and collaborate on building a model for classification tasks such as activity or context recognition. Constructing the model can naturally be performed by a service running in the cloud, but users may be more inclined to contribute training samples if the privacy of these data could be ensured. Thus, in this paper, we focus on privacy-preserving collaborative learning for the mobile setting, which addresses several competing challenges not previously considered in the literature: supporting complex classification methods like support vector machines, respecting mobile computing and communication constraints, and enabling user-determined privacy levels. Our approach, Pickle, ensures classification accuracy even in the presence of significantly perturbed training samples, is robust to methods that attempt to infer the original data or poison the model, and imposes minimal costs. We validate these claims using a user study, many real-world datasets and two different implementations of Pickle.
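Pickle's central claim — that a cloud service can still learn an accurate classifier from perturbed training samples — can be sketched in miniature. Note that Pickle itself uses a per-user multiplicative random-projection perturbation and supports SVMs; the additive-noise perturbation and centroid classifier here are deliberate simplifications for illustration only.

```python
import random

def perturb(sample, sigma=0.3, rng=random):
    # Each user adds zero-mean Gaussian noise before uploading, so the
    # cloud never sees the raw feature vector. (Simplification: Pickle
    # proper applies a private multiplicative random projection per user.)
    return [x + rng.gauss(0.0, sigma) for x in sample]

def train_centroids(labelled):
    # The cloud trains only on the perturbed uploads.
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in labelled.items()}

def classify(centroids, x):
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: sqdist(centroids[label], x))
```

Because the noise is zero-mean, class statistics estimated from many perturbed uploads converge to the true ones, which is the intuition behind accuracy being preserved despite perturbation.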
    ABSTRACT: Recent advances in depth video sensor technology have made human activity recognition (HAR) realizable for elderly-monitoring applications. Although conventional HAR uses RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn their environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. From these silhouettes, human skeletons with joint information are produced, which are then used for activity recognition and for generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine recognizes the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features, and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly-monitoring system, such as monitoring healthcare problems of elderly people or examining the indoor activities of people at home, in the office, or in hospital.
    Sensors, July 2014; 14(7):11735-11759. DOI: 10.3390/s140711735
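The train-then-recognize split described above — one HMM per activity, then scoring a new observation sequence against every model — can be sketched with a discrete-observation forward algorithm. The tiny hand-set models and quantized observation symbols below are illustrative assumptions; the paper itself trains on skeleton-joint features extracted from depth silhouettes.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM), where pi holds initial
    state probabilities, A the state-transition matrix, and B the discrete
    emission matrix (B[state][symbol])."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[t] * A[t][s] for t in range(n))
                 for s in range(n)]
        scale = sum(alpha)          # rescale each step to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

def recognize(obs, models):
    # models: {activity: (pi, A, B)}; pick the highest-likelihood activity.
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

In the full system each activity's HMM would be trained (e.g. via Baum-Welch) on labelled sequences; at recognition time the engine simply evaluates the forward likelihood under every model and logs the winner.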
