Conference Paper

Object-Based Activity Recognition with Heterogeneous Sensors on Wrist

DOI: 10.1007/978-3-642-12654-3_15 Conference: Pervasive Computing, 8th International Conference, Pervasive 2010, Helsinki, Finland, May 17-20, 2010. Proceedings
Source: DBLP

ABSTRACT

This paper describes how we recognize activities of daily living (ADLs) with our designed sensor device, which is equipped
with heterogeneous sensors such as a camera, a microphone, and an accelerometer and is attached to a user’s wrist. Specifically,
capturing the space around the user’s hand with the camera on the wrist-mounted device enables us to recognize ADLs
that involve the manual use of objects, such as making tea or coffee and watering plants. Existing wearable sensor devices equipped
only with a microphone and an accelerometer cannot recognize these ADLs without object-embedded sensors. We also propose an
ADL recognition method that takes privacy issues into account, because the camera and microphone can capture aspects of a user’s
private life. We confirmed experimentally that incorporating a camera significantly improves the accuracy of ADL
recognition.
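The abstract describes combining features from the wrist-worn camera, microphone, and accelerometer for classification. The paper's actual features are not given in this abstract, so the sketch below uses placeholder extractors purely to illustrate the early-fusion (feature concatenation) step common in multi-sensor activity recognition:

```python
import numpy as np

# Placeholder per-modality feature extractors. These are assumptions
# chosen only to illustrate the fusion step, not the paper's features.
def accel_features(acc):              # acc: (n, 3) accelerometer samples
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0)])

def audio_features(audio):            # audio: 1-D microphone samples
    return np.array([audio.mean(), audio.std()])

def image_features(hist):             # hist: color histogram from the camera
    return hist / (hist.sum() + 1e-9)     # normalize to sum ~1

def fused_feature_vector(acc, audio, hist):
    # Early fusion: concatenate per-sensor features into one vector,
    # which can then be fed to any standard classifier.
    return np.concatenate([accel_features(acc),
                           audio_features(audio),
                           image_features(hist)])

rng = np.random.default_rng(0)
x = fused_feature_vector(rng.normal(size=(50, 3)),
                         rng.normal(size=200),
                         rng.random(8))
print(x.shape)   # (16,): 6 accel + 2 audio + 8 image features
```

The fused vector would then be passed to a classifier trained per activity class; the abstract does not specify which classifier the authors used.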

  • Source
    • "Other approaches are based on the assumption that every object is able to provide its state with binary switches [14] or infrared sensors [12]. In [13] an approach similar to our idea is presented. This system also uses a wrist-worn camera in combination with other body-worn sensor systems. "
    ABSTRACT: We address a specific, particularly difficult class of activity recognition problems defined by (1) subtle and hardly discriminative hand motions such as a short press or pull, (2) a large, ill-defined NULL class (any other hand motion a person may perform during normal life), and (3) the difficulty of collecting sufficient training data that generalizes well from one user to many. In essence, we intend to spot activities such as opening a cupboard, pressing a button, or taking an object from a shelf in a large data stream that contains typical everyday activity. We focus on body-worn sensors without instrumenting objects, we exploit available infrastructure information, and we perform a one-to-many-users training scheme for minimal training effort. We demonstrate that a state-of-the-art motion-sensor-based approach performs poorly under such conditions (an Equal Error Rate of 18% in our experiments). We present and evaluate a new multimodal system based on a combination of indoor location with a wrist-mounted proximity sensor, camera, and inertial sensor that raises the EER to 79%.
    Full-text · Conference Paper · Jan 2013
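The Equal Error Rate (EER) cited in the abstract above is the operating point at which the false-accept rate equals the false-reject rate as the decision threshold is swept. A minimal sketch of computing it from classifier scores, assuming higher scores mean "positive" — a generic threshold sweep, not the cited paper's evaluation code:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: sweep the decision threshold and return the error rate at
    the point where false accepts and false rejects are closest."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos, neg = labels == 1, labels == 0
    best_gap, eer = float("inf"), None
    for t in np.sort(np.unique(scores)):
        far = np.mean(scores[neg] >= t)   # false-accept rate
        frr = np.mean(scores[pos] < t)    # false-reject rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy example: perfectly separated scores give a zero EER.
print(equal_error_rate([0.1, 0.2, 0.3, 0.7, 0.8, 0.9],
                       [0, 0, 0, 1, 1, 1]))   # 0.0
```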
  • Source
    • "The energy can be used to distinguish low-intensity activities such as standing from high-intensity activities such as walking [17] [1]. The dominant frequency is the frequency with the largest FFT component, and it allows us to distinguish between repetitive motions with similar energy values [8]. We construct a feature vector by concatenating the above features extracted from all the body-worn accelerometers. "
    ABSTRACT: This paper proposes an activity recognition method that models an end user’s activities without using any labeled/unlabeled acceleration sensor data obtained from that user. Our method employs information about the end user’s physical characteristics, such as height and gender, to find other users whose sensor data, prepared in advance, may be similar to the end user’s. We then model the end user’s activities by using the labeled sensor data from the similar users. Therefore, our method does not require the end user to collect and label training sensor data. We confirmed the effectiveness of our method using 100 hours of sensor data obtained from 40 participants, achieving recognition accuracy almost identical to that of a recognition method employing the end user’s own labeled training data.
    Full-text · Conference Paper · Jun 2011
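The energy and dominant-frequency features described in the quoted passage above can be sketched as follows; the windowing, DC removal, and sampling rate are assumptions for illustration, not details taken from the cited papers:

```python
import numpy as np

def accel_window_features(window, fs):
    """Energy and dominant frequency of one accelerometer axis.
    window: 1-D samples of a single axis; fs: sampling rate in Hz."""
    x = window - window.mean()                  # remove the DC (gravity) offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    energy = np.sum(spectrum ** 2) / len(x)     # spectral energy of the window
    dominant = freqs[np.argmax(spectrum)]       # frequency of largest FFT component
    return energy, dominant

fs = 50.0                                       # assumed 50 Hz sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
walking_like = np.sin(2 * np.pi * 2.0 * t)      # 2 Hz repetitive motion
energy, dom = accel_window_features(walking_like, fs)
print(dom)   # 2.0
```

In practice these two values would be computed per axis and per sliding window, then concatenated across all body-worn accelerometers as the quote describes.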
  • Source
    ABSTRACT: Many formal specification languages have been developed to engineer complex systems. However, natural language (NL) has remained the choice of domain experts for specifying systems, because formal specification languages are not easy to master. Therefore, NL requirements documentation must be reinterpreted by software engineers into a formal specification language. When the system is very complicated, which is usually the case when one chooses formal specification, this conversion is non-trivial and error-prone, if not implausible. The challenge stems from many factors, such as miscommunication between domain experts and engineers, but the major bottleneck is the inherent ambiguity of NL and the differing levels of formalism between NL and formal specification languages. This is why there have been very few attempts to automate the conversion from requirements documentation to a formal specification language. This research project applies formal specification and linguistic techniques to automate the conversion from a requirements document written in NL to a formal specification language. Contextual Natural Language Processing (CNLP) handles the ambiguity of NL, and Two Level Grammar (TLG) bridges the formalism gap between NL and formal specification languages, achieving automated conversion from NL requirements documentation into a formal specification (in our case the Vienna Development Method - VDM++). A knowledge base is built from the NL requirements documentation using CNLP by parsing the documentation and storing the syntactic, semantic, and contextual information.
    Preview · Conference Paper · Dec 2001