An energy-efficient quality adaptive framework for multi-modal sensor context recognition.

Conference Paper · January 2011
Source: DBLP
Conference: Ninth Annual IEEE International Conference on Pervasive Computing and Communications, PerCom 2011, 21-25 March 2011, Seattle, WA, USA, Proceedings
Author affiliations:
  • University of Maryland, Baltimore County
  • Singapore Management University
  • University of Texas at Austin
  • Agency for Science, Technology and Research (A*STAR)
In pervasive computing environments, understanding the context of an entity is essential for adapting application behavior to changing situations. In our view, context is a high-level representation of a user's or entity's state and can capture location, activities, social relationships, capabilities, etc. Inherently, however, these high-level context metrics are difficult to capture using uni-modal sensors alone and must therefore be inferred with the help of multi-modal sensors. A key challenge in supporting context-aware pervasive computing environments, then, is how to determine, in an energy-efficient manner, multiple (potentially competing) high-level context metrics simultaneously from low-level sensor data streams about the environment and the entities present therein. In this paper, we first highlight the intricacies of determining multiple context metrics as compared to a single context, and then develop a novel framework and practical implementation for this problem. The proposed framework captures the tradeoff between the accuracy of estimating multiple context metrics and the overhead incurred in acquiring the necessary sensor data streams. In particular, we develop a multi-context search heuristic algorithm that computes the optimal set of sensors contributing to the multi-context determination, as well as the associated parameters of the sensing tasks. Our goal is to satisfy the application requirements for a specified accuracy at minimum cost. We compare the performance of our heuristic-based framework with a brute-force approach to multi-context determination. Experimental results with SunSPOT sensors demonstrate the potential impact of the proposed framework.
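
The abstract does not spell out the heuristic itself, so the following is only a hedged sketch of the kind of accuracy-versus-energy sensor selection it describes: a hypothetical greedy rule that activates the (sensor, sampling-rate) pair offering the best accuracy gain per unit of energy until every requested context metric meets its target accuracy. The functions estimate_accuracy and energy_cost are assumed, application-supplied models, not anything defined in the paper.

    # Illustrative sketch only, not the paper's multi-context search heuristic.
    from itertools import product

    def select_sensors(sensors, rates, targets, estimate_accuracy, energy_cost):
        """targets: dict mapping each context metric to its required accuracy.
        estimate_accuracy(context, selection) and energy_cost(selection) are
        hypothetical models supplied by the application."""
        selection = {}  # sensor id -> chosen sampling rate (Hz)

        def shortfall(sel):
            # Total accuracy still missing across all requested context metrics.
            return sum(max(0.0, targets[c] - estimate_accuracy(c, sel)) for c in targets)

        while shortfall(selection) > 0:
            best, best_ratio = None, 0.0
            for s, r in product(sensors, rates):
                if selection.get(s, 0) >= r:
                    continue  # sensor already sampled at this rate or faster
                trial = dict(selection)
                trial[s] = r
                gain = shortfall(selection) - shortfall(trial)
                extra = energy_cost(trial) - energy_cost(selection)
                if extra > 0 and gain / extra > best_ratio:
                    best, best_ratio = (s, r), gain / extra
            if best is None:
                break  # no remaining candidate improves any context metric
            selection[best[0]] = best[1]
        return selection

The greedy "accuracy gain per joule" rule is one plausible way to trade estimation quality against sensing overhead; the paper's actual algorithm and cost model may differ substantially.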
    • "Other important works on activity recognition are [5], [2], [8]. Relevant works on multi-sensor fusion and context-awareness in Smart Environments include [10], [6]. The state-of-the-art works utilizing smartphone/ smart wearable onboard sensors for activity recognition, in majority utilizes only the accelerometer and gyroscope for activity sensing and GPS for outdoor location sensing. "
    ABSTRACT: In this work we present A-Wristocracy, a novel framework for recognizing very fine-grained and complex in-home activities of human users (particularly elderly people) with wrist-worn device sensing. Our A-Wristocracy system improves upon the state-of-the-art works on in-home activity recognition using wearables, which are mostly able to detect coarse-grained ADLs (Activities of Daily Living) but not a large number of fine-grained and complex IADLs (Instrumental Activities of Daily Living), and which are also unable to distinguish similar activities performed in different contexts (such as sit on floor vs. sit on bed vs. sit on sofa). Our solution enables accurate detection of in-home ADLs/IADLs and contextual activities, which are all critically important for remote elderly care in tracking physical and cognitive capabilities. A-Wristocracy makes it feasible to classify a large number of fine-grained and complex activities through Deep Learning based data analytics and multi-modal sensing on a wrist-worn device. It exploits minimal functionality from very light additional infrastructure (only a few Bluetooth beacons) for coarse-level location context, and preserves direct user privacy by excluding camera/video imaging on the wearable or the infrastructure. The classification procedure consists of practical feature-set extraction from multi-modal wearable sensor suites, followed by a Deep Learning based supervised fine-level classification algorithm. We have collected exhaustive home-based ADL and IADL data from multiple users. Our classifier is validated to recognize 22 very fine-grained and complex daily activities (a much larger number than the 6-12 activities detected by state-of-the-art wearable-based works without camera/video) with high average test accuracies of 90% or more for two users in two different home environments. (An illustrative sketch of such a sensing-plus-classification pipeline follows this entry.)
    Full-text · Conference Paper · Jun 2015
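    As an illustration only (not the A-Wristocracy implementation), the sketch below shows the general shape of such a pipeline: windowed feature extraction from multi-modal wrist-sensor streams followed by a supervised classifier. The paper uses a Deep Learning model; the scikit-learn MLP and the specific features used here are stand-in assumptions.

        # Hedged sketch of a wearable activity-classification pipeline.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def window_features(window):
            """window: array of shape (samples, channels), e.g. accel xyz + gyro xyz,
            optionally with a coarse beacon-derived location id appended per sample."""
            return np.concatenate([
                window.mean(axis=0),                          # per-channel mean
                window.std(axis=0),                           # per-channel variability
                np.abs(np.diff(window, axis=0)).mean(axis=0)  # mean absolute change ("jerk")
            ])

        def train_activity_classifier(windows, labels):
            # Stack per-window feature vectors and fit a stand-in neural classifier.
            X = np.stack([window_features(w) for w in windows])
            clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
            return clf.fit(X, labels)

        # Usage (shapes only): windows = list of (200, 7) arrays, labels = activity names;
        # model = train_activity_classifier(windows, labels); model.predict(...)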
    • "For individual mobile device, the energy consumed by an MCS task falls into three parts: sensing, computing and data transfer. In order to reduce energy consumption in sensing, existing approaches include adopting low power sensors [12], adapting sensing frequency [2], and inferring data rather than sensing data directly [13]. To save energy in computing, the MCS framework can use low power processors [14], energy-efficient data processing algorithm [15] or code of- floading [16]. "
    ABSTRACT: This paper proposes a novel task allocation framework, CrowdTasker, for mobile crowdsensing. CrowdTasker operates on top of the energy-efficient Piggyback Crowdsensing (PCS) task model and aims to maximize the coverage quality of the sensing task while satisfying the incentive budget constraint. To achieve this goal, CrowdTasker first predicts the call and mobility of mobile users based on their historical records. With a flexible incentive model and the prediction results, CrowdTasker then selects a set of users in each sensing cycle for PCS task participation, so that the resulting solution achieves near-maximal coverage quality without exceeding the incentive budget. We evaluated CrowdTasker extensively using a large-scale real-world dataset, and the results show that CrowdTasker significantly outperformed three baseline approaches by achieving 3%-60% higher coverage quality. (A toy illustration of per-cycle user selection under a budget follows this entry.)
    Full-text · Conference Paper · Mar 2015
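    The CrowdTasker algorithm itself is not reproduced here; as a hedged illustration of per-cycle user selection under an incentive budget, the sketch below uses a simple greedy rule that repeatedly picks the user adding the most expected cell coverage per unit of incentive cost. The inputs visit_prob (presence probabilities predicted from historical call/mobility records) and incentive (per-user payment, assumed positive) are assumptions for illustration.

        # Hedged sketch: greedy budgeted user selection for one sensing cycle.
        def pick_users(visit_prob, incentive, budget):
            """visit_prob: dict user -> {cell: probability of visiting that cell};
            incentive: dict user -> positive incentive cost; budget: total budget."""
            uncovered = {}  # cell -> probability the cell is still uncovered
            for probs in visit_prob.values():
                for cell in probs:
                    uncovered.setdefault(cell, 1.0)

            def marginal(u):
                # Expected extra coverage if user u piggybacks sensing this cycle.
                return sum(uncovered[c] * p for c, p in visit_prob[u].items())

            chosen, spent = [], 0.0
            candidates = set(visit_prob)
            while candidates:
                best = max(candidates, key=lambda u: marginal(u) / incentive[u])
                if spent + incentive[best] > budget or marginal(best) <= 0:
                    break
                for c, p in visit_prob[best].items():
                    uncovered[c] *= (1.0 - p)  # cell stays uncovered only if this user misses it
                chosen.append(best)
                spent += incentive[best]
                candidates.remove(best)
            return chosen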
    • "Orchestrator [41] considered an active resource orchestration framework at the resource-constrained devices while considering availability of resources. The authors in [42], [39] studied a context recognition framework which uses multi-modal sensor streams about the environment and the entities present therein while simultaneously considering both the quality of context accuracy and energy-efficiency. The authors in [38] proposed a producer-oriented model for applications to share efficient data flow execution rather than to separately manage common data flow as in [43]. "
    ABSTRACT: "Cloud of Things" (CoT) is a concept that provides smart things' functions as a service and allows them to be used by multiple applications. In the CoT, a single smart thing instance should efficiently host multiple applications, i.e., multi-tenancy. Simultaneous accesses to shared smart things may therefore lead to resource conflicts. Moreover, smart things inherently form complex dependencies on the real world. Since handling resource conflicts with complex dependencies at the application level is typically ad hoc and error-prone, it increases the application development burden. To address these issues, we propose a middleware for an Evolvable Cloud of Things (ECO). The ECO middleware manages organizations to handle dependencies among smart things, and virtualizes physical smart things to enable isolation between multiple applications while internally controlling the sharing of smart things to resolve resource conflicts; it also provides a consolidation framework for efficient utilization of the shared smart things. A lease-based sharing control is employed with two tenant-switch schemes, which are analyzed. All these features are hidden from the application context, thus reducing the complexity of developing applications. From an implementation and evaluations with workload applications, we show that the ECO middleware provides efficient sharing and access controls with negligible virtualization overhead. (A minimal sketch of the lease-based sharing idea follows this entry.)
    Full-text · Article · Sep 2014
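    The ECO middleware's actual interfaces are not given in the abstract; the following is only a minimal sketch of the lease-based sharing idea it describes, assuming a FIFO tenant-switch scheme: a virtualized smart thing grants time-bounded exclusive leases to tenant applications and switches to the next waiting tenant when the current lease expires or is released. The class and method names are hypothetical.

        # Minimal sketch of lease-based sharing control for a virtualized smart thing.
        import time
        from collections import deque

        class LeasedThing:
            def __init__(self, lease_seconds=5.0):
                self.lease_seconds = lease_seconds
                self.holder = None       # tenant currently allowed to sense/actuate
                self.expires = 0.0       # monotonic time at which the lease ends
                self.waiting = deque()   # FIFO tenant-switch scheme

            def acquire(self, tenant):
                """Returns True if `tenant` now holds the lease, False if queued."""
                now = time.monotonic()
                if self.holder is not None and now >= self.expires:
                    # Current lease expired: hand over to the next waiting tenant, if any.
                    self._switch(self.waiting.popleft() if self.waiting else None, now)
                if self.holder is None:
                    self._switch(tenant, now)
                elif self.holder != tenant and tenant not in self.waiting:
                    self.waiting.append(tenant)
                return self.holder == tenant

            def release(self, tenant):
                # Voluntary release triggers an immediate tenant switch.
                if self.holder == tenant:
                    self._switch(self.waiting.popleft() if self.waiting else None,
                                 time.monotonic())

            def _switch(self, tenant, now):
                self.holder = tenant
                self.expires = now + self.lease_seconds if tenant is not None else 0.0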