An energy-efficient quality adaptive framework for multi-modal sensor context recognition.
Conference: Ninth Annual IEEE International Conference on Pervasive Computing and Communications, PerCom 2011, 21-25 March 2011, Seattle, WA, USA, Proceedings
In pervasive computing environments, understanding the context of an entity is essential for adapting application behavior to changing situations. In our view, context is a high-level representation of a user's or entity's state and can capture location, activities, social relationships, capabilities, etc. Inherently, however, these high-level context metrics are difficult to capture using uni-modal sensors alone, and must therefore be inferred with the help of multi-modal sensors. A key challenge in supporting context-aware pervasive computing environments is how to determine, in an energy-efficient manner, multiple (potentially competing) high-level context metrics simultaneously using low-level sensor data streams about the environment and the entities present therein. In this paper, we first highlight the intricacies of determining multiple context metrics as compared to a single context, and then develop a novel framework and practical implementation for this problem. The proposed framework captures the tradeoff between the accuracy of estimating multiple context metrics and the overhead incurred in acquiring the necessary sensor data streams. In particular, we develop a multi-context search heuristic algorithm that computes the optimal set of sensors contributing to the multi-context determination, as well as the associated parameters of the sensing tasks. Our goal is to satisfy the application requirements for a specified accuracy at minimum cost. We compare the performance of our heuristic-based framework with a brute-force approach for multi-context determination. Experimental results with SunSPOT sensors demonstrate the potential impact of the proposed framework.
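The abstract does not spell out the heuristic itself, but the stated goal (meet per-context accuracy targets at minimum sensing cost) can be illustrated with a simple greedy sketch. Everything below is an assumption for illustration: the additive accuracy-gain model, the sensor names, and all numbers are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: the paper's actual heuristic and accuracy model
# are not given in the abstract. We assume each sensor contributes an
# additive "accuracy gain" per context metric (a simplification) and
# greedily pick the sensor with the best useful-gain-per-energy ratio
# until every context metric meets its target.

def select_sensors(sensors, targets):
    """sensors: {name: (energy_cost, {context: accuracy_gain})}
    targets: {context: required_accuracy}
    Returns (chosen sensor names, total energy cost)."""
    achieved = {c: 0.0 for c in targets}
    chosen, total_cost = [], 0.0
    remaining = dict(sensors)
    while any(achieved[c] < targets[c] for c in targets):
        best, best_ratio = None, 0.0
        for name, (cost, gains) in remaining.items():
            # Only count gain toward contexts that still need accuracy,
            # and cap it at the remaining shortfall.
            useful = sum(min(g, targets[c] - achieved[c])
                         for c, g in gains.items()
                         if c in targets and achieved[c] < targets[c])
            ratio = useful / cost
            if ratio > best_ratio:
                best, best_ratio = name, ratio
        if best is None:  # no remaining sensor helps any unmet target
            break
        cost, gains = remaining.pop(best)
        chosen.append(best)
        total_cost += cost
        for c, g in gains.items():
            if c in achieved:
                achieved[c] += g
    return chosen, total_cost

# Hypothetical sensor pool (all numbers made up for illustration):
sensors = {
    "accel": (2.0, {"activity": 0.7, "location": 0.1}),
    "mic":   (3.0, {"social": 0.8, "activity": 0.2}),
    "gps":   (5.0, {"location": 0.9}),
}
picked, cost = select_sensors(sensors, {"activity": 0.6, "location": 0.8})
# picked == ["accel", "gps"]; the mic is skipped because "social" is not
# a requested context, so its energy would buy no useful accuracy.
```

A brute-force comparison, as in the paper's evaluation, would instead enumerate all sensor subsets and keep the cheapest one meeting every target; the greedy pass trades that exponential search for a polynomial one.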
Available from: Dawud Gordon
ABSTRACT: Pervasive computing envisions implicit interaction between people and their intelligent environments instead of between individuals and their devices, inevitably leading to groups of individuals interacting with the same intelligent environment. These environments must be aware of user contexts and activities, as well as the contexts and activities of groups of users. Here an application for in-network group activity recognition using only mobile devices and their sensors is presented. Different data abstraction levels for recognition were investigated in terms of recognition rates, power consumption and wireless communication volumes for the devices involved. The results indicate that using locally extracted features for global, multi-user activity recognition is advantageous (10% reduction in energy consumption, theoretically no loss in recognition rates). Using locally classified single-user activities incurred a 47% loss in recognition capabilities, making it unattractive. Local clustering of sensor data indicates potential for group activity recognition with room for improvement (40% reduction in energy consumed, though 20% loss of recognition abilities). © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
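The tradeoff the abstract quantifies (features cut energy with little recognition loss, local labels cut more but lose 47% of recognition capability) comes down to how much each abstraction level shrinks the wireless payload. The back-of-the-envelope sketch below illustrates this; the sampling rate, window length, and feature set are hypothetical assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch (numbers are assumptions, not from the
# paper): per-window wireless payload at the three data abstraction
# levels discussed in the abstract.

SAMPLE_RATE_HZ = 32   # assumed accelerometer sampling rate
WINDOW_S = 2          # assumed window length in seconds
AXES = 3              # x, y, z
BYTES_PER_VALUE = 2   # assumed 16-bit samples and features

def payload_bytes(level):
    """Bytes a device must transmit per window at a given abstraction."""
    samples = SAMPLE_RATE_HZ * WINDOW_S * AXES
    if level == "raw":       # ship every sample to a central recognizer
        return samples * BYTES_PER_VALUE
    if level == "features":  # e.g. mean + variance per axis, locally extracted
        return 2 * AXES * BYTES_PER_VALUE
    if level == "label":     # a single locally classified activity ID
        return 1
    raise ValueError(level)

for level in ("raw", "features", "label"):
    print(level, payload_bytes(level), "bytes/window")
```

Under these assumptions features shrink the payload from 384 to 12 bytes per window while preserving the information a global classifier needs, whereas a one-byte label is cheapest but discards everything except the local decision, which matches the direction of the energy/recognition tradeoffs the abstract reports.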