ABSTRACT: We present the results, experiences, and lessons learned from comparing a diverse set of technical approaches to indoor localization during the 2014 Microsoft Indoor Localization Competition. 22 different solutions to indoor localization from teams around the world were put to the test in the same unfamiliar space over the course of 2 days, allowing us to directly compare the accuracy and overhead of various technologies. In this paper, we provide a detailed analysis of the evaluation study's results, discuss the current state of the art in indoor localization, and highlight the areas that, based on our experience from organizing this event, need to be improved to enable the adoption of indoor location services.
ABSTRACT: To assist drivers and prevent collisions, we propose a system called OmniView that extends a driver's vision in all directions, using the cameras of multiple collaborating smartphones in surrounding vehicles. OmniView provides a driver with a traffic map of the relative positions of surrounding vehicles. Under OmniView, each vehicle detects other vehicles in its view, estimates their relative positions, and broadcasts its local map. Upon receiving a map from another vehicle, a vehicle updates its own map by fusing it with the received one. A key issue faced by OmniView is how a vehicle addresses another vehicle in its map. We propose that a vehicle's image itself could be treated as its address. However, including images in each map message would incur high communication overhead. To that end, OmniView resolves a vehicle's image to a small unique ID. With this approach, we demonstrate that it is feasible to build an OmniView system that produces a traffic map in real time. Moreover, through computer vision techniques and collaboration between vehicles, OmniView can show the positions of surrounding vehicles on the map with reasonable accuracy. Such a traffic map, even without being displayed to drivers, can act as a common substrate on which various alerts can be triggered to avoid accidents.
ABSTRACT: This paper revisits the randomized backoff problem in CSMA networks and identifies opportunities for improvement. The key observation is that today's backoff operation, such as in WiFi, attempts to create a total ordering among all nodes contending for the channel. Total ordering indeed assigns a unique backoff to each node (thus avoiding collisions), but pays the penalty of choosing the random backoffs from a large range, ultimately translating to channel wastage. We envision breaking away from total ordering. Briefly, we force nodes to pick random numbers from a smaller range, so that groups of nodes pick the same random number (i.e., a partial order). The group that picks the smallest number - the winners - advances to a second round, where it again performs the same operation. We show that narrowing down the contenders through multiple rounds improves channel utilization. The intuition is that the time to partially order all nodes plus totally order each small group is actually less than the time needed to totally order all nodes. We instantiate the idea with two well-known CSMA protocols - WiFi and oCSMA - and resolve new challenges regarding multi-domain contention and group signaling. USRP- and simulation-based microbenchmarks are promising. We believe the idea of "hierarchical backoff" applies to other CSMA systems as well; its exploration is left to future work.
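The round-by-round narrowing can be illustrated with a small simulation. Everything below (function names, the slot range of 4, the node count) is our own illustrative assumption, not a parameter taken from the paper:

```python
import random

def rounds_to_single_winner(n_nodes, slot_range=4, max_rounds=10):
    """Simulate multi-round 'hierarchical backoff': in each round,
    contenders draw from a small slot range and only the group that
    drew the smallest slot advances to the next round."""
    contenders = n_nodes
    for r in range(1, max_rounds + 1):
        draws = [random.randrange(slot_range) for _ in range(contenders)]
        winners = draws.count(min(draws))
        if winners == 1:
            return r          # a unique winner emerged in round r
        contenders = winners  # the tied group contends again
    return max_rounds

random.seed(1)
trials = [rounds_to_single_winner(50) for _ in range(1000)]
print(sum(trials) / len(trials))  # average rounds to a unique winner
```

The point of the sketch is the intuition in the abstract: even with 50 contenders and only 4 slots per round, a handful of rounds over a tiny range typically suffices, instead of one round over a range large enough to separate all 50 nodes.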
ABSTRACT: This video presents a demo of indoor localization in multiple settings. In the demo, a user walks with a smartphone and the user's location is shown on the phone's screen in real time. Our system, called Unsupervised Indoor Localization (UnLoc), utilizes sensor data from smartphones to learn "invisible landmarks" in the environment. Example landmarks could be a unique magnetic fluctuation experienced when the phone is near a water cooler, or a distinct gyroscope rotation when the user turns a corner. We use these indoor "landmarks" to periodically reset the user's location. To track the user between these landmarks, we use an optimized variant of dead reckoning, ultimately leading to a robust location tracking system. We call our system UnLoc because the landmarks are generated in an unsupervised manner, requiring no manual effort or floorplan of the building. The demo describes the high-level intuitions, shows UnLoc in operation, and shares experiences from running UnLoc in various real-world environments.
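The "dead reckoning between landmarks, reset at landmarks" loop can be sketched in a few lines. This is a toy illustration under our own assumptions (a fixed step length, a hypothetical event format, an invented `water_cooler` landmark), not UnLoc's actual design:

```python
import math

def track(events, landmarks, start=(0.0, 0.0), step_len=0.7):
    """Dead-reckon from step events (heading in radians), resetting
    the estimate whenever a landmark observation arrives.  Each event
    is ('step', heading) or ('landmark', landmark_id)."""
    x, y = start
    trace = [(x, y)]
    for kind, value in events:
        if kind == 'step':
            x += step_len * math.cos(value)
            y += step_len * math.sin(value)
        elif kind == 'landmark':
            x, y = landmarks[value]   # snap to the landmark's known spot
        trace.append((x, y))
    return trace

landmarks = {'water_cooler': (5.0, 0.0)}
events = [('step', 0.0)] * 8 + [('landmark', 'water_cooler')]
print(track(events, landmarks)[-1])  # (5.0, 0.0): drift wiped out
```

Eight 0.7 m steps would place the raw dead-reckoned estimate at x = 5.6, but hitting the landmark resets the accumulated error, which is the core idea the abstract describes.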
ABSTRACT: We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several recent smartphone localization systems, including our own, make the simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution in past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up-and-down bounce, side-to-side sway, swing of arms and legs, etc. WalkCompass analyzes human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone in the palm of the hand. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/localization/walkcompass.
ABSTRACT: This paper describes WalkCompass, a system that exploits smartphone sensors to estimate the direction in which a user is walking. We find that several recent smartphone localization systems, including our own, make the simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution in past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up-and-down bounce, side-to-side sway, swing of arms and legs, etc. Moreover, the walking direction is in the phone's local coordinate system (e.g., along the Y axis), and translation to global directions, such as 45 degrees North, can be challenging when the compass is itself erroneous. WalkCompass copes with these challenges and develops a stable technique to estimate the user's walking direction within a few steps. Results drawn from 15 different environments demonstrate a median error of less than 8 degrees across 6 different users, 3 surfaces, and 3 holding positions. While there is room for improvement, we believe our current system can be immediately useful to various applications centered around localization and human activity recognition.
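To make the "blending" problem concrete, here is a deliberately naive baseline, not WalkCompass's method: take the principal component of horizontal acceleration as the walking axis. On clean synthetic data it works, but note it only recovers the axis up to a 180-degree ambiguity, and real bounce/sway/swing components are far messier; the synthetic signal and all names below are our own assumptions:

```python
import numpy as np

def walking_axis(horizontal_accel):
    """Estimate a walking axis (degrees, modulo 180) as the principal
    component of (N, 2) horizontal acceleration samples."""
    centered = horizontal_accel - horizontal_accel.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]   # dominant variance direction
    return np.degrees(np.arctan2(axis[1], axis[0])) % 180.0

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
heading = np.radians(30)                      # true walking direction
forward = np.sin(2 * np.pi * 2 * t)           # 2 Hz forward bounce
sway = 0.3 * np.sin(2 * np.pi * 1 * t)        # weaker 1 Hz side sway
samples = np.column_stack([
    forward * np.cos(heading) - sway * np.sin(heading),
    forward * np.sin(heading) + sway * np.cos(heading),
]) + 0.05 * rng.standard_normal((500, 2))
print(walking_axis(samples))  # close to 30 (degrees, modulo 180)
```

The forward/backward ambiguity and the compass-translation problem mentioned in the abstract are exactly what such a naive estimator leaves unsolved.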
ABSTRACT: A variety of techniques have been applied by prior work to the problem of smartphone localization. In this paper, we propose a novel approach using sound source localization (SSL) with microphone arrays to determine where in a room a smartphone is located. In our system, called Daredevil, smartphones emit sound at particular times and frequencies, which is received by microphone arrays. Using SSL modified for our purposes, we can calculate the angle between the center of each microphone array and the phone, and thereby triangulate the phone's position. In this early work, we demonstrate the feasibility of our approach and present initial results. Daredevil can locate smartphones in a room with an average precision of 3.19 feet. We identify a number of challenges in realizing the system in large deployments, and we hope this work will benefit researchers who pursue such techniques.
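The triangulation step, position from two angles, is standard plane geometry: intersect the bearing rays from two arrays at known positions. This is a generic angle-of-arrival sketch, not Daredevil's implementation; positions and angles below are made up:

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (angles in radians from the +x axis)
    originating at known array positions p1 and p2."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using 2x2 cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two arrays on one wall, phone actually at (3, 2):
print(triangulate((0, 0), math.atan2(2, 3), (6, 0), math.atan2(2, -3)))
# prints a point very close to (3.0, 2.0)
```

In practice each SSL angle carries noise, so a real system would intersect more than two rays and average (or least-squares fit) the result; that robustness question is part of what the abstract flags as future work.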
ABSTRACT: We present eNav, a smartphone-based vehicular GPS navigation system with an energy-saving location sensing mode capable of drastically reducing navigation energy needs. Traditional implementations sample the phone's GPS at the highest possible rate (usually 1 Hz) to ensure the highest possible localization accuracy at all times. This practice results in excessive phone battery consumption and reduces the attainable length of a navigation session. The seemingly obvious solution would be to keep the phone plugged into a car charger throughout navigation. However, according to a comprehensive survey we conducted, only a small percentage of people actually carry their phones' car chargers and cables at all times, as doing so is inconvenient and defeats the true "wireless" nature of mobile phones. To address this problem, eNav exploits the phone's lower-energy on-board motion sensors for approximate location sensing when the vehicle is sufficiently far from the next navigation waypoint, sampling the actual GPS only when close. Our user study shows that, while remaining virtually transparent to users, eNav can reduce navigation energy consumption by over 80% without compromising navigation quality or user experience.
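The scheduling idea, full-rate GPS near a waypoint, sparse fixes far from it, can be sketched as a duty-cycling policy. The thresholds and function name are our own illustrative choices, not eNav's published parameters:

```python
def next_fix_delay(distance_to_waypoint_m, speed_mps,
                   near_threshold_m=200.0, max_delay_s=30.0):
    """Decide how long to wait before the next GPS fix: sample at 1 Hz
    near a waypoint; otherwise sleep only as long as the vehicle cannot
    overshoot the near-threshold at its current speed, relying on
    cheaper motion sensors for coarse tracking in between."""
    if distance_to_waypoint_m <= near_threshold_m:
        return 1.0                        # full-rate GPS near the turn
    slack_m = distance_to_waypoint_m - near_threshold_m
    return min(max_delay_s, slack_m / max(speed_mps, 1.0))

print(next_fix_delay(150, 20))    # near a waypoint -> 1.0
print(next_fix_delay(2000, 20))   # far away -> capped at 30.0
```

Bounding the sleep by `slack / speed` is what keeps the scheme "virtually transparent": the phone is guaranteed to wake up with GPS before the vehicle reaches the accuracy-critical zone around the turn.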
ABSTRACT: This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive for children. Our idea is to leverage the smartphone's sensors to sense children's gestures, cues, and reactions, and to interact back through acoustics, vibration, and, when possible, the smartphone display. This paper is an attempt to explore this vision, ponder applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his or her "gesture vocabulary", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise: we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.
ABSTRACT: Mobile phones are becoming the convergent platform for personal sensing, computing, and communication. This paper attempts to exploit this convergence toward the problem of automatic image tagging. We envision TagSense, a mobile phone-based collaborative system that senses the people, activity, and context in a picture, and merges them carefully to create tags on-the-fly. The main challenge pertains to discriminating phone users that are in the picture from those that are not. We deploy a prototype of TagSense on eight Android phones, and demonstrate its effectiveness through 200 pictures, taken in various social settings. While research in face recognition continues to improve image tagging, TagSense is an attempt to embrace additional dimensions of sensing toward this end goal. Performance comparison with Apple iPhoto and Google Picasa shows that such an out-of-band approach is valuable, especially with increasing device density and greater sophistication in sensing and learning algorithms.
No preview · Article · Jan 2014 · IEEE Transactions on Mobile Computing
ABSTRACT: We intend to develop a smartphone app that can tell whether its user is a driver or a passenger in an automobile. While the core problem can be solved relatively easily with special installations in new high-end vehicles (e.g., NFC), the constraint of backward compatibility makes the problem far more challenging. We design a Driver Detection System (DDS) that relies entirely on smartphone sensors and is thereby compatible with all automobiles. Our approach harnesses smartphone sensors to recognize micro-activities in humans that in turn discriminate between the driver and the passenger. We demonstrate an early prototype of this system on Android Nexus S and Apple iPhones. Reported results show greater than 85% accuracy across 6 users in 2 different cars.
ABSTRACT: This paper describes a system for automatically rating content - mainly movies and videos - at multiple granularities. Our key observation is that the rich set of sensors available on today's smartphones and tablets could be used to capture a wide spectrum of user reactions while users watch movies on these devices. Examples range from acoustic signatures of laughter indicating which scenes were funny, to the stillness of the tablet indicating intense drama. Moreover, unlike in most conventional systems, these ratings need not result in just one numeric score, but could be expanded to capture the user's experience. We combine these ideas into an Android-based prototype called Pulse, and test it with 11 users, each of whom watched 4 to 6 movies on Samsung tablets. Encouraging results show consistent correlation between the users' actual ratings and those generated by the system. With more rigorous testing and optimization, Pulse could be a candidate for real-world adoption.
ABSTRACT: This paper presents iSee, a crowdsourced approach to detecting and localizing events in outdoor environments. Upon spotting an event, an iSee user only needs to swipe on her smartphone's touchscreen in the direction of the event. These swiping directions are often inaccurate, and so are the compass measurements. Moreover, the swipes do not encode any notion of how far the event is from the user, nor is the user's GPS location accurate. Furthermore, multiple events may occur simultaneously, and users do not explicitly indicate which events they are swiping towards. Nonetheless, as more users contribute data, we show that our proposed system is able to quickly detect and estimate the locations of the events. We have implemented iSee on Android phones and have experimented in real-world settings by planting virtual "events" on our campus and asking volunteers to swipe upon seeing one. Results show that iSee performs appreciably better than established triangulation- and clustering-based approaches in terms of localization accuracy, detection coverage, and robustness to sensor noise.
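One robust way to fuse many noisy, distance-free bearings is to score locations by how many swipes point at them, rather than intersecting ray pairs exactly. The grid-voting sketch below is our own illustration of that idea under invented parameters (grid size, cell size, angular tolerance), not iSee's published algorithm:

```python
import math

def locate_event(observations, grid=50, cell=10.0, beam=math.radians(5)):
    """Score each grid cell by how many swipe bearings point at it
    (within an angular tolerance) and return the best-scoring cell.
    observations: list of ((x, y), bearing_radians) user reports."""
    best, best_score = None, -1
    for i in range(grid):
        for j in range(grid):
            cx, cy = i * cell, j * cell
            score = 0
            for (ux, uy), b in observations:
                angle = math.atan2(cy - uy, cx - ux)
                # smallest signed angular difference, wrapped to [-pi, pi]
                diff = abs((angle - b + math.pi) % (2 * math.pi) - math.pi)
                if diff <= beam:
                    score += 1
            if score > best_score:
                best, best_score = (cx, cy), score
    return best

# Three users swipe toward a virtual event at (300, 200):
obs = [((0, 0), math.atan2(200, 300)),
       ((500, 0), math.atan2(200, -200)),
       ((0, 400), math.atan2(-200, 300))]
print(locate_event(obs))  # a cell near the true event at (300, 200)
```

Because each swipe contributes only a soft vote, a few wildly wrong bearings shift the peak little, which is one plausible reason a voting-style fusion can beat strict pairwise triangulation under heavy sensor noise.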
ABSTRACT: Much is known about the immediate and predictive antecedents of smoking lapse, which include situations (e.g., presence of other smokers), activities (e.g., alcohol consumption), and contexts (e.g., outside). This commentary suggests smartphone-based systems could be used to infer these predictive antecedents in real time and provide the smoker with just-in-time intervention. The smartphone of today is equipped with an array of sensors, including GPS, cameras, light sensors, barometers, accelerometers, and so forth, that provide information regarding physical location, human movement, ambient sounds, and visual imagery. We propose that libraries of algorithms to infer these antecedents can be developed and then incorporated into diverse mobile research and personalized treatment applications. While a number of challenges to the development and implementation of such applications are recognized, our field benefits from a database of known antecedents to a problem behavior, and further research and development in this exciting area are warranted.
No preview · Article · May 2013 · Nicotine & Tobacco Research