September 2019
Inexpensive, low-power sensors and microcontrollers are widely available, along with tutorials on using them in systems that sense the world around them. Despite this progress, it remains difficult for non-experts to design and implement event recognizers that find events in raw sensor data streams. Such a recognizer might identify specific events, such as gestures, from accelerometer or gyroscope data, and could be used to build an interactive system. While it is possible to use machine learning to learn event recognizers from labeled examples in sensor data streams, non-experts find it difficult to label events using sensor data alone. We combine sensor data and video recordings of example events to create a better interface for labeling examples. Non-expert users were able to collect video and sensor data and then quickly and accurately label example events using the video and sensor data together. We present three example systems built on event recognizers trained from examples labeled with this process.
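
To make the pipeline concrete, here is a minimal sketch of training an event recognizer from labeled sensor windows. The abstract does not specify the learning algorithm or features, so the sliding-window length, the hand-crafted per-axis statistics, and the random-forest classifier below are illustrative assumptions, not the paper's method; the per-sample labels stand in for those produced by the video-assisted annotation the paper describes.

```python
# Sketch: learn a gesture recognizer from labeled accelerometer windows.
# Window size, features, and classifier are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 50  # samples per window (e.g., 0.5 s at 100 Hz) -- assumed

def extract_features(block):
    """Summarize one (WINDOW, 3) block of x/y/z accelerometer samples."""
    return np.concatenate([
        block.mean(axis=0),                      # mean per axis
        block.std(axis=0),                       # variability per axis
        block.max(axis=0) - block.min(axis=0),   # range per axis
    ])

def windows(stream, labels, step=25):
    """Slide a fixed-size window over the stream; label each window by
    the majority of its per-sample labels (in the paper's workflow,
    those labels would come from video-assisted annotation)."""
    X, y = [], []
    for start in range(0, len(stream) - WINDOW + 1, step):
        X.append(extract_features(stream[start:start + WINDOW]))
        y.append(np.bincount(labels[start:start + WINDOW]).argmax())
    return np.array(X), np.array(y)

# Synthetic stand-in data: 1000 samples of 3-axis accelerometer readings
# with per-sample event labels (0 = background, 1 = gesture).
rng = np.random.default_rng(0)
stream = rng.normal(size=(1000, 3))
labels = (rng.random(1000) > 0.8).astype(int)

X, y = windows(stream, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))  # classify the first few windows
```

In a deployed version of this sketch, the trained classifier would run over a live sensor stream the same way: buffer the most recent WINDOW samples, extract features, and predict, emitting an event when the predicted label changes from background to gesture.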