Michael Jones’s research while affiliated with Brigham Young University and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (16)


Editorial: Inbodied interaction
  • Article
  • Full-text available

October 2023 · 13 Reads

Frontiers in Computer Science

m. c. schraefel · Michael Jones · [...]


Discomfort: a new material for interaction design

August 2023 · 85 Reads · 1 Citation

Frontiers in Computer Science

We present discomfort as a new material for HCI researchers and designers to consider in applications that help a person develop a new skill, practice, or state. In this context, discomfort is a fundamental precursor to a necessary adaptation which leads to the development of a new skill, practice, or state. The way in which discomfort is perceived, and when it is experienced, is often part of a rationale for rejecting or adopting a practice. Factors that influence the choice to accept or reject a practice of discomfort create opportunities for designing interactions that facilitate discomfort. Enabling effective engagement with discomfort may therefore open opportunities for increased personal development. We propose incorporating discomfort-as-material into our designs explicitly as a mechanism to make desired adaptations available to more of us, more effectively, and more of the time. To explore this possibility, we offer an overview of the physiology and neurology of discomfort in adaptation and propose three issues related to incorporating discomfort into design: preparation for discomfort, need for recovery, and value of the practice.


Understanding the Roles of Video and Sensor Data in the Annotation of Human Activities

August 2022 · 12 Reads · 2 Citations

Human activities can be recognized in sensor data using supervised machine learning algorithms. In this approach, human annotators must annotate events in the sensor data; these annotations are then used as input to the supervised learning algorithms. Annotating events directly in time-series graphs of data streams is difficult, so video is often collected and synchronized with the sensor data to help annotators identify events. Other work in human activity recognition (HAR) minimizes the cost of annotation by using unsupervised or semi-supervised machine learning algorithms, or by using algorithms that are more tolerant of human annotation errors. Rather than adjusting algorithms, we focus on the performance of the human annotators themselves. Understanding how annotators perform annotation may lead to annotation interfaces and data collection schemes that better support them. We investigate the accuracy and efficiency of human annotators on four HAR tasks when using video, data, or both to annotate events. After a training period, we found that annotators were more efficient using data alone on three of the four tasks and more accurate when marking event types using video alone on all four tasks. When marking event boundaries, annotators were more accurate using data alone on two tasks and using video alone on the other two. Our results suggest that the data and video collected for annotating HAR tasks play different roles in the annotation process, and that these roles may vary with the task.
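To make the supervised-HAR setup described above concrete, here is a minimal sketch of training an event classifier from annotated sensor windows. The synthetic stream, window width, feature set, and choice of RandomForestClassifier are all illustrative assumptions, not the pipeline used in the paper.

```python
# Minimal sketch of a supervised HAR pipeline: window the sensor stream,
# compute simple per-window features, and train a classifier on the
# annotations. Synthetic data stands in for a real accelerometer stream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(stream, labels, width=50, step=25):
    """Slice a 1-D sensor stream into fixed-size windows, compute simple
    per-window features, and give each window its majority label."""
    X, y = [], []
    for start in range(0, len(stream) - width, step):
        w = stream[start:start + width]
        X.append([w.mean(), w.std(), w.min(), w.max()])
        y.append(np.bincount(labels[start:start + width]).argmax())
    return np.array(X), np.array(y)

# Stand-in for an annotated stream: 100 activity segments of 50 samples
# each, with activity 1 shifted so the two classes are separable.
rng = np.random.default_rng(0)
labels = np.repeat(rng.integers(0, 2, size=100), 50)
stream = rng.normal(size=labels.size) + 2.0 * labels

X, y = window_features(stream, labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The quality of the labels fed into `window_features` is exactly what the study examines: errors in the annotated event types or boundaries propagate directly into the training windows.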



Discomfort: a New Material for Interaction Design

May 2021 · 45 Reads

This paper proposes discomfort as a new material for HCI researchers and designers to consider in any application that helps a person develop a new skill, practice, or state. Discomfort is a fundamental precursor of adaptation, and adaptation leads to a new skill, practice, or state. The way in which discomfort is perceived, and when it is experienced, is also often part of a rationale for rejecting or adopting a practice. Engaging effectively with discomfort may lead to increased personal development. We propose incorporating discomfort-as-material into our designs explicitly as a mechanism to make desired adaptations available to more of us, more effectively, and more of the time. To explore this possibility, we offer an overview of the physiology and neurology of discomfort in adaptation and propose three issues related to incorporating discomfort into design: preparation for discomfort, need for recovery, and value of the practice. In the workshop, we look forward to exploring and developing ideas for specific Discomfortable Designs that insource discomfort as part of positive, resilient adaptation.


Rethinking the Role of a Mobile Computing in Recreational Hiking

October 2020 · 31 Reads · 6 Citations

Mobile computing devices, especially smartphones, are part of the recreational hiking experience in the United States. In our 2017 survey of over a thousand people in the United States, about 95% of respondents reported that they prefer to bring a smartphone when they go hiking. A smartphone used during hiking is simply a tool, and that tool can improve or worsen the quality of a hiking experience. In this chapter, we propose a vision of interactive mobile computing design for hiking that may improve the quality of the hiking experience. Our vision of interactive computing and hiking is built on three principles: time spent outdoors is good for individuals, computing can play a positive role in outdoor recreation, and human–nature interaction is more important than human–computer interaction. We illustrate our approach using an extended scenario.



Figure 2: Synchronization diagram. The blue signals denote movement from the same event.
Synchronization between Sensors and Cameras in Movement Data Labeling Frameworks

November 2019 · 208 Reads · 5 Citations

Michael Jones · Kevin Seppi · [...] · Paul J M Havinga

Obtaining labeled data for activity recognition tasks is tremendously time-consuming, tedious, and labor-intensive. Often, ground-truth video of the activity is recorded along with the sensor data, and the data must be synchronized with the recorded video to be useful. In this paper, we present and compare two labeling frameworks, each of which takes a different approach to synchronization. Approach A uses time-stamped visual indicators positioned on the data loggers. It yields accurate synchronization between video and data, but it adds overhead, is impractical when using multiple sensors, subjects, and cameras simultaneously, and must be redone for each recording session. Approach B uses Real-Time Clocks (RTCs) on the devices for synchronization, which is less accurate but has several advantages: multiple subjects can be recorded on various cameras, synchronization only needs to be done once across multiple recording sessions, and it therefore becomes easier to collect more data, which increases the probability of capturing an unusual activity. The best way forward is likely a combination of both approaches.
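As a rough illustration of Approach B, the sketch below maps a sensor sample's RTC timestamp into video time using a measured clock offset. The timestamps, offset value, and function name are invented for illustration; the paper does not prescribe this exact computation.

```python
# Hypothetical sketch of RTC-based alignment in the spirit of Approach B:
# correct the device RTC by its measured drift relative to the camera
# clock, then express the sample as seconds into the video.
from datetime import datetime, timedelta

def sensor_time_to_video_seconds(sensor_ts, video_start, rtc_offset):
    """Map a sensor sample's RTC timestamp to seconds into the video.

    rtc_offset is the measured drift between the device RTC and the
    camera clock (positive when the RTC runs ahead of the camera)."""
    corrected = sensor_ts - rtc_offset
    return (corrected - video_start).total_seconds()

video_start = datetime(2019, 11, 1, 10, 0, 0)   # camera clock at record start
sensor_ts   = datetime(2019, 11, 1, 10, 5, 2)   # device RTC at the sample
rtc_offset  = timedelta(seconds=1.5)            # measured drift (assumed)

print(sensor_time_to_video_seconds(sensor_ts, video_start, rtc_offset))
# -> 300.5, i.e. the sample lands 5 min 0.5 s into the video
```

Because the offset is measured once, the same correction can be reused across recording sessions, which is the advantage the abstract attributes to Approach B.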


Understanding How Non-experts Collect and Annotate Activity Data

September 2019 · 8 Reads

Inexpensive, low-power sensors and microcontrollers are widely available, along with tutorials on using them in systems that sense the world around them. Despite this progress, it remains difficult for non-experts to design and implement event recognizers that find events in raw sensor data streams. Such a recognizer might identify specific events, such as gestures, from accelerometer or gyroscope data and be used to build an interactive system. While it is possible to use machine learning to learn event recognizers from labeled examples in sensor data streams, non-experts find it difficult to label events using sensor data alone. We combine sensor data and video recordings of example events to create a better interface for labeling examples. Non-expert users were able to collect video and sensor data and then quickly and accurately label example events using the video and sensor data together. We include three example systems based on event recognizers that were trained from examples labeled using this process.
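As one way to picture how labels made on video reach the sensor stream, the hypothetical sketch below converts a frame range marked on the video into sample indices in the synchronized sensor data. The frame rate, sampling rate, and sync offset are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: map a labeled video frame range to indices in the
# synchronized sensor stream. All three constants are assumed values.
FPS = 30              # video frames per second
HZ = 100              # sensor samples per second
SYNC_OFFSET_S = 2.0   # sensor recording started 2 s before the video

def frames_to_samples(start_frame, end_frame):
    """Convert a labeled video frame range to sensor sample indices."""
    start_s = start_frame / FPS + SYNC_OFFSET_S
    end_s = end_frame / FPS + SYNC_OFFSET_S
    return int(start_s * HZ), int(end_s * HZ)

# A gesture labeled between frames 450 and 510 on the video...
print(frames_to_samples(450, 510))  # -> (1700, 1900) in the sensor stream
```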


W.O.U.S.: Widgets of Unusual Size

March 2018 · 28 Reads

Recent work in tangible interfaces, including widget sets like .NET Gadgeteer and Phidgets, has enabled prototyping of rich physical interaction at a handheld or tabletop scale. But it remains unclear how participants respond to physical widgets at larger scales. What kinds of interaction would larger widgets enable, and what kinds of systems, if any, can or should be built with them? We built unusually sized widgets, or "mega-widgets," in order to explore this territory. We present the results of two iterations of building mega-widgets and accompanying user studies designed to help understand participants' reactions to mega-widgets and to probe possible applications. Responses indicated, among other things, a correlation between widget size and the perceived size or importance of what it might control. Mega-widgets were also perceived as increasing the precision of user input control and as providing a fun and playful element. We hope that knowledge gained from this exploratory work can help lay the groundwork for further exploration of widgets at larger scales.


Citations (8)


... A study of the efficacy and precision of human annotators, whether employing video, data, or both for annotating events across four human activity recognition (HAR) tasks [28], observed that annotators were more accurate in classifying kinds of events when employing video alone on all four tasks and more effective when using data alone on three of the four assignments. The annotations of event boundaries based on data alone were more accurate. ...

Reference:

Application of human-computer interaction technology integrating biomimetic vision system in animation design with a biomechanical perspective
Understanding the Roles of Video and Sensor Data in the Annotation of Human Activities
  • Citing Article
  • August 2022

... Such apps have developed a variety of functionalities, from personalized training plans and weight-loss tracking to measuring steps and distance covered and estimating calorie loss, and some apps also explore social interactions in this context. Hakkila and Rovaniemi [7] and Anderson and Jones [1] argue that mobile technology has the potential to support activities in nature in ways that can be regarded as calming, relaxing, and purifying, provided that the systems developed support users in an unobtrusive manner. For example, the Hobbit app [17] explores the concept of an asocial hiking app, in which users can generate routes that avoid meeting other people. ...

Rethinking the Role of a Mobile Computing in Recreational Hiking
  • Citing Chapter
  • October 2020

... Self-reflection empowers athletes to draw upon their prior experiences, effectively leveraging them to improve future performances in pursuit of their goals [63]. Previous studies have explored the use of various tools to support self-reflection on running data, including dashboards [38], applications on smartwatches, smartphones, and smart devices [23,26,36], physicalization of data [1,32], and integrated displays on running shoes [60]. While these reflection tools have different objectives, they share the common goal of enhancing self-knowledge, self-modelling, and goal tracking to help with promoting positive running behaviour, motor learning [14,51], and self-development in sports [22]. ...

Tangible Interactions with Physicalizations of Personal Experience Data
  • Citing Conference Paper
  • January 2020

... Many of the most recently published papers in the field of machine learning and Activity Recognition (AR) rely heavily on labeled data sets. For this reason, both the synchronization approach using a visual key and the approach using real-time clocks were used to label the obtained data [32]. ...

Synchronization between Sensors and Cameras in Movement Data Labeling Frameworks

... Note that only one data logger and one video stream are captured. This approach has been used in data collection for Alpine skiing [4], hiking [3], and rock climbing [3]. Synchronization in this process involves capturing, on video, a red flash emitted by the data logger ten seconds after the data logger is turned on. ...

Accelerometer data and video collected while hiking and climbing at UbiMount 2016
  • Citing Conference Paper
  • September 2017

... Up to now various movement related workshops have taken place, e.g. [8,37,39,54]. We want to highlight the workshop "Move to be Moved" [20], that focused on discussing the emerging landscape that is formed by movement-based design, establishing an academic community in IxD and HCI. ...

UbiMount: 2nd workshop on ubiquitous computing in the mountains
  • Citing Conference Paper
  • September 2017

... IMU suits are a popular sensing modality in studies that focus on vibration [16] or turn detection [17]. Other studies investigated skier turn detection algorithms for alpine skiing and utilized a small number of IMUs across various locations on the skier body such as the knee [18] or boot cuff [19,20]. Ref. [6] had a similar sensing setup and used in-field data to classify specific skiing maneuver types. ...

Automatic detection of alpine ski turns in sensor data
  • Citing Conference Paper
  • September 2016