Courtni Byun’s research while affiliated with Brigham Young University and other places

Publications (6)


It is a Truth Individually Acknowledged: Cross-references On Demand
  • Conference Paper

January 2024

Piper Vasicek · Courtni Byun · Kevin Seppi



Understanding the Roles of Video and Sensor Data in the Annotation of Human Activities

August 2022 · 12 Reads · 2 Citations

Human activities can be recognized in sensor data using supervised machine learning algorithms. In this approach, human annotators must annotate events in the sensor data, and these annotations are used as input to the supervised learning algorithms. Annotating events directly in time-series graphs of data streams is difficult, so video is often collected and synchronized with the sensor data to help annotators identify events in the data. Other work in human activity recognition (HAR) minimizes the cost of annotation by using unsupervised or semi-supervised machine learning algorithms, or by using algorithms that are more tolerant of human annotation errors. Rather than adjusting algorithms, we focus on the performance of the human annotators themselves. Understanding how human annotators perform annotation may lead to annotation interfaces and data collection schemes that better support them. We investigate the accuracy and efficiency of human annotators on four HAR tasks when using video, data, or both to annotate events. After a training period, we found that annotators were more efficient when using data alone on three of the four tasks and more accurate at marking event types when using video alone on all four tasks. Annotators were more accurate at marking event boundaries using data alone on two tasks and using video alone on the other two. Our results suggest that the data and video collected for annotating HAR tasks play different roles in the annotation process, and that these roles may vary with the HAR task.
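
The abstract reports annotator accuracy both for event types and for event boundaries, but the scoring procedure is not spelled out here. Purely as an illustrative sketch, not the paper's method, the snippet below scores one annotator against a reference annotation; the Event class, the greedy pairing, and the max_offset tolerance are all assumptions.

```python
# Hypothetical sketch (not the authors' code): score one annotator's event
# annotations against a reference annotation, in the spirit of the two
# measures described in the abstract: event-type accuracy and boundary error.
from dataclasses import dataclass

@dataclass
class Event:
    start: float   # seconds into the recording
    end: float
    label: str     # activity type, e.g. "walking"

def match_events(annotated, reference, max_offset=1.0):
    """Greedily pair each reference event with the closest unused annotated
    event whose start lies within max_offset seconds (an assumed tolerance)."""
    pairs, used = [], set()
    for ref in reference:
        best, best_d = None, max_offset
        for i, ann in enumerate(annotated):
            d = abs(ann.start - ref.start)
            if i not in used and d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            pairs.append((annotated[best], ref))
    return pairs

def score(annotated, reference):
    """Return label accuracy and mean absolute boundary offset over matched pairs."""
    pairs = match_events(annotated, reference)
    if not pairs:
        return {"type_accuracy": 0.0, "mean_boundary_error": None}
    type_acc = sum(a.label == r.label for a, r in pairs) / len(pairs)
    boundary_err = sum(abs(a.start - r.start) + abs(a.end - r.end)
                       for a, r in pairs) / (2 * len(pairs))
    return {"type_accuracy": type_acc, "mean_boundary_error": boundary_err}

if __name__ == "__main__":
    reference = [Event(0.0, 2.0, "sit"), Event(2.0, 5.5, "walk")]
    annotated = [Event(0.3, 2.1, "sit"), Event(2.4, 5.0, "run")]
    print(score(annotated, reference))
    # {'type_accuracy': 0.5, 'mean_boundary_error': 0.325}
```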


Figure 4: Plot showing human agreement with each model type. CopulaLDA performs slightly worse than LDA. Humans preferred topic assignments from Anchor Words by a wide margin.

Automatic Evaluation of Local Topic Quality
  • Preprint
  • File available

May 2019 · 28 Reads

Wilson Fearn · [...] · Kevin Seppi

Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Even recent models, which aim to improve the quality of these token-level topic assignments, have been evaluated only with respect to global metrics. We propose a task designed to elicit human judgments of token-level topic assignments. We use a variety of topic model types and parameters and discover that global metrics agree poorly with human assignments. Since human evaluation is expensive, we propose a variety of automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments from the task on several datasets. We show that an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. We suggest that this new metric, which we call consistency, be adopted alongside global metrics such as topic coherence when evaluating new topic models.
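
The abstract describes the consistency metric only as being based on the percent of topic switches. Below is a minimal sketch of one plausible reading, assuming consistency is one minus the average fraction of adjacent tokens within a document whose topic assignments differ; the paper's exact definition may differ, so treat the formulation as an assumption.

```python
# Illustrative sketch only: one plausible reading of the "percent of topic
# switches" idea from the abstract, not necessarily the paper's definition.

def topic_switch_rate(token_topics):
    """Fraction of adjacent token pairs whose topic assignments differ,
    given a list of per-token topic ids for a single document."""
    if len(token_topics) < 2:
        return 0.0
    switches = sum(a != b for a, b in zip(token_topics, token_topics[1:]))
    return switches / (len(token_topics) - 1)

def consistency(corpus_token_topics):
    """Average (1 - switch rate) over documents: higher means token-level
    topic assignments change less often within a document."""
    rates = [topic_switch_rate(doc) for doc in corpus_token_topics]
    return 1.0 - sum(rates) / len(rates)

if __name__ == "__main__":
    docs = [
        [0, 0, 0, 2, 2],  # one switch over four adjacent pairs -> 0.25
        [1, 1, 1, 1],     # no switches -> 0.0
    ]
    print(round(consistency(docs), 3))  # 0.875
```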


Citations (4)


... [13] Large language models are frequently deficient in providing real papers and correctly matching authors to their own papers when generating citations, and are therefore at risk of creating fictitious citations that appear convincing despite incorrect information, including digital object identifier numbers [6,14]. Studies evaluating perceptions of AI use in academic journals and the strengths and weaknesses of the tools revealed no agreement on how to report the use of AI tools [15]. There are many tools; for example, some are used to improve grammar and others to generate content, yet parameters on substantive versus nonsubstantive use are lacking. ...

Reference:

Use of artificial intelligence in family medicine publications: Joint statement from journal editors
This Reference Does Not Exist: An Exploration of LLM Citation Accuracy and Relevance
  • Citing Conference Paper
  • January 2024
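
As an aside on the kind of error this citing paper describes (plausible-looking references whose DOIs do not resolve), a generated DOI can be screened against Crossref's public REST API, which returns 404 for unknown DOIs. The helper below is a hypothetical sketch, not code from either paper, and the placeholder DOI is just that, a placeholder.

```python
# Hypothetical helper: check whether a DOI string has a record in the public
# Crossref REST API (https://api.crossref.org). A 404 means Crossref has no
# such DOI; other HTTP errors (e.g. rate limits) are surfaced to the caller.
import requests  # third-party; pip install requests

def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref has a record for this DOI, False on 404."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    if resp.status_code == 404:
        return False
    resp.raise_for_status()
    return True

if __name__ == "__main__":
    # Placeholder DOI for illustration; replace with a citation you want to check.
    print(doi_exists("10.1000/placeholder-doi"))
```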

... Automated computational tools (e.g., LLMs and other AI tools) are increasingly being deployed to complete various tasks and workflows in professional settings, including research tasks [4,12,28]. This has already led to (sometimes tongue-in-cheek, sometimes earnest) speculation and exploration of whether computational tools could replace humans in the research process altogether [8,26]. This is generally motivated by the substantial impact such tools might have on time saved, especially in the context of secondary research, which is generally quite time-consuming [5,32,39]. ...

Dispensing with Humans in Human-Computer Interaction Research
  • Citing Conference Paper
  • April 2023

... Investigating the efficacy and precision of human annotators when employing video, data, or both to annotate events across four human activity recognition (HAR) tasks, [28] observed that annotators were more accurate in classifying kinds of events when employing video alone on all four tasks and more effective when using data alone on three of the four assignments. Annotations of event boundaries based on data alone were more accurate. ...

Understanding the Roles of Video and Sensor Data in the Annotation of Human Activities
  • Citing Article
  • August 2022