Alexander J. Quinn’s research while affiliated with Purdue University West Lafayette and other places


Publications (31)


CARING-AI: Towards Authoring Context-aware Augmented Reality INstruction through Generative Artificial Intelligence
  • Preprint
  • File available

January 2025 · 31 Reads

Rahul Jain · Seungguen Chi · [...]
Context-aware AR instruction enables adaptive and in-situ learning experiences. However, hardware limitations and expertise requirements constrain the creation of such instructions. With recent developments in Generative Artificial Intelligence (GenAI), current research tries to tackle these constraints by deploying AI-generated content (AIGC) in AR applications. However, our preliminary study with six AR practitioners revealed that current AIGC lacks the contextual information needed to adapt to varying application scenarios and therefore offers limited support for authoring. To utilize the strong generative power of GenAI to ease the authoring of AR instruction while capturing the context, we developed CARING-AI, an AR system to author context-aware humanoid-avatar-based instructions with GenAI. By navigating the environment, users naturally provide contextual information to generate humanoid-avatar animations as AR instructions that blend into the context spatially and temporally. We showcased three application scenarios of CARING-AI: Asynchronous Instructions, Remote Instructions, and Ad Hoc Instructions, based on a design space of AIGC in AR instructions. With two user studies (N=12), we assessed the usability of CARING-AI and demonstrated the ease and effectiveness of authoring with GenAI.


ImpersonatAR: Using Embodied Authoring and Evaluation to Prototype Multi-Scenario Use Cases for Augmented Reality Applications

September 2023 · 16 Reads · 2 Citations

Journal of Computing and Information Science in Engineering

Prototyping use cases for Augmented Reality (AR) applications can help elicit the functional requirements of features early on, driving the subsequent development in a goal-oriented manner. Doing so requires designers to identify the goal-oriented interactions and map the associations between those interactions in a spatio-temporal context. Given the multiple scenarios that may result from this mapping, and the embodied nature of the interaction components, recent AR prototyping methods lack the support to adequately capture and communicate the intent of designers and stakeholders during this process. We present ImpersonatAR, a mobile-device-based prototyping tool that utilizes embodied demonstrations in the augmented environment to support prototyping and evaluation of multi-scenario AR use cases. The approach: 1) captures events or steps in the form of embodied demonstrations using avatars and 3D animations, 2) organizes events and steps to compose a multi-scenario experience, and 3) allows stakeholders to explore the scenarios through interactive role-play with the prototypes. We conducted a user study in which 10 participants prototyped use cases from two different AR application features using ImpersonatAR. Results validated that ImpersonatAR promotes exploration and evaluation of diverse design possibilities for multi-scenario AR use cases through embodied representations of the different scenarios.


The Design of a Virtual Prototyping System for Authoring Interactive VR Environments from Real World Scans

July 2023 · 28 Reads · 10 Citations

Journal of Computing and Information Science in Engineering

Domain users (DUs) with a knowledge base in specialized fields are frequently excluded from authoring Virtual Reality (VR)-based applications in their fields, largely due to the VR programming expertise needed to author these applications. To address this concern, we developed VRFromX, a system workflow designed to make the virtual content creation process accessible to DUs irrespective of their programming skills and experience. VRFromX provides an in-situ process of content creation in VR that (a) allows users to select regions of interest in scanned point clouds or sketch in mid-air using a brush tool to retrieve virtual models, and (b) attach behavioral properties to those objects. Using a welding use case, we performed a usability evaluation of VRFromX with 20 DUs, of whom 12 were novices in VR programming. Study results indicated positive user ratings for the system features, with no significant differences across users with or without VR programming expertise. Based on the qualitative feedback, we also implemented two other use cases to demonstrate potential applications. We envision that the solution can facilitate the adoption of immersive technology to create meaningful virtual environments.


Interacting Objects: A Dataset of Object-Object Interactions for Richer Dynamic Scene Representations

January 2023 · 8 Reads · 15 Citations

IEEE Robotics and Automation Letters

Dynamic environments in factories, surgical robotics, and warehouses increasingly involve humans, machines, robots, and various other objects such as tools, fixtures, conveyors, and assemblies. In these environments, numerous interactions occur not just between humans and objects but also between objects themselves. However, current scene-graph datasets predominantly focus on human-object interactions (HOI) and overlook object-object interactions (OOI), despite the necessity of OOIs for effectively representing dynamic environments. This oversight creates a significant gap in the coverage of interactive elements in dynamic scenes. We address this gap by proposing, to the best of our knowledge, the first dataset* annotated with OOI categories in dynamic scenes. To model OOIs, we establish a classification taxonomy for spatio-temporal interactions. We use our taxonomy to annotate OOIs in video clips of dynamic scenes. Then, we introduce a spatio-temporal OOI classification task which aims to identify the interaction category between two given objects in a video clip. Further, we benchmark our dataset for the spatio-temporal OOI classification task by adopting state-of-the-art approaches from the related areas of Human-Object Interaction Classification, Visual Relationship Classification, and Scene-Graph Generation. Additionally, we utilize our dataset to examine the effectiveness of OOI- and HOI-based features in the context of Action Recognition. Notably, our experimental results show that OOI-based features outperform HOI-based features for the task of Action Recognition.
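The taxonomy described above pairs two object tracks with a spatio-temporal interaction category over a span of a video clip. As a rough illustration only (the field names and the example category below are hypothetical, not the dataset's actual schema), an OOI annotation could be modeled as:

```python
from dataclasses import dataclass

# Hypothetical record for one object-object interaction (OOI) annotation;
# the dataset's real format is not specified in this summary.
@dataclass(frozen=True)
class OOIAnnotation:
    subject_id: str   # first object track in the clip
    object_id: str    # second object track in the clip
    category: str     # spatio-temporal interaction class from the taxonomy
    start_frame: int  # first frame where the interaction holds
    end_frame: int    # last frame where the interaction holds

# The classification task: given a clip and two object tracks,
# predict `category` for the annotated span.
ann = OOIAnnotation("conveyor_1", "box_7", "transports", 120, 310)
print(ann.category, ann.end_frame - ann.start_frame)  # -> transports 190
```

Such tuples generalize the (human, verb, object) triples of HOI datasets to arbitrary object pairs, which is what lets OOI features feed into downstream tasks like action recognition.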


TaskLint: Automated Detection of Ambiguities in Task Instructions

October 2022 · 14 Reads · 3 Citations

Proceedings of the AAAI Conference on Human Computation and Crowdsourcing

Clear instructions are a necessity for obtaining accurate results from crowd workers. Even small ambiguities can force workers to choose an interpretation arbitrarily, resulting in errors and inconsistency. Crisp instructions require significant time to design, test, and iterate. Recent approaches have engaged workers to detect and correct ambiguities. However, this process increases the time and money required to obtain accurate, consistent results. We present TaskLint, a system to automatically detect problems with task instructions. Leveraging a diverse set of existing NLP tools, TaskLint identifies words and sentences that might foretell worker confusion. This is analogous to static analysis tools for code ("linters"), which detect patterns in code that might indicate the presence of bugs. Our evaluation of TaskLint using task instructions created by novices confirms the potential for static tools to improve task clarity and the accuracy of results, while also highlighting several challenges.
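The linter analogy can be made concrete with a toy example. The word list and the single rule below are invented for illustration; the real TaskLint combines a diverse set of NLP tools rather than one lexicon lookup:

```python
import re

# Words that often signal underspecified instructions
# (illustrative list, not TaskLint's actual lexicon).
VAGUE_TERMS = {"some", "several", "appropriate", "relevant", "good", "etc"}

def lint_instructions(text: str) -> list[tuple[int, str]]:
    """Return (sentence_index, vague_word) pairs, like a linter's warnings."""
    warnings = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences):
        for word in re.findall(r"[a-z]+", sentence.lower()):
            if word in VAGUE_TERMS:
                warnings.append((i, word))
    return warnings

task = "Label the relevant objects. Write a good description, etc."
print(lint_instructions(task))  # -> [(0, 'relevant'), (1, 'good'), (1, 'etc')]
```

As with code linters, each warning points the requester at a specific span to tighten before the task ever reaches workers, rather than discovering the ambiguity through inconsistent results.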





TaskMate: A Mechanism to Improve the Quality of Instructions in Crowdsourcing

May 2019 · 104 Reads · 33 Citations

Developing instructions for microtask crowd workers takes time if the instructions are to be interpreted consistently. Even with substantial effort, workers may still misinterpret the instructions due to ambiguous language and structure in the task design. Prior work demonstrated methods for facilitating iterative improvement with help from the requester. However, any participation by the requester reduces the time saved by delegating the work, and hence the utility of crowdsourcing. We present TaskMate, a system for facilitating worker-led refinement of task instructions with minimal involvement by the requester. Small teams of workers search for ambiguities and vote on the interpretation they believe the requester intended. This paper describes the workflow, our implementation, and our preliminary evaluation.
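The voting step above can be pictured as plurality aggregation over candidate interpretations. This is a minimal sketch under that assumption; TaskMate's actual workflow also covers searching for ambiguities and coordinating the worker teams:

```python
from collections import Counter

def plurality_vote(ballots: list[str]) -> str:
    """Pick the interpretation most workers believe the requester intended."""
    counts = Counter(ballots)
    winner, _ = counts.most_common(1)[0]
    return winner

# Three workers vote on what "recent papers" should mean in a task.
ballots = ["published after 2015", "published after 2015", "last 12 months"]
print(plurality_vote(ballots))  # -> published after 2015
```

The winning interpretation can then be written back into the instructions, so later workers never face the ambiguity at all.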


Fig. 3 compares the results of experiment 1 over 100 replications, and Table 1 summarizes the measurements from the simulation. In terms of operational cost per task, CRP-H is slightly lower than the control group. In addition, while the standard deviation of CRP-H is slightly higher than that of the control group, the average makespan of CRP-H is ~45% less. Two t-tests were conducted to compare the two operational costs and the two makespans; at the 0.05 significance level, both null hypotheses were rejected. Therefore, CRP-H outperforms the control group in terms of both operational cost and makespan.
Fig. 3 Experiment 1: Operational cost (left), Makespan (right)
Fig. 4 compares the results of experiment 2 over 100 replications, and Table 2 summarizes the measurements from the simulation. The introduction of the helper robot in experiment 2 allowed us to test more complex scenarios. CRP-H performs significantly better than the control group in both makespan and cost. Furthermore, the standard deviations of makespan and cost for CRP-H are also lower, showing more consistent results.
Table 1: Result of Experiment 1
Table 2: Result of Experiment 2
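The hypothesis tests described above (two-sample t-tests on cost and makespan at the 0.05 level) can be sketched as follows. The makespan samples here are fabricated purely for illustration and are not the paper's simulation data:

```python
import math
import statistics

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )

# Illustrative makespans: CRP-H roughly 45% below the control group.
crp_h   = [54.0, 56.5, 53.2, 57.1, 55.0, 54.8]
control = [99.0, 101.5, 98.2, 102.0, 100.3, 99.7]

t = welch_t(crp_h, control)
print(round(t, 1))  # large negative t -> reject "equal means" at alpha = 0.05
```

With 100 replications per condition, as in the experiments, even a modest mean difference yields a t statistic far in the rejection region, which is consistent with both null hypotheses being rejected.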
Collaboration Requirement Planning Protocol for HUB-CI in Factories of the Future

January 2019 · 81 Reads · 17 Citations

Procedia Manufacturing

Rapid advances in production systems' models and technology continually challenge manufacturers preparing for the factories of the future. To address the complexity issues typically coupled with these improvements, we have developed a brain-inspired model for production systems, HUB-CI: a virtual Hub for Collaborative Intelligence that receives human instructions from a human-computer interface and, in turn, commands robots via ROS. The purpose of HUB-CI is to manage diverse local information and real-time signals obtained from system agents (robots, humans, and warehouse components, e.g., carts, shelves, racks) and to globally update real-time assignments and schedules for those agents. Using Collaborative Control Theory (CCT), we first develop the collaborative requirement planning protocol for a HUB-CI (CRP-H), through which we can synchronize the agents to work smoothly and execute rapidly changing tasks. This protocol is designed to answer: which robot(s) should perform each human-assigned task, and when should this task be performed? The two primary phases of CRP-H, CRP-I (task assignment optimization) and CRP-II (agent schedule harmonization), are developed and validated for two test scenarios: a two-robot collaboration system with five tasks, and a two-robot-and-helper-robot collaboration system with 25 tasks. Simulation results indicate that under CRP-H, both the operational cost and the makespan of the production work are significantly reduced in the two scenarios. We conclude with the implications and future plans for integrating HUB-CI and CRP-H in a cyber-augmented physical simulation model.
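The question CRP-I answers, which robot should perform each human-assigned task, can be illustrated with a toy greedy assignment. The costs and the load-balancing rule below are invented for illustration; the abstract does not specify CRP-I's actual optimization method:

```python
# Toy CRP-I-style assignment: give each task to the robot minimizing
# its accumulated load plus its cost for that task (hypothetical costs;
# the real CRP-I optimization is more sophisticated).
def greedy_assign(tasks: dict[str, dict[str, float]]) -> dict[str, str]:
    load = {r: 0.0 for costs in tasks.values() for r in costs}
    plan = {}
    for task, costs in tasks.items():
        robot = min(costs, key=lambda r: load[r] + costs[r])
        plan[task] = robot
        load[robot] += costs[robot]
    return plan

tasks = {
    "pick_part": {"robot_a": 3.0, "robot_b": 5.0},
    "move_cart": {"robot_a": 4.0, "robot_b": 2.0},
    "weld_seam": {"robot_a": 6.0, "robot_b": 6.0},
}
print(greedy_assign(tasks))
# -> {'pick_part': 'robot_a', 'move_cart': 'robot_b', 'weld_seam': 'robot_b'}
```

CRP-II would then take such an assignment and harmonize the agents' schedules, answering the "when" half of the question.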


Citations (27)


... Using two ESP8266 microcontrollers connected to a single ThingSpeak IoT cloud channel, we automate the water supply system. Separate mobile hotspots or Wi-Fi routers are necessary to connect the two microcontrollers to the cloud server [11,12]. One microcontroller should be positioned close to the tank and the other close to the river or the dam. ...

Reference:

A Smart Pumping System with Intelligent Control Algorithm for Optimizing Energy Efficiency

International Journal of Scientific Methods in Computational Science and Engineering
Interacting Objects: A Dataset of Object-Object Interactions for Richer Dynamic Scene Representations
  • Citing Article
  • January 2023

IEEE Robotics and Automation Letters

... Ipsita et al. [70,71] developed a VR prototyping system for welding that enables subject-matter experts to author virtual environments from real-world scans; expertise in VR programming was not required to author content using the system. ...

The Design of a Virtual Prototyping System for Authoring Interactive VR Environments from Real World Scans
  • Citing Article
  • July 2023

Journal of Computing and Information Science in Engineering

... Workflow and task design have also received strong attention in the HCOMP community [40,71,111]. Cost-quality-time optimization [39], predicting label quality [51], and aggregation mechanisms [107] were some of the objectives pursued in this direction. ...

WingIt: Efficient Refinement of Unclear Task Instructions
  • Citing Article
  • June 2018

Proceedings of the AAAI Conference on Human Computation and Crowdsourcing

... The performance of the tool in detecting and correcting ambiguity was compared to that of people. Similarly, the authors [22] propose TaskLint, a system that identifies errors with task instructions automatically. TaskLint employs a variety of existing natural language processing (NLP) methods to recognize words and phrases that may indicate worker uncertainty. ...

TaskLint: Automated Detection of Ambiguities in Task Instructions

Proceedings of the AAAI Conference on Human Computation and Crowdsourcing

... A common concern with crowdsourcing is whether inexpert workers have sufficient expertise to successfully undertake a given annotation task. Intuitively, more guidance and scaffolding are likely necessary with more skilled tasks and fewer expert workers (Huang et al., 2021). Alternatively, if we use sufficiently expert annotators, we assume difficult cases can be handled (Retelny et al., 2014;Vakharia and Lease, 2015). ...

Task Design for Crowdsourcing Complex Cognitive Skills
  • Citing Conference Paper
  • May 2021

... Many prior physical task performance support systems are built around linear and single-path tasks, e.g., machine operation [10,14,25,36,37] and assembly [63,64]. These tutorial systems do not have the affordance for reconfiguration. ...

AdapTutAR: An Adaptive Tutoring System for Machine Tasks in Augmented Reality
  • Citing Conference Paper
  • May 2021

... Others pair natural language with other input modalities, such as gesture, to resolve ambiguity and further convey exactness [41]. Many interfaces either assume that the robot has visual access to task-critical objects (as is often the case for closed collaborative environments [13,17,35]), has previously encountered them, or is at least capable of finding them [12,14,29,30]. Users are thereby unable to convey a belief about where objects might be. ...

Vipo: Spatial-Visual Programming with Functions for Robot-IoT Workflows
  • Citing Conference Paper
  • April 2020

... (Table excerpt) Shared collaborative decision-making, references [21], [40], [9], [10]; advantages: enhanced decision-making, optimized collaboration, and improved system performance. ...

Collaboration Requirement Planning Protocol for HUB-CI in Factories of the Future

Procedia Manufacturing

... The prevalence bound uses an estimate of the proportion of indeterminate items to construct a performance interval. This proportion can be estimated by examining a random sample of items for indeterminacy (e.g., via crowdsourcing techniques [8,25]). The partition bound is obtained by splitting the evaluation corpus into two subsets: determinate (items with |VRS| = 1) and indeterminate (|VRS| > 1). ...

TaskMate: A Mechanism to Improve the Quality of Instructions in Crowdsourcing
  • Citing Conference Paper
  • May 2019

... Some exceptionally compliant treated establishments never participate in disclosure, perhaps as a countersignal (see Bederson et al., 2018), or from concern that current disclosure will commit them to future disclosure (see e.g., Grubb, 2011). Similarly, establishments expecting poor performance might still participate due to company policy, or perceived commitment from prior disclosure. ...

Incomplete Disclosure: Evidence of Signaling and Countersignaling
  • Citing Article
  • February 2018

American Economic Journal: Microeconomics