J. Gregory Trafton

Washington DC VA Medical Center, Washington, D.C., United States

Publications (146) · 66.31 Total Impact

  • ABSTRACT: This work investigated the impact of uncertainty representation on performance in a complex authentic visualization task, submarine localization.
    Human Factors: The Journal of the Human Factors and Ergonomics Society 05/2014; 56(3):509-20. · 1.18 Impact Factor
  • J. Gregory Trafton, Raj M. Ratwani
    ABSTRACT: Many interfaces have been designed to prevent or reduce errors. These interfaces may, in fact, reduce the error rate of specific error classes, but may also have unintended consequences. In this paper, we show a series of studies where a better interface did not reduce the number of errors but instead shifted errors from one error class (omissions) to another error class (perseverations). We also show that having access to progress tracking (a progress bar) does not reduce the number of errors. We propose and demonstrate a solution -- a predictive error system -- that reduces errors based on the error class, not on the type of interface.
    04/2014;
  • Paul Baxter, J. Gregory Trafton
    ABSTRACT: Developments in autonomous agents for Human-Robot Interaction (HRI), particularly social, are gathering pace. The typical approach to such efforts is to start with an application to a specific interaction context (problem, task, or aspect of interaction) and then try to generalise to different contexts. Alternatively however, the application of Cognitive Architectures emphasises generality across contexts in the first instance. While not the "silver-bullet" solution, this perspective has a number of advantages both in terms of the functionality of the resulting systems, and indeed in the process of applying these ideas. Centred on invited talks to present a range of perspectives, this workshop provides a forum to introduce and discuss the application (both existing and potential) of Cognitive Architectures to HRI, particularly in the social domain. Participants will gain insight into how such a consideration of Cognitive Architectures complements the development of autonomous social robots.
    Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction; 03/2014
  • Lilia Moshkina, Susan Trickett, J. Gregory Trafton
    ABSTRACT: In this paper, we describe a large-scale (over 4000 participants) observational field study at a public venue, designed to explore how social a robot needs to be for people to engage with it. In this study we examined a prediction of Computers Are Social Actors (CASA) framework: the more machines present human-like characteristics in a consistent manner, the more likely they are to invoke a social response. Our humanoid robot's behavior varied in the amount of social cues, from no active social cues to increasing levels of social cues during story-telling to human-like game-playing interaction. We found several strong aspects of support for CASA: the robot that provides even minimal social cues (speech) is more engaging than a robot that does nothing, and the more human-like the robot behaved during story-telling, the more social engagement was observed. However, contrary to the prediction, the robot's game-playing did not elicit more engagement than other, less social behaviors.
    Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction; 03/2014
  • ABSTRACT: Crandall and Cummings & Mitchell introduced fan-out as a measure of the maximum number of robots a single human operator can supervise in a given single-human-multiple-robot system. Fan-out is based on the time constraints imposed by limitations of the robots and of the supervisor, e.g., limitations in attention. Adapting their work, we introduced a dynamic model of operator overload that predicts failures in supervisory control in real time, based on fluctuations in time constraints and in the supervisor's allocation of attention, as assessed by eye fixations. Operator overload was assessed by damage incurred by unmanned aerial vehicles when they traversed hazard areas. The model generalized well to variants of the baseline task. We then incorporated the model into the system, where it predicted in real time when an operator would fail to prevent vehicle damage and alerted the operator to the threat at those times. These model-based adaptive cues reduced the damage rate by one-half relative to a control condition with no cues.
    IEEE Transactions on Human-Machine Systems 01/2014; 44(1):30-40.
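The entry above describes the dynamic model of operator overload only at a high level. As a purely illustrative sketch (not the published model), the Python below shows one way a real-time monitor could combine each vehicle's remaining time before a hazard with recent gaze allocation to decide when to alert the operator; the class names, attention window, and slack threshold are all assumptions.

```python
# Hypothetical sketch of a real-time operator-overload monitor in the spirit of
# the dynamic model described above: it combines the time each vehicle has left
# before reaching a hazard with whether the operator has recently fixated that
# vehicle, and raises an alert when the predicted slack is too small.
# All names and thresholds are illustrative assumptions, not the published model.
from dataclasses import dataclass
import time


@dataclass
class Vehicle:
    vehicle_id: str
    seconds_to_hazard: float        # time until the vehicle enters a hazard area
    expected_service_time: float    # time the operator needs to re-route it


class OverloadMonitor:
    def __init__(self, attention_window_s: float = 5.0, slack_threshold_s: float = 2.0):
        self.attention_window_s = attention_window_s
        self.slack_threshold_s = slack_threshold_s
        self._last_fixation: dict[str, float] = {}

    def record_fixation(self, vehicle_id: str) -> None:
        """Called by the eye tracker whenever the operator fixates a vehicle."""
        self._last_fixation[vehicle_id] = time.monotonic()

    def attended_recently(self, vehicle_id: str) -> bool:
        last = self._last_fixation.get(vehicle_id)
        return last is not None and (time.monotonic() - last) <= self.attention_window_s

    def vehicles_at_risk(self, vehicles: list[Vehicle]) -> list[str]:
        """Return ids of vehicles predicted to incur damage unless the operator is cued."""
        at_risk = []
        for v in vehicles:
            slack = v.seconds_to_hazard - v.expected_service_time
            if slack < self.slack_threshold_s and not self.attended_recently(v.vehicle_id):
                at_risk.append(v.vehicle_id)
        return at_risk
```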
  • Wallace Lawson, J Gregory Trafton, Eric Martinson
    ABSTRACT: Complexion plays a remarkably important role in recognition. Experiments with human subjects have shown that complexion provides as much distinctiveness as other well-known features such as the shape of the face. From the perspective of an autonomous robot, changes in lighting (e.g., intensity, orientation) and camera parameters (e.g., white balance) can make capturing complexion challenging. In this paper, we evaluate complexion as a soft biometric using color (histograms) and texture (local binary patterns). We train a linear SVM to distinguish between the individual and impostors. We demonstrate the performance of this approach on a database of over 200 individuals collected to study biometrics in human-robot interaction. In our experiment, we identify 9 individuals that interact with the robot on a regular basis, rejecting all others as unknown.
    IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS); 09/2013
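As a rough illustration of the feature-plus-classifier pipeline the abstract above describes (color histograms, local binary patterns, and a linear SVM), here is a minimal Python sketch using scikit-image and scikit-learn; the bin counts, LBP parameters, and training setup are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of complexion as a soft biometric: per-channel color histograms
# plus a uniform-LBP texture histogram, fed to a linear SVM that separates one
# individual from impostors. Parameter values are illustrative assumptions.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC


def complexion_features(patch_rgb: np.ndarray, color_bins: int = 16,
                        lbp_points: int = 8, lbp_radius: int = 1) -> np.ndarray:
    """Concatenate per-channel color histograms with a uniform-LBP histogram."""
    color_hists = []
    for c in range(3):  # R, G, B channels of an HxWx3 uint8 patch
        hist, _ = np.histogram(patch_rgb[..., c], bins=color_bins, range=(0, 255), density=True)
        color_hists.append(hist)
    gray = rgb2gray(patch_rgb)
    lbp = local_binary_pattern(gray, P=lbp_points, R=lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2, range=(0, lbp_points + 2), density=True)
    return np.concatenate(color_hists + [lbp_hist])


def train_individual_vs_impostors(target_patches, impostor_patches):
    """Fit a linear SVM scoring whether a patch belongs to the target individual."""
    X = np.array([complexion_features(p) for p in list(target_patches) + list(impostor_patches)])
    y = np.array([1] * len(target_patches) + [0] * len(impostor_patches))
    return LinearSVC().fit(X, y)
```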
  • ABSTRACT: Adaptive automation (AA) can improve performance while addressing the problems associated with a fully automated system. The best way to invoke AA is unclear, but two ways include critical events and the operator's state. A hybrid model of AA invocation that takes into account critical events and the operator's state, the dynamic model of operator overload (DMOO), was recently shown to improve performance. The DMOO initiates AA using critical events and attention allocation, informed by eye movements. We compared the DMOO with an inaccurate automation invocation system and a system that invoked AA based only on critical events. Fewer errors were made with DMOO than with the inaccurate system. In the critical event condition, where automation was invoked at an earlier point in time, there were more memory and planning errors, while for the DMOO condition, which invoked automation at a later point in time, there were more perceptual errors. These findings provide a framework for reducing specific types of errors through different automation invocation.
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 04/2013
  • Wallace Lawson, J Gregory Trafton
    ABSTRACT: Object recognition is a practical problem with a wide variety of potential applications. Recognition becomes substantially more difficult when objects have not been presented in some logical, "posed" manner selected by a human observer. We propose to solve this problem using active object recognition, where the same object is viewed from multiple viewpoints when it is necessary to gain confidence in the classification decision. We demonstrate the effect of unposed objects on a state-of-the-art approach to object recognition, then show how an active approach can increase accuracy. The active approach works by attaching confidence to recognition, prompting further inspection when confidence is low. We demonstrate a performance increase on a wide variety of objects from the RGB-D database, showing a significant increase in recognition accuracy.
    International Conference on Computer Vision Theory and Applications; 02/2013
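To make the active-recognition loop in the abstract above concrete, here is a hedged Python sketch under the assumption that a base classifier returns per-class probabilities for a single view: when the top probability falls below a confidence threshold, another viewpoint is requested and the posteriors are fused by averaging. The callable names, fusion rule, and threshold are illustrative, not the paper's method.

```python
# Sketch of an active object-recognition loop: keep acquiring viewpoints until
# the fused classification is confident enough or a view budget is exhausted.
import numpy as np


def active_recognize(get_next_view, classify_view, max_views: int = 5,
                     confidence_threshold: float = 0.8):
    """
    get_next_view: callable returning the next image of the object (e.g., after
    moving the camera). classify_view: callable mapping an image to a vector of
    class probabilities. Returns (predicted_class_index, confidence, views_used).
    """
    fused = None
    for views_used in range(1, max_views + 1):
        probs = np.asarray(classify_view(get_next_view()), dtype=float)
        # Running mean of the per-view posteriors.
        fused = probs if fused is None else (fused * (views_used - 1) + probs) / views_used
        confidence = float(fused.max())
        if confidence >= confidence_threshold:
            break
    return int(np.argmax(fused)), confidence, views_used
```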
  • Erik M Altmann, J Gregory Trafton, David Z Hambrick
    ABSTRACT: We investigated the effect of short interruptions on performance of a task that required participants to maintain their place in a sequence of steps, each with their own performance requirements. Interruptions averaging 4.4 s long tripled the rate of sequence errors on post-interruption trials relative to baseline trials. Interruptions averaging 2.8 s long (about the time to perform a step in the interrupted task) doubled the rate of sequence errors. Nonsequence errors showed no interruption effects, suggesting that global attentional processes were not disrupted. Response latencies showed smaller interruption effects than sequence errors, a difference we interpret in terms of high levels of interference generated by the primary task. The results are consistent with an account in which activation spreading from the focus of attention allows control processes to navigate task-relevant representations and in which momentary interruptions are disruptive because they shift the focus and thereby cut off the flow. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
    Journal of Experimental Psychology: General 01/2013 · 3.99 Impact Factor
  • E. Martinson, W. Lawson, J.G. Trafton
    ABSTRACT: Person identification is a fundamental robotic capability for long-term interactions with people. It is important to know with whom the robot is interacting for social reasons, as well as to remember user preferences and interaction histories. There exist, however, a number of different features by which people can be identified. This work describes three alternative soft biometrics (clothing, complexion, and height) that can be learned in real time and utilized by a humanoid robot in a social setting for person identification. The use of these biometrics is then evaluated as part of a novel experiment in robotic person identification carried out at Fleet Week, New York City, in May 2012. In this experiment, Octavia employed soft biometrics to discriminate between groups of 3 people. A total of 202 volunteers interacted with Octavia as part of the study, from multiple locations in a challenging environment.
    2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI); 01/2013
  • ABSTRACT: Spatial arrangement of information can have large effects on problem solving. Although such effects have been observed in various domains (e.g., instruction and interface designs), little is known about the cognitive processing mechanisms underlying these effects or their applicability to complex visual problem solving. In three experiments, we showed that the impact of spatial arrangement of information on problem solving time can be surprisingly large for complex real world tasks. It was also found that the effect can be caused by large increases in slow, external information searches (Experiment 1), that the spatial arrangement itself is the critical factor and the effect is domain-general (Experiment 2a), and that the underlying mechanism can involve micro-strategy selection for information encoding in response to differing information access cost (Experiment 2b). Overall, these studies show a large slowdown effect (i.e., approximately 30%) that stacking information produces over spatially distributed information, and multiple paths by which this effect can be produced.
    International Journal of Human-Computer Studies 11/2012; 70(11):812–827. · 1.42 Impact Factor
  • J.G. Trafton, A. Jacobs, A.M. Harrison
    ABSTRACT: We built and evaluated a predictive model for resuming after an interruption. Two different experiments were run. The first experiment showed that people used a transactive memory process, relying on another person to keep track of where they were after being interrupted while retelling a story. A memory for goals model was built using the ACT-R/E cognitive architecture that matched the cognitive and behavioral aspects of the experiment. In a second experiment, the memory for goals model was put on an embodied robot that listened to a story being told. When the human storyteller attempted to resume the story after an interruption, the robot used the memory for goals model to determine if the person had forgotten the last thing that was said. If the model predicted that the person was having trouble remembering the last thing said, the robot offered a suggestion on where to resume. Signal detection analyses showed that the model accurately predicted when the person needed help.
    Proceedings of the IEEE 01/2012; 100(3):648-659. · 6.91 Impact Factor
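Memory-for-goals accounts are typically built on ACT-R's base-level learning equation, in which an item's activation is the log of summed, decaying traces of its past uses. The sketch below only illustrates that idea applied to "the last thing said"; the decay rate, retrieval threshold, and the mapping onto when the robot offers a prompt are assumptions, not the model reported above.

```python
# Illustrative sketch (not the published model): ACT-R-style base-level
# activation for the last story element, with a simple threshold test standing
# in for "the storyteller probably needs a resumption cue."
import math


def base_level_activation(seconds_since_uses: list[float], decay: float = 0.5) -> float:
    """B = ln( sum_j t_j^(-d) ), summed over past uses/rehearsals of the item."""
    return math.log(sum(t ** (-decay) for t in seconds_since_uses if t > 0))


def likely_forgotten(seconds_since_uses: list[float],
                     retrieval_threshold: float = -1.0) -> bool:
    """Predict that the storyteller needs a prompt when activation falls below
    the (assumed) retrieval threshold."""
    return base_level_activation(seconds_since_uses) < retrieval_threshold


# An item rehearsed 5 s and 40 s ago stays retrievable; one mentioned once,
# 300 s (and an interruption) ago, falls below the assumed threshold.
print(likely_forgotten([5.0, 40.0]))   # False
print(likely_forgotten([300.0]))       # True
```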
  • J. Gregory Trafton, Anthony M. Harrison
    ABSTRACT: We present a spatial system called Specialized Egocentrically Coordinated Spaces embedded in an embodied cognitive architecture (ACT-R Embodied). We show how the spatial system works by modeling two different developmental findings: gaze-following and Level 1 perspective taking. The gaze-following model is based on an experiment by Corkum and Moore (1998), whereas the Level 1 visual perspective-taking model is based on an experiment by Moll and Tomasello (2006). The models run on an embodied robotic system.
    Topics in Cognitive Science 10/2011; 3(4):686-706. · 2.88 Impact Factor
  • William G. Kennedy, J. Gregory Trafton
    ABSTRACT: We have investigated actual and perceived human performance associated with a simple task involving walking and applied the developed knowledge to a human-robot interaction. Based on experiments involving walking at a “purposeful and comfortable” pace, parameters were determined for a trapezoidal model of walking: starting from standing still, accelerating to a constant pace, walking at a constant pace, and decelerating to a stop. We also collected data on humans’ evaluation of the accomplishment of a simple task involving walking: determining the transitions from having taken too short a period of time to an appropriate time and from having taken an appropriate time to having taken too long. People were found to be accurate in estimating the task duration for short tasks, but to underestimate the duration of longer tasks. This information was applied to a human-robot interaction in which a human leaves for a “moment” and the robot knows how long the task should take and how that time is evaluated by a human.
    International Journal of Social Robotics 08/2011; 3:243-252.
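As a worked example of the trapezoidal model the abstract above describes (accelerate from standstill, cruise at a constant pace, decelerate to a stop), the sketch below computes a predicted walking time from distance; the cruise speed and acceleration values are illustrative assumptions, not the parameters fitted in the paper.

```python
# Worked sketch of a trapezoidal walking-time model: symmetric acceleration and
# deceleration around a constant cruise speed. Parameter values are assumptions.
import math


def trapezoidal_walk_time(distance_m: float, cruise_speed: float = 1.3,
                          accel: float = 0.8) -> float:
    """Predicted time (s) to walk distance_m with cruise_speed in m/s and
    symmetric acceleration/deceleration of accel in m/s^2."""
    ramp_distance = cruise_speed ** 2 / accel  # distance spent speeding up + slowing down
    if distance_m <= ramp_distance:
        # Too short to reach cruise speed: the velocity profile is a triangle.
        return 2.0 * math.sqrt(distance_m / accel)
    ramp_time = 2.0 * cruise_speed / accel
    cruise_time = (distance_m - ramp_distance) / cruise_speed
    return ramp_time + cruise_time


# Example: a 10 m walk at the assumed parameters takes roughly 9.3 s.
print(round(trapezoidal_walk_time(10.0), 2))
```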
  • Melissa R Beck, Maura C Lohrenz, J Gregory Trafton
    ABSTRACT: Reports an error in "Measuring search efficiency in complex visual search tasks: Global and local clutter" by Melissa R. Beck, Maura C. Lohrenz and J. Gregory Trafton (Journal of Experimental Psychology: Applied, 2010[Sep], Vol 16[3], 238-250). The copyright for the article was incorrectly listed. The correct copyright information is provided in the erratum. (The following abstract of the original article appeared in record 2010-19027-002.) Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e., digital aeronautical charts) to examine limits of attention in visual search. Undergraduates at a large southern university searched for a target among 4, 8, or 16 distractors in charts with high, medium, or low global clutter. The target was in a high or low local-clutter region of the chart. In Experiment 1, reaction time increased as global clutter increased, particularly when the target was in a high local-clutter region. However, there was no effect of distractor set size, supporting the notion that global clutter is a better measure of attention against competition in complex visual search tasks. As a control, Experiment 2 demonstrated that increasing the number of distractors leads to a typical set size effect when there is no additional clutter (i.e., no chart). In Experiment 3, the effects of global and local clutter were minimized when the target was highly salient. When the target was nonsalient, more fixations were observed in high global clutter charts, indicating that the number of elements competing with the target for attention was also high. The results suggest design techniques that could improve pilots' search performance in aeronautical charts. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
    Journal of Experimental Psychology: Applied 06/2011; 17(2):190. · 1.75 Impact Factor
  • ABSTRACT: It is generally accepted that, with practice, people improve on most tasks. However, when tasks have multiple parts, it is not always clear what aspects of the tasks practice or training should focus on. This research explores the features that allow training to improve the ability to resume a task after an interruption, specifically focusing on task-specific versus general interruption/resumption-process mechanisms that could account for improved performance. Three experiments using multiple combinations of primary tasks and interruptions were conducted with undergraduate psychology students. The first experiment showed that for one primary and interruption task-pair, people were able to resume the primary task faster when they had previous practice with the interruption. The second experiment replicated this finding for two other sets of primary and interruption task-pairs. Finally, the third experiment showed that people were able to resume a primary task faster only when they had previous practice with that specific primary and interruption task-pair. Experience with other primary and interruption task-pairs, or practice on the primary task alone, did not facilitate resumption. This suggests that a critical component in resuming after an interruption is the relationship between two tasks. These findings are in line with a task-specific mechanism of resumption and incompatible with a general-process mechanism. These findings have practical implications for developing training programs and mitigation strategies to lessen the disruptive effects of interruptions, which plague both our personal and professional environments.
    Journal of Experimental Psychology: Applied 06/2011; 17(2):97-109. · 1.75 Impact Factor
  • James C Thompson, J Gregory Trafton, Patrick McKnight
    ABSTRACT: As technology develops, social robots and synthetic avatars might begin to play more of a role in our lives. An influential theory of the perception of synthetic agents states that as they begin to look and move in a more human-like way, they elicit profound discomfort in the observer, an effect known as the Uncanny Valley. Previous attempts to examine the existence of the Uncanny Valley have not adequately manipulated movement parameters that contribute to perceptions of humanness or eeriness. Here we parametrically manipulated three different kinematic features of two walking avatars and found that, contrary to the Uncanny Valley hypothesis, ratings of the humanness, familiarity, and eeriness of these avatars changed monotonically. Our results indicate that, when a full gradient of motion parameter changes is examined, ratings of synthetic agents by human observers do not show an Uncanny Valley.
    Perception 01/2011; 40(6):695-704. · 1.31 Impact Factor
  • Laura M. Hiatt, Anthony M. Harrison, J. Gregory Trafton
    IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011; 01/2011
  • Raj M. Ratwani, J. Gregory Trafton
    ABSTRACT: Procedural errors occur despite the user having the correct knowledge of how to perform a particular task. Previous research has mostly focused on preventing these errors by redesigning tasks to eliminate error prone steps. A different method of preventing errors, specifically postcompletion errors (e.g., forgetting to retrieve the original document from a photocopier), has been proposed by Ratwani, McCurry, and Trafton (2008), which uses theoretically motivated eye movement measures to predict when a user will make an error. The predictive value of the eye-movement-based model was examined and validated on two different tasks using a receiver-operating characteristic analysis. A real-time eye-tracking postcompletion error prediction system was then developed and tested; results demonstrate that the real-time system successfully predicts and prevents postcompletion errors before a user commits the error.
    Human–Computer Interaction 01/2011; 26(3):205-245.
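The entry above reports a receiver-operating characteristic (ROC) analysis of an eye-movement-based error predictor. The sketch below illustrates the general pattern with scikit-learn: score each trial with a gaze-derived feature, pick an operating threshold from the ROC curve, and use that threshold for a real-time warning. The feature, toy data, and threshold rule (Youden's J) are assumptions, not the published system.

```python
# Hedged sketch of an ROC-thresholded real-time warning: labels mark trials
# where a postcompletion error occurred; scores come from some gaze feature.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score


def choose_threshold(scores: np.ndarray, error_occurred: np.ndarray) -> float:
    """Pick the threshold maximizing true-positive minus false-positive rate."""
    fpr, tpr, thresholds = roc_curve(error_occurred, scores)
    return float(thresholds[np.argmax(tpr - fpr)])


def should_warn(current_score: float, threshold: float) -> bool:
    """Real-time decision: warn the user before the postcompletion step."""
    return current_score >= threshold


# Toy illustration with made-up scores (higher score = more error-prone gaze pattern).
scores = np.array([0.2, 0.4, 0.35, 0.8, 0.9, 0.7, 0.1, 0.6])
labels = np.array([0, 0, 0, 1, 1, 1, 0, 1])
print(roc_auc_score(labels, scores))     # how well the gaze feature separates trials
thr = choose_threshold(scores, labels)
print(thr, should_warn(0.85, thr))
```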
  • J. Gregory Trafton
    ABSTRACT: Interaction between two entities is a mixture of social, cognitive, and embodied qualities. We know a great deal about interaction between people, but only recently have begun exploring whether people interact with robots and avatars the same way that people interact with each other. In general, people do seem to interact with computers (and robots) the same way that people interact with each other. Most of this work has suggested that people respond to robots as social actors simply by providing relatively surface-level cues about social behavior (e.g., a female voice). Our approach has been to build high-fidelity cognitive models that match human-level data and that run on our robots. These models have focused on pure cognitive robotics with emergent interaction: the robot architecture is a cognitive architecture, and the robot deals with the environment and people the same way that people do. We have recently been working on using our cognitive models to predict what a person will do next or in the near future, and then improving the interaction. I will discuss some of the challenges and successes of using a pure cognitive approach and a hybrid robotics/cognitive approach.
    01/2011;

Publication Stats

2k Citations
66.31 Total Impact Points

Institutions

  • 2013
    • Washington DC VA Medical Center
      Washington, D.C., United States
  • 2010–2011
    • Louisiana State University
      • Department of Psychology
      Baton Rouge, LA, United States
  • 2004–2010
    • George Mason University
      • Department of Psychology
      Fairfax, Virginia, United States
  • 2007–2009
    • United States Naval Research Laboratory
      Washington, D.C., United States
  • 2002–2008
    • Michigan State University
      • Department of Psychology
      East Lansing, MI, United States
  • 2006
    • Naval Undersea Warfare Center
      Newport, Rhode Island, United States
  • 2004–2006
    • Rensselaer Polytechnic Institute
      • Department of Cognitive Science
      Troy, New York, United States
  • 1995
    • N.T.I.
      Georgia, United States