J. Gregory Trafton

United States Naval Research Laboratory, Washington, D.C., United States


Publications (158) · 94.41 Total Impact

  • Sangeet S. Khemlani · Anthony M. Harrison · J. Gregory Trafton
    ABSTRACT: We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic, and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes, and can be retrieved to construct event models. Second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension of the theory that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning.
    Frontiers in Human Neuroscience 10/2015; 9. DOI:10.3389/fnhum.2015.00590 · 3.63 Impact Factor
  • D. Gartenberg · G. Gunzelmann · B. Z. Veksler · J. G. Trafton
    09/2015; 59(1):289-293. DOI:10.1177/1541931215591059
  • Robert Thomson · Aryn Pyke · Laura M Hiatt · J Gregory Trafton
    ABSTRACT: Associative learning is an important part of human cognition, and is thought to play a key role in list learning. We present here an account of associative learning that learns asymmetric item-to-item associations, strengthening or weakening associations over time with repeated exposures. This account, combined with an existing account of activation strengthening and decay, predicts the complicated results of a multi-trial free and serial recall task, including asymmetric contiguity effects that strengthen over time (Klein, Addis, & Kahana, 2005).
    Annual Conference of the Cognitive Science Society, Pasadena, CA; 07/2015
  • Sangeet Khemlani · Max Lotstein · J Gregory Trafton · P N Johnson-Laird
    ABSTRACT: We propose a theory of immediate inferences from assertions containing a single quantifier, such as: All of the artists are bakers; therefore, some of the bakers are artists. The theory is based on mental models and implemented in a computer program, mReasoner. It predicts three main levels of increasing difficulty: 1. immediate inferences in which the premise and conclusion have identical meanings, 2. those in which the initial mental model of the premise yields the correct conclusion, and 3. those in which only an alternative to the initial model establishes the correct conclusion. These levels of difficulty were corroborated for inferences to necessary conclusions (in a re-analysis of data from Newstead & Griggs, 1983), for inferences to modal conclusions, such as, it is possible that all of the bakers are artists (Experiment 1), for inferences with unorthodox quantifiers such as, most of the artists (Experiment 2), and for inferences about the consistency of pairs of quantified assertions (Experiment 3). The theory also includes three parameters in a stochastic system that predicted quantitative differences in accuracy within the three main sorts of inference.
    The Quarterly Journal of Experimental Psychology 01/2015; 68(10):1-61. DOI:10.1080/17470218.2015.1007151 · 2.13 Impact Factor
  • Erik M. Altmann · J. Gregory Trafton
    ABSTRACT: We examined effects of adding brief (1 second) lags between trials in a task designed to study errors in interrupted sequential performance. These randomly occurring lags could act as short breaks and improve performance or as short interruptions and impair performance. The lags improved placekeeping accuracy, and to interpret this effect we developed a cognitive model of placekeeping operations, which accounts for the effect in terms of the lag making memory for recent performance more distinct. Self-report data suggest that rehearsal was the dominant strategy for maintaining placekeeping information during interruptions, and we incorporate a rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an inferential basis for rejecting models that do not accommodate effects of experimental manipulations.
    International Journal of Human-Computer Studies 01/2015; 79. DOI:10.1016/j.ijhcs.2014.12.007 · 1.29 Impact Factor
  • Daniel Gartenberg · Bella Z. Veksler · Glenn Gunzelmann · J. Gregory Trafton
    ABSTRACT: Performance on tasks that require sustained attention can be impacted by various factors that include: signal duration, the use of declarative memory in the task, the frequency of critical stimuli that require a response, and the event-rate of the stimuli. A viable model of the ability to maintain vigilance ought to account for these phenomena. In this paper, we focus on one of these critical factors: signal duration. For this we use results from Baker (1963), who manipulated signal duration in a clock task where the second hand moved in a continuous swipe motion. The critical stimuli were stoppages of the hand that lasted for 200, 300, 400, 600, or 800 ms. The results provided evidence for an interaction between condition and time-on-task, where performance declined at a faster rate as the signal duration decreased. In this paper, we describe an ACT-R model that uses fatigue mechanisms from Gunzelmann et al. (2009) that were proposed to account for the impact of sleep loss on sustained attention performance. The research demonstrates how those same mechanisms can be used to understand vigilance task performance. This illustrates an important foundation for predicting and tracking vigilance decrements in applied settings, and validates a mechanism that creates a theoretical link between the vigilance decrement and sleep loss.
    10/2014; 58(1):909-913. DOI:10.1177/1541931214581191
  • ABSTRACT: Objective: This work investigated the impact of uncertainty representation on performance in a complex authentic visualization task, submarine localization. Background: Because passive sonar does not provide unique course, speed, and range information on a contact, the submarine operates under significant uncertainty. There are many algorithms designed to address this problem, but all are subject to uncertainty. The extent of this solution uncertainty can be expressed in several ways, including a table of locations (course, speed, range) or a graphical area of uncertainty. Method: To test the hypothesis that the representation of uncertainty that more closely matches the experts' preferred representation of the problem would better support performance, even for the nonexpert, performance data were collected using displays that were either stripped of the spatial or the tabular representation. Results: Performance was more accurate when uncertainty was displayed spatially. This effect was only significant for the nonexperts, for whom the spatial displays supported almost expert-like performance. This effect appears to be due to reduced mental effort. Conclusion: These results suggest that when the representation of uncertainty for this spatial task better matches the expert's preferred representation of the problem, even a nonexpert can show expert-like performance. Application: These results could apply to any domain where performance requires working with highly uncertain information.
    Human Factors: The Journal of the Human Factors and Ergonomics Society 05/2014; 56(3):509-520. DOI:10.1177/0018720813498093 · 1.69 Impact Factor
  • J. Gregory Trafton · Raj M. Ratwani
    ABSTRACT: Many interfaces have been designed to prevent or reduce errors. These interfaces may, in fact, reduce the error rate of specific error classes, but may also have unintended consequences. In this paper, we show a series of studies where a better interface did not reduce the number of errors but instead shifted errors from one error class (omissions) to another error class (perseverations). We also show that having access to progress tracking (a progress bar) does not reduce the number of errors. We propose and demonstrate a solution, a predictive error system, that reduces errors based on the error class, not on the type of interface.
  • Lilia Moshkina · Susan Trickett · J. Gregory Trafton
    ABSTRACT: In this paper, we describe a large-scale (over 4000 participants) observational field study at a public venue, designed to explore how social a robot needs to be for people to engage with it. In this study we examined a prediction of the Computers Are Social Actors (CASA) framework: the more machines present human-like characteristics in a consistent manner, the more likely they are to invoke a social response. Our humanoid robot's behavior varied in the amount of social cues, from no active social cues, to increasing levels of social cues during story-telling, to human-like game-playing interaction. We found several strong aspects of support for CASA: a robot that provides even minimal social cues (speech) is more engaging than a robot that does nothing, and the more human-like the robot behaved during story-telling, the more social engagement was observed. However, contrary to the prediction, the robot's game-playing did not elicit more engagement than other, less social behaviors.
    Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction; 03/2014
  • Paul Baxter · J. Gregory Trafton
    ABSTRACT: Developments in autonomous agents for Human-Robot Interaction (HRI), particularly social agents, are gathering pace. The typical approach to such efforts is to start with an application to a specific interaction context (problem, task, or aspect of interaction) and then try to generalise to different contexts. Alternatively however, the application of Cognitive Architectures emphasises generality across contexts in the first instance. While not the "silver-bullet" solution, this perspective has a number of advantages both in terms of the functionality of the resulting systems, and indeed in the process of applying these ideas. Centred on invited talks to present a range of perspectives, this workshop provides a forum to introduce and discuss the application (both existing and potential) of Cognitive Architectures to HRI, particularly in the social domain. Participants will gain insight into how such a consideration of Cognitive Architectures complements the development of autonomous social robots.
    Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction; 03/2014
  • L.A. Breslow · Daniel Gartenberg · J. Malcolm McCurry · J. Gregory Trafton
    ABSTRACT: Crandall and Cummings & Mitchell introduced fan-out as a measure of the maximum number of robots a single human operator can supervise in a given single-human-multiple-robot system. Fan-out is based on the time constraints imposed by limitations of the robots and of the supervisor, e.g., limitations in attention. Adapting their work, we introduced a dynamic model of operator overload that predicts failures in supervisory control in real time, based on fluctuations in time constraints and in the supervisor's allocation of attention, as assessed by eye fixations. Operator overload was assessed by damage incurred by unmanned aerial vehicles when they traversed hazard areas. The model generalized well to variants of the baseline task. We then incorporated the model into the system, where it predicted in real time when an operator would fail to prevent vehicle damage and alerted the operator to the threat at those times. These model-based adaptive cues reduced the damage rate by one-half relative to a control condition with no cues.
    IEEE Transactions on Human-Machine Systems 02/2014; 44(1):30-40. DOI:10.1109/TSMC.2013.2293317 · 1.98 Impact Factor
  • Wallace Lawson · J Gregory Trafton · Eric Martinson
    ABSTRACT: Complexion plays a remarkably important role in recognition. Experiments with human subjects have shown that complexion provides as much distinctiveness as other well-known features such as the shape of the face. From the perspective of an autonomous robot, changes in lighting (e.g., intensity, orientation) and camera parameters (e.g., white balance) can make capturing complexion challenging. In this paper, we evaluate complexion as a soft biometric using color (histograms) and texture (local binary patterns). We train a linear SVM to distinguish between the individual and impostors. We demonstrate the performance of this approach on a database of over 200 individuals collected to study biometrics in human-robot interaction. In our experiment, we identify 9 individuals that interact with the robot on a regular basis, rejecting all others as unknown.
    IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS); 09/2013
  • ABSTRACT: Adaptive automation (AA) can improve performance while addressing the problems associated with a fully automated system. The best way to invoke AA is unclear, but two candidates are critical events and the operator's state. A hybrid model of AA invocation, the dynamic model of operator overload (DMOO), that takes both critical events and the operator's state into account was recently shown to improve performance. The DMOO initiates AA using critical events and attention allocation, informed by eye movements. We compared the DMOO with an inaccurate automation invocation system and a system that invoked AA based only on critical events. Fewer errors were made with the DMOO than with the inaccurate system. In the critical event condition, where automation was invoked at an earlier point in time, there were more memory and planning errors, while in the DMOO condition, which invoked automation at a later point in time, there were more perceptual errors. These findings provide a framework for reducing specific types of errors through different automation invocation.
    Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 04/2013
  • Wallace Lawson · J Gregory Trafton
    ABSTRACT: Object recognition is a practical problem with a wide variety of potential applications. Recognition becomes substantially more difficult when objects have not been presented in some logical, "posed" manner selected by a human observer. We propose to solve this problem using active object recognition, where the same object is viewed from multiple viewpoints when it is necessary to gain confidence in the classification decision. We demonstrate the effect of unposed objects on a state-of-the-art approach to object recognition, then show how an active approach can increase accuracy. The active approach works by attaching confidence to recognition, prompting further inspection when confidence is low. We demonstrate a performance increase on a wide variety of objects from the RGB-D database, showing a significant increase in recognition accuracy.
    International Conference on Computer Vision Theory and Applications; 02/2013
  • Erik M Altmann · J Gregory Trafton · David Z Hambrick
    ABSTRACT: We investigated the effect of short interruptions on performance of a task that required participants to maintain their place in a sequence of steps, each with its own performance requirements. Interruptions averaging 4.4 s long tripled the rate of sequence errors on post-interruption trials relative to baseline trials. Interruptions averaging 2.8 s long (about the time to perform a step in the interrupted task) doubled the rate of sequence errors. Nonsequence errors showed no interruption effects, suggesting that global attentional processes were not disrupted. Response latencies showed smaller interruption effects than sequence errors, a difference we interpret in terms of high levels of interference generated by the primary task. The results are consistent with an account in which activation spreading from the focus of attention allows control processes to navigate task-relevant representations and in which momentary interruptions are disruptive because they shift the focus and thereby cut off the flow.
    Journal of Experimental Psychology General 01/2013; 143(1). DOI:10.1037/a0030986 · 5.50 Impact Factor
  • E. Martinson · W. Lawson · J.G. Trafton
    ABSTRACT: Person identification is a fundamental robotic capability for long-term interactions with people. It is important to know with whom the robot is interacting for social reasons, as well as to remember user preferences and interaction histories. There exist, however, a number of different features by which people can be identified. This work describes three alternative, soft biometrics (clothing, complexion, and height) that can be learned in real-time and utilized by a humanoid robot in a social setting for person identification. The use of these biometrics is then evaluated as part of a novel experiment in robotic person identification carried out at Fleet Week, New York City in May, 2012. In this experiment, Octavia employed soft biometrics to discriminate between groups of 3 people. 202 volunteers interacted with Octavia as part of the study, interacting with the robot from multiple locations in a challenging environment.
    2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI); 01/2013
  • Jooyoung Jang · Susan Bell Trickett · Christian D. Schunn · J. Gregory Trafton
    ABSTRACT: Spatial arrangement of information can have large effects on problem solving. Although such effects have been observed in various domains (e.g., instruction and interface designs), little is known about the cognitive processing mechanisms underlying these effects, nor their applicability to complex visual problem solving. In three experiments, we showed that the impact of spatial arrangement of information on problem solving time can be surprisingly large for complex real world tasks. It was also found that the effect can be caused by large increases in slow, external information searches (Experiment 1), that the spatial arrangement itself is the critical factor and the effect is domain-general (Experiment 2a), and that the underlying mechanism can involve micro-strategy selection for information encoding in response to differing information access cost (Experiment 2b). Overall, these studies show a large slowdown effect (i.e., approximately 30%) that stacking information produces over spatially distributed information, and multiple paths by which this effect can be produced.
    International Journal of Human-Computer Studies 11/2012; 70(11):812–827. DOI:10.1016/j.ijhcs.2012.07.003 · 1.29 Impact Factor
  • ABSTRACT: There are a variety of strategies that operators can utilize when performing a dynamic task, yet operator strategies are typically studied in well-controlled environments that prevent these strategies from interacting or competing with one another. In this study we investigated operator strategy use in a dynamic supervisory control task. We identified four possible strategies that the operator may use: scanning, opportunism, task knowledge, and memory. In order to determine the impact of time pressure on strategy use, we manipulated the speed of the vehicles. We found that as time pressure increased, operators shifted from a scanning strategy to a heuristic opportunistic strategy. We also found that when operators used task knowledge and memory they were more likely to be opportunistic.
    10/2012; 56(1):1025-1029. DOI:10.1177/1071181312561214
  • Laura M. Hiatt · Sangeet S. Khemlani · J. Gregory Trafton
    ABSTRACT: Our interest is in developing embodied cognitive systems. In the majority of work on cognitive modeling, the focus is on generating models that can perform specific tasks in order to understand specific reasoning processes. This approach has traditionally been exceptionally successful at accomplishing its goal. The approach encounters limitations, however, when the cognitive models are going to be used in an embodied way (e.g., on a robot). Namely, the models are too narrow to operate in the real world due to its unpredictability. In this paper, we argue that one key way for cognitive agents to better operate in real-world environments is to be able to identify and explain unexpected situations in the world; in other words, to perform explanatory reasoning. In this paper, we introduce a framework for explanatory reasoning that describes a way for cognitive agents to achieve this capability.
    Biologically Inspired Cognitive Architectures 07/2012; 1:23–31. DOI:10.1016/j.bica.2012.03.001
  • J. Gregory Trafton · Allison Jacobs · Anthony M. Harrison
    ABSTRACT: We built and evaluated a predictive model for resuming after an interruption. Two different experiments were run. The first experiment showed that people used a transactive memory process, relying on another person to keep track of where they were after being interrupted while retelling a story. A memory for goals model was built using the ACT-R/E cognitive architecture that matched the cognitive and behavioral aspects of the experiment. In a second experiment, the memory for goals model was put on an embodied robot that listened to a story being told. When the human storyteller attempted to resume the story after an interruption, the robot used the memory for goals model to determine if the person had forgotten the last thing that was said. If the model predicted that the person was having trouble remembering the last thing said, the robot offered a suggestion on where to resume. Signal detection analyses showed that the model accurately predicted when the person needed help.
    Proceedings of the IEEE 03/2012; 100(3):648-659. DOI:10.1109/JPROC.2011.2175149 · 4.93 Impact Factor

Publication Stats

3k Citations
94.41 Total Impact Points


  • 2002-2013
    • United States Naval Research Laboratory
      • The Navy Center for Applied Research in Artificial Intelligence
      Washington, D.C., United States
  • 2011
    • Louisiana State University
      • Department of Psychology
      Baton Rouge, LA, United States
  • 2004-2008
    • George Mason University
      • Department of Psychology
      Fairfax, Virginia, United States
  • 2006
    • NASA
      Washington, West Virginia, United States
    • Rensselaer Polytechnic Institute
      Troy, New York, United States
  • 1995
    • United States Air Force
      New York, New York, United States