ABSTRACT: We describe a novel computational theory of how individuals segment perceptual information into representations of events. The theory is inspired by recent findings in the cognitive science and cognitive neuroscience of event segmentation. In line with recent theories, it holds that online event segmentation is automatic and that event segmentation yields mental simulations of events. But it posits two novel principles as well: first, discrete episodic markers track perceptual and conceptual changes and can be retrieved to construct event models; second, the process of retrieving and reconstructing those episodic markers is constrained and prioritized. We describe a computational implementation of the theory, as well as a robotic extension that demonstrates the processes of online event segmentation and event model construction. The theory is the first unified computational account of event segmentation and temporal inference. We conclude by demonstrating how neuroimaging data can constrain and inspire the construction of process-level theories of human reasoning.
Article · Oct 2015 · Frontiers in Human Neuroscience
ABSTRACT: We describe a vigilance experiment with a successive task and a simultaneous task. Successive tasks require comparing the current stimulus on the screen to a representation in memory (i.e., making a declarative memory retrieval), whereas simultaneous tasks require making a comparative judgment based on information that is available on the screen. When the data were analyzed with conventional methods, there was an effect of time-on-task (i.e., block), an effect of task type, and an interaction between block and task type. These findings were consistent with previously reported studies of the successive/simultaneous distinction, which interpret such results as evidence that the decrement is more severe for successive tasks. However, different results and conclusions emerge when more appropriate analyses are applied: treating block as an interval variable rather than a categorical one, and using detection of critical signals rather than A′ as the dependent variable. With these analyses, there was no effect of task type and no interaction with time on task. This raises questions about many findings in the literature, especially those regarding the successive/simultaneous distinction.
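The analytic contrast above can be made concrete with a small sketch (purely illustrative; the data and learning parameters are invented, and the paper's actual analyses were inferential, not this toy fit). Coding block as an interval variable fits a single trend parameter, whereas a categorical coding estimates an independent mean per block:

```python
# Hypothetical illustration of the two codings of time-on-task block.
def linear_trend(blocks, scores):
    """Interval coding: least-squares slope of scores on block number."""
    n = len(blocks)
    mx = sum(blocks) / n
    my = sum(scores) / n
    num = sum((x - mx) * (y - my) for x, y in zip(blocks, scores))
    den = sum((x - mx) ** 2 for x in blocks)
    return num / den

def block_means(blocks, scores):
    """Categorical coding: one free mean per block."""
    groups = {}
    for b, s in zip(blocks, scores):
        groups.setdefault(b, []).append(s)
    return {b: sum(v) / len(v) for b, v in groups.items()}

# Simulated detection rates declining over four blocks (invented data)
blocks = [1, 1, 2, 2, 3, 3, 4, 4]
hits = [0.90, 0.88, 0.85, 0.83, 0.80, 0.78, 0.74, 0.72]
slope = linear_trend(blocks, hits)   # one-parameter decrement
means = block_means(blocks, hits)    # four-parameter alternative
```

The interval coding spends one degree of freedom on the decrement, which is why it can yield different (and often more sensitive) conclusions than block-by-block contrasts.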
ABSTRACT: Interruptions are disruptive in that they can decrease accuracy and increase the time taken to complete a task. In fields such as aviation and medicine, interruptions can not only reduce performance but also lead to egregious outcomes. In such situations, confidence in whether a procedure has been completed may become a crucial aspect of judging where to resume a task. This paper demonstrates that interruptions both decrease accuracy and reduce confidence. More importantly, interruptions change the relationship between accuracy and confidence, reducing the likelihood that participants can determine where to resume appropriately.
ABSTRACT: We discuss a computational process model of action selection in routine procedures. The model explains several types of human error—omissions, perseverations, and postcompletion errors (PCE)—as natural consequences of its action selection mechanisms. Those mechanisms include associative spreading activation for prospective memory and explicit rehearsal strategies for retrospective memory. The model fits empirical data from multiple tasks and from multiple labs.
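A minimal sketch of the spreading-activation idea (not the authors' implementation; step names, base levels, and association strengths are invented for illustration): each candidate step receives a base-level activation plus associative spread from cues in the current context, and the most active step is selected. An omission-like failure falls out naturally when no step clears a retrieval threshold:

```python
# Illustrative sketch of activation-based step selection.
def activation(base, associations, context):
    # Base-level activation plus spread from each active context cue.
    return base + sum(associations.get(cue, 0.0) for cue in context)

def select_step(steps, context, threshold=0.0):
    scored = {name: activation(base, assoc, context)
              for name, (base, assoc) in steps.items()}
    best = max(scored, key=scored.get)
    # If nothing is active enough, no step is retrieved (an omission).
    return best if scored[best] >= threshold else None

# Hypothetical steps: (base activation, {cue: association strength})
steps = {
    "confirm": (0.2, {"goal_done": 0.6}),
    "cleanup": (0.1, {"goal_done": 0.9}),  # postcompletion-style step
}
chosen = select_step(steps, context={"goal_done"})
```

In a fuller model, noise on these activations is what turns near-threshold steps into occasional omissions or perseverations.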
ABSTRACT: Associative learning is an important part of human cognition and is thought to play a key role in list learning. We present here an account of associative learning that learns asymmetric item-to-item associations, strengthening or weakening associations over time with repeated exposures. This account, combined with an existing account of activation strengthening and decay, predicts the complicated results of a multi-trial free and serial recall task, including asymmetric contiguity effects that strengthen over time (Klein, Addis, & Kahana, 2005).
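The asymmetry can be sketched in a few lines (a hedged illustration, not the paper's model; the learning rates are arbitrary): when item j follows item i at study, the forward link i→j is strengthened more than the backward link j→i, so repeated trials produce a growing forward advantage, qualitatively like the asymmetric contiguity effect described above:

```python
# Toy asymmetric item-to-item association learning.
def study_list(items, assoc, fwd_rate=0.10, bwd_rate=0.05):
    for i, j in zip(items, items[1:]):
        assoc[(i, j)] = assoc.get((i, j), 0.0) + fwd_rate  # forward link
        assoc[(j, i)] = assoc.get((j, i), 0.0) + bwd_rate  # backward link
    return assoc

assoc = {}
for trial in range(3):            # repeated exposures strengthen links
    study_list(["cat", "dog", "fox"], assoc)

forward = assoc[("cat", "dog")]   # stronger after each trial
backward = assoc[("dog", "cat")]  # grows, but more slowly
```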
ABSTRACT: We propose a theory of immediate inferences from assertions containing a single quantifier, such as: All of the artists are bakers; therefore, some of the bakers are artists. The theory is based on mental models and implemented in a computer program, mReasoner. It predicts three main levels of increasing difficulty: 1. immediate inferences in which the premise and conclusion have identical meanings; 2. those in which the initial mental model of the premise yields the correct conclusion; and 3. those in which only an alternative to the initial model establishes the correct conclusion. These levels of difficulty were corroborated for inferences to necessary conclusions (in a re-analysis of data from Newstead & Griggs, 1983), for inferences to modal conclusions, such as it is possible that all of the bakers are artists (Experiment 1), for inferences with unorthodox quantifiers, such as most of the artists (Experiment 2), and for inferences about the consistency of pairs of quantified assertions (Experiment 3). The theory also includes three parameters in a stochastic system that predicted quantitative differences in accuracy within the three main sorts of inference.
Article · Jan 2015 · Quarterly Journal of Experimental Psychology (2006)
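The model-based account above can be illustrated with a toy sketch (this is not mReasoner's code; the representation of a model as a list of property sets is an assumption made for the example): a mental model is a small set of individuals, the premise "All of the artists are bakers" is built as an initial model, and the immediate inference "some of the bakers are artists" is read directly off that model, which is what makes it a level-2 inference:

```python
# Toy mental-model evaluation of quantified assertions.
def all_a_are_b(model, a, b):
    # Every individual with property a also has property b.
    return all(b in ind for ind in model if a in ind)

def some_b_are_a(model, a, b):
    # At least one individual with property b also has property a.
    return any(a in ind for ind in model if b in ind)

# Initial mental model of "All of the artists are bakers":
# two artist-bakers plus one baker who is not an artist.
model = [{"artist", "baker"}, {"artist", "baker"}, {"baker"}]

premise_holds = all_a_are_b(model, "artist", "baker")
conclusion = some_b_are_a(model, "artist", "baker")
```

Level-3 inferences would require searching for an alternative model that the initial one does not supply, which is the theory's explanation for their greater difficulty.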
ABSTRACT: We examined effects of adding brief (1 second) lags between trials in a task designed to study errors in interrupted sequential performance. These randomly occurring lags could act as short breaks and improve performance or as short interruptions and impair performance. The lags improved placekeeping accuracy, and to interpret this effect we developed a cognitive model of placekeeping operations, which accounts for the effect in terms of the lag making memory for recent performance more distinct. Self-report data suggest that rehearsal was the dominant strategy for maintaining placekeeping information during interruptions, and we incorporate a rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an inferential basis for rejecting models that do not accommodate effects of experimental manipulations.
Article · Jan 2015 · International Journal of Human-Computer Studies
ABSTRACT: Performance on tasks that require sustained attention can be affected by various factors, including signal duration, the use of declarative memory in the task, the frequency of critical stimuli that require a response, and the event rate of the stimuli. A viable model of the ability to maintain vigilance ought to account for these phenomena. In this paper, we focus on one of these critical factors: signal duration. For this we use results from Baker (1963), who manipulated signal duration in a clock task in which the second hand moved in a continuous sweep. The critical stimuli were stoppages of the hand that lasted for 200, 300, 400, 600, or 800 ms. The results provided evidence for an interaction between condition and time-on-task, with performance declining at a faster rate as signal duration decreased. We describe an ACT-R model that uses fatigue mechanisms from Gunzelmann et al. (2009), originally proposed to account for the impact of sleep loss on sustained-attention performance. The research demonstrates how those same mechanisms can be used to understand vigilance task performance. This provides an important foundation for predicting and tracking vigilance decrements in applied settings, and validates a mechanism that creates a theoretical link between the vigilance decrement and sleep loss.
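A rough intuition for why shorter signals suffer more can be sketched as follows (a loose illustration, not the ACT-R fatigue model; the 100 ms attention window, baseline probability, and decay rate are all invented): if the chance of attending during any brief window declines with time on task, a long signal spans many windows and so remains detectable, while a short signal offers few chances to catch it:

```python
# Hedged toy model: detection probability for a signal of a given
# duration after some time on task. All parameters are illustrative.
def p_detect(signal_ms, minutes_on_task, p0=0.35, decay=0.02):
    # Probability of attending during one 100 ms window, declining
    # with time on task but floored at a small residual level.
    p_attend = max(0.05, p0 - decay * minutes_on_task)
    windows = signal_ms // 100
    # Detected if attended during at least one window of the signal.
    return 1 - (1 - p_attend) ** windows

early_short = p_detect(200, 5)
late_short = p_detect(200, 30)
early_long = p_detect(800, 5)
late_long = p_detect(800, 30)
```

The actual model instead degrades production utility with fatigue; this sketch only conveys why signal duration and time-on-task should interact at all.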
ABSTRACT: This paper discusses the first experiment in a series designed to systematically understand the different characteristics of an automated system that lead to trust in automation. We also discuss a simple process model, which helps us understand the results. Our experimental paradigm suggests that participants are agnostic to the automation’s behavior; instead, they merely focus on alarm rate. A process model suggests this is the result of a simple reward structure and a non-explicit cost of trusting the automation.
ABSTRACT: Mitigating the effects of interruptions is important for tackling the increasing number of possible disruptions at home, at work, and online. Previous work has shown that the benefits of practice can decrease the amount of time it takes to resume a task after an interruption. This paper demonstrates that the same benefit can be extended to error rates at the post-completion step in a simulated computerized physician order entry (CPOE) system. This example of a real-world procedural task demonstrates that a general increase in interruptions leads to changes in performance for the final step of the task.
ABSTRACT: Contextual information can greatly improve both the speed and accuracy of object recognition. Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich, dynamic context provided by computational cognitive models can improve object recognition. We demonstrate the use of cognitive context to improve recognition using a small database of objects.
ABSTRACT: Objective: This work investigated the impact of uncertainty representation on performance in a complex, authentic visualization task: submarine localization.
Background: Because passive sonar does not provide unique course, speed, and range information on a contact, the submarine operates under significant uncertainty. There are many algorithms designed to address this problem, but all are subject to uncertainty. The extent of this solution uncertainty can be expressed in several ways, including a table of locations (course, speed, range) or a graphical area of uncertainty.
Method: To test the hypothesis that a representation of uncertainty that more closely matches the experts’ preferred representation of the problem would better support performance, even for the nonexpert, performance data were collected using displays that were stripped of either the spatial or the tabular representation.
Results: Performance was more accurate when uncertainty was displayed spatially. This effect was significant only for the nonexperts, for whom the spatial displays supported almost expert-like performance. The effect appears to be due to reduced mental effort.
Conclusion: These results suggest that when the representation of uncertainty for this spatial task better matches the experts’ preferred representation of the problem, even a nonexpert can show expert-like performance.
Application: These results could apply to any domain where performance requires working with highly uncertain information.
Article · May 2014 · Human Factors: The Journal of the Human Factors and Ergonomics Society
ABSTRACT: Many interfaces have been designed to prevent or reduce errors. These interfaces may, in fact, reduce the rate of specific error classes, but may also have unintended consequences. In this paper, we show a series of studies in which a better interface did not reduce the number of errors but instead shifted errors from one error class (omissions) to another (perseverations). We also show that having access to progress tracking (a progress bar) does not reduce the number of errors. We propose and demonstrate a solution, a predictive error system, that reduces errors based on the error class, not on the type of interface.
ABSTRACT: In this paper, we describe a large-scale (over 4000 participants) observational field study at a public venue, designed to explore how social a robot needs to be for people to engage with it. In this study we examined a prediction of the Computers Are Social Actors (CASA) framework: the more machines present human-like characteristics in a consistent manner, the more likely they are to invoke a social response. Our humanoid robot's behavior varied in the number of social cues, from no active social cues, to increasing levels of social cues during story-telling, to human-like game-playing interaction. We found strong support for CASA in several respects: a robot that provides even minimal social cues (speech) is more engaging than a robot that does nothing, and the more human-like the robot behaved during story-telling, the more social engagement was observed. However, contrary to the prediction, the robot's game-playing did not elicit more engagement than other, less social behaviors.
ABSTRACT: Developments in autonomous agents for Human-Robot Interaction (HRI), particularly social interaction, are gathering pace. The typical approach to such efforts is to start with an application to a specific interaction context (problem, task, or aspect of interaction) and then try to generalise to different contexts. The application of Cognitive Architectures, by contrast, emphasises generality across contexts in the first instance. While not a "silver-bullet" solution, this perspective has a number of advantages, both in terms of the functionality of the resulting systems and in the process of applying these ideas. Centred on invited talks presenting a range of perspectives, this workshop provides a forum to introduce and discuss the application (both existing and potential) of Cognitive Architectures to HRI, particularly in the social domain. Participants will gain insight into how such a consideration of Cognitive Architectures complements the development of autonomous social robots.
ABSTRACT: Crandall and Cummings & Mitchell introduced fan-out as a measure of the maximum number of robots a single human operator can supervise in a given single-human-multiple-robot system. Fan-out is based on the time constraints imposed by limitations of the robots and of the supervisor, e.g., limitations in attention. Adapting their work, we introduced a dynamic model of operator overload that predicts failures in supervisory control in real time, based on fluctuations in time constraints and in the supervisor's allocation of attention, as assessed by eye fixations. Operator overload was assessed by the damage incurred by unmanned aerial vehicles when they traversed hazard areas. The model generalized well to variants of the baseline task. We then incorporated the model into the system, where it predicted in real time when an operator would fail to prevent vehicle damage and alerted the operator to the threat at those times. These model-based adaptive cues reduced the damage rate by one-half relative to a control condition with no cues.
Article · Feb 2014 · IEEE Transactions on Human-Machine Systems
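One common formulation of the fan-out measure mentioned above (hedged: formulations vary across the cited authors, and the example figures are invented) expresses it in terms of neglect time, how long a robot performs acceptably while unattended, and interaction time, how long the operator must spend servicing one robot:

```python
# Fan-out as commonly formulated: FO = NT / IT + 1.
def fan_out(neglect_time, interaction_time):
    # While one robot is being serviced (IT), each other robot must
    # coast on its neglect time (NT); the +1 counts the serviced robot.
    return neglect_time / interaction_time + 1

# Hypothetical example: 60 s of acceptable unattended operation,
# 15 s of servicing per robot.
fo = fan_out(60, 15)
```

The dynamic model described in the abstract effectively treats these time constraints as fluctuating quantities rather than constants, which is what lets it predict overload in real time.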
ABSTRACT: Complexion plays a remarkably important role in recognition. Experiments with human subjects have shown that complexion provides as much distinctiveness as other well-known features, such as the shape of the face. From the perspective of an autonomous robot, changes in lighting (e.g., intensity, orientation) and camera parameters (e.g., white balance) can make capturing complexion challenging. In this paper, we evaluate complexion as a soft biometric using color (histograms) and texture (local binary patterns). We train a linear SVM to distinguish between the individual and impostors. We demonstrate the performance of this approach on a database of over 200 individuals collected to study biometrics in human-robot interaction. In our experiment, we identify 9 individuals that interact with the robot on a regular basis, rejecting all others as unknown.
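The color-histogram half of that feature can be sketched in a few lines (an illustration only; the bin count is arbitrary, and the local binary patterns and the SVM classifier are omitted): quantize each RGB channel into coarse bins, count pixels per bin, and normalize so the descriptor is comparable across image regions of different sizes:

```python
# Minimal per-channel color histogram over (r, g, b) pixel tuples.
def color_histogram(pixels, bins=4):
    hist = [0] * (bins * 3)        # r bins, then g bins, then b bins
    step = 256 // bins             # width of each intensity bin
    for r, g, b in pixels:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    n = len(pixels)
    return [c / n for c in hist]   # normalize by pixel count

# Two skin-tone-like pixels (invented values)
h = color_histogram([(200, 160, 130), (210, 170, 140)])
```

Descriptors like this would then be concatenated with a texture descriptor and fed to the classifier.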
ABSTRACT: Adaptive automation (AA) can improve performance while addressing the problems associated with a fully automated system. The best way to invoke AA is unclear, but two options are critical events and the operator's state. A hybrid model of AA invocation, the dynamic model of operator overload (DMOO), which takes both critical events and the operator's state into account, was recently shown to improve performance. The DMOO initiates AA using critical events and attention allocation, informed by eye movements. We compared the DMOO with an inaccurate automation invocation system and a system that invoked AA based only on critical events. Fewer errors were made with the DMOO than with the inaccurate system. In the critical-event condition, where automation was invoked at an earlier point in time, there were more memory and planning errors, while in the DMOO condition, which invoked automation at a later point in time, there were more perceptual errors. These findings provide a framework for reducing specific types of errors through different automation invocation.
ABSTRACT: Person identification is a fundamental robotic capability for long-term interactions with people. It is important to know with whom the robot is interacting for social reasons, as well as to remember user preferences and interaction histories. There exist, however, a number of different features by which people can be identified. This work describes three alternative soft biometrics (clothing, complexion, and height) that can be learned in real time and utilized by a humanoid robot in a social setting for person identification. The use of these biometrics is then evaluated as part of a novel experiment in robotic person identification carried out at Fleet Week, New York City, in May 2012. In this experiment, the robot Octavia employed soft biometrics to discriminate between groups of 3 people. A total of 202 volunteers interacted with Octavia as part of the study, from multiple locations in a challenging environment.
ABSTRACT: Object recognition is a practical problem with a wide variety of potential applications. Recognition becomes substantially more difficult when objects have not been presented in some logical, "posed" manner selected by a human observer. We propose to solve this problem using active object recognition, in which the same object is viewed from multiple viewpoints when necessary to gain confidence in the classification decision. We demonstrate the effect of unposed objects on a state-of-the-art approach to object recognition, then show how an active approach can increase accuracy. The active approach works by attaching confidence to recognition, prompting further inspection when confidence is low. We demonstrate a significant increase in recognition accuracy on a wide variety of objects from the RGB-D database.
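The confidence-driven loop described above can be sketched as follows (a hedged illustration, not the paper's system; the score fusion by summation, the threshold, and the per-view scores are all assumptions made for the example): keep requesting new viewpoints while normalized confidence in the leading class is low, accumulating per-class evidence across views:

```python
# Toy active-recognition loop over a stream of per-view class scores.
def active_recognize(view_scores, threshold=0.7, max_views=5):
    """view_scores: iterable of per-view {label: score} dicts."""
    totals = {}
    result = None
    for n_views, scores in enumerate(view_scores, start=1):
        for label, s in scores.items():
            totals[label] = totals.get(label, 0.0) + s
        z = sum(totals.values())
        best = max(totals, key=totals.get)
        result = (best, totals[best] / z, n_views)
        # Stop once the leading class is confident enough,
        # or the view budget is exhausted.
        if totals[best] / z >= threshold or n_views >= max_views:
            break
    return result

views = [
    {"mug": 0.5, "bowl": 0.5},   # ambiguous first view
    {"mug": 0.9, "bowl": 0.1},   # second view resolves it
]
label, conf, used = active_recognize(views)
```

A real system would replace the summed scores with a classifier's calibrated probabilities, but the control structure (inspect further only when confidence is low) is the same.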