
Visual Momentum: A Concept to Improve the Cognitive Coupling of Person and Computer

Authors: David D. Woods

Abstract

Computer display system users must integrate data across successive displays. This problem of across-display processing is analogous to the question of how the visual system combines data across successive glances (fixations). Research from cognitive psychology on the latter question is used in order to formulate guidelines for the display designer. The result is a new principle of person-computer interaction, visual momentum, which captures knowledge about the mechanisms that support the identification of “relevant” data in human perception so that display system design can support an effective distribution of user attention. The negative consequences of low visual momentum on user performance are described, and display design techniques are presented to improve user across-display information extraction.
... We can think of a movement information signal with rich spatial structure as an exponential distribution and random background noise as a Gaussian distribution. This is similar in principle to the concept of visual momentum [6]: the reinforcement of relevant data to support an effective distribution of direct perceptual information, allowing an agent to extract and integrate information over time, driven by spatial parallelism. Athalye et al. [7] have observed that neural activity patterns leading to the reinforcement of environmental structure receive more frequent feedback. ...
... There is information overload. The objective of data visualization therefore is to enhance situational awareness about the patient state through intuitive, user-friendly graphical displays [109][110][111][112][113]. Graphical displays have outperformed traditional text displays in numerous studies. ...
Article
Purpose of Review To describe predictive data and workflow in the intensive care unit when managing neurologically ill patients. Recent Findings In the era of Big Data in medicine, critical care units are data-rich environments. Neurocritical care adds another layer of data, with advanced multimodal monitoring to prevent secondary brain injury from ischemia, tissue hypoxia, and a cascade of ongoing metabolic events. A step closer toward personalized medicine is the application of multimodal monitoring of cerebral hemodynamics, brain oxygenation, brain metabolism, and electrophysiologic indices, all of which have complex and dynamic interactions. These data are acquired and visualized using different tools and monitors, facing multiple challenges on the way to an optimal decision support system. Summary In this review, we highlight some of the predictive data used to diagnose, treat, and prognosticate neurologically ill patients. We describe information management in neurocritical care units, including data acquisition, wrangling, analysis, and visualization.
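The acquisition-wrangling-analysis-visualization pipeline the review describes can be sketched in miniature. The example below is a hypothetical illustration, not from the paper: the signal names, sampling rates, and values are all invented. It aligns two multimodal monitoring streams onto a common timebase so their dynamic interaction can be inspected on a single display.

```python
# Minimal wrangling sketch (hypothetical signals and rates): aligning two
# multimodal monitoring streams onto a common timebase before analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Intracranial pressure sampled every second; brain tissue O2 every 10 s.
icp = pd.Series(rng.normal(12, 2, 600),
                index=pd.date_range("2024-01-01", periods=600, freq="s"),
                name="icp_mmHg")
pbto2 = pd.Series(rng.normal(25, 4, 60),
                  index=pd.date_range("2024-01-01", periods=60, freq="10s"),
                  name="pbto2_mmHg")

# Resample both to a shared 1-minute grid so their interaction
# (e.g., ICP vs. PbtO2 correlation) can be viewed on one display.
aligned = pd.concat([icp.resample("1min").mean(),
                     pbto2.resample("1min").mean()], axis=1)
print(aligned.head())
print("ICP-PbtO2 correlation:", aligned.corr().iloc[0, 1].round(2))
```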
... We can think of a movement information signal with rich spatial structure as an exponential distribution (power or Poisson) and random background noise as a Gaussian distribution. This is similar, at least in principle, to the concept of visual momentum (Woods, 1984). Visual momentum is the reinforcement of relevant data to support an effective distribution of direct perceptual information, allowing an agent to extract and integrate information over time, and is driven by spatial parallelism. ...
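To make the excerpt's distributional claim concrete, here is a minimal sketch with entirely invented parameters: samples from an exponential "motion" process are separated from half-normal background noise by a per-sample log-likelihood ratio, which is positive on average for the structured signal.

```python
# Illustrative sketch only: structured movement modeled as exponentially
# distributed magnitudes, background noise as (folded) Gaussian, separated
# by a per-sample log-likelihood ratio. Parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
signal = rng.exponential(scale=2.0, size=1000)             # "motion" samples
noise = np.abs(rng.normal(loc=0.0, scale=1.0, size=1000))  # background noise

def log_lr(x, scale=2.0, sigma=1.0):
    """log p_exp(x) - log p_halfnorm(x): positive favors 'information'."""
    return (stats.expon.logpdf(x, scale=scale)
            - stats.halfnorm.logpdf(x, scale=sigma))

# Samples drawn from the structured process score higher on average.
print("mean LLR, signal:", log_lr(signal).mean().round(2))
print("mean LLR, noise: ", log_lr(noise).mean().round(2))
```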
Preprint
We propose a new way to quantitatively characterize information: Gibsonian Information (GI). This framework is relevant to both the study of cognitive agents and single cell systems that exhibit cognitive behaviors. GI provides a means to characterize how agents extract information from direct perceptual signals. This differs from existing information theories in two ways. The first involves an emphasis on sensory processing, engagement in collective behaviors, and the dynamic evolution of such interactions. GI is useful for understanding first-order sensory inputs in terms of agent interactions with naturalistic contexts and higher-order representations. This allows us to extend GI to cybernetic and other types of symbolic systems representations. GI also emphasizes the role of information content in the relationship between ecology and nervous systems. Along with direct sensory input and simple internal representations, statistical affordances (clustered information that is spatiotemporally dependent perceptual input) facilitate extraction of GI from the environment. As a quantitative accounting of perceptual information, GI provides a means to measure a generalized indicator of nervous system input, and can be characterized by three scenarios: disjoint distributions, contingent action, and coherent movement. All of these cases provide a means to create a differential system between both motion (information) and random noise/stasis (non-information). By applying this framework to a variety of specific contexts, including a four-channel model of multisensory embodiment, we demonstrate how GI is essential to understanding the full scope of cognitive information processing.
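The "differential system" between motion (information) and random noise/stasis (non-information) can be read as a divergence between the two distributions. A rough numerical sketch, reusing the exponential/half-normal assumption from the excerpts above rather than the authors' own formulation:

```python
# Rough sketch (distributions assumed, not from the paper): the motion/noise
# differential quantified as a Monte Carlo estimate of the KL divergence
# D(motion || noise); zero would mean no extractable information.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=100_000)  # draws from the motion model

# E_{x~motion}[log p_motion(x) - log p_noise(x)] >= 0, with equality only
# when the two distributions coincide.
kl = np.mean(stats.expon.logpdf(x, scale=2.0)
             - stats.halfnorm.logpdf(x, scale=1.0))
print(f"estimated D(motion || noise) = {kl:.2f} nats")
```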
... Similar to the physical store experience, where shoppers can walk through a store using their spatial representations of the store layout to find desired products (Titus and Everett, 1995), the structure of online navigation design is critical to effectively engaging users in identifying the right product information (Hoque and Lohse, 1999). Disorientation (Jarvenpaa and Todd, 1997; Smith, 1996) has been called the oldest and most devastating problem of web navigation (Danielson, 2003): it leaves users lost, without cues to their current location in the navigation system, making it difficult to seek the information or actions they need (Woods, 1984). This phenomenon is defined as being "lost in hyperspace" (Smith, 1996, p. 365), where users have cognitive difficulties in finding their way. ...
Article
Purpose The paper aims to investigate how a tablet's design features, namely its navigation design and visual appearance, influence users' enjoyment, concentration and control when using tablets for problem-solving, and thereafter how their core flow experiences impact their perceived performance and efficiency in problem-solving. Design/methodology/approach This study uses a field survey approach, engaging 87 participants in a decision sciences class at a large US university, who used eTextbooks and several associated educational apps (the CourseSmart app for e-notes and highlighting, a sketchbook app and a calculator app) on tablets to solve class problems. Findings This study finds that the tablet's interface design features (navigation and visual appearance) engross users in their problem-solving processes through perceived enjoyment, concentration and control. This, in turn, impacts their perceived performance and efficiency. Moreover, visual appearance plays the most significant role in arousing users' affective emotions (i.e. enjoyment), while interface navigation is crucial to engaging users' deep concentration (i.e. cognition) and control for problem-solving. Practical implications Modern tablets are used widely across sectors. Flow-oriented interface design for tablet-based problem-solving should be advocated further in order to provide more engaging and meaningful flow experiences to users. Originality/value This study shows that tablet interface design can engage users in problem-solving processes in both affective and cognitive ways. It provides valuable insights on tablet interface design for problem-solving.
Article
Objective Examine the effects of decision risk and automation transparency on the accuracy and timeliness of operator decisions, automation verification rates, and subjective workload. Background Decision aids typically benefit performance but can provide incorrect advice due to contextual factors, creating the potential for automation disuse or misuse. Decision aids can reduce an operator's manual problem evaluation, and it can also be strategic for operators to minimize verification of automated advice in order to manage workload. Method Participants assigned the optimal unmanned vehicle to complete missions. A decision aid provided advice but was not always reliable. Two levels of decision aid transparency were manipulated between participants. The risk associated with each decision was manipulated using a financial incentive scheme. Participants could use a calculator to verify automated advice; however, this incurred a financial penalty. Results For high- compared with low-risk decisions, participants were more likely to reject incorrect automated advice, were more likely to verify automation, and reported higher workload. Increased transparency did not lead to more accurate decisions and did not impact workload, but it decreased automation verification and eliminated the increased decision time associated with high decision risk. Conclusion Increased automation transparency was beneficial in that it decreased automation verification and decision time. The increased workload and automation verification for high-risk missions are not necessarily problematic given the improved rate of correctly rejecting automation advice. Application The findings have potential application to the design of interfaces that improve human-automation teaming, and to anticipating the impact of decision risk on operator behavior.
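The risk-sensitivity of verification behavior follows from simple expected-value arithmetic. The sketch below uses entirely hypothetical payoffs and a hypothetical aid reliability, not the study's incentive scheme, to show why verifying advice pays off under high decision risk despite a fixed verification penalty.

```python
# Hedged illustration (all payoffs and reliability values are invented):
# verification carries a fixed cost, so it is only worthwhile when the
# potential loss from accepting wrong advice is large.
def expected_payoff(reliability, reward, loss, verify_cost, verify):
    """Expected outcome of acting on the aid's advice.

    reliability: P(advice is correct)
    reward/loss: payoff for a correct/incorrect final decision
    verify_cost: penalty for checking advice (checking catches errors)
    """
    if verify:
        # Verification reveals bad advice, so the decision ends up correct.
        return reward - verify_cost
    return reliability * reward + (1 - reliability) * loss

for risk, loss in [("low", -5), ("high", -100)]:
    blind = expected_payoff(0.9, reward=10, loss=loss, verify_cost=3, verify=False)
    checked = expected_payoff(0.9, reward=10, loss=loss, verify_cost=3, verify=True)
    print(f"{risk}-risk: accept blindly = {blind:+.1f}, verify first = {checked:+.1f}")
```

With these numbers, blind acceptance wins for the low-risk decision (+8.5 vs. +7.0) while verification wins for the high-risk one (+7.0 vs. -1.0), mirroring the observed shift in verification rates.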
Article
Data videos, a storytelling genre that visualizes data facts with motion graphics, are gaining popularity among data journalists, non-profits, and marketers for communicating data to broad audiences. However, crafting a data video is often time-consuming and requires domain knowledge spanning data visualization, animation design, and screenwriting. Existing authoring tools usually let users edit and compose a set of templates manually, which still costs substantial human effort. To further lower the barrier to creating data videos, this work introduces a new approach, AutoClips, which can automatically generate data videos from an input sequence of data facts. We built AutoClips in two stages. First, we constructed a fact-driven clip library, mapping ten data facts to potential animated visualizations by analyzing 230 online data videos and conducting interviews. Next, we constructed an algorithm that generates data videos from data facts in three steps: selecting and identifying the optimal clip for each data fact, arranging the clips into a coherent video, and optimizing the duration of the video. The results of two user studies indicated that the data videos generated by AutoClips are comprehensible and engaging, and are of comparable quality to human-made videos.
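The three-step generation algorithm can be outlined schematically. Everything in the sketch below, from the clip library to the scoring and duration heuristics, is invented for illustration; it is not the AutoClips implementation.

```python
# Schematic sketch of a fact-to-video pipeline in three steps:
# (1) select a clip per data fact, (2) arrange clips coherently,
# (3) fit clip durations to a total video-length budget.
from dataclasses import dataclass

@dataclass
class Clip:
    fact_type: str       # e.g. "trend", "extreme", "proportion"
    style: str           # animated visualization template
    base_duration: float # seconds

CLIP_LIBRARY = [
    Clip("trend", "animated line chart", 6.0),
    Clip("extreme", "highlighted bar race", 5.0),
    Clip("proportion", "growing donut chart", 4.0),
]

def select_clip(fact_type: str) -> Clip:
    """Step 1: pick the best-matching clip for a data fact."""
    candidates = [c for c in CLIP_LIBRARY if c.fact_type == fact_type]
    return candidates[0]  # a real system would rank candidates by a score

def arrange(clips):
    """Step 2: order clips for narrative coherence (here: a fixed order)."""
    return sorted(clips, key=lambda c: c.fact_type)

def fit_duration(clips, budget=15.0):
    """Step 3: scale clip durations so the video meets a length budget."""
    total = sum(c.base_duration for c in clips)
    return [(c.style, round(c.base_duration * budget / total, 1)) for c in clips]

facts = ["trend", "proportion", "extreme"]
video = fit_duration(arrange([select_clip(f) for f in facts]))
print(video)
```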
Article
Both conflict resolution aid (CRA) and vertical situation display (VSD) systems may contribute to air traffic control (ATC) operations. However, their effectiveness still needs to be examined before they are widely adopted in ATC facilities. This study empirically examines the use of CRA and VSD, as well as the two systems' interaction, in ATC operations. CRA was found to improve conflict resolution performance by 13.7% and to lower workload by 46.4% compared with performing the task manually. The VSD also reduced the air traffic controllers' (ATCOs) workload and improved their situation awareness. Ultimately, when the first CRA failure occurred, the situation awareness supported by the VSD offset the performance decrement by 30%. The findings demonstrate that integrating VSD with CRA would benefit ATC operations, regardless of the CRA's imperfection.
Article
Methods are needed for implementing the findings of theoretical research early in the design phase and tracing them through to final designs. This paper describes one such approach to applying what is known about cognitive psychology, human factors, and development techniques to interface design. The basic technique used to provide a design framework was an adaptation of the Quality Function Deployment (QFD) house of quality. This paper describes the QFD structure and how it was adapted to provide the critical link between theoretical research findings and resulting interface design concepts. The discussion focuses on three topics: basic concepts within the house of quality, the house of quality adapted for interface design, and application to the design process. A number of benefits follow from this approach. First, it directly describes the relationship between human processing characteristics, design requirements, and design solutions. Second, it characterizes the nature of conflicts among alternative design solutions. Third, it indicates areas of potential applied research. Finally, it provides a single, hierarchical construct that carries through from initial conceptual design to final product evaluation. The benefit of this approach to interface design is that a broad spectrum of theoretical and experimental research is summarized into a manageable design tool, which may provide insights to human factors practitioners, design engineers, and subject matter experts alike.
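At its core, the adapted house of quality is a relationship matrix linking human processing characteristics to design requirements. A toy sketch with invented rows, columns, and weights, following the common QFD 9/3/1 strength convention:

```python
# Toy QFD-style relationship matrix: importance-weighted strengths yield
# an aggregate priority for each design requirement. All entries invented.
import numpy as np

characteristics = ["limited working memory", "parallel visual search"]
importance = np.array([5, 3])  # analyst-assigned importance weights

requirements = ["persistent context display", "preattentive coding", "menu map"]
# Relationship strengths: rows = characteristics, cols = requirements
# (QFD convention: 9 = strong, 3 = moderate, 1 = weak, 0 = none).
R = np.array([[9, 1, 3],
              [1, 9, 0]])

priority = importance @ R  # aggregate priority per design requirement
for req, p in sorted(zip(requirements, priority), key=lambda t: -t[1]):
    print(f"{req}: {p}")
```

The same matrix also exposes where two requirements pull against each other, which is how the approach characterizes conflicts among alternative design solutions.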
Chapter
Full-text available
Little or no direct experimental work exists on the role of attention in error detection and diagnosis. This paper therefore draws on well-established approaches to understanding human information processing to suggest the direction in which a model may lie: a model of how the need to pay attention to many sources of information, and to the information received from those sources, gives rise to difficulties for the human operator monitoring large automatic and semi-automatic systems.
Article
In previous papers, the need to depart from traditional "one sensor - one indicator" approaches in order to take full advantage of the possibilities offered by the computer was pointed out, and an approach based on a conception of the operator as a multi-mode information processor was advanced. This formulation, which categorises operator functioning into elements of skill-based, rule-based and knowledge-based behaviour, provides, among other things, a structure for generating forms of information presentation compatible with these elements, and can thus counteract certain recognized tendencies toward human malfunction that arise from insufficient or inadequate displays of information.
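One way to read this structure is as a mapping from each mode of operator functioning to a compatible presentation form. The sketch below follows the signals/signs/symbols convention associated with the skill/rule/knowledge framework; the example displays are illustrative assumptions, not the paper's proposals.

```python
# Minimal sketch: presentation form matched to the operator's processing
# mode (skill/rule/knowledge). Example displays are hypothetical.
SRK_DISPLAY_FORMS = {
    "skill-based": {
        "information_form": "signals (continuous time-space data)",
        "example_display": "analog trend of a controlled variable",
    },
    "rule-based": {
        "information_form": "signs (cues tied to stored rules)",
        "example_display": "color-coded alarm state keyed to a procedure",
    },
    "knowledge-based": {
        "information_form": "symbols (relational/functional structure)",
        "example_display": "mass-energy balance diagram of the process",
    },
}

def presentation_for(mode: str) -> str:
    """Return the presentation form suited to a given processing mode."""
    entry = SRK_DISPLAY_FORMS[mode]
    return f"{entry['information_form']} -> {entry['example_display']}"

print(presentation_for("knowledge-based"))
```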
Chapter
Two studies are presented that probed map readers' ability to isolate and process cartographic information visually. The first study describes a limited investigation of the relationship between three fixation-related variables and the informational characteristics of a map. The results indicated that map reading was typified by intelligent scanning. The second study followed from the first and describes six experiments investigating the relationship between the amount of graphic information and the accuracy of processing during a map-like visual choice task. The results of these experiments indicated that increasing the amount of graphic information beyond a basic level decreased the accuracy of response.
Article
The human/computer interface for many information retrieval systems is a hierarchical menu structure. Although this type of interface has several advantages for novice users, there is great potential for confusion and disorientation when menu structures are large and complex. This experiment demonstrated that subjects given the opportunity to study a map of the menu organization for a prototype database showed an overall improvement in information retrieval performance across time. Subjects who studied a linear index of correct menu-choice sequences showed a general decline in performance as the time between studying the index and new retrieval tasks increased. A control group was slower and made more unnecessary choices than either the map group or the index group. The results suggest that access to a pictorial representation of a menu structure facilitates the development of a workable, relatively long-lasting mental model of that structure.
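The two study aids can be contrasted directly on a toy menu hierarchy: a pictorial map renders the whole structure spatially, while a linear index flattens it into choice sequences. The menu content below is invented for illustration.

```python
# Sketch of the two study aids over one invented menu hierarchy:
# a "map" view (whole tree at once) vs. a linear index of choice sequences.
MENU = {
    "Main": {
        "Accounts": {"Balances": {}, "Transfers": {}},
        "Reports": {"Monthly": {}, "Annual": {}},
    }
}

def print_map(tree, depth=0):
    """Map view: renders the hierarchy spatially, supporting a mental model."""
    for name, children in tree.items():
        print("  " * depth + name)
        print_map(children, depth + 1)

def index_entries(tree, path=()):
    """Index view: a flat list of choice sequences leading to each leaf."""
    for name, children in tree.items():
        new_path = path + (name,)
        if children:
            yield from index_entries(children, new_path)
        else:
            yield " > ".join(new_path)

print_map(MENU)
print(list(index_entries(MENU)))
```

The map view preserves the structure's spatial layout, which is the property the experiment credits with producing a durable mental model; the index view discards it, matching the decline observed in the index group.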