Article

Designing for Ambiguity in Visual Analytics: Lessons from Risk Assessment and Prediction

Abstract

Ambiguity is pervasive in the complex sensemaking domains of risk assessment and prediction, but there remains little research on how to design visual analytics tools to accommodate it. We report on findings from a qualitative study, based on a conceptual framework of sensemaking processes, investigating how both new visual analytics designs and existing tools, primarily data tables, support the cognitive work demanded in avalanche forecasting. While both systems yielded similar analytic outcomes, we observed differences in ambiguous sensemaking and in the analytic actions each afforded. Our findings challenge conventional visualization design guidance in both perceptual and interaction design, highlighting the need for data interfaces that encourage reflection, provoke alternative interpretations, and support the inherently ambiguous nature of sensemaking in this critical application. We review how different visual and interactive forms support or impede analytic processes and introduce “gisting” as a significant yet unexplored analytic action for visual analytics research. We conclude with design implications for enabling ambiguity in visual analytics tools to scaffold sensemaking in risk assessment.

... Given potential disconnects between the assumptions of scenarios, we cannot apply classical statistical or interpolation techniques to the data, because we lose the narrative components that are core to scenarios. Therefore, applying ensemble and uncertainty visualization techniques used in previous work to scenario data is not straightforward [13,21,22,26]. ...
Conference Paper
Full-text available
Scenario studies are a technique for representing a range of possible complex decisions through time, and analyzing the impact of those decisions on future outcomes of interest. It is common to use scenarios as a way to study potential pathways towards future build-out and decarbonization of energy systems. The results of these studies are often used by diverse energy system stakeholders — such as community organizations, power system utilities, and policymakers — for decision-making using data visualization. However, the role of visualization in facilitating decision-making with energy scenario data is not well understood. In this work, we review common visualization designs employed in energy scenario studies and discuss the effectiveness of some of these techniques in facilitating different types of analysis with scenario data.
Conference Paper
Full-text available
Ambiguity, the state in which alternative interpretations are plausible or even desirable, is an inexorable part of complex sensemaking. Its challenges are compounded when analysis involves risk, is constrained, and needs to be shared with others. We report on several studies with avalanche forecasters that illuminated these challenges and identified how visualization designs can better support ambiguity. Like many complex analysis domains, avalanche forecasting relies on highly heterogeneous and incomplete data where the relevance and meaning of such data is context-sensitive, dependent on the knowledge and experiences of the observer, and mediated by the complexities of communication and collaboration. In this paper, we characterize challenges of ambiguous interpretation emerging from data, analytic processes, and collaboration and communication, and describe several management strategies for ambiguity. Our findings suggest several visual analytics design approaches that explicitly address ambiguity in complex sensemaking around risk.
Article
Full-text available
Working with data in table form is usually considered a preparatory and tedious step in the sensemaking pipeline; a way of getting the data ready for more sophisticated visualization and analytical tools. But for many people, spreadsheets – the quintessential table tool – remain a critical part of their information ecosystem, allowing them to interact with their data in ways that are hidden or abstracted in more complex tools. This is particularly true for data workers [61], people who work with data as part of their job but do not identify as professional analysts or data scientists. We report on a qualitative study of how these workers interact with and reason about their data. Our findings show that data tables serve a broader purpose beyond data cleanup at the initial stage of a linear analytic flow: users want to see and “get their hands on” the underlying data throughout the analytics process, reshaping and augmenting it to support sensemaking. They reorganize, mark up, layer on levels of detail, and spawn alternatives within the context of the base data. These direct interactions and human-readable table representations form a rich and cognitively important part of building understanding of what the data mean and what they can do with it. We argue that interactive tables are an important visualization idiom in their own right; that the direct data interaction they afford offers a fertile design space for visual analytics; and that sensemaking can be enriched by more flexible human-data interaction than is currently supported in visual analytics tools.
Book
Full-text available
Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment, machines large and small; all affect the very essence of our daily lives. However, this interaction has not always been efficient or easy and has at times turned fairly hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE illustrated by examples. Understanding the complexities and functions of the human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is a critical reference for students, designers, and engineers in a wide variety of disciplines.
Article
Full-text available
This paper discusses how epistemic uncertainties are currently considered in the most widely occurring natural hazard areas, including floods, landslides and debris flows, dam safety, droughts, earthquakes, tsunamis, volcanic ash clouds and pyroclastic flows, and wind storms. Our aim is to provide an overview of the types of epistemic uncertainty in the analysis of these natural hazards and to discuss how they have been treated so far to bring out some commonalities and differences. The breadth of our study makes it difficult to go into great detail on each aspect covered here; hence the focus lies on providing an overview and on citing key literature. We find that in current probabilistic approaches to the problem, uncertainties are all too often treated as if, at some fundamental level, they are aleatory in nature. This can be a tempting choice when knowledge of more complex structures is difficult to determine but not acknowledging the epistemic nature of many sources of uncertainty will compromise any risk analysis. We do not imply that probabilistic uncertainty estimation necessarily ignores the epistemic nature of uncertainties in natural hazards; expert elicitation for example can be set within a probabilistic framework to do just that. However, we suggest that the use of simple aleatory distributional models, common in current practice, will underestimate the potential variability in assessing hazards, consequences, and risks. A commonality across all approaches is that every analysis is necessarily conditional on the assumptions made about the nature of the sources of epistemic uncertainty. It is therefore important to record the assumptions made and to evaluate their impact on the uncertainty estimate. Additional guidelines for good practice based on this review are suggested in the companion paper (Part 2).
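The paper's central caution, that a single fitted aleatory distribution understates variability once epistemic (parameter) uncertainty is acknowledged, can be illustrated with a small Monte Carlo comparison. This is our sketch, not the paper's method; the lognormal model and the parameter spreads are invented assumptions.

```python
# A minimal sketch (not from the paper): compare a purely aleatory
# model against one that also samples over uncertain parameters.
import random
import statistics

random.seed(42)

def quantile(samples, q):
    """Empirical quantile of a list of samples."""
    s = sorted(samples)
    return s[min(int(q * len(s)), len(s) - 1)]

# Purely aleatory view: one lognormal with "best" fitted parameters.
mu, sigma = 2.0, 0.4  # hypothetical fitted values
aleatory = [random.lognormvariate(mu, sigma) for _ in range(20_000)]

# Epistemic view: the parameters themselves are uncertain, so each
# draw uses parameters perturbed within assumed plausible ranges.
epistemic = []
for _ in range(20_000):
    mu_i = random.gauss(mu, 0.15)             # assumed spread on mu
    sigma_i = abs(random.gauss(sigma, 0.08))  # assumed spread on sigma
    epistemic.append(random.lognormvariate(mu_i, sigma_i))

for label, xs in [("aleatory only", aleatory), ("with epistemic", epistemic)]:
    print(f"{label:15s} median={statistics.median(xs):6.2f} "
          f"99th pct={quantile(xs, 0.99):7.2f}")
# The 99th percentile grows once parameter uncertainty is included,
# i.e. the simple aleatory model underestimates the tail hazard.
```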
Conference Paper
Full-text available
Forecasting avalanche problems and avalanche danger is a judgmental assessment that is highly susceptible to interpretation and bias. Thus, congruency between forecasters is a significant challenge and much effort has been expended to harmonize assessment methods between different forecasters, regions, and even countries. While recent studies have helped to identify bias and inconsistency in avalanche danger ratings (Lazar et al. 2016; Techel et al. 2018), in-house feedback directly to forecasters is sometimes absent. Accurate, well-summarized feedback could provide the primary basis for avalanche forecasters to improve their judgmental forecasts. Using data from historic forecasts, this paper looks first at the consistency of how avalanche problems are applied by different agencies with adjacent regions in the Canadian Rockies. We show inconsistency in the distribution of avalanche problems published for adjacent regions with similar snowpacks, and by different forecasters within the same region. Although definitions exist for different types of avalanche problems (Statham et al. 2018), insufficient guidance for forecasters on how to apply avalanche problems consistently can lead to conflicting information and confusion for backcountry users. Next we look at the accuracy of 24, 48 and 72 hour forecasts of danger ratings when compared against real-time assessments. Drawing from 3,752 avalanche bulletins over seven seasons, we show an overall accuracy of 73%. Forecasts of Low danger are the most accurate (84%) and they become progressively less accurate as the forecast danger levels rise. We conclude by offering recommendations on the application of avalanche problems, enhanced forecaster training and encouragement for other agencies to analyze their own forecasting data. Feedback can have a pronounced effect on bias if incorporated more routinely into professional activities (Vick 2002) and with it, forecasters can become better calibrated.
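The verification the authors describe, comparing forecast danger ratings against real-time assessments, amounts to accuracy bookkeeping per forecast level. The sketch below is ours and uses invented sample pairs; in the study the pairs came from 3,752 bulletins.

```python
# A hypothetical sketch of forecast verification: overall and
# per-level accuracy of danger ratings versus observed assessments.
from collections import Counter

LEVELS = ["Low", "Moderate", "Considerable", "High", "Extreme"]

# (forecast, observed) pairs; invented for illustration only.
pairs = [
    ("Low", "Low"), ("Low", "Low"), ("Low", "Moderate"),
    ("Moderate", "Moderate"), ("Moderate", "Considerable"),
    ("Considerable", "Considerable"), ("Considerable", "High"),
    ("High", "Considerable"), ("High", "High"), ("Extreme", "High"),
]

hits = Counter()
totals = Counter()
for forecast, observed in pairs:
    totals[forecast] += 1
    if forecast == observed:
        hits[forecast] += 1

overall = sum(hits.values()) / len(pairs)
print(f"overall accuracy: {overall:.0%}")
for level in LEVELS:
    if totals[level]:
        print(f"  {level:12s} {hits[level] / totals[level]:.0%} "
              f"(n={totals[level]})")
```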
Article
Full-text available
Risk communication is challenging since scientific knowledge is likely to be targeted to the public, which may have inadequate knowledge to understand jargons and expertise in risk messages. This study aims to construct a journalistic gist extraction typology, which can be useful for developing risk messages. Journalists' lead writing was conducted for 164 governmental press releases regarding food risks, and they were compared to factual information in original press releases. seven types of gist extraction were identified: ‘exemplifying,’ ‘contextualizing,’ ‘grouping,’ ‘identifying likely victims,’ ‘emotional appeal,’ ‘separating verbatim,’ and ‘sense-making numbers.’ The typology was valid with 92% of the total leads made by nine reporters being applicable to it. The content analysis revealed that ‘exemplifying’ was the most frequent gist extraction type, followed by ‘contextualizing’ and ‘separating verbatim.’
Article
Full-text available
This conceptual model of avalanche hazard identifies the key components of avalanche hazard and structures them into a systematic, consistent workflow for hazard and risk assessments. The method is applicable to all types of avalanche forecasting operations, and the underlying principles can be applied at any scale in space or time. The concept of an avalanche problem is introduced, describing how different types of avalanche problems directly influence the assessment and management of the risk. Four sequential questions are shown to structure the assessment of avalanche hazard, namely: (1) What type of avalanche problem(s) exists? (2) Where are these problems located in the terrain? (3) How likely is it that an avalanche will occur? and (4) How big will the avalanche be? Our objective was to develop an underpinning for qualitative hazard and risk assessments and address this knowledge gap in the avalanche forecasting literature. We used judgmental decomposition to elicit the avalanche forecasting process from forecasters and then described it within a risk-based framework that is consistent with other natural hazards disciplines.
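The four sequential questions give the model a natural record structure. A minimal sketch, assuming a direct mapping of those questions onto fields; the field names and the toy aggregation are ours, the workflow order is what the model prescribes.

```python
# Sketch: one avalanche problem, structured by the four questions.
from dataclasses import dataclass

@dataclass
class AvalancheProblem:
    problem_type: str   # Q1: what type of avalanche problem exists?
    location: str       # Q2: where is it located in the terrain?
    likelihood: str     # Q3: how likely is an avalanche?
    expected_size: str  # Q4: how big will the avalanche be?

def assess_hazard(problems: list[AvalancheProblem]) -> str:
    """Toy report: answer the four questions in order, per problem."""
    return "\n".join(
        f"{p.problem_type} on {p.location}: "
        f"{p.likelihood} likelihood, size {p.expected_size}"
        for p in problems
    )

print(assess_hazard([
    AvalancheProblem("wind slab", "alpine N-E aspects", "likely", "2"),
    AvalancheProblem("persistent slab", "treeline", "possible", "3"),
]))
```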
Article
Full-text available
Interactive visual data analysis is most productive when users can focus on answering the questions they have about their data, rather than focusing on how to operate the interface to the analysis tool. One viable approach to engaging users in interactive conversations with their data is a natural language interface to visualizations. These interfaces have the potential to be both more expressive and more accessible than other interaction paradigms. We explore how principles from language pragmatics can be applied to the flow of visual analytical conversations, using natural language as an input modality. We evaluate the effectiveness of pragmatics support in our system Evizeon, and present design considerations for conversation interfaces to visual analytics tools.
Article
Full-text available
The Avalanche Danger Scale is an ordinal, five-level warning system that is a cornerstone of public avalanche information. The system was developed in Europe in 1993, and introduced to North America in 1994. Although both Canada and the United States adopted the system, different descriptors of the danger levels were developed in each country. Fifteen years of practical use revealed numerous deficiencies in this danger scale, most notably a lack of clarity during low probability/high consequence avalanche conditions. In 2005, a group of Canadian and American avalanche forecasters and researchers began to revise the system, with the goal of improving clarity and developing a single standard for North America. Initial explorations to define the problem resulted in more questions and uncovered an almost complete absence of formal underpinnings for the danger scale. The magnitude of the project subsequently changed, and in 2007 the project objectives were clarified as: 1) definitions of avalanche hazard, danger and risk; 2) methodology for assessing avalanche danger; and 3) revisions to the danger scale as a public communication tool. This paper concentrates on the third and final objective, and describes the methods and results of producing the North American Public Avalanche Danger Scale. Emphasis is placed on best practice in warning system design and the principles of risk communication, which helped reshape the avalanche danger scale into a more effective communication tool. The revised danger scale will be implemented across Canada and the United States for the 2010/11 season.
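Because the scale is ordinal, level comparisons should respect its ordering. A minimal sketch using the five public level names of the North American scale; the enum itself is our illustration, not an official API.

```python
# The five ordinal danger levels as an IntEnum, so that comparisons
# reflect the scale's ordering rather than string order.
from enum import IntEnum

class DangerRating(IntEnum):
    LOW = 1
    MODERATE = 2
    CONSIDERABLE = 3
    HIGH = 4
    EXTREME = 5

today = DangerRating.CONSIDERABLE
print(today.name, int(today))       # CONSIDERABLE 3
print(today >= DangerRating.HIGH)   # False: ordinal comparison works
```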
Conference Paper
Full-text available
This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data, such as direct volume rendering, have difficulty depicting multivariate or multi-field data, and many techniques for non-spatial data, such as parallel coordinates, are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.
Article
Full-text available
Anticipatory thinking is a critical macrocognitive function of individuals and teams. It is the ability to prepare in time for problems and opportunities. We distinguish it from prediction because anticipatory thinking is functional—people are preparing themselves for future events, not simply predicting what might happen. And it is aimed at potential events, including low-probability, high-threat events, not simply the most predictable events. Anticipatory thinking includes active attention management—focusing attention on likely sources of critical information. We distinguish three types of anticipatory thinking: pattern matching to react to individual cues, trajectory tracking to react to trends, and a conditional form of anticipatory thinking in which people react to the implications of combinations of events. We discuss some individual and team-level barriers and suggest some ways to enhance anticipatory thinking.
Article
The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.
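One way to picture "recording a data hunch" is as a structured annotation layered on top of the raw values rather than overwriting them. The sketch below is our invention for illustration; the Hunch type and its field names are hypothetical, not the paper's design.

```python
# A hypothetical data-hunch record attached to a dataset, keeping the
# analyst's knowledge separate from the underlying values.
from dataclasses import dataclass, field

@dataclass
class Hunch:
    target: str                    # which record/field the hunch is about
    author: str                    # whose knowledge this externalizes
    note: str                      # the hunch, in the analyst's words
    suggested_value: float | None = None  # optional correction

@dataclass
class AnnotatedDataset:
    values: dict[str, float]
    hunches: list[Hunch] = field(default_factory=list)

ds = AnnotatedDataset(values={"region_A_cases": 120.0})
ds.hunches.append(Hunch(
    target="region_A_cases",
    author="analyst-1",
    note="Reporting lag: week 52 counts are usually ~20% low.",
    suggested_value=150.0,
))
for h in ds.hunches:
    print(f"[{h.author}] {h.target}: {h.note}")
```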
Chapter
In the final stage of visual perception, patterns are resolved into objects and scenes, and links are formed between the visual objects and the rich semantics accumulated from our knowledge of the world. There are two main theories of object perception, a structural theory which holds that objects are structurally analyzed into their component parts and an image-based theory which holds that object recognition is largely based on generalizable two-dimensional patterns. We argue that both theories are probably correct. The concepts of visual working memory and verbal working memory are introduced. These temporary stores hold the stuff of ongoing thought processes and their limited capacity is the reason why visualizations can be such powerful cognitive tools, since they can act as memory extensions. The working memories link information about objects in the external world with the rich semantics of language, ongoing thought processes and nascent predictive action plans. Objects and images also have the potential to elicit strong emotional responses and this can be important in motivating people to action. Using identifiable objects in a visualization can result in greater memorability, but this may come at the cost of bias. The value of novelty in visual imagery is discussed for its potential to attract and hold attention.
Article
While we know that the visualization of quantifiable uncertainty impacts the confidence in insights, little is known about whether the same is true for uncertainty that originates from aspects so inherent to the data that they can only be accounted for qualitatively. Being embedded within an archaeological project, we realized how assessing such qualitative uncertainty is crucial in gaining a holistic and accurate understanding of regional spatio-temporal patterns of human settlements over millennia. We therefore investigated the impact of visualizing qualitative implicit errors on the sense-making process via a probe that deliberately represented three distinct implicit errors, i.e., differing collection methods, subjectivity of data interpretations and assumptions on temporal continuity. By analyzing the interactions of 14 archaeologists with different levels of domain expertise, we discovered that novices became more actively aware of typically overlooked data issues and domain experts became more confident of the visualization itself. We observed how participants quoted social factors to alleviate some uncertainty, while in order to minimize it they requested additional contextual breadth or depth of the data. While our visualization did not alleviate all uncertainty, we recognized how it sparked reflective meta-insights regarding methodological directions of the data. We believe our findings inform future visualizations on how to handle the complexity of implicit errors for a range of user typologies and for highly data-critical application domains such as the digital humanities.
Conference Paper
Ambiguity, an information state where multiple interpretations are plausible, is a common challenge in visual analytics (VA) systems. We discuss lessons learned from a case study designing VA tools for Canadian avalanche forecasters. Avalanche forecasting is a complex and collaborative risk-based decision-making and analysis domain, demanding experience and knowledge-based interpretation of human-reported and uncertain data. Differences in reporting practices, organizational contexts, and the particularities of individual reports result in a variety of potential interpretations that have to be negotiated as part of the forecaster's sensemaking processes. We describe our preliminary research using glyphs to support sensemaking under ambiguity. Ambiguity is not unique to public avalanche forecasting. There are many other domains where the ways data are measured and reported vary in ways not explicitly accounted for in the data and require analysts to negotiate multiple potential meanings. We argue that ambiguity is under-served by visualization research and would benefit from more explicit VA support.
Article
Data workers are people who perform data analysis activities as a part of their daily work but do not formally identify as data scientists. They come from various domains and often need to explore diverse sets of hypotheses and theories, a variety of data sources, algorithms, methods, tools, and visual designs. Taken together, we call these alternatives. To better understand and characterize the role of alternatives in their analyses, we conducted semi-structured interviews with 12 data workers with different types of expertise. We conducted four types of analyses to understand 1) why data workers explore alternatives; 2) the different notions of alternatives and how they fit into the sensemaking process; 3) the high-level processes around alternatives; and 4) their strategies to generate, explore, and manage those alternatives. We find that participants' diverse levels of domain and computational expertise, experience with different tools, and collaboration within their broader context play an important role in how they explore these alternatives. These findings call out the need for more attention towards a deeper understanding of alternatives and the need for better tools to facilitate the exploration, interpretation, and management of alternatives. Drawing upon these analyses and findings, we present a framework based on participants' 1) degree of attention, 2) abstraction level, and 3) analytic processes. We show how this framework can help understand how data workers consider such alternatives in their analyses and how tool designers might create tools to better support them.
Chapter
The core of the workshop would have to be an example application that called for cognitive systems engineering. We considered several different possibilities. One option was to design a kitchen. Another option was to redesign a global positioning system device for helping rental car customers navigate in unfamiliar cities. We initially selected the kitchen design exercise as one that would allow participants to immerse themselves in the design problem without any prior need for familiarization with the problem domain. We anticipated that workshop participants would be able to link kitchen design issues to the workshop exercises that were to be introduced through the remainder of the day.
Article
This paper presents a framework for externalizing and analyzing expert knowledge about discrepancies in data through the use of visualization. Grounded in an 18-month design study with global health experts, the framework formalizes the notion of data discrepancies as implicit error, both in global health data and more broadly. We use the term implicit error to describe measurement error that is inherent to and pervasive throughout a dataset, but that isn't explicitly accounted for or defined. Instead, implicit error exists in the minds of experts, is mainly qualitative, and is accounted for subjectively during expert interpretation of the data. Externalizing knowledge surrounding implicit error can assist in synchronizing, validating, and enhancing interpretation, and can inform error analysis and mitigation. The framework consists of a description of implicit error components that are important for downstream analysis, along with a process model for externalizing and analyzing implicit error using visualization. As a second contribution, we provide a rich, reflective, and verifiable description of our research process as an exemplar summary toward the ongoing inquiry into ways of increasing the validity and transferability of design study research.
Article
To complement the currently existing definitions and conceptual frameworks of visual analytics, which focus mainly on activities performed by analysts and types of techniques they use, we attempt to define the expected results of these activities. We argue that the main goal of doing visual analytics is to build a mental and/or formal model of a certain piece of reality reflected in data. The purpose of the model may be to understand, to forecast or to control this piece of reality. Based on this model-building perspective, we propose a detailed conceptual framework in which the visual analytics process is considered as a goal-oriented workflow producing a model as a result. We demonstrate how this framework can be used for performing an analytical survey of the visual analytics research field and identifying the directions and areas where further research is needed.
Book
A detailed study of research on the psychology of expertise in weather forecasting, drawing on findings in cognitive science, meteorology, and computer science. This book argues that the human cognition system is the least understood, yet probably most important, component of forecasting accuracy. Minding the Weather investigates how people acquire massive and highly organized knowledge and develop the reasoning skills and strategies that enable them to achieve the highest levels of performance. The authors consider such topics as the forecasting workplace; atmospheric scientists' descriptions of their reasoning strategies; the nature of expertise; forecaster knowledge, perceptual skills, and reasoning; and expert systems designed to imitate forecaster reasoning. Drawing on research in cognitive science, meteorology, and computer science, the authors argue that forecasting involves an interdependence of humans and technologies. Human expertise will always be necessary.
Article
Conventional avalanche forecasting is practiced as a mix of deterministic treatment for snow and weather parameters and inductive logic to reach actual forecast decisions. Inductive logic of the scientific method dominates, making frequent use of iteration and redundancy to minimize decision uncertainties. The mental processes involved are holistic rather than analytical. Elementary information theory can be used rationally to sort data categories for minimum entropy and optimize inductive reasoning. Recognizing these principles affords a chance to improve the practice and teaching of conventional forecasting techniques.
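One plausible reading of "sort data categories for minimum entropy" is to compute the Shannon entropy of each category's observed value distribution and attend first to the data that most constrain the decision. The sketch below is ours; the categories and counts are invented for illustration.

```python
# Rank data categories by Shannon entropy: skewed (informative)
# distributions first, uniform (uninformative) ones last.
from math import log2

def shannon_entropy(counts: list[int]) -> float:
    """H = -sum(p * log2(p)) over the empirical distribution."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

observations = {
    "recent avalanches": [18, 2],       # strongly skewed: low entropy
    "snowpack tests":    [9, 6, 5],     # mixed results: higher entropy
    "wind speed class":  [5, 5, 5, 5],  # uniform: maximum entropy
}

ranked = sorted(observations, key=lambda k: shannon_entropy(observations[k]))
for name in ranked:
    print(f"{name:18s} H = {shannon_entropy(observations[name]):.2f} bits")
```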
Article
We investigate whether the notion of active reading for text might be usefully applied to visualizations. Through a qualitative study we explored whether people apply observable active reading techniques when reading paper-based node-link visualizations. Participants used a range of physical actions while reading, and from these we synthesized an initial set of active reading techniques for visualizations. To learn more about the potential impact such techniques may have on visualization reading, we implemented support for one type of physical action from our observations (making freeform marks) in an interactive node-link visualization. Results from our quantitative study of this implementation show that interactive support for active reading techniques can improve the accuracy of performing low-level visualization tasks. Together, our studies suggest that the active reading space is ripe for research exploration within visualization and can lead to new interactions that make for a more flexible and effective visualization reading experience.
Conference Paper
Uncertainty plays an important and complex role in data analysis, where the goal is to find pertinent patterns, build robust models, and support decision making. While these endeavours are often associated with professional data scientists, many domain experts engage in such activities with varying skill levels. To understand how these domain experts (or "data workers") analyse uncertain data, we conducted a qualitative user study with 12 participants from a variety of domains. In this paper, we describe their various coping strategies to understand, minimise, exploit, or even ignore this uncertainty. The choice of the coping strategy is influenced by accepted domain practices, but appears to depend on the types and sources of uncertainty and whether participants have access to support tools. Based on these findings, we propose a new process model of how data workers analyse various types of uncertain data and conclude with design considerations for uncertainty-aware data analytics.
Conference Paper
" Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets. These widgets allow the user to resolve ambiguities by surfacing system decisions at the point where the ambiguity matters. Corrections are stored as constraints and influence subsequent queries. We have implemented these ideas in a system, DataTone. In a comparative study, we find that DataTone is easy to learn and lets users ask questions without worrying about syntax and proper question form.
Article
Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
Article
User tasks play a pivotal role in visualization design and evaluation. However, the term ‘task’ is used ambiguously within the visualization community. In this article, we critically analyze the relevant literature and systematically compare definitions of ‘task’ and the usage of related terminology. In doing so, we identify a three-dimensional conceptual space of user tasks in visualization, referred to as the task cube, and the more precise concepts ‘objective’ and ‘action’ for tasks. We illustrate the usage of the task cube’s dimensions in an objective-driven visualization process, in different scenarios of visualization design and evaluation, and for comparing categorizations of abstract tasks. Thus, visualization researchers can better formulate their contributions which helps advance visualization as a whole.
Article
This paper discusses and clarifies the meaning of ambiguity in risk assessment, identifies sources and manifestations of ambiguity in risk assessment, and outlines a procedure for approaching ambiguity in risk-informed decision-making. Existing definitions of ambiguity are reviewed and argued to be of limited relevance for engineering risk assessment. A new overall definition of ambiguity as a challenge to risk-informed decision-making is proposed, and linguistic, contextual, and normative ambiguity are defined as distinct categories of ambiguity. Three tables identify sources and manifestations of ambiguity in preassessment, risk analysis, and risk evaluation. The tables provide the basis for a new procedure for identifying and resolving ambiguity in an analytic-deliberative approach to risk-informed decision-making.
Chapter
Novel graphical and direct-manipulation approaches to query formulation and information visualization are now possible. A useful starting point for designing advanced graphical user interfaces is the Visual Information-Seeking Mantra: first overview, followed by zoom and filter, and then details-on-demand. This chapter offers a task by data type taxonomy with seven data types (1D, 2D, 3D data, temporal data, multi-dimensional data, tree data, and network data) and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extracts). The success of direct-manipulation interfaces is indicative of the power of using computers in a more visual or graphical manner. Visual displays become even more attractive to provide orientation or context, to enable selection of regions, and to provide dynamic feedback for identifying changes (for example, a weather map). Scientific visualization has the power to make atomic, cosmic, and common 3D phenomena (for example, heat conduction in engines, airflow over wings, or ozone holes) visible and comprehensible. In the visual representation of data, users can scan, recognize, and recall images rapidly and can detect changes in size, color, shape, movement, or texture. They can point to a single pixel, even in a megapixel display, and can drag one object to another to perform an action. Novel information-exploration tools—such as dynamic queries, treemaps, fisheye views, parallel coordinates, starfields, and perspective walls—are a few of the inventions that will have to be validated.
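The mantra's overview, zoom, filter, details-on-demand sequence reads naturally as a query pipeline. A toy sketch under that reading; the records and helper functions are illustrative only, not the chapter's code.

```python
# The Visual Information-Seeking Mantra as a pipeline over records.
records = [
    {"id": i, "x": i % 10, "y": (i * 7) % 10, "label": f"item-{i}"}
    for i in range(100)
]

def overview(rs):
    print(f"overview: {len(rs)} items, "
          f"x in [{min(r['x'] for r in rs)}, {max(r['x'] for r in rs)}]")

def zoom(rs, x_lo, x_hi):
    return [r for r in rs if x_lo <= r["x"] <= x_hi]

def filter_by(rs, pred):
    return [r for r in rs if pred(r)]

def details(r):
    return ", ".join(f"{k}={v}" for k, v in r.items())

overview(records)                               # 1. overview first
zoomed = zoom(records, 3, 5)                    # 2. zoom to a region
hits = filter_by(zoomed, lambda r: r["y"] > 6)  # 3. then filter
print(details(hits[0]))                         # 4. details on demand
```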
Article
Problem detection is the process by which people first become concerned that events may be taking an unacceptable direction that may require action. Despite its importance, there is surprisingly little empirical or theoretical literature about the cognitive aspects of problem detection. Drawing on previous cognitive task analysis accounts, 52 incidents involving problem detection were selected. Additional interviews were conducted with wildland firefighters and with surgeons. A description of problem detection was developed that emphasizes the role of expertise in detecting and interpreting the significance of subtle cues, as opposed to passively accumulating deviations from expectancies. Further, problem detection is seen as a process of re-conceptualizing the nature of the situation.
Article
The considerable previous work characterizing visualization usage has focused on low-level tasks or interactions and high-level tasks, leaving a gap between them that is not addressed. This gap leads to a lack of distinction between the ends and means of a task, limiting the potential for rigorous analysis. We contribute a multi-level typology of visualization tasks to address this gap, distinguishing why and how a visualization task is performed, as well as what the task inputs and outputs are. Our typology allows complex tasks to be expressed as sequences of interdependent simpler tasks, resulting in concise and flexible descriptions for tasks of varying complexity and scope. It provides abstract rather than domain-specific descriptions of tasks, so that useful comparisons can be made between visualization systems targeted at different application domains. This descriptive power supports a level of analysis required for the generation of new designs, by guiding the translation of domain-specific problems into abstract tasks, and for the qualitative evaluation of visualization usage. We demonstrate the benefits of our approach in a detailed case study, comparing task descriptions from our typology to those derived from related work. We also discuss the similarities and differences between our typology and over two dozen extant classification systems and theoretical frameworks from the literatures of visualization, human-computer interaction, information retrieval, communications, and cartography.
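The why/how/what distinction, and the idea of composing complex tasks from interdependent simpler ones, can be shown with a small record type. The field names paraphrase the typology; the pipeline content is an invented example, not one from the paper.

```python
# A task described by why it is performed, how, and what it consumes
# and produces; complex tasks become sequences of simpler ones.
from dataclasses import dataclass

@dataclass
class Task:
    why: str            # e.g. "discover", "lookup", "compare"
    how: str            # e.g. "browse", "select", "arrange"
    inputs: list[str]   # what the task consumes
    outputs: list[str]  # what the task produces

# The output of each step feeds the input of the next.
pipeline = [
    Task("discover", "browse", ["all bulletins"], ["suspect region"]),
    Task("lookup", "select", ["suspect region"], ["danger ratings"]),
    Task("compare", "arrange", ["danger ratings"], ["trend summary"]),
]
for i, t in enumerate(pipeline, 1):
    print(f"step {i}: why={t.why}, how={t.how}, "
          f"{t.inputs} -> {t.outputs}")
```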
Article
The subject of graphical methods for data analysis and for data presentation needs a scientific foundation. In this article we take a few steps in the direction of establishing such a foundation. Our approach is based on graphical perception—the visual decoding of information encoded on graphs—and it includes both theory and experimentation to test the theory. The theory deals with a small but important piece of the whole process of graphical perception. The first part is an identification of a set of elementary perceptual tasks that are carried out when people extract quantitative information from graphs. The second part is an ordering of the tasks on the basis of how accurately people perform them. Elements of the theory are tested by experimentation in which subjects record their judgments of the quantitative information on graphs. The experiments validate these elements but also suggest that the set of elementary tasks should be expanded. The theory provides a guideline for graph construction: Graphs should employ elementary tasks as high in the ordering as possible. This principle is applied to a variety of graphs, including bar charts, divided bar charts, pie charts, and statistical maps with shading. The conclusion is that radical surgery on these popular graphs is needed, and as replacements we offer alternative graphical forms—dot charts, dot charts with grouping, and framed-rectangle charts.
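The paper's ordering of elementary perceptual tasks by decoding accuracy can be written down directly and used to compare candidate encodings, which is exactly the guideline it derives (prefer tasks high in the ordering). The ranking follows Cleveland and McGill; the helper function is our illustration.

```python
# Elementary perceptual tasks, from most to least accurately decoded.
PERCEPTUAL_RANKING = [
    "position along a common scale",
    "position along nonaligned scales",
    "length / direction / angle",
    "area",
    "volume / curvature",
    "shading / color saturation",
]

def more_accurate(encoding_a: str, encoding_b: str) -> str:
    """Return the encoding readers decode more accurately."""
    ia = PERCEPTUAL_RANKING.index(encoding_a)
    ib = PERCEPTUAL_RANKING.index(encoding_b)
    return encoding_a if ia <= ib else encoding_b

# Dot charts beat shaded statistical maps on this criterion:
print(more_accurate("position along a common scale",
                    "shading / color saturation"))
```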
Article
This paper (Part II) constitutes the second of a two-part series to define the seven elements of avalanche forecasting. Part I contains the first four elements, which are needed to present the human issues. This paper contains the last three elements, which deal mostly with the physical issues and their use in the decision-making process. Some basic rules of applied avalanche forecasting are included here, for the first time, to illustrate physically based principles which are used in applied avalanche forecasting and their link to data analysis and decisions. Since the seven elements of applied avalanche forecasting are strongly connected, the reader should consult Part I (this journal issue) as a prelude to the present paper. Part II contains sections about data and information, scale issues in time and space, decision making and errors, and physical rules of applied forecasting. Since all seven elements of applied avalanche forecasting are connected, Part II does not stand alone.
Article
Avalanche forecasting has traditionally been defined from the perspective of a geophysical problem with respect to the state of stability of the snow cover. In this two-part treatise, avalanche forecasting is described in a broader sense by dividing it into seven inter-connected elements: I. definition; II. goal; III. human factors and perception; IV. reasoning process; V. information types and informational entropy; VI. scales in space and time; and VII. decision-making. Part I (this paper) contains the first four elements, which are mostly about the human issues, and Part II (the following paper) contains the last three elements, which are mostly about the physical issues, and some basic rules of applied avalanche forecasting. A principal thesis is that all seven elements must be mastered for optimal avalanche forecasting. In addition to the seven elements, the connection to avalanche forecasting as an exercise in risk analysis is made. Inherent in the argument is that avalanche forecasting is a dynamic problem dealing with variations and interaction of a human (avalanche forecaster) and a natural system (temporal and spatially varying state of instability of the snow cover). The primary result of the two papers is a first attempt to formally integrate human influences with a new interpretation of the geophysical problem. Since most avalanche accidents result from human errors, no description of avalanche forecasting is complete unless the human component is addressed.