Usability of interactive and non-interactive
visualisation of uncertain geospatial information
Lydia E. Gerharz1 & Edzer J. Pebesma1
1Institute for Geoinformatics, University of Münster
Lydia.Gerharz@uni-muenster.de
Abstract. Showing uncertainties of geospatial data in maps in a way that is
useful and comprehensible for both skilled and unskilled users remains an
unsolved problem. To evaluate the usability of some commonly used
visualisation techniques, with a special focus on interactivity, an explorative
study was conducted with ten interviewees with a geosciences background.
Each participant was asked to solve tasks and give personal opinions on three
methods applied to the same data set. Overall, uncertainty information was
considered helpful for decision making. The results also show a clear
preference for the simplest method, displaying value and uncertainty in
adjacent maps, whereas the more sophisticated Aguila visualisation system
was judged helpful for expert users. The majority of the users preferred
interactive over static methods.
1 INTRODUCTION
Geospatial data is collected and processed to represent and describe real-world
characteristics. This representation is naturally restricted by computational
power, model approximations, and limits to measurement availability and
accuracy. The input data and each modelling step are sources of errors that
propagate through the processing chain to the final results. As we base
planning and assessment decisions on those results, the reliability of the data
must be included to allow meaningful reasoning. Showing not only what we
know, but also to what degree we do not know it, can be realised for instance
by giving interval estimates instead of single point estimates for the results of
an analysis. We limit ourselves here to the uncertainty of attributes, and will
not address positional uncertainty.
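As a minimal illustration of the difference, assuming Gaussian errors, a point estimate with a standard error can be turned into an interval estimate; the concentration and error figures below are hypothetical, not taken from the study data:

```python
from statistics import NormalDist

def prediction_interval(estimate, std_error, level=0.95):
    # Two-sided z-score, e.g. about 1.96 for a 95 % interval
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return estimate - z * std_error, estimate + z * std_error

# A hypothetical kriged PM10 value of 32 ug/m3 with standard error 5:
lo, hi = prediction_interval(32.0, 5.0)
print(f"point estimate 32.0, interval [{lo:.1f}, {hi:.1f}]")  # [22.2, 41.8]
```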
1.1 Visualising spatial uncertainties
For geospatial information, data reliability can be included in map
representations. This raises the problem of mapping two dimensions, the value
and its uncertainty, in one spatial representation while avoiding visual and
cognitive overload of the user. To address this challenge, different types of
presentation techniques have emerged (MacEachren 1992), such as:
- Adjacent map pairs, displaying results and their uncertainty (e.g. standard
  error) separately
- Sequential presentation of results and uncertainty
- Bi-variate maps, merging results and uncertainty in a single map
Additionally, different visualisation modes can be distinguished, namely:
- Static, as one or more static maps, e.g. printed on paper
- Dynamic, e.g. automatic animation of realisations on a computer screen
- Interactive, where the user interacts with the uncertainty representation
From combinations of these presentation techniques and modes, numerous
methods have been designed for varying purposes and user groups.
Metaphors, for instance blurring uncertain regions as if they were out of
focus, are commonly applied to qualitative uncertainty information, making use
of the user's intuitive perception of uncertainty. Transparent overlays and
pixel-mixing methods can be used for static bi-variate maps, and blinking
pixels that highlight certain regions are an example of dynamic representations.
Generally, it is assumed that static methods are easier to comprehend, especially
for non-experts, whereas interactive methods offer control over the amount of
information shown, which can be useful for understanding the structure of the
data. It is also hypothesised that bi-variate maps are less useful than adjacent
maps, because bi-variate maps probably contain too much information for the user.
Although many different methods exist for representing uncertainty in geospatial
information, not all of them seem equally helpful. Which representation method
is most helpful also depends on the user's knowledge. Tversky and Kahneman
(1974) warn that users with different levels of experience use different criteria
for their decisions; novice users, for instance, often base their decisions on
heuristics rather than on statistical uncertainties.
1.2 Usability testing
However, only a few investigations have been conducted to test the performance
and acceptance of different uncertainty mapping techniques for a use case
(MacEachren et al. 2005). Furthermore, the findings of these studies vary
significantly regarding which method is most useful, depending on the purpose
and design of the study. Interactive methods are usually realised by toggling
between overlaid value and uncertainty maps. Aerts et al. (2003) found this to
be a helpful method compared to a static representation of adjacent maps.
Evans (1997), in contrast, found toggling to be less helpful than other static
and dynamic methods. However, most studies agree that uncertainty
information is helpful rather than confusing for the user if it is presented in a
useful way.
To verify the hypotheses mentioned above, we decided to perform a usability
study with a small set of test persons. The aim was to compare the usefulness for
decision support and user acceptance of three different uncertainty representation
methods with a focus on interactive vs. static methods. We hypothesise that
interactive uncertainty representation methods take more time to learn but are
more useful for quantification and decision-making tasks. The influence of user
experience on performance was not analysed in this study, as the number of
participants was too low.
2 STUDY DESIGN & METHODOLOGY
For the study, ten participants with backgrounds in geography or geoinformatics,
all non-experts in statistics, were interviewed for approximately 30 minutes. All
three visualisation methods were presented to them in different order on a
monitor while all answers and comments were recorded. The data set used for the
testing was a residual kriging analysis for annual PM10 concentration over
Europe in 2005, using the kriging standard deviation as uncertainty estimate. The
study had two main parts, one on task performance and one on user opinion.
2.1 Visualisation techniques
To cover interactive and non-interactive methods as well as bi-variate and
separate maps, we chose three different methods. First, adjacent maps of the
value and the uncertainty were used as a simple and commonly used static
visualisation method. The legend for the value map showed concentrations,
whereas the legend for the uncertainty map only ranged from low to high. For
the bi-variate map type, a method called Whitening (Hengl & Toomanian 2006)
was applied. Whitening uses the Hue-Saturation-Intensity colour model to
combine value and uncertainty in one map. Uncertain values are displayed with
reduced colour saturation, yielding paler regions that move out of the user's
focus. This resulted in a two-dimensional legend, ranging from high to low
concentrations and from 40 to 100 % normalised error.
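The whitening principle can be sketched in a few lines. This is a simplified illustration using Python's standard colorsys HSV model as a stand-in for the HSI model of Hengl & Toomanian, with an assumed blue-to-red hue ramp and uncertainty normalised to the range 0 to 1:

```python
import colorsys

def whitened_rgb(value, uncertainty, v_min, v_max):
    """Encode the value as hue and the uncertainty as reduced saturation."""
    frac = (value - v_min) / (v_max - v_min)
    hue = (1.0 - frac) * 2.0 / 3.0     # 2/3 = blue (low) ... 0 = red (high)
    saturation = 1.0 - uncertainty     # uncertain pixels fade towards white
    return colorsys.hsv_to_rgb(hue, saturation, 1.0)

# The same high value, once certain and once highly uncertain:
print(whitened_rgb(50, 0.0, 0, 50))  # vivid red (1.0, 0.0, 0.0)
print(whitened_rgb(50, 0.8, 0, 50))  # pale pink (1.0, 0.8, 0.8)
```

The pale, desaturated colour is what lets uncertain regions drop out of the user's focus.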
As a third, interactive method we used Aguila (Pebesma et al. 2007), an
interactive visualisation tool that stores value and uncertainty as cumulative
probability functions for each pixel in space and time. Fig. 1 shows Aguila with
the interactively linked windows of the map, the cursor & value window, and
the cumulative probability function. The map shows the threshold values
associated with a certain probability, here 0.7. The user controls the threshold
probability of the values shown in the map by moving the line control in the
probability distribution function window. By moving the cursor in the map
window, the cumulative probability distribution function and concentration
values of each pixel can be visualised. Alternatively, Aguila can display the
probability of exceeding (POE) a certain value in the map. The control in the
cumulative probability function window then changes to a vertical line, and the
user can control the threshold value for the POE map.
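Under the simplifying assumption that each pixel's distribution is Gaussian with the kriged mean and standard deviation (Aguila itself stores general cumulative probability functions), the two display modes can be sketched per pixel as follows; the 45 ug/m3 mean and sd of 5 are hypothetical values:

```python
from statistics import NormalDist

def quantile_map(mean, sd, p):
    """Value not exceeded with probability p -- the map shown for a chosen
    threshold probability such as 0.7."""
    return NormalDist(mean, sd).inv_cdf(p)

def poe_map(mean, sd, threshold):
    """Probability of exceeding a fixed threshold at this pixel."""
    return 1.0 - NormalDist(mean, sd).cdf(threshold)

# A hypothetical pixel with kriged mean 45 ug/m3 and standard deviation 5:
print(round(quantile_map(45.0, 5.0, 0.7), 1))  # 47.6
print(round(poe_map(45.0, 5.0, 50.0), 2))      # 0.16
```

Moving the line control corresponds to changing `p` in the first function; moving the vertical line corresponds to changing `threshold` in the second.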
After introducing the data set, each method was shown and explained to the
interviewee. For Aguila, the different options were demonstrated with the data
set, and the user was given the opportunity to explore the functions for as long
as desired before answering the questions.
Figure 1: Map, values and cumulative distribution function window of Aguila.
2.2 Tasks
The first two tasks had to be solved by the participant once for each method
consecutively, whereas the third task was only asked once and could be solved
using any of the methods. Task performance was not measured by time or
errors; instead, the participants' comments were elicited and recorded. For each
of the following tasks, the participant also had to assess how easy it was to
solve with the current method:
1. Identify the approximate concentration and its uncertainty for Norway and
North Italy.
2. Imagine being a European policy maker: Where is the annual threshold of
50 µg/m³ exceeded? Can you say how likely this will happen?
3. If you could decide where new measurement stations should be built,
which criteria would you use to identify possible locations? Which
methods do you use for this task?
The aim of the second part was to identify the participants' preferred methods
and their opinion on uncertainty visualisation in general. For the presented
methods, each participant had to judge whether he/she understood all three
methods, which method was easiest to comprehend, and which method was
preferred. To get an overview of the usability of uncertainty mapping in general,
each participant was asked to give his/her opinion on three questions:
- Do you generally prefer static or interactive methods?
- Does the uncertainty visualisation make the map too complex?
- Do you think visualising uncertainty is beneficial for making decisions?
The study was conducted as an interview, so these questions were intended as
starting points for further comments by the participants. A statistical analysis
of task performance and answers was not planned for this explorative study,
due to the limited number of interviewees.
3 RESULTS
The results for the first task in Tab. 1 show that the best performance was
obtained with adjacent maps. The concentration for both regions was identified
correctly by all participants with all three methods. Not all participants were
able to identify the uncertainty correctly, especially with the Whitening method.
The opinions of the participants reflect the difficulties that occurred during the
task, explaining the poorer performance of Whitening and Aguila.
Table 1: Results for task 1.

                Concentration correct   Uncertainty correct   Easy to answer
Aguila          10/10                   5/10                  4/10
Whitening       10/10                   3/10                  5/10
Adjacent maps   10/10                   9/10                  10/10
The second task showed generally poor results. Only one person gave the fully
correct answer, using Aguila, and none answered it correctly using the other
methods. Participants tended to identify too many regions of threshold
exceedance with Whitening and the adjacent maps. Some users were able to
identify the correct regions with Aguila, but most could not determine the
probability of exceedance for the threshold with any of the methods, although
some suspected it should be possible with Aguila. Again, the participants
judged adjacent maps as the easiest method for answering this task.
For the third task all participants preferred to use the adjacent maps, yielding
overall good results. The majority of participants used the concentration as well
as the uncertainty information to identify potential locations.
During the user opinion part, all participants stated they understood Whitening
and adjacent maps, whereas only six of them thought they understood Aguila
(Tab. 2). None of the interviewees preferred Whitening, whereas half of them
judged adjacent maps as their preferred method and some preferred a
combination of adjacent maps and Aguila. Adjacent maps were rated as useful
for getting a brief overview, whereas Aguila could be useful for a more
thorough analysis assessing the uncertainty quantitatively. This corresponds to
the result that all ten participants rated adjacent maps as the most easily
comprehensible method. Several people found Aguila helpful but too
complicated to understand. Some of them suspected it would be helpful as an
expert tool that would take more time to learn than was given in this study. It
also became clear during the interviews that the time was too short to learn and
remember all the functions of Aguila. The Whitening method was immediately
dismissed by some as useless: they stated that it was too difficult to distinguish
between colour hue and saturation, although the principle was easy to
comprehend. Two participants attributed the ease of comprehension and use of
adjacent maps to the separation of value and uncertainty into two maps.
Table 2: Results for opinion questions.

                Understood   Preferred                              Most easily understood
Aguila          6/10         1/10 (4 combined with adjacent maps)   0/10
Whitening       10/10        0/10                                   0/10
Adjacent maps   10/10        5/10 (4 combined with Aguila)          10/10
For the general opinion questions, seven of ten participants stated that they
prefer interactive methods over static ones. Some suggested making the
adjacent maps method interactive, either by overlaying and toggling the maps
or by using interactively linked windows that allow scrolling a cursor through
both maps simultaneously. All ten participants found uncertainty helpful for
decision making. Half of the participants thought uncertainty makes maps more
complex but not too complex, whereas some found the Aguila and Whitening
representations too complex.
During the whole study it became clear that the term "uncertainty" was not
equally clear to every participant and was confused with "error", "probability"
and "certainty". A couple of participants argued that uncertainty should be
shown quantitatively, which was only the case for Aguila. Others suggested
translating the uncertainty for non-statisticians, as Van de Kassteele & Velders
(2006) did by applying the IPCC terminology to probabilities of exceedance.
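One possible form of such a translation, following the IPCC likelihood scale (the exact bins here are an assumption; Van de Kassteele & Velders may have used slightly different ones):

```python
def likelihood_term(p):
    """Map a probability of exceedance to an IPCC-style likelihood phrase."""
    if p > 0.99:
        return "virtually certain"
    if p > 0.90:
        return "very likely"
    if p > 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p >= 0.10:
        return "unlikely"
    if p >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

print(likelihood_term(0.95))  # very likely
print(likelihood_term(0.16))  # unlikely
```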
4 DISCUSSION
Due to the small number of participants, the results could not be tested for
statistical significance and should be treated cautiously, as indicators rather
than proof. Nevertheless, the outcomes of this explorative study clearly support
the hypothesis that visualising uncertainties in geospatial information supports
decision-making processes, consistent with the findings of previous studies
(Evans 1997, Aerts et al. 2003).
The results for the first part of the study indicate that adjacent maps are easier
for the participants to use, but do not always perform best for tasks where
quantification is required. People had problems using Aguila, but suspected it
would be helpful if they had more time to learn the program. This could mean
that the users understood the principle of Aguila but could not use it adequately
with the limited knowledge given by the short introduction. Whitening seemed
neither useful nor preferred, and showed the poorest results for the three tasks
in general. It seems that the principle of Whitening is immediately easy to
comprehend but not useful for obtaining the information necessary for making
decisions. Many interviewees also did not feel comfortable using Whitening,
as some also felt with Aguila, leading to the preference for adjacent maps.
Especially while using Aguila, the users seemed unsure whether they
understood and correctly applied the program.
The hypothesis that bi-variate maps are overloaded with information and too
complex for the user is supported by the results of this study. The two-
dimensional legend of Whitening overwhelmed most of the users, whereas the
adjacent maps could be used much more easily by the participants. Although
Aguila also separates the statistical dimension from the map, this method
seemed much more complicated for the participants to understand and use. The
interactivity also makes the learning process more complicated, but it is
necessary to exploit the full advantages over static methods.
5 CONCLUSION
This work was intended and conducted as an explorative study to investigate
the usability of interactive and non-interactive visualisation in decision-making
processes. As an outcome, we found the simplest method, showing the value
and uncertainty maps next to each other, to be the most efficient and preferred
one. On the other hand, given an adequate learning period, interactivity is
expected to support the perception of uncertainty better than static
representations.
For decision support this could mean that different methods should be used
and offered, depending on the user's background, experience and task. A
combination of simple adjacent representations for a first overview and a more
complex system like Aguila for thorough and detailed analyses could be a
solution in future decision support systems.
6 ACKNOWLEDGEMENTS
This study was realised in the HEIMTSA project funded by the European Union
Sixth Framework Programme, contract no. 036913. We also thank Bruce Denby
from NILU, Norway, for the preparation of the residual kriging results, and the
interviewees for their help.
7 REFERENCES
Aerts, J. C. J. H., K. C. Clarke and A. D. Keuper (2003) Testing Popular
Visualization Techniques for Representing Model Uncertainty. Cartography
and Geographic Information Science 30(3): 237-248.
Evans, B. J. (1997). Dynamic display of spatial data reliability: does it benefit the
map user? Computers and Geosciences 23: 409–422.
Hengl, T. and N. Toomanian (2006). Maps are not what they seem: representing
uncertainty in soil-property maps. In: M. Caetano and M. Painho (eds.),
Proceedings of the 7th International Symposium on Spatial Accuracy
Assessment in Natural Resources and Environmental Sciences (Accuracy
2006), pp. 805-813.
MacEachren, A. M. (1992). Visualising Uncertain Information. Cartographic
Perspective 13: 10-19.
MacEachren, A. M., A. Robinson, S. Hopper, S. Gardner, R. Murray, M.
Gahegan and E. Hetzler (2005). Visualising Geospatial Information
Uncertainty: What We Know and What We Need to Know. Cartography and
Geographic Information Science 32(3): 139-160.
Pebesma, E. J., K. de Jong and D. J. Briggs (2007). Interactive visualisation of
uncertain spatial and spatio-temporal data under different scenarios: an air
quality example. International Journal of Geographical Information Science
21(5): 515–527.
Tversky, A. and D. Kahneman (1974). Judgement under uncertainty: Heuristics
and biases. Science 185: 1124-1131.
van de Kassteele, J. and G. J. M. Velders (2006). Uncertainty assessment of local
NO2 concentrations derived from error-in-variable external drift kriging and
its relationship to the 2010 air quality standard. Atmospheric Environment
40(14): 2583-2595.
... This is in line with observations made by other authors (MacEachren et al., 1998). With coincident maps, users cannot ignore the uncertainty information (Evans, 1997;Edwards and Nelson, 2001;Viard et al., 2011), but the perception depends on their expertise (Tversky and Kahneman, 1974;Gerharz and Pebesma, 2009). Using this visualization technique, one can interpret both parameters at a glance, as the textures from the upper layer do not interfere with interpretation of the underlying choropleth map. Figure 3 provides two versions of a value-by-alpha map. ...
... Instead an interactive dialogue between end-users and epidemiologist has to be established in order to identify which technique should be used in the specific setting. This selection process should also consider the professional background and experiences of map readers and balance this with the objectives to be achieved with the map (Gerharz and Pebesma, 2009). It is recommended to test the usability of different visualization techniques beforehand to identify the most effective visualization technique. ...
... In addition, it is also necessary to judge carefully on the variables to be represented, in order to avoid the cluttering of information (Leitner and Buttenfield, 2000;Viard et al., 2011). In the public health domain, it could be useful to combine simple maps with more sophisticated ones, in order to get a general view and a deeper assessment of the epidemiological data (Gerharz and Pebesma, 2009). ...
Article
Full-text available
Within the European activities for the ‘Monitoring and Collection of Information on Zoonoses’, annually EFSA publishes a European report, including information related to the prevalence of Campylobacter spp. in Germany. Spatial epidemiology becomes here a fundamental tool for the generation of these reports, including the representation of prevalence as an essential element. Until now, choropleth maps are the default visualization technique applied in epidemiological monitoring and surveillance reports made by EFSA and German authorities. However, due to its limitations, it seems to be reasonable to explore alternative chart type. Four maps including choropleth, cartogram, graduated symbols and dot-density maps were created to visualize real-world sample data on the prevalence of Campylobacter spp. in raw chicken meat samples in Germany in 2011. In addition, adjacent and coincident maps were created to visualize also the associated uncertainty. As an outcome, we found that there is not a single data visualization technique that encompasses all the necessary features to visualize prevalence data alone or prevalence data together with their associated uncertainty. All the visualization techniques contemplated in this study demonstrated to have both advantages and disadvantages. To determine which visualization technique should be used for future reports, we recommend to create a dialogue between end-users and epidemiologists on the basis of sample data and charts. The final decision should also consider the knowledge and experience of end-users as well as the specific objective to be achieved with the charts.
... Starting with the earliest research on uncertainty visualisation, coincident approaches (with data and uncertainty integrated in the existing display) have been contrasted with adjacent approaches with data and uncertainty in separate views (MacEachren, 1992). While most studies assess coincident approaches, there are a number of studies that involve a direct comparison between adjacent and coincident views (Aerts et al., 2003;Edwards and Nelson, 2001;Evans, 1997;Gerharz and Pebesma, 2009;Kardos, 2003;Kardos, 2007;Kubíček and Šašinka, 2011;Kunz et al., 2011;MacEachren et al., 1998;Retchless, 2012;Senaratne et al., 2012;Viard et al., 2011). ...
... As the display typically can become complex when uncertainty is added to data depictions, there have also been numerous attempts to utilize dynamic views. Some of these use non-interactive animation (Aerts et al., 2003;Blenkinsop et al., 2000;Evans, 1997;Kardos et al., 2003;Kardos et al., 2007;Zhang et al., 2008) and some incorporate interactive interfaces (Alberti, 2013;Blenkinsop et al., 2000;Evans, 1997;Gerharz and Pebesma, 2009;Slocum et al., 2003;Senaratne et al., 2012). ...
... Expertise is described in many ways that are usually not directly comparable across studies, e.g. experience in using geographic information (Gerharz and Pebesma, 2009;Kardos et al., 2008), experience with the concept of uncertainty and its visualisation (Kardos et al., 2008;Kinkeldey et al., 2014), experience in maps and mapping (Evans, 1997, MacEachren et al., 2012, training or knowledge in the application domain (Aerts et al., 2003;Kolbeinsson, 2013;Kunz et al., 2011;Senaratne et al., 2012) or computer literacy more generally (Newman and Lee, 2004). Self-assessment was often used to determine the subjects' expertise, especially when participants were recruited via the web (Aerts et al., 2003;Kinkeldey et al., 2014;Senaratne et al., 2012). ...
Article
For decades, uncertainty visualisation has attracted attention in disciplines such as cartography and geographic visualisation, scientific visualisation and information visualisation. Most of this research deals with the development of new approaches to depict uncertainty visually; only a small part is concerned with empirical evaluation of such techniques. This systematic review aims to summarize past user studies and describe their characteristics and findings, focusing on the field of geographic visualisation and cartography and thus on displays containing geospatial uncertainty. From a discussion of the main findings, we derive lessons learned and recommendations for future evaluation in the field of uncertainty visualisation. We highlight the importance of user tasks for successful solutions and recommend moving towards task-centered typologies to support systematic evaluation in the field of uncertainty visualisation.
... Amongst focus-based techniques, blur, which is defined as the removal of high-frequency spatial detail from the information [8] has widely been used to indicate fuzziness and ambiguity in the data [6,15,29]. For example, Bisantz et al. [5] applied blur to a set of airplane symbols to provide decision makers with a fast way to understand the level and uncertainty of a given threat. ...
... MacEachren [29] showed, for example, that subjects cannot spontaneously order colors into a legend arrangement for bi-variate choropleth maps but that they can recognize order in that arrangement. The question of user preference is also pertinent to the problem of uncertainty visualization; in a user opinion survey conducted by Gerharz et al. [15] in the context of geographical information systems, people disliked whitening [23] to convey uncertainty. The authors argue that the principle of whitening is easy to understand but getting detail information from it is difficult. ...
... They found that participants were able to determine the amount of uncertainty using colors with 96.7% success rate. Similarly to the findings by Gerharz et al. [15], however, access to detail was difficult especially for neighboring color ranges. ...
Article
Full-text available
We report on results of a series of user studies on the perception of visual variables that are commonly used in the literature to depict uncertainty. To the best of our knowledge, we provide the first formal evaluation of the use of these variables to facilitate an easier reading of uncertainty in visualizations that rely on line graphical primitives. In addition to blur, dashing and grayscale, we investigate the use of 'sketchiness' as a visual variable because it conveys visual impreciseness that may be associated with data quality. Inspired by work in non-photorealistic rendering and by the features of hand-drawn lines, we generate line trajectories that resemble hand-drawn strokes of various levels of proficiency--ranging from child to adult strokes--where the amount of perturbations in the line corresponds to the level of uncertainty in the data. Our results show that sketchiness for the visualization of uncertainty in lines is as intuitive as blur; although people subjectively prefer dashing style over blur, grayscale and sketchiness. We discuss advantages and limitations of each technique and conclude with design considerations on how to deploy these visual variables to effectively depict various levels of uncertainty for line marks.
... Traditionally, in geodata uncertainty research a distinction between three categories of uncertainty is made: attribute/thematic (what), positional/geometric (where), and temporal (when) uncertainty (MacEachren et al. 2005)). With respect to dynamic uncertainty display, a small number of studies use animated and/or interactive approaches to depict uncertainty, comparing them to static ones (Aerts, Clarke, and Keuper 2003; Bisantz et al. 2011; Ferreira, Fisher, and König 2014; Gerharz and Pebesma 2009; Senaratne et al. 2012). ...
... Aerts, Clarke, and Keuper (2003), for example, report that the majority of their participants (planners and decision makers) agreed that " [u] ncertainty visualization improves the decision-makers' view, analyses and model simulations " (258). Other studies also report positive reactions from participants towards visual depiction of uncertainty (Gerharz and Pebesma 2009; Pyysalo and Oksanen 2014; Scholz and Lu 2014). In contrast, there are studies reporting less positive results with respect to acceptance of uncertainty visualization. ...
Article
For many years, uncertainty visualization has been a topic of research in several disparate fields, particularly in geographical visualization (geovisualization), information visualization, and scientific visualization. Multiple techniques have been proposed and implemented to visually depict uncertainty, but their evaluation has received less attention by the research community. In order to understand how uncertainty visualization influences reasoning and decision-making using spatial information in visual displays, this paper presents a comprehensive review of uncertainty visualization assessments from geovisualization and related fields. We systematically analyze characteristics of the studies under review, i.e., number of participants, tasks, evaluation metrics, etc. An extensive summary of findings with respect to the effects measured or the impact of different visualization techniques helps to identify commonalities and differences in the outcome. Based on this summary, we derive “lessons learned” and provide recommendations for carrying out evaluation of uncertainty visualizations. As a basis for systematic evaluation, we present a categorization of research foci related to evaluating the effects of uncertainty visualization on decision-making. By assigning the studies to categories, we identify gaps in the literature and suggest key research questions for the future. This paper is the second of two reviews on uncertainty visualization. It follows the first that covers the communication of uncertainty, to investigate the effects of uncertainty visualization on reasoning and decision-making.
... Visual variables are classified into two techniques to indicate uncertainty: (i) intrinsic techniques alter the existing display to indicate uncertainty using visual variables such as opacity, colour value, colour hue and saturation; and (ii) extrinsic techniques add objects to the display to represent uncertainty such as glyphs, dots or lines [1]. Although some research shows participants can prefer a method for indicating uncertainty that is not necessarily the most effective [3], there are studies showing that participants accurately retrieved the information depicted by uncertainty using their preferred method [4]. To our knowledge, exploring students' preferred method for visualising uncertainty in open learner models (OLM) has yet to be explored. ...
Conference Paper
User preferences for indicating uncertainty using specific visual variables have been explored outside of educational reporting. Exploring students’ preferred method to indicate uncertainty in open learner models can provide hints about which approaches students will use, so further design approaches can be considered. Participants were 67 students exploring 6 visual variables applied to a learner model visualisation (skill meter). Student preferences were ordered along a scale, which showed the size, numerosity, orientation and added marks visual variables were near one another in the learner’s preference space. Results of statistical analyses revealed differences in student preferences for some variables with opacity being the most preferred and arrangement the least preferred. This result provides initial guidelines for open learner model and learning dashboard designers to represent uncertainty information using students’ preferred method of visualisation.
... Exceptions include a number of qualitative studies, for instance, a focus group study by Roth (2009a) to investigate the impacts of uncertainty visualization on decision making in the context of floodplain mapping. Other authors conducted interviews to evaluate the usability of a tool utilizing uncertainty visualization (Slocum et al., 2003) and the usefulness of different visualization techniques to depict uncertainty (Gerharz and Pebesma, 2009). Apart from this, mixed methods (combining quantitative and qualitative methods) have been applied, but remain very rare (e.g., Štěrba et al., 2014). ...
Article
Extensive research on geodata uncertainty has been conducted in the past decades, mostly related to modeling, quantifying, and communicating uncertainty. But findings on whether and how users can incorporate this information into spatial analyses are still rare. In this paper we address these questions with a focus on land cover change analysis. We conducted semi-structured interviews with three expert groups dealing with change analysis in the fields of climate research, urban development, and vegetation monitoring. During the interviews we used a software prototype to show change scenarios that the experts had analyzed before, extended by visual depiction of uncertainty related to land cover change. This paper describes the study, summarizes results, and discusses findings as well as the study method. Participants came up with several ideas for applications that could be supported by uncertainty, for example, identification of erroneous change, description of change detection algorithm characteristics, or optimization of change detection parameters. Regarding the aspect of reasoning with uncertainty in land cover change data, the interviewees saw potential in better-informed hypotheses and insights about change. Communication of uncertainty information to users was seen as critical, depending on the users’ role and expertise. We judge semi-structured interviews to be suitable for the purpose of this study and emphasize the potential of qualitative methods (workshops, focus groups etc.) for future uncertainty visualization studies.
... A handful of papers have been dedicated to evaluating visualizations in the context of uncertainty. Most of these look at the method of visual encoding, such as error bars, glyph size, and colormapping in 1D and 2D [82], glyph type in 3D [61], or comparing adjacent, sequential, integrated, and static vs. dynamic displays [26]. While each work identified a "better" technique for its particular study (surface and glyph color work better than size, multi-point glyphs perform better than ball, arrow, and cone glyphs, and adjacent displays with simple indications of data and uncertainty were preferred by users), none of the techniques performed well enough to be called the best display of uncertainty in all circumstances. ...
Article
Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community.
Chapter
Damage to subsurface infrastructure during excavation is a long-standing global problem, which causes great financial losses as a consequence of project delays, disruptions of public supply, and the increased life-cycle costs of utility lines. The primary causes of excavation damage are attributed to the lack of reliable utility information and inadequate approaches to communicating the positional uncertainties to the end users. Accordingly, this study presents a deterministic uncertainty-aware approach for visualising subsurface infrastructure in 3D using augmented reality (AR). The prototype was presented and evaluated in a focus group interview with five respondents with experience from the utility sector. The participants agreed that the insufficient availability of vertical coordinates for the cables at present constitutes the biggest challenge. However, they emphasised the future potential of the AR solution in view of ongoing improvements in data quality prompted by the new Danish data model for exchanging utility information.
Article
The visual communication of climate information is one of the cornerstones of climate services. It often requires the translation of multidimensional data to visual channels by combining colors, distances, angles, and glyph sizes. However, visualizations including too many layers of complexity can hinder decision-making processes by limiting the cognitive capacity of users, therefore affecting their attention, recognition, and working memory. Methodologies grounded on the fields of user-centered design, user interaction and cognitive psychology, which are based on the needs of the users, have a lot to contribute to the climate data visualization field. Here, we apply these methodologies to the redesign of an existing climate service tool tailored to the wind energy sector. We quantify the effect of the redesign on the users’ experience performing typical daily tasks, using both quantitative and qualitative indicators that include response time, success ratios, eye-tracking measures, user perceived effort and comments among others. Changes in the visual encoding of uncertainty and the use of interactive elements in the redesigned tool reduced the users’ response time by half, significantly improved success ratios, and eased decision making by filtering non-relevant information. Our results show that the application of user-centered design, interaction, and cognitive aspects to the design of climate information visualizations reduces the cognitive load of users during tasks performance, thus improving user experience. These aspects are key to successfully communicating climate information in a clearer and more accessible way, making it more understandable for both technical and non-technical audiences.
Article
The paper discusses the use of static visualization techniques for representing uncertainty in spatial prediction models, illustrated with examples from soil mapping. The uncertainty of a prediction model, represented by the prediction error, is commonly ignored or only visualized separately from the predictions. Two techniques that can be used to visualize the uncertainty are colour mixing (whitening) and pixel mixing. In both cases, the uncertainty is coded with the white colour and quantitative values are coded with hues. An additional hybrid static visualization technique (pixel mixing with simulations) that shows both the short-range variation and the overall uncertainty is described. Examples from a case study from Central Iran (42×71 km) were used to demonstrate the possible applications and emphasize the importance of visualizing the uncertainty in maps. The soil predictions were made using 118 soil profiles and 16 predictors ranging from terrain parameters to Landsat 7 bands. All variables were mapped using regression-kriging at a grid resolution of 100 m. Final maps of texture fractions, EC and organic matter in topsoil were visualized using whitening, pixel mixing and pixel mixing combined with simulations. Visualization of uncertainty allows users to compare the success of spatial prediction models for various variables. In this case study, the results showed that there can be considerable differences in the achieved precision of predictions for various variables and that some soil variables need to be collected with much higher inspection density to satisfy the required precision. Visualization of uncertainty also allows users to dynamically improve the precision of predictions by collecting additional samples. Links to scripts that users can download and use to visualize their own datasets are given.
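The whitening idea described above can be sketched in a few lines: a value is mapped to a hue, and the pixel's saturation is reduced toward white as the prediction error grows. This is a hypothetical illustration of the principle only, not the authors' implementation; the blue-to-red hue ramp is an assumption.

```python
import colorsys

def whiten(value, uncertainty, hue_lo=2/3, hue_hi=0.0):
    """Blend a hue-coded value toward white as uncertainty grows.

    value, uncertainty: floats in [0, 1]; the hue is interpolated
    from blue (low value) to red (high value). A fully uncertain
    pixel comes out pure white. Illustrative sketch only.
    """
    hue = hue_lo + (hue_hi - hue_lo) * value
    saturation = 1.0 - uncertainty  # whitening: uncertainty bleaches the colour
    return colorsys.hsv_to_rgb(hue, saturation, 1.0)

print(whiten(0.5, 1.0))  # fully uncertain -> (1.0, 1.0, 1.0), i.e. white
print(whiten(1.0, 0.0))  # certain high value -> (1.0, 0.0, 0.0), i.e. red
```

Pixel mixing differs in that it randomly replaces individual pixels with white rather than desaturating them, which preserves the original hues at the cost of spatial noise.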
Article
Many land allocation issues, such as land-use planning, require input from extensive spatial databases and involve complex decision-making. Spatial decision support systems (SDSS) are designed to make these issues more transparent and to support the design and evaluation of land allocation alternatives. In this paper we analyze techniques for visualizing uncertainty of an urban growth model called SLEUTH, which is designed to aid decision-makers in the field of urban planning and fits into the computational framework of an SDSS. Two simple visualization techniques for portraying uncertainty—static comparison and toggling—are applied to SLEUTH results and rendered with different background information and color schemes. In order to evaluate the effectiveness of the two visualization techniques, a web-based survey was developed showing the visualizations along with questions about the usefulness of the two techniques. The web survey proved to be quickly accessible and easy to understand by the participants. Participants in the survey were mainly recruited among planners and decision-makers. They acknowledged the usefulness of portraying uncertainty for decision-making purposes. They slightly favored the static comparison technique over toggling. Both visualization techniques were applied to an urban growth case study for the greater Santa Barbara area in California, USA.
Article
Developing reliable methods for representing and managing information uncertainty remains a persistent and relevant challenge to GIScience. Information uncertainty is an intricate idea, and recent examinations of this concept have generated many perspectives on its representation and visualization, with perspectives emerging from a wide range of disciplines and application contexts. In this paper, we review and assess progress toward visual tools and methods to help analysts manage and understand information uncertainty. Specifically, we report on efforts to conceptualize uncertainty, decision making with uncertainty, frameworks for representing uncertainty, visual representation and user control of displays of information uncertainty, and evaluative efforts to assess the use and usability of visual displays of uncertainty. We conclude by identifying seven key research challenges in visualizing information uncertainty, particularly as it applies to decision making and analysis.
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules; such rules are quite useful in general, but reliance on them sometimes leads to severe and systematic errors, for example in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class or process. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development, and the third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available.
Article
When a GIS is used to drive map-based visualization, exploration of potential relationships takes precedence over presentation of facts. In these early stages of scientific analysis or policy formulation, providing a way for analysts to assess uncertainty in the data they are exploring is critical to the perspectives they form and the approaches they decide to pursue. As a basis from which to develop methods for visualizing uncertain information, this paper addresses the difference between data quality and uncertainty, the application of Bertin's graphic variables to the representation of uncertainty, conceptual models of spatial uncertainty as they relate to kinds of cartographic symbolization, and categories of user interfaces suited to presenting data and uncertainty about that data. Also touched on is the issue of how we might evaluate our attempts to depict uncertain information on maps.
Article
Local NO2 concentrations near Rotterdam (Netherlands) were assessed for the year 2010, focusing on the uncertainties and the changes in exceedance of European air quality standards. In the first step of the 2-step assessment method, the background contribution was determined by error-in-variable external drift kriging, where measurements and dispersion model output in the 1987–2003 period were combined. The result was subsequently extrapolated using dispersion model output and an emission scenario for 2010. In the second step, the local traffic contribution was added on the basis of a local generic dispersion model with use of an emission scenario for 2010. This resulted in maps showing local NO2 concentrations, upper and lower limits, and probabilities of exceeding the air quality standard. The probabilistic measures were calculated in numbers and translated into words for easier communication. Using this method and scenario we found that within about 100 m from the highways near Rotterdam the mean NO2 concentrations are likely to exceed the standard in 2010. Exceeding the standard is unlikely at distances up to 1 km from the highways, where the mean is expected to remain below the standard in 2010.
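Translating exceedance probabilities into words, as done in the study above, can be sketched with a simple verbal scale. The cut-offs below are illustrative, loosely following IPCC-style likelihood wording, and are not the ones used by the authors:

```python
def probability_to_words(p):
    """Map an exceedance probability in [0, 1] to a verbal likelihood
    category. Thresholds are illustrative, not the study's own."""
    scale = [
        (0.10, "very unlikely"),
        (0.33, "unlikely"),
        (0.66, "about as likely as not"),
        (0.90, "likely"),
    ]
    for cutoff, label in scale:
        if p < cutoff:
            return label
    return "very likely"

print(probability_to_words(0.05))  # -> very unlikely
print(probability_to_words(0.95))  # -> very likely
```

Such a fixed lookup keeps the map legend short while still conveying the probabilistic message to non-expert readers.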
Article
As users of maps we are dependent upon their veracity and by extension the reliability of the data they contain. Several research projects have explored possible methods of visually representing data certainty, a kind of metadata; methods considered include depicting the metadata as a map that is separate from the data map, embedding the metadata into the data map, and creating an interactive environment allowing simultaneous viewing of both data and metadata. A practical consideration, as we develop methods for graphic depiction of data reliability, is the reaction to and acceptance of proposed methods by the map user. This research studied how maps containing graphically depicted reliability information are used. Potential “usability” of the cartographic display of data reliability is explored by the type of map user (novices versus experts, and males versus females) and the type of map use (assessment of map reliability, confidence in data reliability assessments, and ability to judge the proportion of the areas within the map containing highly reliable data). This study addressed these issues by exploring and analyzing subject responses to an interactive cartographic display of data and its level of reliability. The graphic depiction of reliability information was found to be accessible and comprehensible by all subjects, whether novice or expert, male or female. Two methods of combining data and reliability information, as a composite static display and as an animation, were both found to be helpful by the subjects tested. Two other methods of obtaining reliability information, a map displaying only reliability information and an interactive “toggling” between the data and reliability information, were not found to be as efficient or effective as the combination methods.
Chapter
This article describes three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
Article
This paper introduces a method for visually exploring spatio‐temporal data or predictions that come as probability density functions, e.g. output of statistical models or Monte Carlo simulations, under different scenarios. For a given moment in time, we can explore the probability dimension by looking at maps with cumulative or exceedance probability while varying the attribute level that is exceeded, or by looking at maps with quantiles while varying the probability value. Scenario comparison is done by arranging the maps in a lattice with each panel reacting identically to legend modification, zooming, panning, or map querying. The method is illustrated by comparing different modelling scenarios for yearly NO2 levels in 2001 across the European Union.
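The probability dimension described above can be explored numerically. A minimal sketch, assuming the model output is a set of Monte Carlo realisations stored as nested lists (one grid per draw), computes a per-cell exceedance-probability map for a given attribute level:

```python
def exceedance_map(realisations, threshold):
    """Per-cell probability that the attribute exceeds `threshold`.

    realisations: list of grids (lists of rows of floats), one grid
    per Monte Carlo draw. Returns a grid of probabilities in [0, 1].
    Illustrative sketch; real tools operate on raster stacks.
    """
    n = len(realisations)
    rows, cols = len(realisations[0]), len(realisations[0][0])
    return [[sum(r[i][j] > threshold for r in realisations) / n
             for j in range(cols)]
            for i in range(rows)]

# Two draws over a 1x2 grid: each cell exceeds 1.5 in one of the two draws.
print(exceedance_map([[[1.0, 2.0]], [[3.0, 0.0]]], 1.5))  # -> [[0.5, 0.5]]
```

Varying `threshold` while re-rendering such a map is exactly the kind of interactive exploration of the probability dimension the paper describes; quantile maps are the complementary view, fixing the probability and mapping the corresponding attribute level.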