A User-centered Approach for Optimizing Information
Visualizations
David Baum, Pascal Kovacs, Ulrich Eisenecker and Richard Müller
Leipzig University
Grimmaische Strasse 12
04109 Leipzig, Germany
[baum, kovacs, eisenecker, rmueller]@wifa.uni-leipzig.de
ABSTRACT
The optimization of information visualizations is time-consuming and expensive. To reduce this effort, we propose an improvement of existing optimization approaches based on user-centered design, focusing on readability, comprehensibility, and user satisfaction as optimization goals. The changes comprise (1) a separate optimization of user interface and representation, (2) a fully automated evaluation of the representation, and (3) qualitative user studies for simultaneously creating and evaluating interface variants. On this basis, we are able to find a local optimum of an information visualization in an efficient way.
Keywords
Evaluation, Information Visualization, Optimization, Usability, User-centered design
1 INTRODUCTION
Over the last years, a considerable number of visualizations have been presented [CZ11, LBAAL09, LCWL14, TC08, vLKS+11, ZSAvL14]. The benefit of a specific visualization depends on many factors, such as the addressed stakeholders (e.g., project managers, analysts, scientists, or developers), the chosen methods of representation and interaction, and the supported tasks [LCWL14, vLKS+11]. Because of the number of factors and their interdependencies, evaluating visualizations is a major challenge. Nevertheless, in most cases more time is spent on developing entirely new visualizations than on evaluating them, and some of them have not been evaluated at all [WLR11, TC08].
Empirical quantitative studies are an established type of evaluation and can prove that one visualization is superior to another. However, planning, conducting, and analyzing such a quantitative study is difficult, time-consuming, and causes considerable effort [And06, CC00, KSFN08, LBIP14, Pla04]. In particular, recruiting a sufficient number of participants is hard if they have to meet certain criteria, such as a specific profession (e.g., software developers with industrial experience). Tasks are another critical aspect of such studies, because
simple tasks are easier to create but do not represent a real-world scenario, which is a threat to external validity [vLKS+11]. Complex tasks, on the other hand, are more difficult to create because of the higher risk of misinterpretation by the participants. Despite these difficulties, a quantitative study may lead to significant results, but it often does not give enough insight into the details of why a visualization is superior [LBIP14]. Furthermore, the choice of which visualizations or visualization variants to investigate largely determines how useful the results are, but in most cases this choice is not explicitly justified.
These obstacles apply to the evaluation of visualizations in general, and even more to their optimization, because achieving a satisfying visualization requires several improvements and therefore several evaluations. Thus, not only a single visualization has to be evaluated but also several variants differing in representation details and interaction options. Due to the complexity of most visualizations, the number of possible variants is far too high to evaluate every one. Therefore, it is necessary to apply reasonable strategies to reduce the number of variants to be evaluated to a manageable size.
In this paper we present our approach for the optimization of visualizations regarding readability, comprehensibility, and user satisfaction, derived from our experience of evaluating software visualizations. We combine computational, qualitative, and quantitative methods into a well-structured and repeatable process, based on existing processes for user-centered design (UCD) and considering the specific characteristics of information visualizations. By adopting this process, a researcher can reduce the time and effort needed to find a local optimum in an efficient, heuristic way and thereby improve any visualization.
2 APPROACH
User-centered approaches are usually based on at least
four iterative steps [PvES06]:
1. identify need for UCD
2. produce design solutions
3. evaluate designs against requirements
4. final design
Compared to existing approaches, we alter steps 2 and 3 by optimizing the user interface (UI) and the representation separately. This way, the most efficient and suitable method can be chosen for each part of the optimization. We propose an aesthetics-based approach for representation optimization (section 3) and user studies for UI optimization (section 4), as shown in figure 1. Like existing UCD approaches, the whole process is repeated until a certain criterion is reached [WKD09]. Depending on the motivation of the optimization this can be, e.g., a targeted deadline or the detection of merely non-significant improvements.
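To illustrate the control flow of this adapted process, the following minimal sketch shows the repeated optimization of representation and UI until a termination criterion is met. It is written in Python; the function names optimize_representation and optimize_interface are hypothetical placeholders for the phases described in sections 3 and 4 and are not part of the approach itself.

from datetime import date
from typing import Any, Callable, Tuple

def optimize_visualization(
    visualization: Any,
    optimize_representation: Callable[[Any], Any],           # section 3, aesthetics-based
    optimize_interface: Callable[[Any], Tuple[Any, float]],  # section 4, user studies
    deadline: date,
    min_improvement: float = 0.01,
) -> Any:
    """Repeat both optimization phases until a termination criterion is met."""
    previous_score = None
    while True:
        visualization = optimize_representation(visualization)
        visualization, score = optimize_interface(visualization)
        # Termination criteria named in the paper: a targeted deadline or
        # the detection of merely non-significant improvements.
        if date.today() >= deadline or (
            previous_score is not None and score - previous_score < min_improvement
        ):
            return visualization
        previous_score = score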
Munzner [Mun09] introduced a four-layered model for visualization design and validation. According to this model, our approach addresses the layer of encoding/interaction technique design.
3 OPTIMIZE REPRESENTATION
Aesthetics are visual properties of a representation that are observable by human readers as well as measurable in an automated way [Bau15]. Some of them significantly affect readability and comprehensibility, either positively or negatively [Pur97]. The effects on user performance can be measured in a quantitative study using the time a user needs to solve a task and the number of errors made [Hua14]. Based on aesthetics, representations that are optimized for readability and comprehensibility can be designed.
As aesthetics emerge from the properties of the depicted elements, such as color, shape, size, and positioning, they are specific to every representation [BRSG07, PAC02]. For some basic representations, like node-link diagrams, aesthetics and their influence on readability and comprehensibility are well understood [BRSG07]. In this case the process of optimization becomes easier, since some part of the work is already done. The gathered knowledge about aesthetics can be reused in further iterations. Hence, the effort is reduced with every iteration, and quantitative studies might even become obsolete.
3.1 Produce Representation Variants
The previous work of Baum [Bau15] describes how the
repertory grid technique can be used to identify rele-
vant aesthetics for any representation in a structured
and reproducible way. The resulting list of aesthetics
is narrowed down by two requirements that have to be
fulfilled. First, no information may be lost; second,
there must be a significant effect on user performance.
A solely aesthetics-based optimization of readability is not meaningful if the changes distort the visualized content. For example, a layout algorithm might convey information via the order of the depicted elements. If this order is changed, e.g., to reduce space consumption, the result may be more readable, but some information is falsified. Further, based on the experience with node-link diagrams [WPCM02], it is unlikely that all identified aesthetics have a significant effect on user performance. To reveal the relations between aesthetics and user performance, quantitative studies are still required. Every examined visualized data set is based on the same visualization but holds different values for one or multiple aesthetics. Measuring the time needed by a user to solve a task and the number of errors made while doing so yields two important findings: first, which aesthetics have a significant effect on user performance; second, the weighting of those aesthetics, since they differ in their impact.
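As a rough illustration of how such measurements could be turned into weights, the following sketch correlates each aesthetic's values across the examined data sets with the mean task-completion time and keeps only aesthetics whose effect appears significant; the absolute correlation then serves as a simple weight. The data layout and metric names are hypothetical and only serve as an example.

from scipy.stats import pearsonr

# Hypothetical study data: each entry is one examined data set (same
# visualization, different aesthetic values) with the mean time users
# needed to solve the tasks on it.
measurements = [
    {"edge_crossings": 12, "node_overlap": 0.10, "mean_time_s": 41.0},
    {"edge_crossings": 35, "node_overlap": 0.08, "mean_time_s": 58.5},
    {"edge_crossings": 7,  "node_overlap": 0.25, "mean_time_s": 44.0},
    {"edge_crossings": 50, "node_overlap": 0.30, "mean_time_s": 71.0},
    {"edge_crossings": 21, "node_overlap": 0.05, "mean_time_s": 47.5},
]

def aesthetic_weights(data, aesthetics, alpha=0.05):
    """Return a weight per aesthetic based on its correlation with task time."""
    times = [d["mean_time_s"] for d in data]
    weights = {}
    for name in aesthetics:
        values = [d[name] for d in data]
        r, p = pearsonr(values, times)
        if p < alpha:  # keep only aesthetics with a significant effect
            weights[name] = abs(r)
    return weights

print(aesthetic_weights(measurements, ["edge_crossings", "node_overlap"]))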
Eventually, one or more variants of the original representation can be created with respect to the most influential aesthetics, e.g., by applying another layout algorithm. Except during the first iteration, the results of the user studies can be used as an additional source of information. Producing variants still requires the creativity of the researcher, since aesthetics only determine the goal of the optimization but not how it can be achieved. For example, our approach does not help to develop completely new layout algorithms, but aesthetics provide assessment criteria for automatic evaluation.
3.2 Evaluate Representation Variants
Aesthetics allow a fully automated evaluation [Pur97]. For every created variant, its effect on readability and comprehensibility can be calculated automatically by making use of the gathered information. Hence, the evaluation is very efficient, and even a large number of variants can be evaluated without difficulty. The outcome of the evaluation is a representation variant that will be further optimized.
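Assuming that every relevant aesthetic can be measured automatically for a given variant and that the weights from the quantitative studies are available, such an automated evaluation could, for instance, look like the following sketch; the metric names and the scoring scheme are illustrative, not prescribed by the approach.

def score_variant(metrics, weights):
    """Weighted readability score for one representation variant.

    `metrics` maps each aesthetic to its automatically measured value,
    normalized so that lower values mean better readability; `weights`
    come from the quantitative studies of section 3.1.
    """
    return -sum(weights[name] * metrics.get(name, 0.0) for name in weights)

def best_variant(variants, weights):
    """Pick the variant with the highest weighted score."""
    return max(variants, key=lambda v: score_variant(v["metrics"], weights))

# Illustrative usage with two hypothetical layout variants.
weights = {"edge_crossings": 0.9, "node_overlap": 0.4}
variants = [
    {"name": "layout_a", "metrics": {"edge_crossings": 0.30, "node_overlap": 0.10}},
    {"name": "layout_b", "metrics": {"edge_crossings": 0.15, "node_overlap": 0.20}},
]
print(best_variant(variants, weights)["name"])  # layout_b under these numbers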
4 OPTIMIZE INTERACTION
The interaction between the user and a visualization is realized through the UI, which is a complex combination of interaction techniques (ITECs). Yi et al. [YaKSJ07] define ITECs in information visualization as "[...] the features that provide users with the ability to directly or indirectly manipulate and interpret representations". To categorize ITECs, they propose a taxonomy of seven categories: select, explore, reconfigure, encode, abstract/elaborate, filter, and connect.

Figure 1: Optimization process for information visualizations

Hence, evaluating the interaction with a UI via ITECs can be done at four different levels of detail, from low to high, by

• comparing full UIs against each other,
• integrating ITECs into the UI,
• pairwise comparisons of ITECs within one category, and
• scrutinizing details of a single ITEC.
With the goal of optimizing the interaction as a whole, a quantitative evaluation at only one of these levels is not suitable, because either the reasons why one UI is superior to another cannot be identified, or the context of the target domain is lost when evaluating only the details of one ITEC. A quantitative evaluation of all four levels is not feasible either, because of the huge effort and the difficulties involved, even when comparing only two variants per level [LM08]. Furthermore, the space of possible variants is huge, so choosing the variants for further evaluation and improvement is a critical step.
Therefore, we propose iterative qualitative user studies in a within-subject design as a heuristic to find a local optimum in the huge space of possible UI variants. One iteration consists of a couple of runs, where every participant solves a set of randomized tasks using an optimized representation variant and more than one UI variant. Each UI variant differs in at least one detail of its ITECs, e.g., one variant zooms by mouse wheel towards the position of the cursor, another one zooms by double click on an element, and a third one zooms twice as fast as the second one using an additional button. The first iteration starts with some UI variants chosen by the researcher, derived from his or her own ideas, from other visualizations, or from guidelines. Further iterations may contain subsequent UI variants triggered by analyzing the feedback of the participants. Additionally, tasks may be altered, bugs in the visualization can be fixed, and ideas for representation variants can be identified, which will be used during representation optimization. When the optimization process is terminated, a final UI is derived from the evaluation of the investigated UI variants.
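The runs of one such iteration could, for example, be planned as in the following sketch, which assigns each participant a randomized task order and a counterbalanced order of the UI variants; the concrete counterbalancing scheme is our own illustration and not part of the approach.

import random
from itertools import permutations

def plan_iteration(participants, tasks, ui_variants, seed=42):
    """Assign each participant a randomized task order and a variant order.

    Variant orders are drawn from all permutations in round-robin fashion
    so that order effects are roughly counterbalanced across participants.
    """
    rng = random.Random(seed)
    variant_orders = list(permutations(ui_variants))
    plan = {}
    for i, participant in enumerate(participants):
        task_order = tasks[:]
        rng.shuffle(task_order)  # randomized task set per participant
        plan[participant] = {
            "tasks": task_order,
            "variants": list(variant_orders[i % len(variant_orders)]),
        }
    return plan

plan = plan_iteration(
    participants=["P1", "P2", "P3"],
    tasks=["find largest element", "trace dependency", "compare modules"],
    ui_variants=["wheel zoom", "double-click zoom", "fast double-click zoom"],
)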
To get as much detail about the interaction as possible, qualitative data is collected about each UI variant and also about the tasks and their descriptions. To this end, the feedback and questions during and after each task execution as well as the instructions and observations of the experimenter are gathered. The user actions, including their timestamps, and the time and error rate for each solved task are recorded as well. However, with respect to the bias of giving feedback during the task, the possible misinterpretation of the task description, and the variance in user skills coupled with a low number of participants, the time and error rate have to be interpreted with caution. After solving the full task set, each participant finally ranks all UI variants from best to worst. The ranking of all participants of one iteration shows which UI variants support the set of tasks better than others. Furthermore, it may give hints about factors explaining the improvements.
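One possible way to structure the recorded data per run and per participant is sketched below; the field names are purely illustrative and only mirror the kinds of data listed above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UserAction:
    timestamp: float   # seconds since task start
    description: str   # e.g. "zoomed to element X"

@dataclass
class TaskRun:
    participant: str
    ui_variant: str
    task: str
    actions: List[UserAction] = field(default_factory=list)
    feedback: List[str] = field(default_factory=list)      # statements, questions
    observations: List[str] = field(default_factory=list)  # experimenter notes
    completion_time_s: float = 0.0
    errors: int = 0

@dataclass
class Session:
    participant: str
    runs: List[TaskRun] = field(default_factory=list)
    ranking: List[str] = field(default_factory=list)  # UI variants, best to worst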
Besides changing UI variants, tasks and their descriptions can be changed or improved between iterations as well, because designing tasks is not straightforward. Too simple tasks, e.g., identifying the largest element, are not suitable as real-world tasks for visualization analysis. On the other hand, a complex task is more difficult to explain, may be misinterpreted by the participant, or needs too much time to be solved [Nor06]. Thus, creating and describing a perfect set of complex tasks from scratch is nearly impossible. To overcome this problem, a pilot study is an established way to find weaknesses in tasks and their descriptions. However, the possible task modifications found this way are only a subset, and every modification can lead to new weaknesses. Hence, an iterative improvement is a better way to optimize the tasks. By analyzing the instructions and observations of the experimenter as well as the questions and feedback of the participants, the researcher draws conclusions about comprehensibility and feasibility. As a result, the complexity of the tasks can be reduced, the descriptions can be revised, or entire tasks can be replaced.
4.1 Produce Interface Variants
To produce new variants, the within-subject design is chosen to encourage the participants to think about the differences between the variants. Therefore, the participants' feedback and questions are collected during the whole process and assigned to the following categories:

• advantages of variants
• disadvantages of variants
• improvements for variants
• ideas for new variants
By summarizing and interpreting the categorized statements and their frequency, the researcher draws conclusions about possible changes. This interpretation process is not strictly structured, because the researcher and his or her freedom to design the UI are also part of it. For example, the number of gathered disadvantages for one variant may lead the researcher to an idea of how to improve this variant to overcome these disadvantages. This freedom in designing the UI on the basis of the qualitative data is the crucial part of finding a local optimum in the huge space of possible variants. Nevertheless, the researcher should take care to explicitly record his or her decisions with respect to the further planning of the optimization process. The result of this analysis is an overview of the following possible changes for the next iteration, weighted by their potential contribution to the effectiveness of the UI (a sketch of such an aggregation follows the list below):

• adding a completely new variant
• adding an altered existing variant
• adding a combination of existing variants
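A minimal sketch of how the categorized statements could be tallied per variant to support this analysis is given below; the counting scheme is our own illustration, and the actual interpretation remains with the researcher.

from collections import Counter

# Hypothetical categorized statements from one iteration:
# (ui_variant, category) pairs, using the four categories listed above.
statements = [
    ("wheel zoom", "disadvantage"),
    ("wheel zoom", "disadvantage"),
    ("wheel zoom", "improvement"),
    ("double-click zoom", "advantage"),
    ("double-click zoom", "idea for new variant"),
]

def tally(statements):
    """Count statements per (variant, category) to highlight where changes may pay off."""
    counts = Counter(statements)
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)

for (variant, category), n in tally(statements):
    print(f"{variant}: {n}x {category}")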
Attention should be paid to the differences between the variants within one iteration. If they differ in every possible detail of the UI or the ITECs, the participants may become confused, and the comparison of the variants may not lead to relevant feedback. This would also lead to very long instruction phases with broad tutorials explaining each variant in detail. Hence, the changes should at least be focused on a single category of ITECs, e.g., explore or connect. However, the level of detail of the differences should be taken into account as well. The details of the ITECs and their integration into the UI should be investigated only after evaluating whether and under which conditions a certain ITEC is superior.
Depending on the number of existing variants and the size of the task set, one or more variants can be added for the next iteration. To consider a larger number of variants, new tasks could be added as well, keeping in mind the overall time needed to solve all tasks of the set. On the other hand, old variants can be removed if they are ranked low by the participants or have many disadvantages.
4.2 Evaluate Interface Variants
The evaluation of the variants is mainly driven by the
user satisfaction, recorded as the ranking from best to
worst for all variants after solving the complete task set.
To get a ranking for the whole iteration the medians for
each variant are computed. An aggregated ranking for
all investigated variants in all iterations is built by com-
puting the medians of this iteration rankings, so new
variants will not be outnumbered by older ones. This
way less effective UI variants are identified and can be
excluded from the next iteration. If the process of op-
timization comes to an end a final variant out of the
remaining variants has to be derived. Beside the rank-
ing the circumstances why and when a variant is more
effective than another one are also part of this final de-
cision. Therefore, at least all the best ranked variants
are investigated further as final candidates by analyzing
the advantages and disadvantages as well as comparing
the quantitative data of time- and error-rate or the user
actions. This may lead to the following four cases:
1. Interpreting the advantages and disadvantages can lead to the conclusion that a final candidate is only superior for a specific type of task. In this case, either a new variant should be built upon this insight or, if that is not possible, all remaining candidates should be integrated into the final UI with respect to the aesthetics of the graphical elements of the UI [ZV14]. Thus the user can decide which variant to use for a task.
2. Computing the relevant statistical parameters of the time and error rates identifies one final candidate as noticeably superior to the others.
3. One of the candidates requires noticeably fewer user actions to solve the tasks than the others. In long-term usage, this candidate should achieve higher acceptance by the users.
4. The differences between the candidates are only on a low level of detail, so they could be integrated into the final UI via a configuration option.
If the result of analyzing the final candidates cannot be classified as one of these cases, either a further investigation by means of a quantitative study can be conducted, or the researcher eventually has to choose the final UI.
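The ranking aggregation described at the beginning of this subsection can be sketched as follows, with rank 1 denoting the best variant; the data layout is illustrative.

from statistics import median

# Hypothetical rankings: per iteration, per participant, a rank for each
# UI variant that was part of that iteration (1 = best).
iterations = [
    {"P1": {"A": 1, "B": 2}, "P2": {"A": 2, "B": 1}, "P3": {"A": 1, "B": 2}},
    {"P1": {"A": 2, "B": 3, "C": 1}, "P4": {"A": 3, "B": 2, "C": 1}},
]

def iteration_ranking(participant_ranks):
    """Median rank per variant for one iteration."""
    variants = {v for ranks in participant_ranks.values() for v in ranks}
    return {v: median(r[v] for r in participant_ranks.values() if v in r)
            for v in variants}

def aggregated_ranking(iterations):
    """Median of the per-iteration medians, so new variants are not outnumbered."""
    per_iteration = [iteration_ranking(it) for it in iterations]
    variants = {v for ranking in per_iteration for v in ranking}
    return sorted(
        ((v, median(r[v] for r in per_iteration if v in r)) for v in variants),
        key=lambda item: item[1],
    )

print(aggregated_ranking(iterations))  # best (lowest median) first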
5 DISCUSSION
In this paper we propose some relevant changes to existing UCD processes to reduce the effort of optimizing visualizations. Although we were able to apply the process successfully, an evaluation against other evaluation approaches is still outstanding due to the required effort. In particular, the implementation of the variants is still time-consuming. Since we consider interaction a crucial success factor of a visualization, we decided against paper prototyping and similar methods. To further increase the efficiency, and thereby be able to evaluate more variants, it is essential to at least partially automate the evaluation of UI variants. However, the current understanding of UI aesthetics is not yet deep enough [ZV14].
Among others, we use quantitative studies to optimize the representation. Even though their number is reduced over time, the first iterations might be even more extensive than with existing approaches. However, experience shows that usually many iterations are required, and in that case our approach becomes less extensive.
The described approach finds only a local optimum, since it is infeasible to evaluate all possible variants. This limitation is common to all optimization processes in the area of information visualization. However, our approach comes with a highly efficient evaluation. User studies are used simultaneously for creating and evaluating UI variants in smaller iterations, by analyzing the qualitative data and user rankings. Furthermore, the evaluation of the representation is fully automated. Thus, we can investigate a much larger space to find the local optimum. In turn, the evaluation results are less reliable compared to quantitative studies. Therefore, we propose to finish the optimization process with a controlled experiment to make sure it was successful.
6 RELATED WORK
Several papers address the methodology of evaluating information visualizations [Car08, HWT06, LBIP14, MDF12, MTW+12, SBCS14, TM05]. However, they focus only on single evaluations, not on an iterative process as described in this paper. Iterative optimization, in contrast, is an essential part of UCD. Some authors have described such user-centered approaches for information visualization [FZH13, LD11, WKD09]. Like us, they try to reduce the resulting effort, e.g., by combining controlled experiments and qualitative methods. Unfortunately, this is achieved at the expense of a drastically reduced interaction evaluation. In contrast, we stress the importance of the interaction but still achieve a reduced effort.
7 CONCLUSION
In this paper, we proposed an improved process for optimizing information visualizations regarding readability, comprehensibility, and user satisfaction. In addition to a heuristic process for finding a local optimum in the huge space of UI variants, we introduced a fully automated evaluation of the representation variants. Although we were able to apply the process successfully, an evaluation against other evaluation approaches is still outstanding.
8 REFERENCES
[And06] Keith Andrews. Evaluating Information Visual-
isations. Proceedings of the 2006 AVI workshop on
BEyond time and errors novel evaluation methods for
information visualization - BELIV ’06, page 1, 2006.
[Bau15] David Baum. Introducing Aesthetics to Software
Visualization. In Short papers proceedings, volume 23,
page 9, 2015.
[BRSG07] Chris Bennett, Jody Ryall, Leo Spalteholz, and
Amy Gooch. The Aesthetics of Graph Visualization. In
Proceedings of the 2007 Computational Aesthetics in
Graphics, Visualization, and Imaging, 2007.
[Car08] Sheelagh Carpendale. Evaluating Information Vi-
sualizations. In Information Visualization: Human-
Centered Issues and Perspectives, pages 19–45. 2008.
[CC00] Chaomei Chen and Mary P. Czerwinski. Empirical
evaluation of information visualizations: an introduc-
tion. International Journal of Human-Computer Stud-
ies, 53(5):631–635, 2000.
[CZ11] P. Caserta and O. Zendra. Visualization of the static
aspects of software: A survey. Visualization and Com-
puter Graphics, IEEE Transactions on, 17(7):913–933,
July 2011.
[FZH13] Diana Fernández, Dirk Zeckzer, and José Hernán-
dez. A User-Centered Approach for the Design of In-
teractive Visualizations to Support Urban and Regional
Planning. IADIS International Journal on Computer
Science and Information Systems, 8(2):27–39, 2013.
[Hua14] Weidong Huang. Evaluating Overall Quality of
Graph Visualizations Indirectly and Directly. In Wei-
dong Huang, editor, Handbook of Human Centric Visu-
alization. 2014.
[HWT06] Nathan Holmberg, Burkhard Wünsche, and Ewan
Tempero. A framework for interactive web-based vi-
sualization. In AUIC ’06 Proceedings of the 7th Aus-
tralasian User interface conference, pages 137–144.
Australian Computer Society, Inc., January 2006.
[KSFN08] Andreas Kerren, John T Stasko, Jean-Daniel
Fekete, and Chris North. Information Visualization:
Human-Centered Issues and Perspectives. 2008.
[LBAAL09] H. Ltifi, M. Ben Ayed, A.M. Alimi, and S. Lep-
reux. Survey of information visualization techniques for
exploitation in kdd. In Computer Systems and Appli-
cations, 2009. AICCSA 2009. IEEE/ACS International
Conference on, pages 218–225, May 2009.
[LBIP14] Heidi Lam, Enrico Bertini, Petra Isenberg, and
Catherine Plaisant. Empirical Studies in Information
Visualization: Seven Scenarios. Visualization and Com-
puter Graphics, IEEE Transactions on, 18(9):1520–
1536, 2014.
[LCWL14] Shixia Liu, Weiwei Cui, Yingcai Wu, and
Mengchen Liu. A survey on information visualization:
recent advances and challenges. The Visual Computer,
30(12):1373–1393, 2014.
[LD11] David Lloyd and Jason Dykes. Human-centered
approaches in geovisualization design: Investigating
multiple methods through a long-term case study. IEEE
Transactions on Visualization and Computer Graphics,
17(12):2498–2507, 2011.
[LM08] Heidi Lam and Tamara Munzner. Increasing the util-
ity of quantitative empirical studies for meta-analysis.
In Proceedings of the 2008 Workshop on BEyond Time
and Errors: Novel evaLuation Methods for Information
Visualization, BELIV ’08, pages 2:1–2:7, New York,
NY, USA, 2008. ACM.
[MDF12] Luana Micallef, Pierre Dragicevic, and Jean-
Daniel Fekete. Assessing the effect of visualizations
on bayesian reasoning through crowdsourcing. IEEE
Transactions on Visualization and Computer Graphics,
18(12):2536–2545, 2012.
[MTW+12] AV Moere, M Tomitsch, C Wimmer, C Boesch, and T Grechenig. Evaluating the effect of style in information visualization. IEEE Transactions on Visualization and Computer Graphics, 18(12):2739–2748, 2012.
[Mun09] Tamara Munzner. A Nested Model for Visual-
ization Design and Validation. IEEE Transactions on
Visualization and Computer Graphics, 15(6):921–928,
2009.
[Nor06] C. North. Toward measuring visualization insight.
Computer Graphics and Applications, IEEE, 26(3):6–9,
May 2006.
[PAC02] Helen C. Purchase, Jo-Anne Allder, and David Car-
rington. Graph Layout Aesthetics in UML Diagrams:
User Preferences. Journal of Graph Algorithms and
Applications, 6(3):255–279, 2002.
[Pla04] Catherine Plaisant. The Challenge of Information
Visualization Evaluation. In Proceedings of the Work-
ing Conference on Advanced Visual Interfaces, 2004.
[Pur97] Helen C. Purchase. Which Aesthetic Has the Great-
est Effect on Human Understanding? In Proceedings
of the 5th International Symposium on Graph Drawing,
1997.
[PvES06] E Poppe, C van Elzakker, and JE Stoter. Towards
a method for automated task-driven generalisation of
base maps. UDMS 2006 - 25th Urban Data Manage-
ment Symposium, pages 51–64, 2006.
[SBCS14] Abderrahmane Seriai, Omar Benomar, Benjamin
Cerat, and Houari Sahraoui. Validation of Software Vi-
sualization Tools: A Systematic Mapping Study. In
2014 Second IEEE Working Conference on Software
Visualization, pages 60–69. IEEE, September 2014.
[TC08] Alfredo R Teyseyre and Marcelo R Campo. An
overview of 3D software visualization. IEEE transac-
tions on visualization and computer graphics, 15(1):87–
105, 2008.
[TM05] Melanie Tory and Torsten Möller. Evaluating visu-
alizations: Do expert reviews work? IEEE Computer
Graphics and Applications, 25(5):8–11, 2005.
[vLKS+11] T. von Landesberger, A. Kuijper, T. Schreck,
J. Kohlhammer, J.J. van Wijk, J.-D. Fekete, and D.W.
Fellner. Visual analysis of large graphs: State-of-the-
art and future research challenges. Computer Graphics
Forum, 30(6):1719–1749, 2011.
[WKD09] I Wassink, O Kulyk, and B Van Dijk. Apply-
ing a user-centered approach to interactive visualisation
design. Trends in Interactive Visualization, pages 175–
199, 2009.
[WLR11] Richard Wettel, Michele Lanza, and Romain
Robbes. Software systems as cities: a controlled exper-
iment. 2011 33rd International Conference on Software
Engineering (ICSE), pages 551–560, 2011.
[WPCM02] Colin Ware, Helen C. Purchase, Linda Colpoys,
and Matthew McGill. Cognitive measurements of graph
aesthetics. Information Visualization, 1(2):103–110,
2002.
[YaKSJ07] Ji Soo Yi, Youn ah Kang, J.T. Stasko, and J.A.
Jacko. Toward a deeper understanding of the role of
interaction in information visualization. Visualiza-
tion and Computer Graphics, IEEE Transactions on,
13(6):1224–1231, Nov 2007.
[ZSAvL14] Elena Zudilova-Seinstra, Tony Adriaansen, and
Robert van Liere. Trends in Interactive Visualization:
State-of-the-Art Survey. Springer, 2014.
[ZV14] Mathieu Zen and Jean Vanderdonckt. Towards an
evaluation of graphical user interfaces aesthetics based
on metrics. Proceedings - International Conference on
Research Challenges in Information Science, 2014.