Thesis (PDF available)

Impact of Subjective Visual Perception on Automatic Evaluation of Dashboard Design

Abstract

Using metrics and quantitative design guidelines to analyze design aspects of user interfaces (UI) is a promising way to automatically evaluate the visual quality of user interfaces. Although this approach cannot replace user testing, it can provide additional information about possible design problems in early design phases and save time and expenses later. Analyses of the colors used or of the UI layout are examples of such evaluation. UI designers can use known pixel-based (e.g., Colorfulness) or object-based (e.g., Balance or Symmetry) metrics, which measure chosen UI characteristics based on the raster or structural representation of the UI. A problem with the metric-based approach is that it usually does not consider users' subjective perception (e.g., the subjective perception of color and of the graphical elements located on a screen). Today's user interfaces (e.g., dashboards) are complex. They consist of several color layers and contain overlapping graphical elements, which can increase the ambiguity of users' perception. It can be complicated to select graphical elements for the metric-based analysis of a UI so that the selection reflects users' perception and the principles of visual grouping of perceived shapes (as described by Gestalt psychology). The development of objective metrics and design guidelines usually requires a sufficiently large training set of user interface samples annotated by a sufficient number of users. This thesis focuses on the automatic evaluation of dashboard design. It combines common knowledge about dashboards with findings in the fields of data visualization, visual perception, and user interface evaluation, and explores the idea of automatically evaluating dashboard design using the metric-based approach. It analyzes selected pixel-based and object-based metrics.
It gathers the experience of users manually segmenting dashboard screens and uses this knowledge to analyze the ability of object-based metrics to distinguish well-designed dashboards objectively. It establishes a framework for the design and improvement of metrics and proposes improvements to selected metrics. It designs a new method for segmenting dashboards into regions that serve as inputs for object-based metrics. Finally, it compares selected metrics with user reviews and poses questions that suggest future research tasks.
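To make the two metric families concrete, the following sketch implements one common pixel-based metric (Colorfulness, after Hasler and Süsstrunk) and a simplified object-based Balance metric in the spirit of Ngo et al. The exact formulations analyzed in the thesis may differ; this is only an illustration of how raster input (pixels) and structural input (regions) feed the two kinds of metrics.

```python
import numpy as np

def colorfulness(image: np.ndarray) -> float:
    """Pixel-based Colorfulness metric (Hasler & Suesstrunk style).

    `image` is an H x W x 3 RGB array with values in 0..255.
    """
    r = image[..., 0].astype(float)
    g = image[..., 1].astype(float)
    b = image[..., 2].astype(float)
    rg = r - g                    # red-green opponent channel
    yb = 0.5 * (r + g) - b        # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu

def balance(regions, width, height):
    """Object-based Balance, simplified sketch.

    `regions` is a list of (x, y, w, h) rectangles; the metric compares
    the area-weighted offsets of regions on either side of the centre.
    Returns 1.0 for a perfectly balanced layout, lower otherwise.
    """
    cx, cy = width / 2.0, height / 2.0
    left = right = top = bottom = 0.0
    for x, y, w, h in regions:
        area = w * h
        dx = (x + w / 2.0) - cx   # signed horizontal offset of region centre
        dy = (y + h / 2.0) - cy
        left += area * max(-dx, 0.0)
        right += area * max(dx, 0.0)
        top += area * max(-dy, 0.0)
        bottom += area * max(dy, 0.0)
    bh = abs(left - right) / max(left, right) if max(left, right) else 0.0
    bv = abs(top - bottom) / max(top, bottom) if max(top, bottom) else 0.0
    return 1.0 - (bh + bv) / 2.0
```

Note that `colorfulness` needs only the rendered screenshot, while `balance` needs a segmentation of the screen into regions, which is exactly where the subjective-perception problem discussed above enters.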
Thesis
Full-text available
The thesis examines whether the usability evaluation of prototypes for user interfaces can be automated and how this could be realized. Automation in the context of this thesis means that no human testers are involved in the evaluation. Designers in small and medium-sized companies are the main target group; the proposed implementation is therefore based on the constraints to which these companies are subject, such as limited resources in terms of finances, personnel, and expertise. A literature review was conducted to collect existing approaches to automatic evaluation that could serve as the basis for a concept. It revealed that a metric-based method is suitable for the goal of the thesis, but that none of the considered sets of usability metrics is applicable. Accordingly, it was necessary to develop a new set of metrics. For this purpose, metrics were collected from different sources, criteria for the suitability of the metrics were established, and the metrics were filtered accordingly. Furthermore, it was checked whether the metrics were still up to date, and necessary adjustments were made. In addition, new metrics were derived from existing ones when an adaptation to the previously defined specifications was not possible. The concept was first created as a design, and in the course of this some details were optimized. The prototypical implementation followed, in which the basic structure of the plugin and the exemplary calculation of a metric were realized, proving the feasibility of the concept. The work shows that the automatic evaluation of prototypes is possible with the use of metrics. The developed set of metrics could be associated with nine of the ten considered aspects of usability. At the same time, it was shown that there is a gap in the existing research regarding usability metrics that are not determined on the basis of user tests.
Article
Full-text available
The analysis of user interfaces using quantitative metrics is a straightforward way to quickly measure interface usability and various other design aspects (such as the suitability of the page layout or the selected colors). The development and evaluation of objective metrics corresponding with user perception, however, usually requires a sufficiently large training set of user interface samples. Since finding real user interface samples might not be easy, we use generated samples instead. In that case, we need to give the samples a realistic-looking appearance. This paper describes a workflow for the preparation of such samples. It presents a configurable generator based on the composition of simple widgets according to a predefined model. It also describes a reusable library for the simple creation of widgets using the capabilities of the JavaScript framework Vue.js. Finally, we demonstrate the applicability of the generator on the generation of dashboard samples, which are used to evaluate existing metrics of interface aesthetics and show the possibility of their improvement.
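The composition idea behind such a generator can be sketched independently of the Vue.js implementation the paper describes. The widget palette, canvas size, and grid model below are all assumptions for illustration only: a predefined model (here, a simple grid) is filled with widgets, and each placement doubles as a region for the object-based metrics.

```python
import random

# Hypothetical widget palette; the paper's generator composes real
# Vue.js widget components, this only sketches the composition idea.
WIDGETS = ["bar_chart", "line_chart", "gauge", "kpi_tile", "table"]

def generate_dashboard(rows, cols, seed=None):
    """Generate one dashboard sample as a grid of widget placements.

    Returns a list of dicts with widget type and cell geometry; the
    cells can feed object-based metrics directly as regions.
    """
    rng = random.Random(seed)                      # seeded for reproducible samples
    cell_w, cell_h = 1920 // cols, 1080 // rows    # assumed canvas size
    sample = []
    for r in range(rows):
        for c in range(cols):
            sample.append({
                "widget": rng.choice(WIDGETS),
                "x": c * cell_w, "y": r * cell_h,
                "w": cell_w, "h": cell_h,
            })
    return sample
```

Varying the grid dimensions, palette, and random seed yields arbitrarily many distinct samples, which is what makes a generated training set feasible where real interfaces are scarce.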
Article
Measuring the characteristics of visually emphasized objects displayed on a screen seems to be a promising way to rate user interface quality. On the other hand, it brings problems regarding the ambiguity of object recognition caused by the subjective perception of the users. The goal of this research is to analyze the applicability of chosen object-based metrics for the evaluation of dashboard quality and their ability to distinguish well-designed samples, with a focus on the subjective perception of the users. This article presents a model for the rating and classification of object-based metrics according to their ability to objectively distinguish well-designed dashboards. We use the model to rate 13 existing object-based metrics of aesthetics. Then, we present a new approach for improving the rating of one object-based metric, Balance. We base the improvement on combining the object-based metric with a pixel-based analysis of the color distribution on the screen.
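One plausible reading of that combination is to weight each region in the Balance computation by its pixel content (its visual weight) instead of by its raw area. The sketch below is an assumed simplification for illustration, not the authors' exact formulation: a dark, saturated region then pulls the balance more strongly than an equally sized pale one.

```python
import numpy as np

def weighted_balance(image, regions):
    """Balance with region weights taken from the pixel raster.

    `image` is an H x W array of per-pixel visual weight (e.g. darkness
    or saturation, 0..255); `regions` is a list of (x, y, w, h)
    rectangles. Returns 1.0 for a perfectly balanced layout.
    """
    h_img, w_img = image.shape
    cx, cy = w_img / 2.0, h_img / 2.0
    left = right = top = bottom = 0.0
    for x, y, w, h in regions:
        patch = image[y:y + h, x:x + w].astype(float)
        weight = patch.sum()               # pixel-based visual weight of region
        dx = (x + w / 2.0) - cx            # signed offset of region centre
        dy = (y + h / 2.0) - cy
        left += weight * max(-dx, 0.0)
        right += weight * max(dx, 0.0)
        top += weight * max(-dy, 0.0)
        bottom += weight * max(dy, 0.0)
    bh = abs(left - right) / max(left, right) if max(left, right) else 0.0
    bv = abs(top - bottom) / max(top, bottom) if max(top, bottom) else 0.0
    return 1.0 - (bh + bv) / 2.0
```

With uniform pixel weights this reduces to the area-weighted Balance, so the pixel-based term only changes the rating where the color distribution actually differs between regions.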
Book
Available again, an influential book that offers a framework for understanding visual perception and considers fundamental questions about the brain and its functions. David Marr's posthumously published Vision (1982) influenced a generation of brain and cognitive scientists, inspiring many to enter the field. In Vision, Marr describes a general framework for understanding visual perception and touches on broader questions about how the brain and its functions can be studied and understood. Researchers from a range of brain and cognitive sciences have long valued Marr's creativity, intellectual power, and ability to integrate insights and data from neuroscience, psychology, and computation. This MIT Press edition makes Marr's influential work available to a new generation of students and scientists. In Marr's framework, the process of vision constructs a set of representations, starting from a description of the input image and culminating with a description of three-dimensional objects in the surrounding environment. A central theme, and one that has had far-reaching influence in both neuroscience and cognitive science, is the notion of different levels of analysis—in Marr's framework, the computational level, the algorithmic level, and the hardware implementation level. Now, thirty years later, the main problems that occupied Marr remain fundamental open problems in the study of perception. Vision provides inspiration for the continuing efforts to integrate knowledge from cognition and computation to understand vision and the brain.
Book
"Take fundamental principles of psychology. Illustrate. Combine with fundamental principles of design. Stir gently until fully blended. Read daily until finished. Caution: The mixture is addictive." - Don Norman, Nielsen Norman Group, author of The Design of Future Things. "This book is a primer to understand the why of the larger human action principles at work - a sort of cognitive science for designers in a hurry. Above all, this is a book of profound insight into the human mind for practical people who want to get something done." - Stuart Card, Senior Research Fellow and manager of the User Interface Research group at the Palo Alto Research Center, from the foreword. "If you want to know why design rules work, Jeff Johnson provides fresh insight into the psychological rationale for user-interface design rules that pervade discussions in the world of software product and service development." - Aaron Marcus, President, Aaron Marcus and Associates, Inc. Early user interface (UI) practitioners were trained in cognitive psychology, on which UI design rules were based. But as the field has evolved, designers have entered it from many disciplines. Practitioners today have enough experience in UI design to have been exposed to design rules, but it is essential that they understand the psychology behind the rules in order to apply them effectively. In Designing with the Mind in Mind, Jeff Johnson, author of the best-selling GUI Bloopers, provides designers with just enough background in perceptual and cognitive psychology that UI design guidelines make intuitive sense rather than being just a list of rules to follow. * The first practical, all-in-one source for practitioners on user interface design rules and why, when, and how to apply them. * Provides just enough background into the reasoning behind interface design rules that practitioners can make informed decisions in every project.
* Gives practitioners the insight they need to make educated design decisions when confronted with tradeoffs, including competing design rules, time constraints, or limited resources.
User interface (UI) design rules and guidelines, developed by early HCI gurus and recognized throughout the field, were based on cognitive psychology (the study of mental processes such as problem solving, memory, and language), and early practitioners were well informed of its tenets. Today, however, practitioners with backgrounds in cognitive psychology are a minority. Johnson applies his engaging, often humorous style to describe, in practical terms, the psychological basis for each rule, the value of understanding the reasons behind it, how the rules interact in actual systems, and the tradeoffs designers have to make when confronted with conflicting rules or with tight budgets and deadlines. He is not attempting to redefine the rules; he presents the existing rules in a practical way for current practitioners who either have no background in psychology or took the classes so long ago that the fundamentals have faded, much as the nuances of a language fade without practice.
Article
Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory.