Article

The Visual Display of Quantitative Information / E.R. Tufte.


Abstract

A text on the graphical display of quantitative data, using combinations of points, lines, coordinate systems, numbers, symbols, words, shading, and colour. The book is divided into two major sections. The first reviews past and present graphical practice, such as thematic mapping, time-series plots, and narrative and relational graphics; it also discusses graphical integrity, miscommunication, and distortion. The second covers a theory of data graphics, including considerations of physical tools such as ink, design factors, optical art and perceptual factors, colour usage, information content, aesthetics, text, and tables. The volume is illustrated with copious examples from past and present literature. (M. Blakemore)


Article
Full-text available
The goal of limiting global warming to well below 2°C, as set out in the Paris Agreement, calls for a strategic assessment of societal pathways and policy strategies. Besides policy makers, new powerful actors from the private sector, including finance, have stepped up to engage in forward-looking assessments of a Paris-compliant and climate-resilient future. Climate change scenarios have addressed this demand by providing scientific insights on the possible pathways ahead to limit warming in line with the Paris climate goal. Despite the increased interest, the potential of climate change scenarios has not been fully unleashed, mostly due to the lack of an intermediary service that provides guidance and access to climate change scenarios. This perspective presents the concept of a climate change scenario service, its components, and a prototypical implementation that overcomes this shortcoming, aiming to make scenarios accessible to a broader audience of societal actors and decision makers.
Chapter
Full-text available
This chapter moves the discussion of how to design an operation center down a level towards implementation. We present user-centered design (UCD) as a distinct design philosophy to replace user experience (UX) when designing systems like the Water Detection System (WDS). Just like any other component (e.g., electrical system, communications networks), the operator has safe operating conditions, expected error rates, and predictable performance, albeit with a more variable range for the associated metrics. However, analyzing the operator’s capabilities, like any other component in a large system, helps developers create reliable, effective systems that mitigate risks of system failure due to human error in integrated human–machine systems (e.g., air traffic control). With UCD as a design philosophy, we argue that situation awareness (SA) is an effective framework for developing successful UCD systems. SA is an established framework that describes operator performance via their ability to create and maintain a mental model of the information necessary to achieve their task. SA describes performance as a function of the operator’s ability to perceive useful information, comprehend its significance, and predict future system states. Alongside detailed explanations of UCD and SA, this chapter presents further guidance and examples demonstrating how to implement these concepts in real systems.
Chapter
This concluding chapter offers a look back at six thoughts that were key to framing our rethink of map literacy. It then provides a look sideways at the interactions between map literacy and other “literacies” and how the device of word problems can both demonstrate such interactions and be used to enhance the development of knowledge and skills. Finally, we look forward, with a closing thought, and a hopeful one, that we have managed to provide some navigational tools in this book for one of the more unknown of the many “seas of literacy.”
Chapter
In this chapter, a conceptual triangular-plot model is introduced to discuss how maps vary according to two parameters that we consider important to map literacy and to the distribution of map-reading knowledge and skills. The graphic is an upright equilateral triangle. The first parameter represents a map’s position on a continuum from purely locational information on the left to purely thematic information on the right. The second parameter, which represents the level (a judgment) of the map’s generalization and distortion, positions the map vertically in the triangle.
Chapter
En route to a comprehensive literature review of map literacy in the next chapter, we come at the subject with an arc through “quantitative literacy,” the term by which numeracy is more generally known in the United States. Our goal in this targeted review of numeracy and quantitative literacy is to build a directed concept chain – namely, literacy → numeracy → quantitative literacy → graph literacy → graphicacy → maps – the next step of which is map literacy.
Chapter
Full-text available
This introduction provides a brief survey of the evolution of data visualization from its eighteenth-century beginnings, when the Scottish engineer and political economist William Playfair created the first statistical graphs, to its present-day developments and use in period-related digital humanities projects. The author highlights the growing use of data visualization in major institutional projects, provides a literature review of representative works that employ data visualizations as a methodological tool, and outlines the contribution this collection makes to digital humanities and Enlightenment studies. Addressing essential period-related themes, from issues of canonicity, intellectual history, and book-trade practices to canonical authors and texts, gender roles, and public-sphere dynamics, this collection also makes a broader argument about the necessity of expanding the very notion of “Enlightenment” not only spatially but also conceptually, by revisiting its tenets in light of new data. By translating the new findings afforded by the digital into suggestive visualizations, we can unveil unforeseen patterns, trends, connections, and networks of influence that could potentially revise existing master narratives about the period and the ideological structures at the core of the Enlightenment.
Chapter
Data are the content of stories. Moreover, the spread of digital channels is producing a new form of storytelling. More than ever, visual elements serve as the hook and anchor of stories. Especially in a society flooded with stimuli, strong visualizations carry increasing weight. Journalism has led the way in showing how stories can be developed from data; these practices have since arrived in companies as well. Anyone who wants to turn data into stories needs a team with very diverse skills. Communications and marketing managers are therefore well advised to build networks and develop a shared data culture.
Chapter
Public transport is a challenging environment for both passenger and operator. The operation of large networks carries the expectation of timely delivery of passenger-oriented services, and with numerous stakeholders, such as government, private industry, and the general public, expectations around the primary concept of mobility within transport networks often differ. This chapter examines how data visualisation has been engaged to better understand an enduring mobility challenge of Australian metropolitan rail networks: the accessibility issue widely known as ‘the gap’. With increasing global requirements for disability compliance and anti-discrimination within public transport, data visualisation is applied within a broader design project. Engaged at the problem-definition stage, this data-driven approach provides investigators greater insight and visual transparency towards understanding the impact this issue has on mobility for all users of public transport.
Chapter
Enhanced reality for immersive simulation (e-REAL®) is the merging of real and virtual worlds: a mixed-reality environment for hybrid simulation in which physical and digital objects co-exist and interact in real time, in a real place rather than within a headset. The first part of this chapter discusses e-REAL: an advanced simulation within a multisensory scenario, based on challenging situations developed through visual storytelling techniques. The e-REAL immersive setting is fully interactive, with 2D and 3D visualizations, avatars, electronically writable surfaces and more: people can take notes, cluster key concepts, or fill in questionnaires directly on the projected surfaces. The second part of this chapter summarizes an experiential coursework focused on learning and improving teamwork and event management during simulated obstetrical cases. Effective team management during a crisis is a core element of expert practice; for this purpose, e-REAL reproduces a variety of emergent situations, enabling learners to interact with multimedia scenarios and to practice using a mnemonic called Name-Claim-Aim. Learners rapidly cycle between deliberate practice and direct feedback within a simulation scenario until mastery is achieved. Early findings show that interactive immersive visualization supports the neural processes related to learning and behavior change.
Article
Mining the distribution of features and sorting items by combined attributes are two common tasks in exploring and understanding multi-attribute (or multivariate) data. Up to now, few have pointed out the possibility of merging these two tasks into a unified exploration context, or the potential benefits of doing so. In this paper, we present SemanticAxis, a technique that achieves this goal by enabling analysts to build a semantic vector in two-dimensional space interactively. Essentially, the semantic vector is a linear combination of the original attributes. It can be used to represent and explain abstract concepts implied in local (outliers, clusters) or global (general pattern) features of the reduced space, as well as serving as a ranking metric for the concepts it defines. To validate the significance of combining these two tasks in multi-attribute data analysis, we design and implement a visual analysis system in which several interactive components cooperate with SemanticAxis seamlessly and expand its capacity to handle complex scenarios. We demonstrate the effectiveness of our system and the SemanticAxis technique in two practical cases.
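The core idea of such a semantic axis, a linear combination of attributes that both explains a direction in attribute space and serves as a ranking metric, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; the function name, the min-max normalization scheme, and the example weights are all assumptions made for the sketch.

```python
import numpy as np

def semantic_axis_scores(data, weights):
    """Rank items by their projection onto a semantic axis.

    data    : (n_items, n_attributes) array, one row per item
    weights : (n_attributes,) linear combination defining the axis
    """
    # Min-max normalize each attribute so no single attribute dominates.
    spans = np.ptp(data, axis=0)
    spans = np.where(spans == 0, 1, spans)       # guard constant columns
    normed = (data - data.min(axis=0)) / spans
    # Normalize the axis direction and project every item onto it.
    axis = np.asarray(weights, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return normed @ axis                          # one score per item

# Example: three items with attributes (price, quality);
# the axis penalizes price and rewards quality.
items = np.array([[10.0, 3.0], [20.0, 9.0], [15.0, 6.0]])
scores = semantic_axis_scores(items, [-0.3, 1.0])
ranking = np.argsort(scores)[::-1]               # best-scoring item first
```

The same score vector can drive both uses described in the abstract: explaining why items cluster along a direction, and sorting them by the concept the axis encodes.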
Conference Paper
Full-text available
As the internet is increasingly embedded in the everyday things in our homes, we notice a need for greater focus on the role care plays in those relationships—and therefore an opportunity to realize unseen potential in reimagining home Internet of Things (IoT). In this paper we report on our inquiry of home dwellers’ relationships to caring for their everyday things and homes (referred to as thingcare). Findings from our design ethnography reveal four thematic qualities of their relationships to thingcare: Care Spectacle, Care Liminality, Ontological Binding, and Care Condition. Using these themes as touchstones, we co-speculated to produce four speculative IoT concepts to explore what care as a design ethic might look like for IoT and reflect on nascent opportunities and challenges for domestic IoT design. We conclude by considering structures of power and privilege embedded within care practices that critically open new design imaginaries for IoT.
Article
Full-text available
Graphing is an important practice for scientists and in K-16 science curricula. Graphs can be constructed using an array of software packages as well as by hand, with pen and paper. However, we have an incomplete understanding of how students' graphing practices vary by graphing environment; differences could affect how best to teach and assess graphing. Here we explore the role of two graphing environments in students' graphing practices. We studied 43 undergraduate biology students' graphing practices using either pen-and-paper (PP) (n = 21 students) or the digital graphing tool GraphSmarts (GS) (n = 22 students). Participants' graphs and verbal justifications were analyzed to identify features such as the variables plotted, the number of graphs created, raw versus summarized data plotted, and graph types (e.g., scatter plot, line graph, or bar graph), as well as participants' reasoning for their graphing choices. Several aspects of participants' graphs were similar regardless of graphing environment, including plotting raw vs. summarized data, graph type, and overall graph quality, while GS participants were more likely to plot the most relevant variables. In GS, participants could easily make more graphs than in PP, and this may have helped some participants show latent features of their graphing practice. Students using PP tended to focus more on ease of constructing the graph than those using GS. This study illuminates how the characteristics of the graphing environment have implications for instruction and for the interpretation of assessments of student graphing practices.
Article
Full-text available
Business process modeling is an important activity for developing software systems—especially within digitization projects and when realizing digital business models. Specifying requirements and building executable workflows is often done by using BPMN 2.0 process models. Although there are several style guides available for BPMN, e.g., by Silver and Richard (BPMN method and style, vol 2, Cody-Cassidy Press, Aptos, 2009), there has not been much empirical research done into the consequences of the diagram layout. In particular, layouts that require scrolling have not been investigated yet. The aim of this research is to establish layout guidelines for business process modeling that help business process modelers to create more understandable business process diagrams. For establishing benefits and penalties of different layouts, a controlled eye tracking experiment was conducted, in which data of 21 professional software developers was used. Our results show that horizontal layouts are less demanding and that as many diagram elements as possible should be put on the initially visible screen area because such diagram elements are viewed more often and longer. Additionally, diagram elements related to the reader’s task are read more often than those not relevant to the task. BPMN modelers should favor a horizontal layout and use a more complex snake or multi-line layout whenever the diagrams are too large to fit on one page in order to support BPMN model comprehension.
Article
Given a tabular dataset that should be graphically represented, how could the current complex visualization pipeline be improved? Could we produce a more visually enriched final representation while minimizing user intervention? Most existing approaches lack the capacity to provide a simplified end-to-end solution and leave the intricate process of setting up the data connections to the user. Their results depend on user actions at every step of the visualization pipeline and fail to consider the structural properties of the data and the constantly rising volume of open and linked data. This work is motivated by the need for a flexible framework that improves user experience and interaction by simplifying the process and enhancing the result, capitalizing on the enrichment of the final visualization through the semantic analysis of linked data. We propose Lumina, a visualization framework, which: (a) builds on structural data analytics and semantic analysis principles, (b) increases the explainability and expressiveness of the visualization by leveraging open data and semantic enrichment, (c) minimizes user interventions at every step of the visualization pipeline, and (d) fulfills the growing need for open-source, modular, and self-hosted solutions. Using publicly available real-world datasets, we validate the adaptability of Lumina and demonstrate the effectiveness and practicality of our method in comparison to other open-source solutions.
Chapter
The paper discusses the problem of using the digital educational footprint (DEF), formed during a student's interaction with intelligent automated educational systems (IAES), as an object of copyright. The DEF factors and the data obtained during the educational process are highlighted. It is proposed to accumulate and visualize them in the form of a cognitive map of knowledge diagnosis (CMKD) by performing sequential statistical, metric, semantic, and logical concentration of knowledge. The use of CMKD in IAES decision-making mechanisms makes it possible not only to increase the degree of individualization of the educational impact on the student but also to model the process of reflexive control (in Lefebvre's sense). The possibility of displaying various aspects of CMKD and assembling the maps into individual and group atlases is indicated.
Chapter
Philosophers have long debated the relative priority of thought and language, both at the deepest level, in asking what makes us distinctively human, and more superficially, in explaining why we find it so natural to communicate with words. The “linguistic turn” in analytic philosophy accorded pride of place to language in the order of investigation, but only because it treated language as a window onto thought, which it took to be fundamental in the order of explanation. The Chomskian linguistic program tips the balance further toward language, by construing the language faculty as an independent, distinctively human biological mechanism. In Ignorance of Language, Devitt attempts to swing the pendulum back toward the other extreme, by proposing that thought itself is fundamentally sentential, and that there is little or nothing for language to do beyond reflecting the structure and content of thought. I argue that both thought and language involve a greater diversity of function and form than either the Chomskian model or Devitt’s antithesis acknowledge. Both thought and language are better seen as complex, mutually supporting suites of interacting abilities.
Article
Background: Complex electronic medical records (EMRs) presenting large amounts of data create risks of cognitive overload. We are designing a Learning EMR (LEMR) system that utilizes models of intensive care unit (ICU) physicians' data access patterns to identify and then highlight the most relevant data for each patient. Objectives: We used insights from the literature and feedback from potential users to inform the design of an EMR display capable of highlighting relevant information. Methods: We used a review of relevant literature to guide the design of preliminary paper prototypes of the LEMR user interface. We observed five ICU physicians using their current EMR systems in preparation for morning rounds. Participants were interviewed and asked to explain their interactions and challenges with the EMR systems. Findings informed the revision of our prototypes. Finally, we conducted a focus group with five ICU physicians to elicit feedback on our designs and to generate ideas for our final prototypes using participatory design methods. Results: Participating physicians expressed support for the LEMR system. Identified design requirements included the display of data essential for every patient together with diagnosis-specific data and new or significantly changed information. Respondents expressed preferences for fishbone diagrams to organize labs, mouseovers to access additional details, and unobtrusive alerts minimizing color-coding. To address concerns about possible physician overreliance on highlighting, participants suggested that non-highlighted data should remain accessible. Study findings led to revised prototypes, which will inform the development of a functional user interface. Conclusion: In the feedback we received, physicians supported pursuing the concept of a LEMR system. By introducing novel ways to support physicians' cognitive abilities, such a system has the potential to enhance physician EMR use and lead to better patient outcomes. Future plans include laboratory studies of both the utility of the proposed designs for decision-making and the possible impact of automation bias.
Article
A colormap is a visualization tool that maps data to colors: data patterns can be revealed by the color distribution, and data details can be explored through the mapping. Most colormaps use a linear mapping between data and colors, so when the data are unevenly distributed, most samples are encoded by a very small range of colors, and the patterns hidden in that large share of the data cannot be explored. If every data sample is equally important, each should map to its own distinct color in the colormap. Inspired by force-directed node layout in network visualization, we propose a novel colormap optimization algorithm with data equality, called the spring-model-based colormap. It formulates the proposed proportionality rule and data-ink rule as a dynamically balanced spring system. The proportionality rule makes the perceived color difference proportional to the difference in data values, for better identification of data values. The data-ink rule lets the spring system spread the colors associated with data samples as far apart as possible in the color space, to better reveal the data distribution. To accelerate colormap generation, a fast solution to the optimization algorithm is also proposed. The effectiveness of our method is evaluated by eye-tracking experiments: fixations on both our colormap and the encoded visualization are more dispersed, indicating that our method better reveals the data distribution and better supports identification of data values.
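The spring-system intuition, where positions on the color axis are pulled toward spacings proportional to data differences while also staying spread apart, can be illustrated with a toy one-dimensional relaxation. This is an assumption-laden sketch, not the paper's algorithm: the function name, the `alpha` blending between the two rules, the pinned endpoints, and the convergence parameters are all invented for the example.

```python
def spring_colormap_positions(values, alpha=0.5, iters=500, lr=0.05):
    """Place sorted data values on a [0, 1] colormap axis via spring relaxation.

    alpha blends the two rules: 1.0 = spacing proportional to value gaps
    (proportionality rule), 0.0 = equal spacing so every sample keeps a
    distinct color (data-ink / spread rule).
    """
    vals = sorted(values)
    n = len(vals)
    span = (vals[-1] - vals[0]) or 1.0
    # Rest length of each neighbor spring: mix of proportional gap and
    # uniform spacing, then normalized so rest lengths sum to 1.
    rest = [alpha * (vals[i + 1] - vals[i]) / span + (1 - alpha) / (n - 1)
            for i in range(n - 1)]
    total = sum(rest)
    rest = [r / total for r in rest]
    pos = [i / (n - 1) for i in range(n)]     # start from uniform positions
    for _ in range(iters):
        for i in range(n - 1):                # relax each spring in turn
            stretch = (pos[i + 1] - pos[i]) - rest[i]
            pos[i] += lr * stretch / 2
            pos[i + 1] -= lr * stretch / 2
        pos[0], pos[-1] = 0.0, 1.0            # pin the colormap endpoints
    return pos
```

With `alpha=1.0` and values `[0, 1, 2, 10]`, the relaxation converges to positions near `[0, 0.1, 0.2, 1.0]`: color distance tracks value distance, while a smaller `alpha` would pull the first three samples apart to give each a more distinguishable color.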
Chapter
Information forms the foundation of decisions, and never before has access to data and its visualization been as easy to realize as in this era of advancing digitization. Information visualization is more than just creating supposedly colorful charts; it can be the decisive factor in whether information is understood. Important content cannot fulfill its purpose if its presentation prevents it from being understood; conversely, even a successful visualization cannot compensate for an unmet information need. Only human cognitive abilities make it possible to read valuable information, meaning, and relationships out of data. This contribution underscores the relevance of powerful information visualization, points out the dangers of unconscious visualization effects, and, with the © CLEAR(I) guideline developed here, provides a tool for creating effective dashboards. The recommendations from the literature on effective information visualization were examined from a user perspective in an empirical study at the container shipping company Hapag-Lloyd. The guideline is presented and supplemented with best-practice examples from its use at Hapag-Lloyd.
Article
Full-text available
Background: Nearly half of US adults with diagnosed hypertension have uncontrolled blood pressure. Clinical inertia may contribute, including patient-physician uncertainty about how variability in blood pressures impacts overall control. Better information display may support clinician-patient hypertension decision making through reduced cognitive load and improved situational awareness. Methods: A multidisciplinary team employed iterative user-centered design to create a blood pressure visualization EHR prototype that included patient-generated blood pressure data. An attitude and behavior survey and 10 focus groups with patients (N = 16) and physicians (N = 24) guided the iterative design and confirmation phases. Thematic analysis of the qualitative data yielded insights into patient and physician needs for hypertension management. Results: Most patients reported measuring blood pressure at home, but only half share the data with their physicians. When receiving home blood pressure data, 88% of physicians indicated entering gestalt averages as text into clinical notes. Qualitative findings suggest that a data visualization including home blood pressures brought this valued data into physician workflow and decision-making processes. Data visualization helps both patients and physicians gain a fuller understanding of the blood pressure 'story' and ultimately promotes the activated, engaged patient and the prepared, proactive physician central to the Chronic Care Model. Both patients and physicians expressed concerns about the workflow for entering and using home blood pressure data in clinical care. Conclusions: Our user-centered design process with physicians and patients produced a well-received blood pressure visualization prototype that includes home blood pressures and addresses patient-physician information needs.
Next steps include evaluating a recent EHR visualization implementation, designing annotation functions aligned with users' needs, and addressing additional stakeholders' needs (nurses, care managers, caregivers). This significant innovation has potential to improve quality of care for hypertension through better patient-physician understanding of control and goals. It also has the potential to enable remote monitoring of patient blood pressure, a newly reimbursed activity, and is a strong addition to telehealth efforts.
Chapter
Full-text available
This chapter focuses on "images" in higher education as one of the two key modal ingredients of an inquiry graphics (IG) sign (as an image-concept entity). It considers a selection of relevant perspectives on images in and for learning, from more expository to more creative approaches to images in education, and higher education in particular, which an inquiry graphics practice incorporates and expands on. The chapter shows how IG and IG learning designs relate to or incorporate many approaches to image-inspired learning, growth, and competences, such as visual literacy, multiple representations, critical graphicacy, multimodality, photo elicitation, and photovoice. It establishes a relationality between these approaches and IG signs and practices.
Chapter
In visualization, there are many different wisdoms and opinions about why visualization works, what makes a good visualization, and how to design and evaluate visualization. Collectively, these wisdoms and opinions have shaped a landscape of schools of thought in the field of visualization. In this chapter, we examine various schools of thought in visualization, juxtaposing them with schools of thought in computer science and psychology. We deliberate on the possibility that some schools of thought in computer science and psychology may have influenced those in visualization. Based on our observation of the development of schools of thought in the discipline of psychology, we believe that it is empirical evidence that informs the development of theories, which are often embedded in particular schools of thought. Meanwhile, empirical studies have a crucial role in visualization in informing and validating postulated theories.
Article
Full-text available
Conventional electronic health record information displays are not optimized for efficient information processing. Graphical displays that integrate patient information can improve information processing, especially in data-rich environments such as critical care. We propose an adaptable and reusable approach to patient information display with modular graphical components (widgets). We had two study objectives: first, to reduce numerous widget prototype alternatives to preferred designs; second, to derive widget design feature recommendations. Using iterative human-centered design methods, we interviewed experts to hone the design features of widgets displaying frequently measured data elements, e.g., heart rate, for acute care patient monitoring and real-time clinical decision-making. Participant responses to design queries were coded to calculate feature-set agreement, average prototype score, and prototype agreement. Two iterative interview cycles covering 64 design queries and 86 prototypes were needed to reach consensus on six feature sets. Interviewees agreed that line graphs with a smoothed or averaged trendline, a 24-hour timeframe, and gradient coloring for urgency were useful and informative features. Moreover, users agreed that widgets should include three key functions: (1) adjustable reference ranges, (2) expandable timeframes, and (3) access to details on demand. Participants stated graphical widgets would be used to identify correlating patterns and to compare abnormal measures across related data elements at a specific time. Combining theoretical principles and validated design methods was an effective and reproducible approach to designing widgets for healthcare displays. The findings suggest our widget design features and recommendations match critical care clinicians' expectations for graphical information display of continuous and frequently updated patient data.
Chapter
This chapter focuses on the right side of the triangular plot. Without map scale as an organizing framework, this chapter, instead, uses a broad selection of published thematic maps that address public interest or research questions. This chapter considers both the knowledge and skills involved in the map reading and interpretation of these various maps, along with where they would position in the triangular plot.
Article
Force-directed algorithms are a class of methods widely used to solve problems modeled via physical laws and resolved by particle simulation. Visualization of general graphs is one research field that uses such algorithms, and it provides vast knowledge about their benefits and challenges. Taking advantage of the knowledge provided by graph visualization theory, some authors have adopted force-directed algorithms as a tool for the community detection problem. However, research in this direction seems to be neglected by the complex-network literature. This paper explores the use of force-directed algorithms as a tool to solve the community detection problem. We revisit the works proposed in this area and point out the similarities with, but mainly the particularities of, this problem compared with drawing a general graph. This literature review aims to organize knowledge about the subject and highlight the state of the art. To conduct our review, we followed a research protocol inspired by systematic review guidelines. Our review exposes that many works have chosen models that are not ideal for dealing with the community detection problem. Furthermore, we also highlight the most appropriate force-directed models for community detection.
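As a generic illustration of the class of algorithms the review covers (not of any specific method from the surveyed papers), a minimal Fruchterman-Reingold-style layout might look like the following sketch; the parameter values, cooling schedule, and function name are assumptions chosen for the example. The community detection intuition is that attraction along edges pulls densely connected nodes together while global repulsion pushes the rest apart.

```python
import math
import random

def force_directed_layout(nodes, edges, iters=200, width=1.0, seed=0):
    """Toy Fruchterman-Reingold-style layout: returns {node: [x, y]}."""
    rng = random.Random(seed)
    pos = {v: [rng.random() * width, rng.random() * width] for v in nodes}
    k = width / math.sqrt(len(nodes))       # ideal pairwise distance
    t = width / 10                          # temperature caps displacement
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for v in nodes:                     # repulsion between all pairs
            for u in nodes:
                if u == v:
                    continue
                dx = pos[v][0] - pos[u][0]
                dy = pos[v][1] - pos[u][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d               # repulsive force magnitude
                disp[v][0] += dx / d * f
                disp[v][1] += dy / d * f
        for u, v in edges:                  # attraction along edges
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                   # attractive force magnitude
            disp[v][0] -= dx / d * f
            disp[v][1] -= dy / d * f
            disp[u][0] += dx / d * f
            disp[u][1] += dy / d * f
        for v in nodes:                     # move, capped by temperature
            d = math.hypot(*disp[v]) or 1e-9
            step = min(d, t)
            pos[v][0] += disp[v][0] / d * step
            pos[v][1] += disp[v][1] / d * step
        t *= 0.95                           # cool down
    return pos
```

On a three-node path 0-1-2, for example, the layout settles with the middle node between the endpoints, so the non-adjacent pair ends up farthest apart, which is the spatial separation that community detection approaches in this family exploit.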
Chapter
In this chapter I explore the audience-targeting function of metadiscourse in recent online genres for scientific dissemination and outreach, with a particular focus on the graphical abstract, increasingly demanded by high-impact specialised journals. I examine the changes undergone by digital metadiscourse and the challenges it poses to the transmission of science on the Internet. For this purpose, I scrutinise the exemplars provided by Elsevier’s JCR journals, together with other samples created by scholars and criticised by science bloggers, and a few instances produced by novice academic writers (engineering students in their senior year). My analytical framework is eclectic and draws on Critical Genre, Multimodal and Visual Analysis, Social Semiotics, Narrative Inquiry, Positioning Theory, the Conceptual Theory of Metaphor and Hyland’s metadiscourse model. Findings raise awareness of graphical abstracts as complex macro-metadiscourse items themselves that contain other interrelated—and even overlapping—metadiscourse categories in intersemiosis, of visual metadiscourse items as ‘narrative transformers’, and of ‘stylisation’ as a double-edged phenomenon liable to enhance and hinder scientific meaning, especially when metaphorical scenarios are used.
Chapter
As knowledge can be condensed in different non-verbal ways of representation, the integration of graphic and visual representations and design in research output helps to expand insight and understanding. Layers of visual charts, maps and diagrams aim not only to synergize the complexity of a topic with visual simplicity, but also to guide a personal search for, and insight into, knowledge. However, the move from research over graphic representation to interpretation and understanding is at once scientific, epistemic, artistic and, last but not least, ethical. This article considers these four aspects, from the side of both the researcher and the receiver/interpreter, through three different perspectives. The first perspective considers the importance of visual representations in science and its recent developments. As a second perspective, we analyse the discussion concerning the use of diagrams in the philosophy of mathematics. The third is an artistic perspective on diagrams, where the visual (sometimes) tells us more than the verbal.
Article
Full-text available
Data visualization for alternative narratives is a powerful adversarial tool that seeks to inquire about the existing conditions of conflicts. Nowadays, data visualization is widely used in activist communication to offer data-driven counternarratives to those with dominant power. However, a study of 64 cases shows that most visualizations are far from following the open-source ethos that data activism advocates. If visualizations for alternative narratives do not open up their processes and communicate how the meaning of data is shaped and translated visually, they risk turning into black-box devices that crystallize biased representations of conflict rather than investigating them. This paper presents a data design framework for visualizations used in alternative narratives. It is presented as a methodological tool that encourages actionable data practices and promotes a more critical and reflective data culture. The framework is structured around two ways of approaching the process of working with data: from the parts of the process and from the process as a whole. The first approach presents four lenses intended to materialize aspects of the process of working with and making sense of data: Open/close, Composition, Zoom, and Sanitization. The second approach proposes a self-hacking operation of disclosing the production process of the visualizations. This paper argues that visualizations for alternative narratives ought to be open artifacts that promote the democratization of re-interpretation and critical data practices by disclosing the design decisions made on the dataset as well as on its visual and interactive representation.
Chapter
Heuristic evaluation has been an important part of data visualization. Many heuristic rules and guidelines for evaluating data visualization have been proposed and reviewed. However, applying heuristic evaluation in practice is not trivial. First, the heuristic rules are discussed in different publications across different disciplines; there is no central repository of heuristic rules for data visualization, and no consistent guidelines on how to apply them. Second, it is difficult to find multiple experts who are knowledgeable about the heuristic rules, their pitfalls, and counterpoints. To address these issues, we present a computer-assisted heuristic evaluation method for data visualization. Based on this method, we developed a Python-based tool for evaluating plots created by the visualization tool Plotly. Recent advances in declarative data visualization libraries have made it feasible to create such a tool. By providing advice, critiques, and recommendations, this tool serves as a knowledgeable virtual assistant to help data visualization developers evaluate their visualizations as they code.
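A single heuristic rule of the kind such a tool might encode can be sketched as follows (a hypothetical rule and threshold, not the paper's actual implementation). It inspects a Plotly-style figure dictionary, the shape `fig.to_dict()` produces, so no Plotly import is needed:

```python
def check_pie_slices(fig, max_slices=6):
    """Hypothetical heuristic: flag pie traces with too many slices,
    since small angular differences are hard to compare visually."""
    issues = []
    for i, trace in enumerate(fig.get("data", [])):
        values = trace.get("values", [])
        if trace.get("type") == "pie" and len(values) > max_slices:
            issues.append(
                f"trace {i}: pie has {len(values)} slices; "
                f"consider a bar chart or grouping small categories"
            )
    return issues

# a figure dict in the shape Plotly produces via fig.to_dict()
fig = {"data": [{"type": "pie", "values": [5, 3, 2, 2, 1, 1, 1, 1]}]}
issues = check_pie_slices(fig)
```

A real assistant would bundle many such rules, each carrying its rationale and counterpoints, and run them against the figure as the developer codes.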
Chapter
Technology can be designed to strengthen participatory culture and give a voice to people through User-Generated Content (UGC). Such practices may also influence the way we engage with places and communities. In this regard, we propose the conceptualization of an online platform to promote participation and preserve audiovisual records of shared experiences. Memories attached to places and cultural events, such as concerts, traditional celebrations and visits to landmarks and exhibitions, are frequently captured in multimedia records, which are often shared online by those who experienced them. The aggregation and correlation of these audiovisual resources in a participatory platform may enhance these experiences through forms of presentation based on multiple perspectives, making them collective. To gather insights and provide a proof of concept, the method of exploratory interviews followed by qualitative content analysis was adopted. Hence, the conceptualization of a digital platform that allows the creation of a living collaborative archive, one that preserves the uniqueness of each resource while also allowing an overview and combination of other participants' contributions, was presented to experts within the areas of archives, museology, heritage, ethnography, community projects, cultural events, design, participatory media and digital platforms. This paper presents a segment of the interviews' results concerning relevant use contexts, along with recommendations and strategies for collection, visualization and participation to guide the development of prototypes to be tested with target users, within a PhD research project.
Chapter
This chapter will continue the theme of the previous one, but with more emphasis on how the audience and the medium chosen for communicating results can affect the statistical content of what is presented and how it is described.
Chapter
This chapter moves from having seen our raw data and having done some preparatory manipulation of that data, to the analysis of the data as specified by our statistical analysis plan. This single chapter on ‘analysis’ is not going to attempt to cover, or even introduce, all the types of analytical method within each study design setting in which those methods might be encountered. Instead, the chapter builds on and offers thoughts about those methods which might be commonly used in research studies in medicine and healthcare. The scope is far from being comprehensive but will hopefully include several analytical frameworks with which the reader is either already familiar or is seeking to become familiar.
Chapter
Understanding the complex relationships between a range of disparate types of data including (but not limited to) clinical signs and symptoms, socio-economic statuses, and environmental exposures is an ongoing struggle for researchers, administrators, clinicians, public health experts, and patients who seek to use data to understand mental health. Information visualization techniques combining rich displays of data with highly responsive user interactions allow for dynamic exploration and interpretation of data to gain otherwise unavailable insights into these challenging datasets. To encourage broader adoption of visualization techniques in mental health, we draw upon research conducted over the past thirty years to introduce the reader to the field of interactive visualizations. We introduce theoretical models underlying information visualization and key considerations in the design of visualizations, including understanding user needs, managing data, effectively displaying information, and selecting appropriate approaches for interacting with the data. We introduce various types of mental health data, including survey data, administrative data, environmental data, and mobile health data, with a focus on data integration and the use of predictive models. We introduce currently available open-source and commercial tools for visualization. Finally, we discuss two outstanding challenges in the field: uncertainty visualization and evaluation of visualization.
Chapter
The world is seeing rapid and dynamic technological innovations in the form of applications, tools, systems and software that can help a nation's population, organisations and government make their administration and management more effective and efficient and, most importantly, more affordable. Fusion technology, a hybrid concept practiced in Japan and Germany, involves the integration of two or more technologies to develop products that can revolutionise the market. Thus, this paper highlights a fusion technology innovation (integrating vision, motion and analytical technologies) in the form of a Box Robot application, or programmable PETS Robots, called i-COMEL, used to share STEM data in a class activity on a lesson about the Solar System. The activity was conducted to help primary school students develop critical and scientific thinking through the use of Computational Thinking (CT) across STEM. In this activity, students share data with other groups of students to prepare them for open data readiness. The PETS Robots use both vision and motion technologies to collect data based on the questions set; these data were uploaded to the ThinkSpeak server on the Internet, where they were visualised and displayed for all students to share during the classroom presentation. Learning to share data among the very young generation is important as Malaysia reinvents itself and moves towards a smart, digital, data-driven society, Malaysia 5.0. Findings of the proof of concept (POC) conducted on i-COMEL showed that fusion technology, in the form of PETS Robots integrated with Computational Thinking (CT) across STEM, was not only a fun method for primary school students to learn STEM subjects and acquire critical and scientific skills, but also an effective approach to practising open data readiness.
Article
Full-text available
Data sharing is required for research collaborations, but effective data transfer performance continues to be difficult to achieve. The NetSage Measurement and Analysis Framework can assist in understanding research data movement. It collects a broad set of monitoring data and builds performance Dashboards to visualize the data. Each Dashboard is specifically designed to address a well-defined analysis need of the stakeholders. This paper describes the design methodology, the resulting architecture, the development approach and lessons learned, and a set of discoveries that NetSage Dashboards made possible.
Article
Full-text available
Data visualization blends art and science to convey stories from data via graphical representations. Considering different problems, applications, requirements, and design goals, it is challenging to combine these two components at their full force. While the art component involves creating visually appealing and easily interpreted graphics for users, the science component requires accurate representations of a large amount of input data. Without the science component, visualization cannot serve its role of creating correct representations of the actual data, leading to wrong perception, interpretation, and decisions. It might be even worse if incorrect visual representations were intentionally produced to deceive viewers. To address common pitfalls in graphical representations, this paper focuses on identifying and understanding the root causes of misinformation in graphical representations. We reviewed the misleading data visualization examples in scientific publications collected from indexing databases and then projected them onto the fundamental units of visual communication such as color, shape, size, and spatial orientation. Moreover, a text mining technique was applied to extract practical insights from common visualization pitfalls. Cochran's Q test and McNemar's test were conducted to examine whether there is any difference in the proportions of common errors among color, shape, size, and spatial orientation. The findings showed that the pie chart is the most misused graphical representation, and size is the most critical issue. It was also observed that there were statistically significant differences in the proportion of errors among color, shape, size, and spatial orientation.
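McNemar's test, mentioned in the abstract, compares paired binary outcomes, and only the discordant pairs enter the statistic. A stdlib-only sketch of the exact (binomial) version, with made-up counts rather than the study's data:

```python
import math

def mcnemar_exact(b, c):
    """Exact McNemar test: b and c are the discordant-pair counts
    (e.g. charts that misuse color but not size, and vice versa).
    Under H0 the discordant pairs split as Binomial(b + c, 0.5);
    returns the two-sided exact p-value."""
    n = b + c
    k = min(b, c)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# balanced discordance: no evidence of a difference in error proportions
p_equal = mcnemar_exact(5, 5)
# one-sided discordance: strong evidence of a difference
p_skewed = mcnemar_exact(0, 10)
```

The chi-squared approximation (as in `statsmodels`) is common for large counts; the exact form above is preferable when discordant pairs are few.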
Article
Full-text available
The concept of black-and-white visual cryptography with two truly random shares, previously applied to color images, was improved by mixing the contents of the segments of each coding image and by randomly changing a specified number of black pixels into color ones. This was done in such a way that the changes to the contents of the decoded image were as small as possible. These modifications made the numbers of color pixels in the shares close to balanced, which potentially made it possible for the shares to be truly random. True randomness was understood to mean that the data pass suitably designed randomness tests. The randomness of the shares was tested with the NIST randomness tests; part of the tests passed successfully, while some failed. The target of coding a color image in truly random shares was approached, but not yet reached. In visual cryptography, decoding with the unaided human eye is of primary importance, but in addition, simple numerical processing of the decoded image makes it possible to greatly improve the quality of the reconstructed image, so that it becomes close to that of the dithered original image.
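The underlying (2,2) black-and-white scheme the paper builds on can be sketched in a few lines (an illustrative toy, not the paper's color construction): each secret pixel expands to two subpixels per share, white pixels get identical random patterns in both shares and black pixels complementary ones, so stacking the transparencies (pixelwise OR) reveals the secret while each share alone looks random.

```python
import random

def make_shares(image, seed=0):
    """(2,2) visual cryptography sketch for a binary image
    (0 = white, 1 = black)."""
    rng = random.Random(seed)
    share1, share2 = [], []
    for row in image:
        r1, r2 = [], []
        for px in row:
            pat = rng.choice([(0, 1), (1, 0)])  # random subpixel pattern
            r1 += pat
            # white: same pattern in both shares; black: complementary
            r2 += pat if px == 0 else (1 - pat[0], 1 - pat[1])
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(s1, s2):
    # overlaying transparencies = pixelwise OR of dark subpixels
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[0, 1],
          [1, 0]]
s1, s2 = make_shares(secret)
overlay = stack(s1, s2)
```

In the overlay, black secret pixels become fully dark subpixel pairs and white ones half-dark pairs, which the eye reads as contrast; the paper's contribution concerns pushing the color generalization of this idea toward shares that pass the NIST randomness tests.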
Chapter
The scatterplot matrix is a standard method for multivariate data visualization; nonetheless, its use for decision support in a corporate environment is scarce. Among other things, longstanding criticism points to the lack of empirical testing of optimal design specifications as well as of areas of application from a business-related perspective. Thus, on the basis of an innovative approach to assess a visualization's fitness for efficient and effective decision-making given a user's situational cognitive load, this study investigates the usability of a scatterplot matrix for typical tasks associated with multidimensional datasets (correlation and distribution assessment). A laboratory experiment recording eye-tracking data investigates the design of the matrix and its influence on the decision-maker's ability to process the presented information. In particular, the information content presented in the diagonal as well as the size of the matrix are tested and linked to the user's individual processing capabilities. Results show that the design of the scatterplot as well as the size of the matrix greatly influenced decision-making.
Chapter
The present chapter addresses the fundamental roles played by communication and mutual awareness in human/robot interaction and co-operation at the workplace. The chapter reviews how traditional industrial robots in the manufacturing sector have been used for repetitive and strenuous tasks for which they were segregated due to their hazardous size and strength, and so are still perceived as threatening by operators in manufacturing. This means that successful introduction of new collaborative systems where robotic technology will be working alongside and directly with human operators depends on human acceptance and engagement. The chapter discusses the important reassuring role played by communication in human–robot interaction and how involving users in the design process increases not only the efficiency of communication, but provides a reassuring effect.
Chapter
Full-text available
This chapter describes the various approaches to analyse, quantify and evaluate uncertainty along the phases of the product life cycle. It is based on the previous chapters that introduce a consistent classification of uncertainty and a holistic approach to master the uncertainty of technical systems in mechanical engineering. Here, the following topics are presented: the identification of uncertainty by modelling technical processes, the detection and handling of data-induced conflicts, the analysis, quantification and evaluation of model uncertainty as well as the representation and visualisation of uncertainty. The different approaches are discussed and demonstrated on exemplary technical systems.
Chapter
This paper presents a set of design principles for the development of dashboards for the area of business management. The objective of this study is to guide and help designers in their dashboard design process in order to obtain efficient results. The adopted methodology was a literature review of design principles in dashboard creation, namely in the areas of interface design; data visualization; usability; UX and UI design; interaction design; and visual identity. The study demonstrates the importance of design, namely in establishing a convergent relationship between the graphic and functional components, in order to guarantee adequate and efficient interface solutions for the user.
Article
Full-text available
With the growing popularity of visualizations in various fields, visualization comprehension has gained considerable attention. In this work, we focus on the effect of data size and pattern salience on comprehension of the scatterplot, a popular visualization type. We began with a preliminary study in which we interviewed 50 people about comprehension difficulties with 90 different visualizations. The results reveal that data size is one of the top three factors affecting visualization comprehension, and that the effect of data size probably depends on the pattern salience within the data. Therefore, we carried out our experiment on the effect of data size and data-related pattern salience on three intermediate-level comprehension tasks, namely finding anomalies, judging correlation, and identifying clusters. The tasks were conducted on the scatterplot due to its familiarity to users and ability to support diverse tasks. Through the experiment, we found a significant interaction effect of data size and pattern salience on the comprehension of trends in scatterplots. In specific conditions of pattern salience, data size impacts the judgment of anomalies and cluster centers. We discussed the findings of our experiment and further summarized the factors in visualization comprehension.
Article
Full-text available
Adolescents can perceive parenting quite differently than parents themselves, and these discrepancies may relate to adolescent well-being. The current study aimed to explore how adolescents and parents perceive daily parental warmth and criticism and whether these perceptions and discrepancies relate to adolescents' daily positive and negative affect. The sample consisted of 80 adolescents (M age = 15.9; 63.8% girls) and 151 parents (M age = 49.4; 52.3% women) who completed four ecological momentary assessments per day for 14 consecutive days. In addition to adolescents' perception, not parents' perception by itself, but the extent to which this perception differed from or overlapped with adolescents' perception was related to adolescent affect. These findings highlight the importance of including combined adolescents' and parents' perspectives when studying dynamic parenting processes.
Article
Full-text available
In this paper we describe an approach for visualizing the textual information archived in the DBLP and the static and dynamic relations contained in it. Those relations exist between authors and co-authors, between keywords, but also between authors and keywords. Visually representing them provides a way to quickly get an overview of emerging or disappearing topics as well as researchers and researcher groups. To reach our goal we apply node-link diagrams, word clouds, heatmaps, and area plots to the preprocessed and transformed DBLP data. Moreover, we use t-SNE to compute groups of authors with similar keywords, providing insights about similar research topics. All visualizations are equipped with interaction techniques and are built using the functionality of the Bokeh library in Python, which enables users to run the eDBLP in a web browser and to explore the dataset in an interactive and intuitive way. Finally, we discuss limitations and scalability issues of our approach.
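Before a projection such as t-SNE can group authors by topic, each author is typically reduced to a keyword vector and compared pairwise; a stdlib sketch of that preprocessing step using cosine similarity (the author data here is invented for illustration, not drawn from DBLP):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# hypothetical author -> keyword counts, as might be mined from titles
authors = {
    "A": Counter(["graph", "layout", "graph"]),
    "B": Counter(["graph", "layout"]),
    "C": Counter(["nlp", "corpus"]),
}
sim_ab = cosine(authors["A"], authors["B"])  # shared vocabulary
sim_ac = cosine(authors["A"], authors["C"])  # disjoint vocabulary
```

t-SNE then embeds this pairwise-similarity structure in 2D, so that nearby points correspond to authors with similar vocabularies.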
Article
The established pillars of computational spectroscopy are theory and computer based simulations. Recently, artificial intelligence and virtual reality are becoming the third and fourth pillars of an integrated strategy for the investigation of complex phenomena. The main goal of the present contribution is the description of some new perspectives for computational spectroscopy, in the framework of a strategy in which computational methodologies at the state of the art, high-performance computing, artificial intelligence and virtual reality tools are integrated with the aim of improving research throughput and achieving goals otherwise not possible. Some of the key tools (e.g., continuous molecular perception model and virtual multifrequency spectrometer) and theoretical developments (e.g., non-periodic boundaries, joint variational-perturbative models) are shortly sketched and their application illustrated by means of representative case studies taken from recent work by the authors. Some of the results presented are already well beyond the state of the art in the field of computational spectroscopy, thereby also providing a proof of concept for other research fields.
Article
Full-text available
Background Transparent and accessible reporting of COVID-19 data is critical for public health efforts. Each Indian state has its own mechanism for reporting COVID-19 data, and the quality of their reporting has not been systematically evaluated. We present a comprehensive assessment of the quality of COVID-19 data reporting done by the Indian state governments between 19 May and 1 June, 2020. Methods We designed a semi-quantitative framework with 45 indicators to assess the quality of COVID-19 data reporting. The framework captures four key aspects of public health data reporting – availability, accessibility, granularity, and privacy. We used this framework to calculate a COVID-19 Data Reporting Score (CDRS, ranging from 0–1) for each state. Results Our results indicate a large disparity in the quality of COVID-19 data reporting across India. CDRS varies from 0.61 (good) in Karnataka to 0.0 (poor) in Bihar and Uttar Pradesh, with a median value of 0.26. Ten states do not report data stratified by age, gender, comorbidities or districts. Only ten states provide trend graphics for COVID-19 data. In addition, we identify that Punjab and Chandigarh compromised the privacy of individuals under quarantine by publicly releasing their personally identifiable information. The CDRS is positively associated with the state's sustainable development index for good health and well-being (Pearson correlation: r = 0.630, p = 0.0003). Conclusions Our assessment informs the public health efforts in India and serves as a guideline for pandemic data reporting. The disparity in CDRS highlights three important findings at the national, state, and individual level. At the national level, it shows the lack of a unified framework for reporting COVID-19 data in India, and highlights the need for a central agency to monitor or audit the quality of data reporting done by the states.
Without a unified framework, it is difficult to aggregate the data from different states, gain insights, and coordinate an effective nationwide response to the pandemic. Moreover, it reflects the inadequacy in coordination or sharing of resources among the states. The disparate reporting score also reflects inequality in individual access to public health information and privacy protection based on the state of residence.
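The reported association (r = 0.630) is a plain Pearson correlation between each state's CDRS and its development index; the computation itself is a single formula (the numbers below are invented for illustration, not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# toy CDRS-like scores vs. a toy index: positively related, not perfectly
cdrs = [0.0, 0.26, 0.40, 0.61]
index = [50, 62, 60, 71]
r = pearson(cdrs, index)
```

Python 3.10+ also provides `statistics.correlation` for the same quantity; the p-value reported in the abstract additionally requires a significance test on r.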
Chapter
This chapter presents the practices for illustrating artifacts, plans and sections in archaeology, with emphasis on illustrations’ role in interpretation and the importance of a graphical information language for depicting archaeological results. After a brief review of equipment and supplies, it summarizes procedures and standards for illustrating lithics, pottery and small finds and introduces the topic of 3D scanning of artifacts and digital illustration. It also covers representation of uncertainty in illustrations, layout of artifact plates for publication, and the format of maps, plans, and architectural elevations. It concludes with a section on the publication process.