Article

The Visual Display of Quantitative Information

Author: Edward R. Tufte
... Bertini et al. [14] described a systematic approach to using quality metrics for evaluating high-dimensional data visualization, focusing on scatter plots and parallel coordinates plots. A variety of quality metrics have been proposed to measure many different attributes, such as abstraction quality [15][16][17], quality of scatter plots [18][19][20][21][22][23][24], quality of parallel coordinates plots [25], clutter [26][27][28], aesthetics [29], visual saliency [30], and color mapping [31][32][33][34]. ...
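As an aside on how such metrics are computed in practice, the following is a minimal sketch (an illustration, not code from any of the cited papers) of one simple clutter-style quality metric for scatter plots: the fraction of points whose nearest neighbour lies closer than a display-size threshold, i.e., likely overplotting. The threshold and data are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def overplotting_ratio(points, threshold):
    # Fraction of points whose nearest neighbour is closer than `threshold`.
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)  # k=2: the first hit is the point itself
    return float(np.mean(dists[:, 1] < threshold))

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))
print(f"overplotting ratio: {overplotting_ratio(pts, 0.02):.2f}")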
... The averages and ranges of the walk-time estimates by the 12 surveyees at KCL are: (i) 19.25 [8,30], (ii) 19.67 [5,30], (iii) 46.25 [10,240], and (iv) 59.17 [20,120] minutes. The estimates by the four surveyees at Oxford are: (i) 16.25 [15,20], (ii) 10 [5,15], (iii) 37.25 [25,60], and (iv) 33.75 [20,60] minutes. The values correlate better with the Google estimates than the similar distances on the deformed map would imply. ...
Article
Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, hence, undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently-proposed information-theoretic measure for analyzing the cost–benefit ratio of visualization processes can explain such usefulness experienced in practice and postulate that the viewers’ knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers’ knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost–benefit measure consists of an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost–benefit measure. Part I of the paper provided a theoretical discourse about the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost–benefit of visualization processes as well as the impact of human knowledge in the processes. The real world data about visualization provides practical evidence for evaluating the usability and intuitiveness of the candidate measures. The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost–benefit measure.
... Before we consider their suggestions, we need to take into account the fact that if a particular visualisation is to be effective, in the sense of enabling individuals not only to comprehend it but also to interact with it, then its designers need to consider both the cognitive and perceptual processes and limitations of the human mind (Olshannikova, Ometov, Koucheryavy & Olsson 2015:9). On a cognitive level, poorly designed visual displays may lead, amongst other things, to ambiguous interpretation of data (Burkhard & Eppler 2005), cryptic encoding (Tufte 1986), the obscuring of important insights (Few 2006; Kosslyn 2006), over-complication in how information is represented (Few 2006), and the absence of adherence to Gestalt principles (Tufte 1986). [Footnote 37: It should be noted that data visualisations may be visual and (occasionally) auditory.] ...
... On a cognitive level, poorly designed visual displays may lead, amongst other things, to ambiguous interpretation of data (Burkhard & Eppler 2005), cryptic encoding (Tufte 1986), the obscuring of important insights (Few 2006; Kosslyn 2006), over-complication in how information is represented (Few 2006), and the absence of adherence to Gestalt principles (Tufte 1986). These and many other consequences of poor design are discussed in the section below to highlight what Kennedy and Hill (2017a:769) have referred to as "the pleasure and pain" of visualisation. ...
... Another (perhaps unintended) consequence of badly designed visuals is that they may be misinterpreted in different cultural contexts, given that symbols and colours are not universal in their meanings, a consequence that is well documented in the literature (Nisbett 2003; Ewenstein & Whyte 2007; Avgerinou & Pettersson 2011; Bresciani 2014; Forsythe 2014; Jahns 2014). The recency effect is an additional drawback of information visualisation observed by Tufte (1986, 2006) and Nisbett (2003), amongst others. In terms of the recency effect, a viewer's interpretation of a graphical representation may be coloured by a recent experience or event. ...
... Even if the readers are trained to detect problems, they can still be confused or misled [15]. Several general principles for graph construction have been proposed to avoid these issues [16]. One of these principles is the principle of proportional ink. ...
... One of these principles is the principle of proportional ink. It is a specialized rule derived from one of the general graphical integrity principles introduced by [16] who stated that "the representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented." Further, the derivation of the definition is succinctly presented by [17], stating that "when a shaded region is used to represent a numerical value, the size (i.e., area) of that shaded region should be directly proportional to the corresponding value." ...
... In visualization design, graphical integrity requires designers to create graphs reflecting the actual data. With this purpose, researchers have proposed several graphical integrity principles in the literature, and two of the most common are the principle of proportional ink and the principle of data-ink [16]. The principle of proportional ink was proposed by [17] and was inspired by Tufte, who stated that "the representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented" [16]. ...
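To make the principle concrete, here is a minimal sketch (an illustration, not code from the cited works) that quantifies how far a bar chart departs from proportional ink when its axis is truncated at a non-zero baseline:

def ink_ratio_distortion(values, baseline=0.0):
    # Ink (bar height above the baseline) divided by the data value,
    # compared across bars; 1.0 means ink is proportional to value.
    inks = [v - baseline for v in values]
    ratios = [ink / v for ink, v in zip(inks, values)]
    return max(ratios) / min(ratios)

print(ink_ratio_distortion([40, 44], baseline=0))   # 1.0 -> no violation
print(ink_ratio_distortion([40, 44], baseline=38))  # ~2.7 -> strong violation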
Article
Academic graphs are essential for communicating complex scientific ideas and results. To ensure that these graphs truthfully reflect underlying data and relationships, visualization researchers have proposed several principles to guide the graph creation process. However, the extent of violations of these principles in academic publications is unknown. In this work, we develop a deep learning-based method to accurately measure violations of the proportional ink principle (AUC = 0.917), which states that the size of shaded areas in graphs should be consistent with their corresponding quantities. We apply our method to analyze a large sample of bar charts contained in 300K figures from open access publications. Our results estimate that 5% of bar charts contain proportional ink violations. Further analysis reveals that these graphical integrity issues are significantly more prevalent in some research fields, such as psychology and computer science, and in some regions of the globe. Additionally, we find no temporal or seniority trends in violations. Finally, apart from openly releasing our large annotated dataset and method, we discuss how computational research integrity checks could become part of the peer-review and publication processes.
... The financial section of annual reports has also become technical for users (David 2001). Given the problems experienced by users, companies are changing the format of their annual reports to enhance users' understanding, with more creative annual reports being published which comprise photographs, colours and visuals (Brasseur 2003; Ruiz-Garrido, Palmer-Silveira & Fortanet-Gómez 2005; Tufte 1983). ...
... Graphs can be distorted in three ways, namely measurement distortion, selectivity and presentational enhancement. Tufte (1983) defines measurement distortion as the occurrence of disproportion between the graph numbers and the underlying number. Selectivity relates only to disclosing positive and favourable information. ...
... The key feature of a graph is that the quantitative information, as measured on the surface of the graph, should be directly proportional to the numerical values (Tufte 1983). ...
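The graph discrepancy index (GDI) used in the study below is commonly formulated as GDI = (a / b - 1) x 100, where a is the percentage change depicted by the graphic and b the percentage change in the underlying data; a minimal sketch under that assumption (the study's exact operationalisation may differ):

def graph_discrepancy_index(graph_change_pct, data_change_pct):
    # Positive values indicate an exaggerated trend, negative an understated one.
    return (graph_change_pct / data_change_pct - 1.0) * 100.0

# e.g., bars that grow 80% on paper while the data grow only 40%:
print(graph_discrepancy_index(80, 40))  # 100.0 -> trend overstated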
Article
Background: South African state-owned entities (SOEs) have become synonymous with issues such as poor service delivery and wasteful expenditure. State-owned entities are accountable to various stakeholders with the annual report viewed as an accountability mechanism. Given the different components of the annual report, this provides management with the opportunity to use different elements to present a better image of the SOE. Some elements that can be used to manipulate information are graphs. Aim: The purpose of this study was to analyse the use of graphs in the annual reports of SOEs and to conclude whether SOEs use graphs to manipulate information presented. Setting: The annual reports of the 277 SOEs included in the Public Finance Management Act (PFMA) schedules as of 31 March 2018 were analysed. Methods: This study followed a quantitative research method. Content analysis is used to identify impression management techniques used in the graphs of SOEs. Results: The findings indicate that 64% of SOEs present graphs in their annual reports, with non-financial graphs being disclosed more than financial graphs. Using the graph discrepancy index (GDI), it was found that SOEs tend to overstate data trends more than understate them, resulting in a better image of the SOE being presented. The presentational features of graphs were not used excessively to influence users. Conclusion: Graphs appear to be used as a form of impression management to manage users' perceptions of SOEs. Given the impact of the annual report on users' decisions, the distortion of graphs may affect the decisions taken.
... Works at the New York Times, Propublica, the British Broadcasting Corporation, and other venues illustrate the possibilities of attending to aesthetics in communicating findings (see Bostock, 2014;Groeger, 2019). Many data visualizations and graphics found in these venues exemplify Tufte's (2001) assertion that "Graphical elegance is often found in simplicity of design and complexity of data" (p. 177). ...
... For example, Eisner's (1998b) arguments for the preparation of qualitative researchers include attention to the development of perception (or connoisseurship), representation (or criticism), and attending to the relationship of representation to epistemology. Similarly, Tufte's (1997, 2001) recommendations for visually displaying quantitative information may be useful for those interested in awakening artistic sensibility in quantitative researchers. However, even those engaged directly in arts-based research are just beginning to shape discourse on artistic sensibilities in researchers (Blumenfeld-Jones, 2015). ...
Article
Artistic sensibility is defined in this work as the sensitivity and capacity to appreciate and act upon concerns of or pertaining to art and its production. This article contends that artistic sensibility is inherent to research. This contention is supported through three points which reveal a fourth: (1) Research requires dissemination. (2) Dissemination requires representation. (3) Representation requires artistic sensibility. These three points considered in conjunction illustrate a fourth: (4) Research requires artistic sensibility. This argument has implications for research venues, evaluations of research, and the preparation of researchers in all research disciplines. Namely, certain tenets of arts-based research may be applied to a much broader array of research methodologies. Identifying, honoring, and harnessing artistic sensibility in research has the potential to improve research products and enrich discourse.
... To determine whether the regions differ significantly from each other regarding the response variables (RV1 - RV3), we used t-test inferential statistics (Table 2). We observed some significant variations, and in Figure 3 we indicate the geography of these variations by adopting Tufte's (1983) classification. Generally, at least 4 of the 16 regions are significantly different with respect to all three variables. ...
... Note: The maps show the combination of the differences between the means of each region compared with the rest of the regions (t-test in quartile) and their corresponding odds ratios. The classification is influenced by Tufte (1983). The darker shades indicate places with lower levels of being informed or satisfied. ...
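A minimal sketch of the "each region against the rest" comparison described above, assuming survey responses in a pandas DataFrame with illustrative column names (region, rv1); the cited study's exact procedure may differ:

import pandas as pd
from scipy import stats

def region_vs_rest(df, region, var):
    # Compare one region's responses with those of all other regions.
    inside = df.loc[df["region"] == region, var]
    outside = df.loc[df["region"] != region, var]
    t, p = stats.ttest_ind(inside, outside, equal_var=False)  # Welch's t-test
    return inside.mean() - outside.mean(), p

# for region in df["region"].unique():
#     diff, p = region_vs_rest(df, region, "rv1")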
Article
Scholars of natural resource governance argue that national and local governments must engage ordinary community members. When ordinary community members access information about the utilization of natural resource revenue and get an opportunity to provide feedback, the revenue management improves. In this article, the authors engaged Ghanaians through a spatial crowdsourcing platform for their opinion about petroleum management revenue in Ghana. The participants accessed the platform via their mobile phones and completed a survey on their opinions about petroleum revenue management, the Free Senior High School program, and their priority areas for petroleum revenue funding in Ghana. The results suggest that ordinary community members, and particularly women, seemed less informed about the management of petroleum revenue in Ghana. Furthermore, Ghanaians’ opinions regarding their prioritized projects for petroleum revenue funding vary geographically. The authors conclude that decision-makers can use spatial crowdsourcing to engage ordinary community members in natural resource revenue management.
... While some choices in plot design, such as whether to include 0 on graph axes, are purely functional, in general a focus on design can improve a figure's visual appeal, attract and retain reader interest, and communicate ideas more clearly (Figure 8 and Figure S6) [47]. ...
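The baseline choice mentioned above is easy to demonstrate; a small matplotlib sketch (with illustrative data) contrasting a zero baseline with a truncated axis:

import matplotlib.pyplot as plt

values, labels = [40, 44], ["A", "B"]
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(6, 3))
ax0.bar(labels, values)
ax0.set_ylim(0, 50)   # zero baseline: bar heights compare honestly
ax0.set_title("baseline at 0")
ax1.bar(labels, values)
ax1.set_ylim(38, 45)  # truncated axis: the difference looks several times larger
ax1.set_title("truncated axis")
plt.tight_layout()
plt.show()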
Article
Transparency is increasingly promoted to instill trust in nonrandomized studies using real‐world data. Graphics and data visualizations support transparency by aiding communication and understanding, and can inform study design and analysis decisions. However, other than graphical representation of a study design and flow diagrams (e.g., a Consolidated Standards of Reporting Trials [CONSORT] like diagram), specific standards on how to maximize validity and transparency with visualization are needed. This paper provides guidance on how to use visualizations throughout the life cycle of a pharmacoepidemiology study – from initial study design to final report – to facilitate rationalized and transparent decision‐making about study design and implementation, and clear communication of study findings. Our intent is to help researchers align their practices with current consensus statements on transparency.
... Neither the standard nor the general soundscape literature has settled on effective methods of analysing and representing the data that results from these protocols. Data visualisations are particularly important for understanding and communicating information as multifaceted as soundscape perception (Tufte, 2001). Although it is unlikely that any single method will be sufficient, attempts should be made to both facilitate future advancements in this realm and to develop a first step approach that captures the inherent uncertainty in perception studies, since including uncertainty is considered one of the core principles of good data visualisation (Midway, 2020). ...
Article
This study first examines the methods presented in ISO 12913 for analysing and representing soundscape data by applying them to a large existing database of soundscape assessments. The key issue identified is the inability of the standard methods to summarise the soundscape of locations and groups. The presented solution inherently considers the variety of responses within a group and provides an open-source visualisation tool to facilitate a nuanced approach to soundscape assessment and design. Several demonstrations of the soundscape distribution of urban spaces are presented, along with proposals for how this approach can be used and developed.
... Ensuring that dashboards are informative without overwhelming the user is a challenging balancing act. From an aesthetic perspective, Tufte (2001) cautions against the use of 'non-data-ink' and 'chartjunk' in graphs; that is, he maintains that excessive use of colors, patterns or gridlines can confuse and clog the recipient's comprehension. Bera (2016) specifically mentions the overuse and misuse of color in business dashboards and the role this has on users' decision-making abilities. ...
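In matplotlib terms, Tufte's advice against non-data-ink might look like the following sketch, which strips frame lines, gridlines, and tick marks that carry no data (the values are illustrative):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["Q1", "Q2", "Q3", "Q4"], [3.1, 4.0, 3.6, 4.4], color="0.4")
ax.spines["top"].set_visible(False)    # drop frame lines that encode nothing
ax.spines["right"].set_visible(False)
ax.grid(False)                         # no gridlines
ax.tick_params(length=0)               # keep labels, drop tick marks
plt.show()

Whether such minimalism helps in a given dashboard is, as the surrounding discussion notes, a balancing act rather than a fixed rule.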
Article
This study investigates current approaches to learning analytics (LA) dashboarding while highlighting challenges faced by education providers in their operationalization. We analyze recent dashboards for their ability to provide actionable insights which promote informed responses by learners in making adjustments to their learning habits. Our study finds that most LA dashboards merely employ surface-level descriptive analytics, while only few go beyond and use predictive analytics. In response to the identified gaps in recently published dashboards, we propose a state-of-the-art dashboard that not only leverages descriptive analytics components, but also integrates machine learning in a way that enables both predictive and prescriptive analytics. We demonstrate how emerging analytics tools can be used in order to enable learners to adequately interpret the predictive model behavior, and more specifically to understand how a predictive model arrives at a given prediction. We highlight how these capabilities build trust and satisfy emerging regulatory requirements surrounding predictive analytics. Additionally, we show how data-driven prescriptive analytics can be deployed within dashboards in order to provide concrete advice to the learners, and thereby increase the likelihood of triggering behavioral changes. Our proposed dashboard is the first of its kind in terms of breadth of analytics that it integrates, and is currently deployed for trials at a higher education institution.
... In his numerous sketches and charts of fluids, he drew streamlines and vortices which almost satisfy our present needs. But it was not until 1750-1800 that statistical graphics (length and area to show quantity, time-series, scatterplots, and multivariate displays) were invented (Tufte 1983). With respect to astronomy, Galileo was probably the first to annotate and draw sketches of the sky with a scientific purpose. ...
Thesis
The ways of interpreting complex processes and simulations involving data sets is one of the essential themes that scientific visualization has to take into account these days. All the different forms of interpretation and exploration of the data, and information, have similar goals: to gain a deeper understanding and insight into the data. With the development of computers and the technological advances in hardware and software, scientists and engineers have been able to model phenomena that were earlier almost inconceivable. Yet visualization lags far behind data generation, and new techniques and ways of interpretation have to be developed to close this gap. Some of these remarkable phenomena can be found in the field of astrophysics: interstellar clouds and jets, star formation, accretion disks, neutron stars and black hole coalition are only a few among many exciting astronomical events that we have been able to study. This thesis explores new ways of visualizing these kinds of data and will present the results obtained from the visualization of two- and three-dimensional numerical simulations in astrophysics of the phenomena mentioned previously. An effort to produce true 3D visualization using stereo computer graphics and virtual reality techniques, specifically the VR-cube, is also presented here.
... These books and traditional educational methods have provided visual designers with a wide range of rules and guidelines. Example guidelines include Shneiderman's mantra [Shn96], Tufte's data-ink ratio [Tuf85], a critique of the rainbow color map [BT07], etc. While these guidelines are often seen as part of the collected wisdom in visualization, they naturally do not cover all the nuances present in concrete, real-world scenarios. With their exceptions, often undefined scope, and conflicting supporting evidence, applying visualization guidelines in the wild therefore poses major challenges that the next generation of visualization practitioners and researchers must face. ...
Conference Paper
We propose a novel educational approach for teaching visualization, using a community-driven and participatory methodology that extends the traditional course boundaries from the classroom to the broader visualization community. We use a visualization community project, VisGuides, as the main platform to support our educational approach. We evaluate our new methodology by means of three use cases from two different universities. Our contributions include the proposed methodology, the discussion of the outcome of the use cases, the benefits and limitations of our current approach, and a reflection on the open problems and noteworthy gaps in improving the current pedagogical techniques for teaching visualization and promoting critical thinking. Our findings show extensive benefits from the use of our approach in terms of the number of skills transferable to students, educational resources for educators, and additional feedback on research opportunities for the visualization community.
... Currently, there are many approaches to weather forecast visualization, such as contour and thematic maps that communicate forecast information about specific geographic regions [11,4]. Visualization guidelines and best practices in visualization can help to improve these designs and make them more effective [3,2,1,13]. ...
Conference Paper
In this work, we present several interactive visual designs for mobile visualization of severe weather events for the communication of weather hazards, their risks, uncertainty, and recommended actions. Our approach is based on previous work on uncertainty visualization [5], cognitive science [6], and decision sciences for risk management [3, 4]. We propose six configurations that vary the ratio of text vs. graphics used in the visual display, and the interaction workflow needed for a non-expert user to make an informed decision and take effective actions. Our goal is to test how efficient these configurations are and to what degree they are suitable for communicating weather hazards, associated uncertainty, risk, and recommended actions to non-experts. Future steps include two cycles of evaluation: a first pilot to rapidly test the prototype with a small number of participants, collect actionable insights, and incorporate potential improvements; and a second, crowd-sourced user study to extensively evaluate the visualization prototypes.
... This third dimension is useless in terms of giving the user an understanding of the data. It becomes chartjunk [40], and is often judged to be bad practice. However, recent work has started to discover that in some situations chartjunk has worth. ...
Article
The opportunities for 3D visualisations are huge. People can be immersed inside their data, interface with it in natural ways, and see it in ways that are not possible on a traditional desktop screen. Indeed, 3D visualisations, especially those immersed inside head-mounted displays, are becoming popular. Much of this growth is driven by the availability, popularity and falling cost of head-mounted displays and other immersive technologies. However, there are also challenges. For example, data visualisation objects can be obscured, important facets missed (perhaps behind the viewer), and the interfaces may be unfamiliar. Some of these challenges are not unique to 3D immersive technologies. Indeed, developers of traditional 2D exploratory visualisation tools would use alternative views across a multiple coordinated view (MCV) system. Coordinated view interfaces help users explore the richness of the data. For instance, an alphabetical list of people in one view shows everyone in the database, while a map view depicts where they live. Each view serves a different task or purpose. While it is possible to translate some desktop interface techniques into the 3D immersive world, it is not always clear what the equivalents would be. In this paper, using several case studies, we discuss the challenges and opportunities for using multiple views in immersive visualisation. Our aim is to provide a set of concepts that will enable developers to perform critical thinking, creative thinking and push the boundaries of what is possible with 3D and immersive visualisation. In summary, developers should consider how to integrate many views, techniques and presentation styles; one view is not enough when using 3D and immersive visualisations.
... Spatial analysis allowed Dr. Snow to determine that the cholera contagion came from one of the public water pumps, the one around which the deaths in the neighbourhood clustered. Removing the pump brought the epidemic to an end (Tufte, 2007). The present study made extensive use of the combination of the two exploratory methods, constantly comparing the different sources used during data collection and interpretation. ...
Thesis
The research investigates the mechanisms by which local public institutions try to contain the new urban crisis, regenerating neighbourhoods in economic and social decline. The fractures of contemporary cities are deep and multiple: the feeling of deprivation, injustice and abandonment reaches its peak especially in the suburbs. Faced with the crisis of representative democracy, institutions implement participatory processes, listening to and consulting citizens. The research questions are twofold.
First of all, the study depicts the strategies implemented by municipalities to regenerate neighbourhoods, analyses how and to what extent citizens are involved in the interventions and whether the plan is able to invert the feeling of discomfort and abandonment of part of the population. Secondly, the researcher created a tool to encourage participation and to strengthen trust between citizens and institutions: a Web GIS that monitors and reconstructs the regeneration programme of the Libertà district in Bari and illustrates the programme theory. The study was therefore an attempt, inspired by the communicative planning paradigm, to create a tool to facilitate the participation of the civil society. The research used a mixed method approach: the theory-driven evaluation and the realistic evaluation were used to reconstruct the mechanisms of public investment programmes; historical methodologies were employed to understand the national and local context, and grounded theory to analyse participation and its relationship with democracy and populism. The methods were then merged into a Geographical Information System, used both to discover new links and relationships between categories (grounded visualization), and to communicate the results of the study. This is a study about the cohesion funds used to rethink the city and its functions: local government institutions in the South of Italy mainly use European and national funding to intervene in declining areas. The approach has changed over time. Planning, especially under the reformist impetus of the first centre-left governments, was modified after the Second World War and in the years of the economic boom, remodelling from a set of forecasts into a coherent concatenation of programmes and instruments. In parallel to the European debate on territorial development, the New Programming was launched in the second half of the ‘90s and had the merit to elaborate a method of intervention that aimed at finding the institutional configurations and the actions suitable to trigger a virtuous circle of long term progress and development through the involvement of local actors. The Apulia Region entered the new millennium with a fragile economy and society. With the so called Apulian Spring (a term reflecting the reforms introduced by a new progressive local leadership), the structural funds turned into the main instrument to transform the economic system and to democratise the society. According to the programming documents, cities should become the outposts of knowledge economy, cohesive and creative places. Bari is divided and unequal: the extreme power of rent has not allowed a rational planning of the city. The municipality is investing national and European cohesion funds to try to reverse the decline of the suburbs and, in particular, has identified the Libertà district as the main target area for regeneration policies. It is a working-class and manufacturing district, where flourishing commercial activities have sprung up over time. The decline began at the turn of the millennium, sharpening with the financial crisis. At present, the population distress is multidimensional and the district is the area of the city with the highest percentage of migrants. So the municipality is implementing an integrated programme, mainly involving third sector organizations. 
The plan consists in strengthening welfare services, creating cultural centres for young people, funding social enterprises and new economic activities, creating a job centre and a business incubator, renewing squares, creating a large park and the new National Research Council headquarter in the former Manifattura Tabacchi (Tobacco Factory). The challenge is to turn Libertà into a place of diversity and knowledge economy, of creative companies and young couples with high human capital. The goal of the programme is to strengthen the cohesion and to increase the human and social capital of the inhabitants of the neighbourhood. Some inconsistencies are found while reconstructing the programme theory of the plan, based on the municipalities’ assumptions about the effects that interventions would generate on the local community. The doubt, considering Bari's history of continuous expulsions of residents from central areas affected by real estate projects, is that the increase in human and social capital will occur not only because the employment, training and relational capacities of the most vulnerable population will actually grow, but also because of replacement of residents. Some characteristics of the neighbourhood – the proximity to the city centre and the railway station, still accessible property values, the improvement of the endowment of public goods and services – make the hypothesis of a gentrification process possible. The analysis shows how participation reproduces fractures and inequalities: the most fragile population is not involved in the design and management of programmes. The urban regeneration policies studied seem to respond more to the logic of seeking greater efficiency and competitiveness of the city than to reasons of equity. We need to institutionalize distributional conflict in order to strengthen, through debate, the confidence of the many who have been impoverished by the crises and abandoned to their fate.
... Even though trend reports are insightful in understanding the peaks and dips of search interest in specific terms, the COVID-19 pandemic adds a vital context related to the information seeking of users in understanding the need for information in Instructional Design and Remote Learning. The design of the Tableau dashboards involved following six fundamental principles of information design by Tufte (1983), including comparison, causality, multivariate, integration, documentation, and context. Each Tableau dashboard is contextualized around COVID-19 milestones and news related to vaccine development by the World Health Organization (WHO), U.S. Food and Drug Administration (FDA), United Nations (UN), and National Institutes of Health (NIH). ...
Conference Paper
As educational institutions switched from in-person to Remote Learning or Emergency Remote Teaching (ERT), this study explores the search popularity indices of three search terms globally and in the USA from January 2020 - April 2021. Google search terms, such as COVID-19, Instructional Design, and Remote Learning, showed considerable search interest in Google Trends at the peak of the pandemic on March 11, 2020 (WHO, 2020). Pytrends was used to extract search interest of the primary and related terms from Google and YouTube Search globally and in the USA (Pytrends, n.d.). The search interest of the three search terms and associated terms exhibited similar peaks and dips of interest at the beginning of the pandemic and the 2020-2021 school year. Educational technology tools, including SeeSaw Learning and Zoom, were present as related search terms for Remote Learning. Search queries in YouTube revealed educational channels that host various lecture materials related to K-12 and professional development. The interactive visualizations in Tableau Public enable users to explore the search trend patterns of the three main terms and related queries. This project serves as an archive of users' conscious information seeking to highlight the pivotal roles of Instructional Design and Remote Learning observed in search popularity indices in light of the pandemic. The project can be accessed at edtechtrends.javierleung.com or on Tableau Public (Leung, n.d-a, n.d.-b).
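A minimal sketch of the kind of extraction the study describes, using the pytrends package (an unofficial Google Trends API); the terms and timeframe mirror the abstract, but the study's exact queries and parameters are assumptions:

from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(
    kw_list=["COVID-19", "Instructional Design", "Remote Learning"],
    timeframe="2020-01-01 2021-04-30",
    geo="US",
)
interest = pytrends.interest_over_time()  # search-interest indices over time
print(interest.head())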
... Cartograms satisfy Tufte's principle of graphical integrity: "The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented" [8]. ...
Preprint
A contiguous area cartogram is a geographic map in which the area of each region is rescaled to be proportional to numerical data (e.g., population size) while keeping neighboring regions connected. Few studies have investigated whether readers can make accurate quantitative assessments using contiguous area cartograms. Therefore, we conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We investigated whether including an area-to-value legend (in the form of a square symbol next to the value represented by the square's area) makes it easier for map readers to estimate magnitudes. We also evaluated the effectiveness of two additional features: grid lines and an interactive area-to-value legend that allows participants to select the value represented by the square. Without any legends and only informed about the total numerical value represented by the whole cartogram, the distribution of estimates for individual regions was centered near the true value with substantial spread. Selectable legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. When comparing differences between regions or between cartograms, legends and grid lines made estimation slower but not more accurate. However, legends and grid lines made it more likely that participants completed the tasks. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.
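The geometry behind the area-to-value legend described above is simple; a sketch under the assumption that the cartogram maps data units to map area at a fixed rate:

import math

def legend_square_side(value, area_per_unit):
    # Side length of a legend square whose area is proportional to `value`.
    return math.sqrt(value * area_per_unit)

# e.g., if 1 unit of data corresponds to 4 cm^2 of cartogram area:
print(legend_square_side(0.5, 4.0))  # ~1.41 cm on a side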
... Moreover, it represents two aspects of time: observations from several times in the day (shown implicitly as a representation of speed classes in a day's image sequence) and observation sessions from different days over the years. Our representation thus combines one-dimensional distance with speed (derived from distance and detailed time) and a coarse time representation, somewhat akin to train schedule maps (discussed, e.g., by Tufte [69]), in contrast to other space-time cube representations. It also goes beyond "space flattening" [7] since we use the local coordinate system to encode both distance traveled in a day and precise time (the latter implicitly via color-coded speed). ...
Article
We present a case study on a journey about a personal data collection of carnivorous plant species habitats, and the resulting scientific exploration of location data biases, data errors, location hiding, and data plausibility. While initially driven by personal interest, our work led to the analysis and development of various means for visualizing threats to insight from geo-tagged social media data. In the course of this endeavor we analyzed local and global geographic distributions and their inaccuracies. We also contribute Motion Plausibility Profiles, a new means for visualizing how believable a specific contributor's location data is or whether it was likely manipulated. We then compared our own repurposed social media dataset with data from a dedicated citizen science project. Compared to the biases and errors described in the literature on traditional citizen science data, with our visualizations we could also identify some new types or show new aspects of known ones. Moreover, we demonstrate several types of errors and biases for repurposed social media data.
... The presentations were limited to 10 minutes in duration, followed by 5 minutes of questions from the audience. Reference [17] was offered as a suggested resource on visual communication. As with the written report, grading was performed using the AIAA AFM Student Paper Competition oral presentation rubric in Appendix B. ...
Conference Paper
Atmospheric Flight Mechanics (AFM) is a diverse and highly multidisciplinary field of engineering that continues to evolve alongside technological advances and customer demands. This paper collects various resources that may be useful to students learning about these subjects, instructors delivering courses, or practicing engineers looking for examples and tools. In particular, sources of experimental flight test data, nonlinear flight simulations, and aerodynamics models are discussed. Data for modern aircraft are emphasized, as well as resources for computer software and communication skills, in accordance with findings from AIAA AFM Technical Committee panel discussions on the subject. Examples of incorporating these resources are also discussed within the context of a graduate course in atmospheric flight control recently delivered at the University of Maryland.
... It is directly related both to graphic design and to the disciplines of data processing and analysis. Media professionals (Симакова, 2017), mathematicians (Tufte, 2007), sociologists (Макулин, 2019), economists (Серебряник, Надршин, 2016), statisticians (Лаптев, 2012), designers (Выровцева, Индутная, Симакова, 2020), scientists (Li, Chen, 2021), researchers (Аликина, Рапакова, 2019), educators (Lyra, Isotani, Reis et al., 2021) and many others use it in their professional work. Given the interdisciplinary nature of infographics, especially in the period of digitalization, it is necessary to define cross-cutting technologies for teaching university students to work with infographics. ...
... Using the same statistical graph, we can pose questions of different levels of difficulty, which can refer to the title and scales, the variables and values being represented, the interpolation or extrapolation of values, and the detection of biases in the graph or in statements based on the graph. The responses to these questions require a series of interpretative processes of each component of the graph and of the graph as a whole, as well as of the relationship of the graph with the context of the data (Arteaga et al., 2012; Tufte, 2001). This interpretation is more or less complex, depending on the information that needs to be extracted from the graph, and for this reason, several authors have defined levels in the reading of graphs. ...
Article
The aim of this research was to describe the errors and reading levels that 6th and 7th grade Chilean primary school children reach when working with line graphs. To achieve this objective, we gave a questionnaire, previously validated by experts, with two open-ended tasks to a sample of 745 students from different Chilean cities. In the first task, we asked the children to read the title of the graph, describe the variables represented and perform a direct and an inverse reading of a data value. In the second task, which addresses the visual effect of a scale change in a representation, the students had to select the line graph most convenient to a candidate. Although both tasks were considered easy for the grade levels targeted, only some of the students achieved the highest reading level and many made occasional errors in the reading of the graphs.
... The Cartesian coordinate system, as the basis of a series of statistical charts including the scatter plot, line graph, and bar chart, was developed and popularised in the late 18th century by the Scottish political economist William Playfair, specifically to represent abstract aggregate data of measurable phenomena in economic planning. The bar chart was invented to represent commercial events of trade imports and exports against years as time series (Tufte, 2001). William Playfair's Commercial and Political Atlas (1786), the first of three editions, set out a vast range of innovative graphical devices to hasten the communication of financial information against time and place. ...
Article
Policies for biodiversity no net loss and net gain underwrite narratives for green growth through advancing reparative logics to ongoing habitat impacts. By enabling offsetting practices that risk accommodating rather than averting land change developments, net principles are said to resemble modes of ‘accumulation by environmental restoration’. Biodiversity net principles are frequently depicted visually as a diagram of the mitigation hierarchy for communicational ease and have proliferated over recent decades despite little evidence for their ecological effectiveness. This paper combines economic sociology, visual media analysis of the net diagram and political ecology to account for the stabilisation of net principles in policy frameworks. It highlights the upstream imaginative work that this visual tool and its wider assemblages perform to support offsetting and habitat banking practices on the ground. The paper positions the NNL diagram as a conceptual and ideational technology. It traces the practices through which biodiversity is rationalised by the Cartesian coordinates of an XY schematic, and en-framed as a measure of numerical value on a vertical scale. The effect is to engender coherence to the idea of netting out differences in aggregate sums of biodiversity unit value, making nature conceptually offset-able. I develop this account through a history of the diagram as well as the broader processes that have shaped the policy and its arrival in English planning frameworks. Observers increasingly question how biodiversity offsetting and no net loss/ net gain have become so popular when their empirical foundations are so weak. This paper proposes that within the wider assemblages of actors, one answer is located in the potency and mobility of conceptual technologies such as diagrams of no net loss or net gain of biodiversity and the logic of balance-sheet accounting that is imbricated within the visual design.
... More than 40 years ago, Tukey (1977) described a number of strategies for using data visualization to make sense of the meaning of data and advocated careful and thoughtful examination of descriptive data before moving on to complex analyses. In a series of beautifully designed and illustrated books, Tufte (2001) provided essential guidance on the visual display of quantitative information. The effective examination of descriptive data is a critical step in bringing methodology from the abstract to the concrete by showing what actually happens when finely tuned methods come into contact with contexts, populations, and situations that might either enhance or limit the value of the data that are actually obtained. ...
Article
As data analytic methods in the managerial sciences become more sophisticated, the gap between the descriptive data typically presented in Table 1 and the analyses used to test the principal hypotheses advanced has become increasingly large. This contributes to several problems including: (1) the increasing likelihood that analyses presented in published research will be performed and/or interpreted incorrectly, (2) an increasing reliance on statistical significance as the principal criterion for evaluating results, and (3) the increasing difficulty of describing our research and explaining our findings to non-specialists. A set of simple methods, based on the simplest possible examination of descriptive statistics, is proposed for assessing whether hypotheses about interventions, moderator relationships, and mediation are plausible.
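A minimal sketch of that "descriptive statistics first" idea, assuming a pandas DataFrame with illustrative columns treatment, moderator, and outcome (this is not the paper's own code):

import pandas as pd

def interaction_plausibility(df):
    # Inspect cell means before fitting a moderated regression: a plausible
    # interaction shows treatment effects that differ across moderator levels.
    cells = df.groupby(["treatment", "moderator"])["outcome"].agg(["mean", "std", "count"])
    effects = cells["mean"].unstack("moderator").diff().iloc[-1]
    return cells, effects

# cells, effects = interaction_plausibility(df)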
... In viewing Figures 1a and 1b, consider the principle that a science graphic should allow for "Inspecting and evaluating alternative explanations" [7]. Consider the principle that "We need a lack of hubris to allow us to see data and let them generate … multiple alternative working hypotheses" [8]. ...
Preprint
A less than nuanced view of storytelling for science can create false dichotomies, undermine the value of multiple working hypotheses, and send misinformation about the process of science (the misinformation comes in verbal and visual form). Self-corrections within the research and scientific publishing communities are possible but require a break from trends toward business models of science.
... There are many books on the topic, including the popular "Presentation Zen" by Garr Reynolds [11]. Finally, although briefly touched on here, the visualization of data is an entire topic of its own that is worth perfecting for both written and oral presentations of work, with fantastic resources like Edward Tufte's "The Visual Display of Quantitative Information" [12] or the article "Visualization of Biomedical Data" by O'Donoghue and colleagues [13]. ...
... This is the "figurative map" by Charles Joseph Minard illustrating the Russian campaign (1812-1813) of Napoleon's Grande Armée (see Figure 11). The map appears in numerous books on visualization, design, cartography and statistics (Tufte, 1983; C. Chen et al., 2007; Meirelles, 2013; Lauer & Pentak, 2011; Thrower, 2008; Johnson & Bhattacharyya, 2019; Rendgen, 2018), mainly for its capacity to evoke the tragedy (in terms of the human lives lost during the army's advance and retreat and the extreme weather conditions of the campaign) and for the effectiveness of a simple design choice: the thickness of the line, superimposed on an outline geographic map, corresponds to the number of soldiers in the army at each moment. ...
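The design choice singled out here, line thickness proportional to army strength, is easy to sketch; the coordinates and counts below are invented placeholders, not Minard's data:

import matplotlib.pyplot as plt

lon = [24.0, 26.0, 28.5, 32.0, 37.6]   # west to east, toward Moscow
lat = [54.9, 54.7, 55.0, 55.3, 55.8]
troops = [422000, 340000, 175000, 145000, 100000]

fig, ax = plt.subplots()
for i in range(len(lon) - 1):
    ax.plot(lon[i:i + 2], lat[i:i + 2],
            linewidth=troops[i] / 20000,  # thickness proportional to army size
            color="tan", solid_capstyle="butt")
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
plt.show()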
Article
Since its origins, data analysis has resorted to visual representation to find or offer explanations. Storage and processing capacities have grown continuously thanks to advances in computing. In parallel, thanks to the democratization of technology, artistic contributions to data visualization have multiplied, bringing new creative forms and approaches that differ from those of science. This article analyses the reciprocal influences between the arts and the sciences around data visualization. It offers a panoramic view of Data Visualization through an analysis of the attention devoted to this subject across disciplines, as recorded in the main bibliographic databases, as well as of the growing community of practice that today brings together scientists, designers, artists and other professionals. Through a series of historical examples ranging from prehistory to the present day, it highlights several lines of influence in which art and science have fostered advances in the communication of phenomena or the posing of questions from data. Finally, it offers some reflections on the challenges facing Data Visualization and the opportunities that derive from them.
... According to Tufte (2001), the graphical presentation of data is an efficient way to disseminate research findings because it presents information more clearly, precisely, and efficiently than other forms of presentation. The visualisation of research findings also has the potential to induce emotional involvement and improve understanding of the presented information (Wheeldon & Åhlberg, 2012), which is important for solid documentation of the results of socially engaged studies. ...
Full-text available
Chapter
This chapter presents an approach for documenting the outcomes of participation in socially engaged projects through a brief overview of data visualisation in mixed methods research and the provision of examples that illustrate a variety of possible forms for quantitative results; the chapter also specifies some of the possibilities for the integration of these results with qualitative data. Based on the existing evidence in this emerging field of information design, this form of documenting and presenting the outcomes of socially engaged arts can impact the public and decision making about issues affecting marginalised community members; this can occur through cocreative engagement with artists who try to make change by increasing awareness and mobilising support for their causes.
Full-text available
Article
This paper investigates the effect of technological change on manufacturing decisions in the surgical manufacturing industry of Pakistan. The research concentrated on three manufacturing decisions: international outsourcing, in-house manufacturing, and a combination of the two. Using multinomial logistic regression on data from 115 firms, with technological-change factors as predictors and non-technological factors (firm age, firm size, and human capital) as controls, the paper provides causal evidence of how advanced technology increases the probability of outsourcing. This strategy helps a firm avoid incurring sunk costs on its investment when a new technology is introduced; the sunk cost is even higher when a firm produces at a small scale, and it is mitigated by the high-scale producers in the market. Products prone to technological change should therefore be manufactured by the larger-scale producers to which low-scale producers outsource their production. Governments may extend their integration within the global economy and diversify exports from merchandise to value-added products and services, which, in the context of technological change, can offer economy-wide developmental advantages in terms of higher employment and standards of living. A firm faces the challenge of producing a specific product internally or purchasing it from a specialized vendor; the findings suggest that firms outsource products prone to rapid technological advancement, which benefits industries where technological change is frequent.
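As a hedged illustration of the modeling approach the abstract names, and not the authors' actual data or specification, the sketch below fits a multinomial logit of a three-way manufacturing decision on a technological-change index plus the stated controls; every value is synthetic and the outcome coding is an assumption.

```python
# Illustrative sketch of a multinomial logit like the one described
# (synthetic data; variable names and coding are assumptions, not the paper's).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 115  # the abstract reports a sample of 115 firms

X = pd.DataFrame({
    "tech_change":   rng.normal(size=n),           # technological-change index
    "firm_age":      rng.integers(1, 40, size=n),  # control
    "firm_size":     rng.integers(5, 500, size=n), # control
    "human_capital": rng.normal(size=n),           # control
})
# 0 = in-house, 1 = international outsourcing, 2 = both (coding assumed)
decision = rng.integers(0, 3, size=n)

model = sm.MNLogit(decision, sm.add_constant(X)).fit(disp=False)
print(model.summary())  # coefficients are log-odds relative to category 0
```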
Full-text available
Conference Paper
Discussion paper presented at CSSE 2012 for the Knowledge Network of Applied Educational Research (KNAER) project on Data Visualization. The project was funded by the Ministry of Education's KNAER initiative and produced a variety of resources, shared through workshops and training sessions, on the principles of data visualization in an educational research context.
Article
As large enterprises embrace Digital Transformation, it is obvious that merely scanning paper documents to digital form is inadequate to achieve their broader goals. These must be transformed into more granular data that are version controlled, configuration managed and connected – i.e., a digital thread. As a digital thread begins to span the product lifecycle, its complexity increases exponentially with the number of lifecycle phases and application areas that it encompasses. This quickly becomes intractable, given the “normal” approach to finding and presenting data in PLM platforms. The goal of creating these enormous and rapidly growing networks of data is not just to archive the data. They must empower the product teams to be productive and to make the most effective use of this dynamic data network to meet the rapidly changing goals of the corporation as it reacts to the market's needs. This requires various graphical query and visualization tools.
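The article does not prescribe an implementation, but the idea of a digital thread as a connected, queryable data network can be sketched as a directed graph of lifecycle artifacts; the artifact names below are invented for illustration, and the traversal stands in for the kind of graphical query such tools support.

```python
# Hypothetical sketch: a digital thread as a directed graph of lifecycle
# artifacts, with a traversal standing in for a graphical impact query.
import networkx as nx

thread = nx.DiGraph()
thread.add_edges_from([
    ("requirement:R12", "design:CAD-77"),   # requirement drives a design
    ("design:CAD-77",   "analysis:FEA-03"), # design is verified by analysis
    ("design:CAD-77",   "part:P-5501"),     # design is realized as a part
    ("part:P-5501",     "test:T-220"),      # part is validated by a test
])

# Query: everything downstream of a requirement, i.e., its impact set
# when that requirement changes.
print(sorted(nx.descendants(thread, "requirement:R12")))
```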
Full-text available
Article
Effective risk communication is essential for managing epidemic outbreaks; consequently, news and figures about COVID-19 became a constant necessity from early 2020. In this respect, infographics are a powerful tool for conveying this kind of information. This study analyzes the first year of the pandemic through the infographics published by El País Digital in order to evaluate their use in communicating this health crisis. Using content analysis, we study the topics and framings of the articles, the weight of the infographics relative to the text, the types and subtypes of infographics used, and their degrees and types of interactivity, as well as the evolution of these variables over the study period. The results show that the typical content of the period analyzed is a data-driven article, approached from an analytical/interpretive angle, in which infographics and text share the leading role, and which includes line charts without any interactivity.
Chapter
Data visualization issues associated with the graphics used in a presentation, rather than in the analysis stage of developing the material that leads to the presentation, are discussed in many books. In this chapter, I focus on data visualization from a practical analytical point of view, not on presentation. This does not mean, however, that these graphs cannot be used in a presentation; they certainly can. The graphs I describe are meant to aid and enhance the extraction of latent Rich Information from data.
Article
High Frequency Trading (HFT), which relies mainly on high-speed infrastructure, is a significant element of the trading industry. However, trading machines generate enormous quantities of trading messages that are difficult for financial researchers and traders to explore. Visualization tools for financial data usually focus on portfolio management and the analysis of the relationship between risk and return. Beyond the risk-return relationship, other aspects attract financial researchers, such as liquidity and moments of flash crashes in the market. HFT researchers can extract these aspects from HFT data because it records every detail of market movement. In this paper, we present HFTViz, a visualization tool designed to help financial researchers explore the HFT dataset provided by the NASDAQ exchange. HFTViz provides a comprehensive dashboard aimed at facilitating HFT data exploration and contains two sections: it first offers an overview of the market on a specific date; after users select the stocks they wish to investigate in detail, it then provides a detailed view of the trading messages, trading volumes, and liquidity measures. In a case study gathering five domain experts, we illustrate the usefulness of HFTViz.
Full-text available
Article
Peer review is a necessary and important component of scholarly publication. When done well, it benefits both the reviewer and authors and improves the science itself. However, the skills of effective peer review are rarely taught. In the adolescent field of medical education research, peer review is especially important to advance the scientific rigor of the field. From our experience reviewing biomedical and medical education research, we have found that a thorough review takes multiple readings and multiple hours. The first reading provides a general overview of the aims and methods. Subsequent readings focus on the details of the methodology, results, and interpretation. The written review should provide firm but gentle feedback that the authors can use to improve their work, even if we have recommended rejection for this submission. We hope that this description of our process for reviewing a medical education research manuscript will assist others and thereby advance the quality of publications in our field.
Effectively designed data visualizations allow viewers to use their powerful visual systems to understand patterns in data across science, education, health, and public policy. But ineffectively designed visualizations can cause confusion, misunderstanding, or even distrust—especially among viewers with low graphical literacy. We review research-backed guidelines for creating effective and intuitive visualizations oriented toward communicating data to students, coworkers, and the general public. We describe how the visual system can quickly extract broad statistics from a display, whereas poorly designed displays can lead to misperceptions and illusions. Extracting global statistics is fast, but comparing between subsets of values is slow. Effective graphics avoid taxing working memory, guide attention, and respect familiar conventions. Data visualizations can play a critical role in teaching and communication, provided that designers tailor those visualizations to their audience.
Article
Objective: Identify hazard statement (HS) design elements in procedures that affect whether workers and lab participants perform the associated hazard mitigation.
Background: Many incidents in high-risk industries result from issues with the procedures (e.g., standard operating procedures, SOPs) that workers use to support their performance. HSs in these procedures are meant to communicate potential work hazards and methods of mitigating them. However, there is little empirical research on whether current hazard design guidelines for consumer products translate to procedures.
Method: Two experimental studies, (1) a laboratory study and (2) a high-fidelity simulation, manipulated the HS design elements present in procedures that participants used while performing tasks. Participants' adherence to the hazard mitigation was compared across the HS designs.
Results: The HS guidelines from consumer products did not translate to procedures. Specifically, the presence of an alert icon, a box around the statement, and highlighting of the statement did not improve adherence to HSs. Indeed, the only consistent finding was for the icon, whose presence reliably predicted nonadherence in both studies. The total number of design elements also had no positive effect on adherence.
Conclusion: These findings indicate that more fundamental research on procedure HSs is needed to identify effective designs and to understand the attentional mechanisms potentially underlying these results.
Application: Current regulations and guidelines on hazard presentation in procedures should be revisited.
Full-text available
Article
Any final analysis, as a rational or valid way of interpreting the major results of scientific and especially statistical research, whether multi-, inter-, cross-, or transdisciplinary, draws on methods of brief and relevant presentation focused on series or distributions, tables, and above all graphic representations. The paper opens with an introductory section that distills the specific paradigm, the original concept of the statistical diagram, with emphasis on the multidisciplinary and truly complex character of the modern graph. The second section is a brief review of the literature on the historiography of visual representations, noting that the oldest visual representations that can be called (pre)graphs appeared even before the beginning of the second century BC, while standard or classical diagrams appeared only after the beginning of the 14th century. The final part of this multidisciplinary retrospective presents the personalities, domains, and defining moments in the evolution of statistical diagrams, with emphasis on five legendary figures (Nicole Oresme, René Descartes, Joseph Priestley, William Playfair, and Florence Nightingale) and on the growing impact of the statistical chart in scientific research as it became a visual support for the researcher's logical thinking. The authors' method of investigation is alternately narrative and descriptive, preserving the temporal landmarks specific to the history of the modern graph; for this reason the article required no methodological section, only a retrospective one describing the evolution of an instrumental statistical method that has served scientific argumentation in all domains from its first appearance to the present. The typological multiplication of statistical diagrams is perhaps at its most prolific today, amplified by specialized software and by multipurpose software packages for modern visual representation of data series. A few final remarks reconfirm the need for retrospective investigations: the future of any research naturally depends on evolutionary knowledge of the past and, in the case of this paper, on the official recognition of data analysis as a legitimate branch of science.
Chapter
Heuristic evaluation has been an important part of data visualization. Many heuristic rules and guidelines for evaluating data visualizations have been proposed and reviewed. However, applying heuristic evaluation in practice is not trivial. First, the heuristic rules are scattered across publications in different disciplines; there is no central repository of heuristic rules for data visualization, and there are no consistent guidelines on how to apply them. Second, it is difficult to find multiple experts who are knowledgeable about the heuristic rules, their pitfalls, and their counterpoints. To address these issues, we present a computer-assisted heuristic evaluation method for data visualization. Based on this method, we developed a Python-based tool for evaluating plots created by the visualization tool Plotly. Recent advances in declarative data visualization libraries have made it feasible to create such a tool. By providing advice, critiques, and recommendations, this tool serves as a knowledgeable virtual assistant that helps data visualization developers evaluate their visualizations as they code.
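The chapter's actual rule repository and API are not reproduced here; the sketch below only illustrates the general pattern of rule-based critique over a declarative Plotly figure, with the rules and thresholds invented for the example.

```python
# Illustrative sketch of computer-assisted heuristic evaluation for Plotly
# figures; the rules and thresholds below are invented, not the chapter's.
import plotly.express as px

def heuristic_report(fig) -> list[str]:
    """Apply a few example heuristic rules to a Plotly figure."""
    advice = []
    for trace in fig.data:
        if trace.type == "pie" and trace.values is not None and len(trace.values) > 6:
            advice.append("Pie has more than 6 slices; a sorted bar chart may read better.")
    if len(fig.data) > 8:
        advice.append("More than 8 overlaid traces; consider small multiples.")
    if not fig.layout.title.text:
        advice.append("Figure has no title; state the chart's message.")
    return advice or ["No heuristic issues found."]

fig = px.pie(values=[5, 9, 3, 7, 4, 6, 2, 8], names=list("ABCDEFGH"))
print("\n".join(heuristic_report(fig)))
```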
Chapter
Technology can be designed to strengthen participatory culture and to give people a voice through User-Generated Content (UGC). Such practices may also influence the way we engage with places and communities. In this regard, we propose the conceptualization of an online platform to promote participation and preserve audiovisual records of shared experiences. Memories attached to places and cultural events, such as concerts, traditional celebrations, and visits to landmarks and exhibitions, are frequently captured in multimedia records, which are often shared online by those who experienced them. The aggregation and correlation of these audiovisual resources in a participatory platform may enhance these experiences through forms of presentation based on multiple perspectives, making them collective. To gather insights and provide a proof of concept, the method of exploratory interviews followed by qualitative content analysis was adopted. Hence, the conceptualization of a digital platform that allows the creation of a living collaborative archive, one that preserves the uniqueness of each resource while also allowing an overview and combination of other participants' contributions, was presented to experts in the areas of archives, museology, heritage, ethnography, community projects, cultural events, design, participatory media, and digital platforms. This paper presents a segment of the interview results concerning relevant use contexts, along with recommendations and strategies for collection, visualization, and participation to guide the development of prototypes to be tested with target users within a PhD research project.
Chapter
This chapter will continue the theme of the previous one, but with more emphasis on how the audience and the medium chosen for communicating results can affect the statistical content of what is presented and how it is described.
Chapter
This chapter moves from having seen our raw data and having done some preparatory manipulation of that data, to the analysis of the data as specified by our statistical analysis plan. This single chapter on ‘analysis’ is not going to attempt to cover, or even introduce, all the types of analytical method within each study design setting in which those methods might be encountered. Instead, the chapter builds on and offers thoughts about those methods which might be commonly used in research studies in medicine and healthcare. The scope is far from being comprehensive but will hopefully include several analytical frameworks with which the reader is either already familiar or is seeking to become familiar.
Chapter
Understanding the complex relationships among a range of disparate types of data, including (but not limited to) clinical signs and symptoms, socio-economic statuses, and environmental exposures, is an ongoing struggle for the researchers, administrators, clinicians, public health experts, and patients who seek to use data to understand mental health. Information visualization techniques combining rich displays of data with highly responsive user interactions allow for dynamic exploration and interpretation of data to gain otherwise unavailable insights into these challenging datasets. To encourage broader adoption of visualization techniques in mental health, we draw upon research conducted over the past thirty years to introduce the reader to the field of interactive visualizations. We introduce theoretical models underlying information visualization and key considerations in the design of visualizations, including understanding user needs, managing data, effectively displaying information, and selecting appropriate approaches for interacting with the data. We introduce various types of mental health data, including survey data, administrative data, environmental data, and mobile health data, with a focus on data integration and the use of predictive models. We introduce currently available open-source and commercial tools for visualization. Finally, we discuss two outstanding challenges in the field: uncertainty visualization and evaluation of visualization.
Chapter
“Big Data” is a concept that has been used in the last 10–15 years to describe the increasing complexity and amount of data available at scale in organizations and companies—data that often requires novel computational techniques and methods to generate knowledge. Compared to other health domains, mental health is influenced by a greater variety of factors, such as those related to mental, interpersonal, cultural, environmental, and biological phenomena. Thus, knowledge discovery in mental health research can involve a broad variety of data types and therefore data resources, including medical, behavioral, administrative, molecular, ‘omics’, environmental, financial, geographic, and social media repositories. Moreover, these varied phenomena interact in more complex ways in mental health and illness than in other domains of health so knowledge discovery must be open to this complexity. In this chapter, we outline the main underlying concepts of the “big data” paradigm and examine examples of different types of data repositories that could be used for mental health research. We also provide an example case study for developing a data repository, outlining the key considerations for designing, building, and using these types of resources.
Article
Introduction. The amount of information offered for study in modern educational institutions is increasing rapidly. The tension between the obligation to master this growing flow of educational information and the traditional pedagogical requirement to keep visual aids within an acceptable amount, while still reaching the necessary level of training, is driving the mass introduction of multimedia tools. The resulting so-called clip thinking obscures the problem that the thesaurus of learning information is being filled with content of virtually uncontrollable quality. Materials and Methods. Theoretical and empirical research methods were used in the course of the work: analysis, synthesis, generalization, comparison, and scientific theorization. Results. Clip thinking interferes with a clear understanding of context, so a clip leaves no trace among semantically related phenomena. The trend toward a radical change in the roles of teachers and students reveals the main reasons and conditions for the transition to visualized presentation of educational information. Alongside the apparent negative consequences of the current pedagogical situation, the study reveals some advantages of the protective reactions that modern learners have developed against the powerful stream of educational information. The studies confirmed the risks of excessive visual material in the learning process. Objective differences in the purpose and effectiveness of the types of visual aids considered are highlighted. The main points in the formation of visual images are described, and the need to maintain problem-based conditions in the educational process is emphasized. Discussion and Conclusions. The studies carried out support the recommendations for the proposed frame-graphic approach to organizing the educational process. The orientation of the educational process toward each identified group is justified, each group requiring a rather different kind of script. One solution is to use educational information visualization tools to ensure the sustainability of the creative thinking vector.