Article

Open our visualization eyes, individualization: On Albrecht Dürer’s 1515 wood cut celestial charts

Abstract

The position that visualization is an intimate part of human existence and is associated with the human species is advanced in this work: visualization abounds, delimited by the space of individuality, across human history. Visualization involves two complementary aspects of the uniqueness of individuals: individualization reflects individuals’ capabilities, and personalization reflects designs that seek compatibility with those capabilities. This has a number of implications for the design and evaluation of visualizations. For one, a suitable visualization model that expresses individualization and personalization is needed: a brief survey of models is presented. For another, addressing intellectual uniqueness requires deep analysis and a selective, objective balance, owing to the potentially enormous number of unique ideas that support visualization design and viewer experiences. The Engineering Insightful Serviceable Visualizations model is selected as a guide for a comprehensive visualization evaluation of Albrecht Dürer’s 1515 celestial charts. Motivating this choice of visualization is its significance as the first notable and influential European star chart intended for scientific use and mass viewership, and as a blending of science and art. In addition, there is a lack of discussion concerning this particular visualization in the visualization literature. Concluding remarks suggest the significance of approaching visualization from this point of view.

Article
When designing websites and planning marketing concepts, students need to understand how to create a user-friendly experience in order to build traffic over time. Users want a website that is both intuitive and informative while providing an ultra-efficient search experience. They want information that is essential and specific to them, for their own precise needs and purposes. The key to designing a successful interface is to meet the user’s expectation of finding a suitable fact, product, or service expeditiously. Examining the case of Zillow.com, a highly successful real estate portal website, provides students with an effective example of underlying web design principles, such as user knowledge, ease of navigation, content adjustment, and the site’s fit within the digital marketplace. Zillow.com exemplifies ease of use through various aspects of superior web design, such as the accessibility and functionality of both the top-menu and submenu links and the ability to filter search results using data fields with key requirements, leading to instant, valuable results (see the sketch below). By looking at what Zillow has done right, students can glean a set of best practices in web design that not only apply to the real estate industry, but can be translated broadly across websites in all industries.
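As a small illustration of the field-based filtering described above, the following Python sketch narrows a list of hypothetical property listings against a set of key requirements; the field names and sample listings are invented for illustration and are not Zillow's actual data model or API.

```python
# Minimal sketch: filtering search results by data fields with key requirements.
# The field names and sample listings are invented for illustration; this is not
# Zillow's actual data model or API.

listings = [
    {"price": 350_000, "bedrooms": 3, "bathrooms": 2, "city": "Austin"},
    {"price": 525_000, "bedrooms": 4, "bathrooms": 3, "city": "Austin"},
    {"price": 299_000, "bedrooms": 2, "bathrooms": 1, "city": "Dallas"},
]

def filter_listings(listings, max_price=None, min_bedrooms=None, city=None):
    """Keep only listings that satisfy every requirement the user specified."""
    results = []
    for home in listings:
        if max_price is not None and home["price"] > max_price:
            continue
        if min_bedrooms is not None and home["bedrooms"] < min_bedrooms:
            continue
        if city is not None and home["city"] != city:
            continue
        results.append(home)
    return results

# A user searching for an Austin home under $400,000 with at least 3 bedrooms
# gets an immediately narrowed result set.
print(filter_listings(listings, max_price=400_000, min_bedrooms=3, city="Austin"))
```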
Article
Full-text available
The collection of 5th-millennium BCE frescoes from the Chalcolithic (4700–3700 BC) township of Teleilat Ghassul, Jordan, is a vital signpost for our understanding of early visual communication systems and the role of art in preliterate societies. The collection of polychrome wall murals includes intricate geometric designs, scenes illustrative of a stratified and complex society, and possibly early examples of landscape vistas. These artworks were produced by specialists using the buon fresco technique, and they provide a visual archive documenting a fascinating and largely unknown culture. This paper will consider the place these pictorial artefacts hold in the prehistory of art.
Article
Full-text available
Our ability to rapidly detect threats is thought to be subserved by a subcortical pathway that quickly conveys visual information to the amygdala. This neural shortcut has been demonstrated in animals but has rarely been shown in the human brain. Importantly, it remains unclear whether such a pathway might influence neural activity and behavior. We conducted a multimodal neuroimaging study of 622 participants from the Human Connectome Project. We applied probabilistic tractography to diffusion-weighted images, reconstructing a subcortical pathway to the amygdala from the superior colliculus via the pulvinar. We then computationally modeled the flow of haemodynamic activity during a face-viewing task and found evidence for a functionally afferent pulvinar-amygdala pathway. Critically, individuals with greater fibre density in this pathway also had stronger dynamic coupling and enhanced fearful face recognition. Our findings provide converging evidence for the recruitment of an afferent subcortical pulvinar connection to the amygdala that facilitates fear recognition. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that minor issues remain unresolved (see decision letter).
Article
Full-text available
Slimak et al. challenge the reliability of our oldest (>65,000 years) U-Th dates on carbonates associated with cave paintings in Spain. They cite a supposed lack of parietal art for the 25,000 years following this date, along with potential methodological issues relating to open-system behavior and corrections to detrital or source water 230Th. We show that their criticisms are unfounded.
Article
Full-text available
Hoffmann et al. (Reports, 23 February 2018, p. 912) report the discovery of parietal art older than 64,800 years and attributed to Neandertals, at least 25 millennia before the oldest parietal art ever found. Instead, critical evaluation of their geochronological data seems to provide stronger support for an age of 47,000 years, which is much more consistent with the archaeological background at hand.
Article
Full-text available
As a new form of advertising, augmented reality (AR) advertising offers a unique opportunity to study how new technology-facilitated ad campaigns are constructed across cultures. This research analyzes the content of 117 AR ad campaign videos uploaded on YouTube, and reveals the prevalent characteristics of AR campaigns. We compared videos from individualistic countries with those from collectivistic countries. Results suggest that AR ad campaigns from more individualistic cultures tend to include product information and enable a user to control the virtual AR content, whereas AR ad campaigns from more collectivistic cultures tend to allow a user to become part of the virtual AR content without manipulating it. In closing, we discuss managerial implications for marketers and advertisers as well as opportunities for future research.
Article
Full-text available
Neandertal cave art: It has been suggested that Neandertals, as well as modern humans, may have painted caves. Hoffmann et al. used uranium-thorium dating of carbonate crusts to show that cave paintings from three different sites in Spain must be older than 64,000 years. These paintings are the oldest dated cave paintings in the world. Importantly, they predate the arrival of modern humans in Europe by at least 20,000 years, which suggests that they must be of Neandertal origin. The cave art comprises mainly red and black paintings and includes representations of various animals, linear signs, geometric shapes, hand stencils, and handprints. Thus, Neandertals possessed a much richer symbolic behavior than previously assumed. Science, this issue p. 912.
Chapter
Full-text available
Data visualization and analytics are nowadays cornerstones of Data Science, turning the abundance of Big Data produced by modern systems into actionable knowledge. Indeed, the Big Data era has made available voluminous datasets that are dynamic, noisy and heterogeneous in nature. Transforming a data-curious user into someone who can access and analyze that data is now even more burdensome, as a great number of users have little or no support and expertise on the data-processing side. Thus, the area of data visualization and analysis has gained great attention recently, calling for joint action from different research areas and communities such as information visualization, data management and mining, human-computer interaction, and computer graphics. This article presents the limitations of traditional visualization systems in the Big Data era. Additionally, it discusses the major prerequisites and challenges that should be addressed by modern visualization systems. Finally, the state-of-the-art methods that have been developed in the context of Big Data visualization and analytics are presented, considering methods from the Data Management and Mining, Information Visualization and Human-Computer Interaction communities.
Article
Full-text available
The aim of this article is to analyse cone density, spacing and arrangement using an adaptive optics flood-illumination retinal camera (rtx1™) on a healthy population. Cone density, cone spacing and packing arrangements were measured on the right retinas of 109 subjects at 2°, 3°, 4°, 5° and 6° of eccentricity along 4 meridians. The effects of eccentricity, meridian, axial length, spherical equivalent, gender and age were evaluated. Cone density decreased on average from 28,884 ± 3,692 cones/mm² at 2° of eccentricity to 15,843 ± 1,598 cones/mm² at 6°. A strong inter-individual variation, especially at 2°, was observed. No notable difference in cone density was observed between the nasal and temporal meridians or between the superior and inferior meridians. However, the horizontal and vertical meridians differed by around 14% (t-test, p < 0.0001). Cone density, expressed in units of area, decreased as a function of axial length (r² = 0.60), but remained constant (r² = 0.05) when expressed in terms of visual angle, supporting the hypothesis that the retina is stretched during elongation of the eyeball. Gender did not modify the cone distribution. Cone density was slightly modified by age, but only at 2°: the older group showed a smaller density (7%). Cone spacing increased from 6.49 ± 0.42 μm to 8.72 ± 0.45 μm between 2° and 6° of eccentricity, respectively. The mosaic of the retina is mainly triangularly arranged (i.e. cells with 5 to 7 neighbors) from 2° to 6°; around half of the cells had 6 neighbors.
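As a companion to the area-versus-angle comparison above, the following minimal Python sketch shows how a cone density measured per mm² can be re-expressed per square degree of visual angle using Bennett's approximation for the retinal magnification factor (RMF ≈ 0.01306 × (axial length − 1.82) mm/deg); the densities and axial lengths below are hypothetical, not data from the study.

```python
# Minimal sketch: converting cone density from cones/mm^2 to cones/deg^2.
# Assumes Bennett's approximation for the retinal magnification factor (RMF);
# the example densities and axial lengths below are hypothetical.

def retinal_magnification_factor(axial_length_mm: float) -> float:
    """Approximate retinal image scale in mm per degree of visual angle."""
    return 0.01306 * (axial_length_mm - 1.82)

def density_per_deg2(density_per_mm2: float, axial_length_mm: float) -> float:
    """Convert an areal cone density to an angular cone density."""
    rmf = retinal_magnification_factor(axial_length_mm)  # mm/deg
    return density_per_mm2 * rmf ** 2                     # cones/deg^2

if __name__ == "__main__":
    # A longer (myopic) eye with a stretched retina has fewer cones per mm^2,
    # but each degree of visual angle covers more retina, so the angular
    # density can stay roughly constant.
    for axial_length, density_mm2 in [(23.0, 30000.0), (26.0, 23500.0)]:
        print(axial_length, round(density_per_deg2(density_mm2, axial_length)))
```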
Article
Full-text available
This research demonstrates that activities related to teaching stone knapping took place in the Early Neolithic flint mine at Casa Montero (Spain). Raw material sources are shown to be the ideal places for research and analysis of the transmission of technical knowledge. This study is based on the analysis of abandoned cores at the site. The methodology and criteria that were utilized to identify lithic debris resulting from apprentice workmanship are described. Identification of errors in selection and execution has enabled a classification of cores according to distinctions in flint knapping ability: expert, advanced, and novice. A premature abandonment of a core is associated with novice workmanship. The method of knowledge transfer of lithic technology is also discussed, proposing an instructional system that may have been used in this early agricultural society. This research suggests a progressive inclusion of youth into the social system by means of technological and social learning in context.
Article
Full-text available
Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage (and its evaluation) of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the nested model of visualization design. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature.
Article
Full-text available
Background: Global and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies. We aimed to provide global estimates, trends, and projections of global blindness and vision impairment. Methods: We did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12). Findings: Globally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9-65·4) were blind (crude prevalence 0·48%; 80% UI 0·17-0·87; 56% female), 216·6 million (80% UI 98·5-359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34-4·89; 55% female), and 188·5 million (80% UI 64·5-350·2) had mild visual impairment (2·57%, 80% UI 0·88-4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1-1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9-997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9-57·3) in 1990 to 36·0 million (80% UI 12·9-65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (-36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3-270·0) in 1990 to 216·6 million (80% UI 98·5-359·1) in 2015. Interpretation: There is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world's population is causing a substantial increase in number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels. Funding: Brien Holden Vision Institute.
Conference Paper
Full-text available
There are some aspects of visualisation that are uniquely human. Humans recognise why certain aspects of the data matter, are aware of background information, and use intuition, purpose and storytelling when choosing what to visualise. Whilst computer algorithms can create visualisations from data in a brute-force combinatorial manner, humans are still much better at quickly determining what context and which aspects of the data, at what granularity, will successfully highlight what the data represents. Visual analytics, which combines machine learning, graphical user interfaces and human interaction, is a popular way of addressing the shortcomings of fully automated, computer-generated visualisations. This paper is part of a larger project that explores the development of a non-interactive computational algorithm that enhances the process of computer-produced visualisations by introducing criteria and techniques from the theories of computational creativity, a sub-field within the artificial intelligence domain. One of the objectives of the larger project is to identify the parts of the visualisation pipeline, and also to identify what aspects of the visualisation generation process humans are better at than computers, specifically with respect to human creativity. This critical literature review aims to identify the parts of the visualisation pipeline that involve human creativity.
Article
Full-text available
Readability criteria have been used as a measurement of the quality of graph visualizations. In this paper, we argue that readability criteria are necessary but not sufficient. We propose a new kind of criterion, namely faithfulness, to evaluate the quality of graph layouts. We introduce a general model for quantifying faithfulness and contrast it with the well-established readability criteria. We show examples of common visualization techniques, such as multidimensional scaling, edge bundling and several other visualization metaphors, for the study of faithfulness.
Article
Full-text available
In bridging ideas from the forum of visual-spatial learning with those of art and design learning, inspiration is taken from Piaget, who explained that the evolution of spatial cognition occurs through perception as well as through thought and imagination. Insights are embraced from interdisciplinary educational theorists, intertwining and dividing their contributions along Piaget's lines into three interrelated aspects: perceptual, intellectual, and imaginative. In the quest for early literacy, the perception and ordering of universals of form, the formation and wielding of internal intellectual constructs, and the construction of metaphorical and imaginative ideas and creations are all involved in aesthetic growth. With further understanding, the arena of visual-spatial learning, as enhanced by art and design learning, may find more inclusion in general education.
Article
Full-text available
Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA.
Article
Full-text available
In contrast to many other human endeavors, science pays little attention to its history. Fundamental scientific discoveries are often considered to be timeless and independent of how they were made. Science and the history of science are regarded as independent academic disciplines. Although most scientists are aware of great discoveries in their fields and their association with the names of individual scientists, few know the detailed stories behind the discoveries. Indeed, the history of scientific discovery is sometimes only recorded in informal accounts that may be inaccurate or biased for self-serving reasons. Scientific papers are generally written in a formulaic style that bears no relationship to the actual process of discovery. Here we examine why scientists should care more about the history of science. A better understanding of history can illuminate social influences on the scientific process, allow scientists to learn from previous errors and provide a greater appreciation for the importance of serendipity in scientific discovery. Moreover, history can help to assign credit where it is due and call attention to evolving ethical standards in science. History can make science better.
Article
Full-text available
Engaging a general audience with scientific research can be effectively assisted by visualization. Visualization art has the potential to engage users with scientific data in a way that gives the audience deep and reflective insights into scientific research. This paper reviews relevant literature on different methods and practices of visualization from the analytical to artistic. We will see that visualization art can be effective in promoting behavioral change in the general public on topics such as climate change. By presenting the audience with data, in a clarified context, the non-expert user can gain unique and deep insight.
Article
Full-text available
The objective is to investigate the hypothesis that Neandertal eye orbits can predict group size and social cognition, as presented by Pearce et al. (Proc R Soc B Biol Sci 280 (2013) 20130168). We performed a linear regression of known orbital aperture diameter (OAD), neocortex ratio, and group size among 18 extant diurnal primate species. Our data were derived from Kirk (J Hum Evol 51 (2006) 159-170) and Dunbar (J Hum Evol 22 (1992) 469-493; J Hum Evol 28 (1995) 287-296). There is a positive correlation between OAD and group size, a positive correlation between neocortex ratio and group size, and a positive correlation between OAD and neocortex size. The strength of the collinearity between OAD and neocortex ratio accounts for any significance of OAD in a model. The model that best accounts for variation in group size is one that includes only neocortex ratio; including OAD does not strengthen the model. OAD accounts for 29 percent of the variation in group size. Larger orbits are correlated with larger group sizes in primates, although not significantly when controlling for neocortex ratio. Moreover, the amount of variation in group size that can be explained by OAD is negligible. The larger orbits of Neandertals compared to the average modern human population do not permit any interpretation of cognitive ability related to group size.
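To make the collinearity argument concrete, here is a minimal Python sketch (numpy only) of the kind of comparison described: fitting group size against neocortex ratio alone and then against neocortex ratio plus OAD, and checking how little the added, collinear predictor improves the fit. The data generated here are synthetic placeholders, not the primate dataset used in the study.

```python
# Minimal sketch of a collinearity check with ordinary least squares (numpy only).
# The data below are synthetic placeholders, not the study's primate dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 18                                              # same order as 18 species
neocortex = rng.uniform(1.5, 4.0, n)                # hypothetical neocortex ratios
oad = 20 + 3 * neocortex + rng.normal(0, 0.5, n)    # OAD made collinear with neocortex
group = 5 + 12 * neocortex + rng.normal(0, 3, n)    # group size driven by neocortex only

def r_squared(y, X):
    """R^2 of an OLS fit of y on X (an intercept column is added internally)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("neocortex only     :", round(r_squared(group, neocortex[:, None]), 3))
print("neocortex + OAD    :", round(r_squared(group, np.column_stack([neocortex, oad])), 3))
print("neocortex-OAD corr :", round(np.corrcoef(neocortex, oad)[0, 1], 3))
```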
Article
Full-text available
Data surrounds each and every one of us in our daily lives, ranging from exercise logs, to archives of our interactions with others on social media, to online resources pertaining to our hobbies. There is enormous potential for us to use these data to understand ourselves better and make positive changes in our lives. Visualization (Vis) and visual analytics (VA) offer substantial opportunities to help individuals gain insights about themselves, their communities and their interests; however, designing tools to support data analysis in non-professional life brings a unique set of research and design challenges. We investigate the requirements and research directions required to take full advantage of Vis and VA in a personal context. We develop a taxonomy of design dimensions to provide a coherent vocabulary for discussing personal visualization and personal visual analytics. By identifying and exploring clusters in the design space, we discuss challenges and share perspectives on future research. This work brings together research that was previously scattered across disciplines. Our goal is to call research attention to this space and engage researchers to explore the enabling techniques and technology that will support people to better understand data relevant to their personal lives, interests, and needs.
Article
Full-text available
We developed and validated a symptom scale that can be used to identify "trypophobia", in which individuals experience aversion induced by images of clusters of circular objects. The trypophobia questionnaire (TQ) was based on reports of various symptom types, but it nevertheless demonstrated a single construct, with high internal consistency and test-retest reliability. TQ scores predicted discomfort from trypophobic images, but not from neutral or unpleasant images, and did not correlate with anxiety. Using image filtering, we also reduced the excess energy at mid-range spatial frequencies associated with both trypophobic and uncomfortable images. Relative to unfiltered trypophobic images, the discomfort from filtered images experienced by observers with high TQ scores was less than that experienced with control images and by observers with low TQ scores. Furthermore, we found that clusters of concave objects (holes) did not induce significantly more discomfort than clusters of convex objects (bumps), suggesting that trypophobia involves images with a particular spectral profile rather than clusters of holes per se.
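For readers unfamiliar with the filtering manipulation mentioned above, the following minimal Python sketch shows one generic way to attenuate mid-range spatial frequencies in a grayscale image with a Fourier-domain band-stop filter; the cutoff band and attenuation factor are illustrative assumptions, not the filter parameters used in the study.

```python
# Minimal sketch: attenuating mid-range spatial frequencies with a band-stop
# filter in the Fourier domain. The band limits (8-32 cycles/image) and the
# attenuation factor are illustrative assumptions, not the study's parameters.
import numpy as np

def bandstop_filter(image: np.ndarray, low: float = 8.0, high: float = 32.0,
                    attenuation: float = 0.2) -> np.ndarray:
    """Scale Fourier components whose radial frequency lies in [low, high)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h            # vertical frequencies in cycles/image
    fx = np.fft.fftfreq(w) * w            # horizontal frequencies in cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    gain = np.where((radius >= low) & (radius < high), attenuation, 1.0)
    filtered = np.fft.ifft2(np.fft.fft2(image) * gain)
    return np.real(filtered)

if __name__ == "__main__":
    # Synthetic random texture standing in for a photograph.
    rng = np.random.default_rng(1)
    img = rng.random((256, 256))
    out = bandstop_filter(img)
    print(img.shape, out.shape, float(out.mean()))
```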
Chapter
This chapter highlights the story of how artists participated in the practices of observation that Lorraine Daston and Peter Galison compellingly define as collective empiricism. It shows that the history of scientific objectivity has constantly crossed paths with the history of artistic visualization, from which it has received some powerful challenges. Historicizing the category of representation also has the advantage of reinforcing its vital connection with visualization, a connection that is rarely addressed in current philosophical discussions. Distinctive of twentieth-century image-making, trained judgment was a reaction to the constraints imposed by mechanical reproducibility. In the age of computerization, visualization challenges the boundaries between the artifactual and the natural: the new scientific images fulfill the purpose of manipulating the real, and they do so in an aesthetically pleasing way. Callanan's artwork is a physical visualization of real-time raw scientific data.
Book
Explore the beauty and awe of the heavens through the rich celestial prints and star atlases offered in this third edition book. The author traces the development of celestial cartography from ancient to modern times, describes the relationships between different star maps and atlases, and relates these notions to our changing ideas about humanity’s place in the universe. Also covered in this book are more contemporary cosmological ideas, constellation representations, and cartographic advances. The text is enriched with 226 images (141 in color) from actual, antiquarian celestial books and atlases, each one with an explanation of unique astronomical and cartographic features. This never-before-available hardcover edition includes two new chapters on pictorial style maps and celestial images in art, as well as over 50 new images. Additionally, the color plates are now incorporated directly into the text, providing readers with a vibrant, immersive look into the history of star maps.
Article
Cellular Automata (CA) are discrete simulation models, thus producing spatio-temporal data through experiments, as well as stochastic models, thus generating multi-run data. Identifying temporal patterns, such as cycles, is important for understanding the behavior of the model. Assessing variability is also essential to estimate which parameter values may require more runs and what consensus emerges across simulation runs. However, these two tasks are currently arduous, as the commonly employed slider-based visualizations offer little support for identifying temporal trends or excessive model variability. In this article, we addressed these two tasks by developing, implementing, and evaluating a new visual analytics environment that uses several linked visualizations. Our empirical evaluation of the proposed environment assessed (i) whether modelers could identify temporal patterns and variability, (ii) how features of simulations impacted performance, and (iii) whether modelers can use the familiar slider-based visualization together with our new environment. Results show that participants were confident in the results obtained using our new environment. They were also able to accomplish the two target tasks without taking longer than they would with current solutions. Our qualitative analysis found that some participants saw value in switching between our proposed visualization and the commonly used slider-based version. In addition, we noted that errors were affected not only by the type of visualization but also by specific features of the simulations. Future work may combine and adapt these visualizations depending on salient simulation parameters.
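As a concrete illustration of the kind of data such an environment must summarize, the sketch below runs a simple stochastic one-dimensional cellular automaton several times and flags the first repeated state in each run; the update rule, noise level, grid size, and run count are hypothetical choices for illustration, not the models studied in the article.

```python
# Minimal sketch: multi-run data from a stochastic 1-D cellular automaton,
# with naive cycle detection by hashing previously seen states.
# Rule, noise level, grid size, and run count are illustrative choices only.
import numpy as np

def step(state: np.ndarray, noise: float, rng: np.random.Generator) -> np.ndarray:
    """Majority rule over each cell and its two neighbors, with random bit flips."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    new = ((left + state + right) >= 2).astype(np.int8)   # deterministic majority vote
    flips = rng.random(state.size) < noise                # stochastic perturbation
    return np.where(flips, 1 - new, new)

def run(width: int = 64, steps: int = 200, noise: float = 0.01, seed: int = 0):
    """Return the spatio-temporal history and the step at which a state repeats."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, width, dtype=np.int8)
    seen, history = {state.tobytes(): 0}, [state]
    for t in range(1, steps + 1):
        state = step(state, noise, rng)
        history.append(state)
        key = state.tobytes()
        if key in seen:
            return np.array(history), t                   # repeated state (cycle) found
        seen[key] = t
    return np.array(history), None

if __name__ == "__main__":
    for seed in range(5):                                 # five independent runs
        _, cycle_at = run(seed=seed)
        print(f"run {seed}: repeated state at step {cycle_at}")
```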
Article
Information behavior has emerged as an important aspect of human life; however, our knowledge and understanding of it is incomplete and underdeveloped scientifically. Research on the topic is largely contemporary in focus and has generally not incorporated results from other disciplines. In this monograph, Spink provides a new understanding of information behavior by incorporating related findings, theories and models from the social sciences, psychology and cognition. In her presentation, she argues that information behavior is an important instinctive sociocognitive ability that can only be fully understood with a highly interdisciplinary approach. The leitmotifs of her examination are three important research questions: First, what is the evolutionary, biological and developmental nature of information behavior? Second, what is the role of instinct versus environment in shaping information behavior? And third, how have information behavior capabilities evolved and developed over time? Written for researchers in information science as well as the social and cognitive sciences, Spink’s controversial text lays the foundation for a new interdisciplinary theoretical perspective on information behavior that will not only provide a more holistic framework for this field but will also impact those sciences, and thus open up many new research directions.
Book
This book attempts to connect artificial intelligence to primitive intelligence. It explores the idea that a genuinely intelligent computer will be able to interact naturally with humans. To form this bridge, computers need the ability to recognize, understand and even have instincts similar to humans. The author organizes the book into three parts. He starts by describing primitive problem-solving, discussing topics like default mode, learning, tool-making, pheromones and foraging. Part two then explores behavioral models of instinctive cognition by looking at the perception of motion and event patterns, appearance and gesture, behavioral dynamics, figurative thinking, and creativity. The book concludes by exploring instinctive computing in modern cybernetics, including models of self-awareness, stealth, visual privacy, navigation, autonomy, and survivability. Instinctive Computing reflects on systematic thinking for designing cyber-physical systems and will be stimulating reading for those who are interested in artificial intelligence, cybernetics, ethology, human-computer interaction, data science, computer science, security and privacy, social media, or autonomous robots.
Book
Until the publication of the first edition of 'Star Maps,' books were either general histories of astronomy using examples of antiquarian celestial maps as illustrations, or catalogs of celestial atlases that failed to trace the flow of sky map development over time. The second edition focuses on the development of contemporary views of the heavens and advances in map-making. It captures the beauty and awe of the heavens through images from antiquarian celestial prints and star atlases. This book uniquely combines a number of features: 1) the history of celestial cartography is traced from ancient to modern times; 2) this development is integrated with contemporary cosmological systems; 3) the artistry of sky maps is shown using beautiful color images from actual celestial atlases and prints; 4) each illustration is accompanied by a legend explaining what is being shown; and 5) the text is written for the lay reader based on the author's experience with writing articles for amateur astronomy and map collector magazines. This updated second edition of 'Star Maps' contains over 50 new pages of text and 44 new images (16 in color), including completely new sections on celestial frontispieces, deep-sky objects, playing card maps, additional cartographers, and modern computerized star maps. There is also expanded material about celestial globes, volvelles, telescopes, and planets and asteroids. © Springer Science+Business Media New York 2012. All rights reserved.
Article
A theoretical visualization model that is suitable for a guideline-based engineering approach, as well as generically and widely applicable to visualization and its subfields, is developed in this work. It is based on investigating question-answer pairs and emphasizes understanding and knowledge acquisition achieved via insight and learning but impeded by confusion brought on by inappropriateness, incoherence, anacolutha and non sequiturs. These terms are technically defined within the model. A visualization metric is developed that relates insight, learning and confusion to characteristics of how much and how fast understanding and knowledge are acquired. The model entails two connected processes: a visualization process based on visualization media componentization, followed by a human process consisting of perception, interpretation, understanding and knowledge acquisition. Several case studies drawn from the various subfields of visualization show the potential of the proposed model.
Article
Visual analytics has been widely studied in the past decade. One key to making visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline, which provides effective abstractions for designing and implementing visual analytics systems. In this paper we review previous work on visual analytics pipelines and their individual modules from multiple perspectives: data, visualization, model and knowledge. For each module we discuss the various representations and descriptions of pipelines found in the literature, and compare their commonalities and differences.
Book
Data is powerful. It separates leaders from laggards, and it drives business disruption, transformation, and reinvention. Today's most progressive companies are using the power of data to propel their industries into new areas of innovation, specialization, and optimization. The horsepower of new tools and technologies has provided more opportunities than ever to harness, integrate, and interact with massive amounts of disparate data for business insights and value, something that will only continue in the era of the Internet of Things. And, as a new breed of tech-savvy and digitally native knowledge workers rises to the ranks of data scientist and visual analyst, the needs and demands of the people working with data are changing, too. The world of data is changing fast, and it is becoming more visual. Visual insights are becoming increasingly dominant in information management, and with the reinvigorated role of data visualization, this imperative is a driving force in creating a visual culture of data discovery. The traditional standards of data visualization are making way for richer, more robust and more advanced visualizations and new ways of seeing and interacting with data. However, while data visualization is a critical tool for exploring and understanding bigger, more diverse and more dynamic data, by understanding and embracing our human hardwiring for visual communication and storytelling and properly incorporating key design principles and evolving best practices, we take the next step toward transforming data visualizations from tools into unique visual information assets. The book discusses several years of in-depth industry research and presents vendor tools, approaches, and methodologies in discovery, visualization, and visual analytics; provides practicable, use-case-based experience from advisory work with Fortune 100 and 500 companies across multiple verticals; presents the next generation of visual discovery, data storytelling, and the Five Steps to Data Storytelling with Visualization; explains the convergence of visual analytics and visual discovery, including how to use tools such as R in statistical and analytic modeling; and covers emerging technologies such as streaming visualization in the IoT (Internet of Things) and streaming animation.
Book
Traditionally, images have played an important role in politics and policy making, mostly in relation to propaganda and public communication. However, contemporary society is inundated with visual material due to the increasing ubiquity of media and visual technologies that facilitate the production, distribution and consumption of images in new and innovative ways. As such, a visual culture has emerged, and a number of authors have written on visual culture and the technologies which underlie it. However, a clear link to policy making is still lacking. This book links the emergence of this visual culture to policy making and explores how visual culture (and the growing number of technologies used to create and distribute images) influences the course, content and outcome of public policy making. It examines how visual culture and policy making in contemporary society are intertwined, elaborating concepts such as power, framing and storytelling. It then links this to technology and the way this can enhance power, transparency, registration, surveillance and communication. Dealing with the entire cycle of public policy making, from agenda-setting to policy design, decision making and evaluation, the book contains diverse international case studies, including water management, risk management, livestock diseases, minority integration, racism, freedom of speech, healthcare, disaster evaluation and terrorism.
Article
The process of creating an effective visual representation of a set of data is both an art and a science, requiring extensive efforts in visualization design, implementation, and evaluation. For visualization design, there are significant potential benefits in seeking inspiration from previous graphical work in art, illustration, visual communication, and design, and in seeking insights from research in vision and visual perception. This chapter discusses the art and science of visualization design and evaluation, illustrated with case study examples from the research. For each application, the chapter describes how inspiration from art and insight from visual perception can provide guidance for the development of promising approaches to the targeted visualization problems. As appropriate, relevant details of the algorithms developed are included to achieve the referenced implementations, and where studies have been done, the chapter discusses their findings and the implications for future directions of work.
Article
Decades of research have repeatedly shown that people perform poorly at estimating and understanding conditional probabilities that are inherent in Bayesian reasoning problems. Yet in the medical domain, both physicians and patients make daily, life-critical judgments based on conditional probability. Although there have been a number of attempts to develop more effective ways to facilitate Bayesian reasoning, reports of these findings tend to be inconsistent and sometimes even contradictory. For instance, the reported accuracies for individuals being able to correctly estimate conditional probability range from 6% to 62%. In this work, we show that problem representation can significantly affect accuracies. By controlling the amount of information presented to the user, we demonstrate how text and visualization designs can increase overall accuracies to as high as 77%. Additionally, we found that for users with high spatial ability, our designs can further improve their accuracies to as high as 100%. By and large, our findings provide explanations for the inconsistent reports on accuracy in Bayesian reasoning tasks and show a significant improvement over existing methods. We believe that these findings can have immediate impact on risk communication in health-related fields.
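To ground the conditional-probability problems referred to above, here is a minimal Python sketch of Bayes' theorem applied to a generic diagnostic-test scenario; the prevalence, sensitivity, and false-positive rate are hypothetical illustrative numbers, not values from the study.

```python
# Minimal sketch of the Bayesian reasoning task described above:
# P(disease | positive test) from prevalence, sensitivity, and false-positive rate.
# The numbers are hypothetical illustrative values, not data from the study.

def posterior_probability(prevalence: float, sensitivity: float,
                          false_positive_rate: float) -> float:
    """Bayes' theorem for a single binary test."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Hypothetical screening test: 1% prevalence, 80% sensitivity, 9.6% false positives.
    p = posterior_probability(prevalence=0.01, sensitivity=0.80,
                              false_positive_rate=0.096)
    print(f"P(disease | positive) = {p:.1%}")   # ~7.8%, far lower than many people estimate
```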
Chapter
The Almagest became known in Europe through the Latin translation of Gerard of Cremona in 1175. Astronomy began to assimilate Ptolemaic theory and its Arabic revisions into the emerging physical sciences and thereby laid the ground for the following rapid scientific development. In the 16th century Copernicus succeeded in overcoming the geocentric construction of Ptolemy’s planetary orbits, but the methodological structure of his “De revolutionibus” was still oriented on the book that was written a millennium before.
Article
The aim of this paper is to examine the iconography on a set of star charts by Albrecht Dürer (1515), and celestial globes by Caspar Vopel (1536) and Christoph Schissler (1575). The iconography on these instruments is conditioned by strong traditions which include not only the imagery on globes and planispheres (star charts), but also ancient literature about the constellations. Where this iconography departs from those traditions, the change had to do with humanism in the sixteenth century. This “humanistic” dimension is interwoven with other concerns that involve both “social” and “technical” motivations. The interplay of these three dimensions illustrates how the iconography on celestial charts and globes expresses some features of the shared knowledge and shared culture between artisans, mathematicians, and nobles in Renaissance Europe.
Article
In the first half of the 17th century, Dutch astronomers rapidly abandoned the practice of astrology. By the second half of the century, no trace of it was left in Dutch academic discourse. This abandonment, in its early stages, does not appear to have been the result of criticism or skepticism, although such skepticism was certainly known in the Dutch Republic and leading humanist scholars referred to Pico’s arguments against astrological predictions. The astronomers, however, did not really refute astrology, but simply stopped paying attention to it, as other questions (in particular the constitution of the universe) became the focus of their scholarship. The underlying physical view of the world, with its idea of celestial influences, remained in force much longer. Even convinced anti-Aristotelians, in explaining the world, tried to account for the effects of the oppositions and conjunctions of planets, and similar elements. It is only with Descartes that the by then widespread skepticism about predictions found expression in a philosophy that denied celestial influence.