Figure A. Six methods for visualizing the same 2D vector field: (1) icons on a regular grid; (2) icons on a jittered grid; (3) a layering method inspired by oil painting; (4) line-integral convolution; (5) image-guided streamlines; and (6) streamlines seeded on a regular grid. (Figure courtesy of David Laidlaw, Brown University.)
Source publication
Nearly 20 years ago, the US National Science Foundation (NSF) convened a panel to report on visualization's potential as a new technology. In 2004, the NSF and the US National Institutes of Health (NIH) convened the Visualization Research Challenges (VRC) Executive Committee - made up of this article's authors - to write a new report. Here, we summarize...
Citations
... Visualizations have classically been used to enable the detection of trends or outliers in such datasets, and more generally to provide an externalization for working memory (Steichen, Carenini, & Conati, 2013). However, applying an inappropriate visual metaphor to represent the gathered data can potentially be misleading, and key insights can easily be missed (Moorhead et al., 2006). ...
Task Analysis (TA) is not one standard method but rather a toolkit of methods that produce a large and complex range of outputs. Owing to this fluidity in methods and resulting data, no standard method of analysis and visual data representation currently exists. Depending on the research methods used and the desired outcomes, some visualization methods may be more appropriate than others. This body of work seeks to demonstrate the need to establish a graphical grammar in this domain. Initial recommendations are provided for visualizing different task analysis methods. This research lays the groundwork for a full set of visualization best practices that allow practitioners to gain the most insight from their TA methods of choice.
... Flexible general-purpose visualization tools and languages like d3 and Vega are pervasive, and frameworks for deploying visualizations alongside analysis tools (e.g. machine learning using TensorFlow (19)), such as Jupyter Lab or Observable, make excellent use of them. [See Jeff Heer's contribution to this volume for more on d3 and Vega.] 3. Data sharing. Owing to both carrot ("hey, people, use my great data!") and stick ("no more government funding if you don't deposit your data") approaches, data sharing is on the rise. ...
... Technical note: glue presently runs on desktop machines (PC, Mac, or UNIX) as a Python program, using the Qt windowing system. An alpha version of glue that runs entirely in the browser, within Jupyter Lab, shown by Goodman as a video at the Sackler Colloquium, will be released in late 2018. ...
As the complexity and volume of datasets have increased along with the capabilities of modular, open-source, easy-to-implement, visualization tools, scientists' need for, and appreciation of, data visualization has risen too. Until recently, scientists thought of the "explanatory" graphics created at a research project's conclusion as "pretty pictures" needed only for journal publication or public outreach. The plots and displays produced during a research project - often intended only for experts - were thought of as a separate category, what we here call "exploratory" visualization. In this view, discovery comes from exploratory visualization, and explanatory visualization is just for communication. Our aim in this paper is to spark conversation amongst scientists, computer scientists, outreach professionals, educators, and graphics and perception experts about how to foster flexible data visualization practices that can facilitate discovery and communication at the same time. We present an example of a new finding made using the glue visualization environment to demonstrate how the border between explanatory and exploratory visualization is easily traversed. The linked-view principles as well as the actual code in glue are easily adapted to astronomy, medicine, and geographical information science - all fields where combining, visualizing, and analyzing several high-dimensional datasets yields insight. Whether or not scientists can use such a flexible "undisciplined" environment to its fullest potential without special training remains to be seen. We conclude with suggestions for improving the training of scientists in visualization practices, and of computer scientists in the iterative, non-workflow-like, ways in which modern science is carried out.
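To make the linked-view idea concrete, here is a minimal sketch using glue's public Python API (glueviz.org). The two datasets, their labels, and the shared "ra" attribute are hypothetical placeholders, not taken from the paper; the Data, DataCollection, LinkSame, and GlueApplication calls are glue's own.

import numpy as np
from glue.core import Data, DataCollection
from glue.core.link_helpers import LinkSame
from glue.app.qt import GlueApplication

# Two hypothetical datasets that share one quantity ("ra").
catalog = Data(label='catalog', ra=np.random.rand(100), dec=np.random.rand(100))
survey = Data(label='survey', ra=np.random.rand(50), flux=np.random.rand(50))

dc = DataCollection([catalog, survey])

# Declare that 'ra' means the same thing in both datasets, so a selection
# brushed in a view of one dataset propagates to views of the other.
dc.add_link(LinkSame(catalog.id['ra'], survey.id['ra']))

app = GlueApplication(dc)  # opens the Qt desktop application
app.start()

Once the application is running, the same brushed subset can be inspected in an exploratory scatter view or exported for an explanatory figure, which is the border-crossing the paper describes.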
For many people, sport is a stress-relieving activity. People involved in sport wish to achieve an attractive physique, a healthy lifestyle, weight loss, and so on. However, there are also people who pursue sport for competitive goals. In order to fulfill those goals, they need to train properly, and even for professionals it is very hard to carry out serious training. On the other hand, the recent spread of smart sport watches and even smartphones allows athletes to train smarter. Over months and years, they produce dozens of activity files. These files offer thousands of opportunities for data mining approaches that give athletes deep insight into their training data. Data mining approaches can extract athletes' habits, help prevent overtraining syndrome and injuries, cluster similar activities together, and much more. In this chapter, the authors show opportunities for data mining, enumerate recent applications, and outline future potential for research and applications in the real world.
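The chapter itself gives no code, but one of the tasks it names, clustering similar activities together, can be sketched in a few lines. In this assumed example, each activity file has already been reduced to summary features; the feature names and values are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per activity: [distance_km, duration_min, avg_heart_rate]
activities = np.array([
    [10.2,  55, 152],   # tempo run
    [21.1, 118, 148],   # long run
    [ 5.0,  24, 165],   # interval session
    [20.8, 121, 145],
    [ 5.2,  25, 168],
])

# Standardize so distance, duration, and heart rate contribute comparably.
X = StandardScaler().fit_transform(activities)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # activities with the same label form one training-type cluster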
During the past two decades, information visualisation (InfoVis) research has created new techniques and methods to support data-intensive analyses in science, industry and government. These have enabled a wide range of analysis tasks to be executed, with tasks varying in terms of the type and volume of data involved. However, the majority of this research has focused on static datasets, and the analysis and visualisation tasks tend to be carried out by trained expert users. In more recent years, social changes and technological advances have meant that data have become more and more dynamic and are consumed by a wider audience. Examples of such dynamic data streams include e-mails, status updates, RSS feeds, versioning systems, social networks and others. These new types of data are used by populations that are not specifically trained in information visualisation. Some of these people might be casual users, while others might be deeply involved with the data, but in both cases they would not have received formal training in information visualisation. For simplicity, throughout this dissertation, I refer to the people (casual users, novices, data experts) who have not been trained in information visualisation as non-experts.
These social and technological changes have given rise to multiple challenges, because most existing visualisation models and techniques are intended for experts and assume static datasets. Few studies have been conducted that explore these challenges. In this dissertation, with my collaborators, I address the question: can we empower non-experts in their use of visualisation by enabling them to contribute to data stream analysis as well as to create their own visualisations?
The first step to answering this question is to determine whether people who are not trained in information visualisation and the data sciences can conduct useful dynamic analysis tasks using a visualisation system that is adapted to support their tasks. In the first part of this dissertation I focus on several scenarios and systems in which different-sized crowds of InfoVis non-expert users (20 to 300, and 2,000 to 700,000 people) use dynamic information visualisation to analyse dynamic data.
Another important issue is the lack of generic design principles for the visual encoding of dynamic visualisations. In this dissertation I design, define and explore a design space for representing dynamic data for non-experts. This design space is structured by visual tokens representing data items that provide the constructive material for assembling, over time, different visualisations, from classic representations to new ones. To date, research on visual encoding has focused on static datasets for specific tasks, leaving generic dynamic approaches unexplored and unexploited.
In this thesis, I propose construction as a design paradigm for non-experts to author simple and dynamic visualisations. This paradigm is inspired by well-established developmental psychology theory as well as past and existing practices of visualisation authoring with tangible elements. I describe the simple conceptual components and processes underlying this paradigm, making it easier for the human-computer interaction community to study and support this process for a wide range of visualisations. Finally, I use this paradigm and tangible tokens to study whether and how non-experts are able to create, discuss and update their own visualisations. This study allows us to refine our previous model and provides a first exploration of how non-experts perform a visual mapping without software. In summary, this thesis contributes to the understanding of dynamic visualisation for non-expert users.
In recent years, empirical studies have increasingly been seen as a core part of visualization research, and user evaluations have proliferated. It is broadly understood that new techniques and applications must be formally validated in order to be seen as meaningful contributions. However, these efforts continue to face the numerous challenges involved in validating complex software techniques that exist in a wide variety of use contexts. The authors, who represent perspectives from across visualization research and applications, discuss the leading challenges that must be addressed for empirical research to have the greatest possible impact on visualization in the years to come. These include challenges in developing research questions and hypotheses, designing effective experiments and qualitative methods, and executing studies in specialized domains. We discuss those challenges that have not yet been solved and possible approaches to addressing them. This chapter provides an informal survey and proposes a road map for moving forward to a more cohesive and grounded use of empirical studies in visualization research.
The software domain is an environment that has produced a wide variety of exaptation-based innovations through the repurposing of data, algorithms, and visualizations to problems other than the ones they were originally developed to solve. Unfortunately, these innovations have largely been the result of serendipity. Because modern software development is fundamentally aligned to the same principles of evolution that lead to biological innovation—modularity, fluidity, community, diversity, translatability, and combinatorial flexibility—it is an ideal environment in which to leverage our understanding of exaptation to actively facilitate innovations instead of leaving them to chance. Achieving this, however, requires a departure from traditional programming paradigms and the implementation of development systems specifically oriented toward innovation. Preliminary experiments show that when explicit innovation-oriented programming systems and practices are leveraged, innovations occur, suggesting opportunities to leverage the advantages of the virtual domain for the production of both repeatable and scalable radical innovation.
The chapter deals with knowledge discovery from data in sport. In the narrower sense, knowledge discovery from data refers to data mining that also incorporates methods from other domains, such as statistics, pattern recognition, machine learning, visualization, association rule mining and computational intelligence algorithms.
Nowadays, the use of sport trackers grows from day to day. Athletes from different sports disciplines use them in three ways: (1) to monitor their performance data during training, (2) to analyze data after training sessions, and (3) to use the results of the analysis to improve their performance. Many different tracking technologies have been developed since the arrival of the Global Positioning System. Currently, the web applications offered by tracker manufacturers allow athletes to upload completed training sessions for later review; these applications organize the collected data, provide basic statistical analysis, and depict the uploaded data in a variety of graphs, tables and numbers.
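As an assumed illustration (not from the chapter) of the basic statistics and graphs such tracker websites produce, the sketch below summarizes time-stamped samples from a single session; the column names and values are hypothetical.

import pandas as pd

# Hypothetical samples exported from a tracker (one row per second).
session = pd.DataFrame({
    'elapsed_s':  [0, 1, 2, 3, 4],
    'heart_rate': [98, 112, 126, 131, 134],
    'speed_kmh':  [0.0, 8.6, 10.1, 10.4, 10.2],
})

# The numbers behind a tracker's session overview.
summary = session[['heart_rate', 'speed_kmh']].agg(['mean', 'max'])
print(summary)

# One of the "variety of graphs" (requires matplotlib).
session.plot(x='elapsed_s', y='heart_rate')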
This chapter presents the automatic construction of sports dietary plans based on the training plan generated by an artificial sports trainer. Differential evolution serves as the core algorithm for this purpose. Its goal is to select suitable foods from a food-list dataset according to estimated macro-nutrient requirements. The main advantage of the algorithm is a domain-specific language for food description that allows flexibility and autonomy in the construction of dietary plans.
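To show the shape of this optimization (a sketch, not the authors' implementation), the example below uses SciPy's differential_evolution to choose food quantities so that a day's macro-nutrients approach estimated requirements; the food list, macro values, targets, and bounds are all hypothetical.

import numpy as np
from scipy.optimize import differential_evolution

# Per-100 g (carbs, protein, fat) for a tiny hypothetical food list.
foods = np.array([
    [28.0,  2.7,   0.3],   # rice
    [ 0.0, 31.0,   3.6],   # chicken breast
    [ 0.0,  0.0, 100.0],   # olive oil
    [14.0,  0.3,   0.2],   # apple
])
target = np.array([300.0, 120.0, 60.0])  # grams of carbs/protein/fat per day

def deviation(portions):
    # Squared distance between achieved and required macro-nutrients;
    # portions are in grams, so scale the per-100 g values accordingly.
    macros = portions @ foods / 100.0
    return float(np.sum((macros - target) ** 2))

# Each food may contribute between 0 g and 1000 g to the daily plan.
result = differential_evolution(deviation, bounds=[(0, 1000)] * len(foods),
                                seed=0)
print(result.x)  # grams of each food in the constructed dietary plan

The chapter's actual system additionally derives the targets from the artificial trainer's training plan and describes foods through its domain-specific language, neither of which is modeled here.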
Movement is one of the more complex human functions, requiring multiple biological systems in the body to operate in concert. Five systems enable the functioning of the organism: skeletal, muscular, nervous, respiratory, and cardiovascular. The first three create a so-called kinetic chain that is responsible for performing the function of movement, while the remaining two supply oxygen and nutrients to the body on the one hand and remove waste products from it on the other. These last two systems create a so-called supply chain.