Conference Paper · PDF Available

Segmentation and Visualization of Multivariate Features Using Feature-Local Distributions

Abstract

We introduce an iterative feature-based transfer function design that extracts and systematically incorporates multivariate feature-local statistics into a texture-based volume rendering process. We argue that an interactive multivariate feature-local approach is advantageous when investigating ill-defined features, because it provides a physically meaningful, quantitatively rich environment within which to examine the sensitivity of the structure properties to the identification parameters. We demonstrate the efficacy of this approach by applying it to vortical structures in Taylor-Green turbulence. Our approach identified the existence of two distinct structure populations in these data, which cannot be isolated or distinguished via traditional transfer functions based on global distributions.
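As a rough illustration of the feature-local idea (not the paper's implementation), the sketch below thresholds one variable to segment candidate features, labels them as connected components, and builds a per-feature histogram of a second variable alongside the global histogram; the variable names, threshold, and bin count are illustrative assumptions.

```python
# Hedged sketch: per-feature ("feature-local") histograms vs. a single global histogram.
import numpy as np
from scipy import ndimage

def feature_local_histograms(primary, secondary, threshold, bins=64):
    """Segment connected regions where `primary` exceeds `threshold` and
    return the global histogram of `secondary` plus one histogram per region."""
    mask = primary > threshold
    labels, n_features = ndimage.label(mask)   # face-connected components by default
    edges = np.histogram_bin_edges(secondary, bins=bins)
    global_hist, _ = np.histogram(secondary, bins=edges)
    local_hists = {}
    for fid in range(1, n_features + 1):
        local_hists[fid], _ = np.histogram(secondary[labels == fid], bins=edges)
    return global_hist, local_hists, labels
```

A transfer function could then be edited against a selected feature's local histogram rather than against the global distribution, which is the contrast the abstract draws.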
... The use of wavelet-based compression schemes is becoming increasingly popular in the data visualization domain; see, for example, Gruchalla [9], Gruchalla et al. [10], and Gruchalla et al. [11]. We fully expect this trend to continue with their inclusion as the default compression tool in the VAPOR software package [6]. ...
Article
The volume of data and the velocity with which it is being generated by computational experiments on high performance computing (HPC) systems is quickly outpacing our ability to effectively store this information in its full fidelity. Therefore, it is critically important to identify and study compression methodologies that retain as much information as possible, particularly in the most salient regions of the simulation space. In this paper, we cast this in terms of a general decision-theoretic problem and discuss a wavelet-based compression strategy for its solution. We provide a heuristic argument as justification and illustrate our methodology on several examples. Finally, we will discuss how our proposed methodology may be utilized in an HPC environment on large-scale computational experiments.
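A minimal sketch of the kind of wavelet-based compression referred to above, assuming PyWavelets and a simple keep-the-largest-coefficients rule; the wavelet, decomposition depth, and keep fraction are illustrative assumptions, and the paper's decision-theoretic coefficient selection is not reproduced here.

```python
# Hedged sketch: compress a field by hard-thresholding wavelet coefficients.
import numpy as np
import pywt

def compress_field(data, wavelet="db2", level=3, keep=0.05):
    """Keep only the largest `keep` fraction of wavelet coefficients
    (by magnitude) and reconstruct the field from them."""
    coeffs = pywt.wavedecn(data, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = pywt.threshold(arr, thresh, mode="hard")
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    recon = pywt.waverecn(coeffs, wavelet)
    # waverecn can pad odd-sized axes; crop back to the original shape.
    return recon[tuple(slice(0, s) for s in data.shape)]
```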
... We follow a practical approach similar to [23,24]. The idea is to visualize basic data characteristics and let the user select regions of interest. ...
Article
Full-text available
Spatial simulations of biochemical systems are carried out to gain insight into nature's underlying mechanisms. However, such simulations are usually difficult to set up and they generate large and complex data. In order to help scientists understand their models and the data generated by the simulations, appropriate visual support can be a decisive factor. In this paper, we apply and extend ideas of feature-based visualization to develop a visual analytics approach to analyze data of reaction–diffusion system simulations. Our approach enables simulation experts to interactively specify meaningful features, which are automatically extracted and tracked via analytical means. Events in the features’ evolution over time are detected as well. Features and events are visualized via dedicated 3D and 2D views, which in combination portray the interplay of the spatial, temporal, and structural aspects of the simulation data. Our approach is being implemented in the context of a multi-view multi-display visualization environment. We demonstrate how researchers can analyze spatio-temporal distributions of particles in a multi-step activation model with spatial constraints. The visual analytics approach helped to identify interesting behavior of the spatial simulation, which was previously only speculated about, and to examine and discuss competing hypotheses regarding possible reasons for the behavior.
... We follow a practical approach that integrates spatial, temporal, and attribute aspects similar to [DGH03, GRBM11]. Basic data characteristics (i.e., the frequency distribution of protein concentration) are conveyed in parallel aligned histograms. ...
Article
Full-text available
The ever increasing processing capabilities of the supercomputers available to computational scientists today, combined with the need for higher and higher resolution computational grids, has resulted in deluges of simulation data. Yet the computational resources and tools required to make sense of these vast numerical outputs through subsequent analysis are often far from adequate, making such analysis of the data a painstaking, if not a hopeless, task. In this paper, we describe a new tool for the scientific investigation of massive computational datasets. This tool (VAPOR) employs data reduction, advanced visualization, and quantitative analysis operations to permit the interactive exploration of vast datasets using only a desktop PC equipped with a commodity graphics card. We describe VAPOR's use in the study of two problems. The first, motivated by stellar envelope convection, investigates the hydrodynamic stability of compressible thermal starting plumes as they descend through a stratified layer of increasing density with depth. The second looks at current sheet formation in an incompressible helical magnetohydrodynamic flow to understand the early spontaneous development of quasi two-dimensional (2D) structures embedded within the 3D solution. Both of the problems were studied at sufficiently high spatial resolution, a grid of 504² by 2048 points for the first and 1536³ points for the second, to overwhelm the interactive capabilities of typically available analysis resources.
Conference Paper
Full-text available
Knowledge extraction from data volumes of ever increasing size requires ever more flexible tools to facilitate interactive query. Interactivity enables real-time hypothesis testing and scientific discovery, but can generally not be achieved without some level of data reduction. The approach described in this paper combines multi-resolution access, region-of-interest extraction, and structure identification in order to provide interactive spatial and statistical analysis of a terascale data volume. Unique aspects of our approach include the incorporation of both local and global statistics of the flow structures, and iterative refinement facilities, which combine geometry, topology, and statistics to allow the user to effectively tailor the analysis and visualization to the science. Working together, these facilities allow a user to focus the spatial scale and domain of the analysis and perform an appropriately tailored multivariate visualization of the corresponding data. All of these ideas and algorithms are instantiated in a deployed visualization and analysis tool called VAPOR, which is in routine use by scientists internationally. In data from a 1024³ simulation of a forced turbulent flow, VAPOR allowed us to perform a visual data exploration of the flow properties at interactive speeds, leading to the discovery of novel scientific properties of the flow, in the form of two distinct vortical structure populations. These structures would have been very difficult (if not impossible) to find with statistical overviews or other existing visualization-driven analysis approaches. This kind of intelligent, focused analysis/refinement approach will become even more important as computational science moves towards petascale applications.
Chapter
For visualization purposes, the depiction of the topology results in synthetic representations that transcribe the fundamental characteristics of the data. Further, topology-based visualization results in a dramatic decrease in the amount of data required for interpretation, which makes it very appealing for the analysis of large-scale datasets. This chapter introduces the mathematical foundations of the topological approach to flow visualization along with a survey of existing techniques in this domain. The focus is on methods directly related to the depiction and analysis of the flow topology. The chapter considers vector fields and introduces basic theoretical notions, which stem from the qualitative theory of dynamical systems. Nonlinear and parameter-dependent topologies are discussed in the chapter, along with the fundamental concept of bifurcation. The chapter also treats tensor fields and considers the topology of the eigenvector fields of symmetric, second-order tensor fields.
Article
We present an approach to visualizing correlations in 3D multifield scalar data. The core of our approach is the computation of correlation fields, which are scalar fields containing the local correlations of subsets of the multiple fields. While the visualization of the correlation fields can be done using standard 3D volume visualization techniques, their huge number makes selection and handling a challenge. We introduce the Multifield-Graph to give an overview of which multiple fields correlate and to show the strength of their correlation. This information guides the selection of informative correlation fields for visualization. We use our approach to visually analyze a number of real and synthetic multifield datasets.
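The correlation-field idea can be sketched as a moving-window Pearson correlation between two scalar fields; the window size, the box-filter estimator, and the epsilon guard below are assumptions, not the authors' exact formulation.

```python
# Hedged sketch: per-voxel local correlation of two scalar fields.
import numpy as np
from scipy.ndimage import uniform_filter

def correlation_field(a, b, size=7, eps=1e-12):
    """Return, per voxel, the Pearson correlation of `a` and `b`
    over a `size`-wide box neighborhood centered at that voxel."""
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    cov_ab = uniform_filter(a * b, size) - mean_a * mean_b
    var_a = uniform_filter(a * a, size) - mean_a ** 2
    var_b = uniform_filter(b * b, size) - mean_b ** 2
    return cov_ab / np.sqrt(np.maximum(var_a * var_b, eps))
```

The resulting scalar field can be volume rendered directly, which is the sense in which the abstract says standard 3D techniques apply to correlation fields.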
Article
Transfer functions are a standard technique used in volume rendering to assign color and opacity to a volume of a scalar field. Multi-dimensional transfer functions (MDTFs) have proven to be an effective way to extract specific features with subtle properties. As 3D texture-based methods gain widespread popularity for the visualization of steady and unsteady flow field data, there is a need to define and apply similar MDTFs to interactive 3D flow visualization. We exploit flow field properties...
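A hedged sketch of what a two-dimensional transfer function over derived flow quantities can look like; the choice of velocity magnitude and vorticity magnitude as the two axes, and the lookup-table layout, are illustrative assumptions rather than the method of the cited work.

```python
# Hedged sketch: classify voxels through a 2D RGBA lookup table.
import numpy as np

def apply_2d_transfer_function(vel_mag, vort_mag, lut):
    """Map each voxel through an (N, M, 4) RGBA lookup table indexed by the
    two field values after normalization to [0, 1]."""
    n, m, _ = lut.shape
    i = ((vel_mag - vel_mag.min()) / max(np.ptp(vel_mag), 1e-12) * (n - 1)).astype(int)
    j = ((vort_mag - vort_mag.min()) / max(np.ptp(vort_mag), 1e-12) * (m - 1)).astype(int)
    return lut[i, j]   # per-voxel RGBA
```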
Article
Visualization is the process of converting a set of numbers resulting from numerical simulations or experiments into a graphical image. However, the ultimate goal is to understand the underlying science. A crucial part is to identify and quantify "important" regions and structures. Computer vision (image understanding) seeks to do the same. In this paper, we discuss our visiometric approach to visualization, i.e., visualizing, identifying, and quantifying evolving amorphous regions in 3D data sets. Our methods incorporate ideas from computer vision, image processing, and mathematical morphology.
Article
In this state-of-the-art report we discuss relevant research works related to the visualization of complex, multi-variate data. We discuss how different techniques take effect at specific stages of the visualization pipeline and how they apply to multi-variate data sets composed of scalars, vectors, and tensors. We also provide a categorization of these techniques with the aim of giving a better overview of related approaches. Based on this classification we highlight combinable and hybrid approaches and focus on techniques that potentially lead towards new directions in visualization research. In the second part of this paper we take a look at recent techniques that are useful for the visualization of complex data sets, either because they are general purpose or because they can be adapted to specific problems.
Article
Detection of the salient iso-values in a volume dataset is often the first step towards its exploration. A trial-and-error approach is often used; new semi-automatic techniques either make assumptions about their data [4] or present multiple criteria for analysis. Determining whether a dataset satisfies an algorithm's assumptions, or which criteria to use in an analysis, are both non-trivial tasks. The use of a dataset's statistical signatures, local higher order moments (LHOMs), to characterize its salient iso-values was presented in [10]. In this paper we propose a computational algorithm that uses LHOMs for expedient estimation of salient iso-values. As LHOMs are model-independent statistical signatures, our algorithm does not impose any assumptions on the data. Further, the algorithm has a single criterion for characterization of the salient iso-values, and the search for this criterion is easily automated. Examples from medical and computational domains are used to demonstrate the effectiveness of the proposed algorithm.
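As a loose sketch of the general idea of scoring candidate iso-values with a local statistical signature (not the LHOM algorithm of the cited paper), one can estimate a local third central moment per voxel and score each candidate by the magnitude of that moment over voxels whose value lies near it; the neighborhood size, tolerance, and scoring rule below are assumptions.

```python
# Hedged sketch: rank candidate iso-values by an approximate local moment.
import numpy as np
from scipy.ndimage import uniform_filter

def score_isovalues(volume, candidates, size=5):
    """Score each candidate iso-value by the mean magnitude of an
    approximate local third central moment over voxels near that value."""
    local_mean = uniform_filter(volume, size)
    third_moment = uniform_filter((volume - local_mean) ** 3, size)
    tol = 0.5 * (volume.max() - volume.min()) / max(len(candidates), 1)
    scores = []
    for c in candidates:
        near = np.abs(volume - c) < tol
        scores.append(np.abs(third_moment[near]).mean() if near.any() else 0.0)
    return np.array(scores)   # higher score suggests a more "salient" candidate
```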
Article
This paper presents a fast algorithm for labeling connected components in binary images based on sequential local operations. A one-dimensional table, which memorizes label equivalences, is used for uniting equivalent labels successively during the operations in forward and backward raster directions. The proposed algorithm has a desirable characteristic: the execution time is directly proportional to the number of pixels in connected components in an image. By comparative evaluations, it has been shown that the efficiency of the proposed algorithm is superior to those of the conventional algorithms.
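A generic two-pass connected-component labeling sketch with a label-equivalence table resolved by union-find conveys the family of algorithms discussed; it is not the authors' exact raster-scan formulation with forward and backward passes, and 4-connectivity is an illustrative choice.

```python
# Hedged sketch: two-pass labeling of a binary image with an equivalence table.
import numpy as np

def label_components(binary):
    """Label 4-connected foreground components of a 2D binary array."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = [0]                         # equivalence table: label -> representative

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    next_label = 1
    # First pass: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            neighbors = [l for l in (up, left) if l]
            if not neighbors:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                root = min(find(l) for l in neighbors)
                labels[y, x] = root
                for l in neighbors:
                    parent[find(l)] = root
    # Second pass: replace provisional labels with their representatives.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```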