IEEE Transactions on Visualization and Computer Graphics

Published by Institute of Electrical and Electronics Engineers
Online ISSN: 1941-0506
Article
In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be eloquent to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation system 3D scene. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed particle hydrodynamics liquid representation. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.
 
Article
With the rapid growth of the World Wide Web and electronic information services, text corpora are becoming available online at an incredible rate. By displaying text data in a logical layout (e.g., color graphs), text visualization presents a direct way to observe documents as well as to understand the relationships between them. In this paper, we propose a novel technique, Exemplar-based Visualization (EV), to visualize an extremely large text corpus. Capitalizing on recent advances in matrix approximation and decomposition, EV presents a probabilistic multidimensional projection model in the low-rank text subspace with a sound objective function. The topic proportions (probabilities) of each document are obtained through iterative optimization and embedded into a low-dimensional space using parameter embedding. By selecting representative exemplars, we obtain a compact approximation of the data. This makes the visualization highly efficient and flexible. In addition, the selected exemplars neatly summarize the entire data set and greatly reduce cognitive overload in the visualization, leading to an easier interpretation of a large text corpus. Empirically, we demonstrate the superior performance of EV through extensive experiments on publicly available text data sets.
 
Article
Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and normal color vision ones. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
 
Persistent structure. (a) The node-copying technique, where each persistent node has 3 extra fields. (b) Persistent binary search tree (where we use the alphabetical order to compare the keys (letters)) after simulating a sequence of updates. Each node has 1 extra field. The number associated with a node/pointer denotes its version stamp. The numbers 1 to 7 on the top horizontal line denote the entry-point array A. Version 5 is shown in red.
Results of view-dependent filtering with various values of L. Top: number of meta-cells read from disk against L. Bottom: running time in seconds against L. The results are for isosurfaces of one time step. Note that the L axis is on a logarithmic (log2) scale.
Representative isosurfaces. Each column shows isosurfaces from the same dataset with two different time steps of the same isovalue. Datasets from left to right: Jets, Syn, Turb, and Vort. 
Article
We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel Persistent Time-Octree (PTOT) indexing structure. Previously, the Persistent Octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the Branch-On-Need Octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried time step and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing. In this paper, we develop a novel Persistent Time-Octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with output-sensitive and optimal searching. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data concurrently computed on the GPU. This results in an efficient algorithm for isosurface extraction with view-dependent filtering, utilizing a state-of-the-art programmable GPU, for time-varying fields larger than main memory. Our experiments on datasets as large as 192GB (with 4GB per time step), using no more than 870MB of memory footprint in both preprocessing and run-time phases, demonstrate the efficacy of our new technique.
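The persistent search structure underlying POT and PTOT can be illustrated with a scheme much simpler than the node-copying technique shown in the figure above. The following minimal Python sketch uses path copying, so every earlier version of a binary search tree stays queryable after later insertions; the node layout and the update sequence are illustrative assumptions, not the paper's data structure.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Return a new root; only nodes on the search path are copied,
    so every earlier version of the tree remains queryable."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

# versions[i] is the root after the i-th update (an entry-point array,
# analogous to the array A in the persistent-structure figure above).
versions = [None]
for k in "GBDACFE":
    versions.append(insert(versions[-1], k))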
 
ANOVA results for pairwise comparison on techniques 
ANOVA results for pairwise comparison on tasks 
Illustration of one of the layouts for data features and uncertainty features for 1D (top) and 2D (bottom) dataset generation. The datasets were designed to have some data features that overlapped with uncertainty features, and some that did not.  
Section of a 1D dataset depicting the generation of pseudo-observations. The green line is the true data A_true. The act of taking readings, simulated by 50 normally distributed random numbers at each data value, is shown by the randomly colored dots. One set of observations, A_k, from the 50 observation sets A_0 to A_49 is illustrated by the red dotted line.
Article
Many techniques have been proposed to show uncertainty in data visualizations. However, very little is known about their effectiveness in conveying meaningful information. In this paper, we present a user study that evaluates the perception of uncertainty amongst four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data. The techniques evaluated are traditional errorbars, scaled size of glyphs, color-mapping on glyphs, and color-mapping of uncertainty on the data surface. The study uses generated data that was designed to represent systematic and random uncertainty components. Twenty-seven users performed two types of search tasks and two types of counting tasks on 1D and 2D datasets. The search tasks involved finding data points that were least or most uncertain. The counting tasks involved counting data features or uncertainty features. A 4x4 full-factorial ANOVA indicated a significant interaction between the techniques used and the type of tasks assigned for both datasets, indicating that differences in performance between the four techniques depended on the type of task performed. Several one-way ANOVAs were computed to explore the simple main effects. A Bonferroni correction was used to control the family-wise error rate against alpha-inflation. Although we did not find a consistent order among the four techniques for all the tasks, there are several findings from the study that we think are useful for uncertainty visualization design. We found a significant difference in user performance between searching for locations of high uncertainty and searching for locations of low uncertainty. Errorbars consistently underperformed throughout the experiment. Scaling the size of glyphs and color-mapping of the surface performed reasonably well. The efficiency of most of these techniques was highly dependent on the tasks performed. We believe that these findings can be used in future uncertainty visualization design. In addition, the framework developed in this user study presents a structured approach to evaluating uncertainty visualization techniques, as well as providing a basis for future research in uncertainty visualization.
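As a rough illustration of the analysis pipeline described above (a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), the following Python sketch uses synthetic scores; the technique names, sample sizes, and effect sizes are placeholders, not the study's data.

import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
techniques = {name: rng.normal(loc, 1.0, size=27)          # 27 users per cell
              for name, loc in [("errorbars", 5.0), ("size", 4.0),
                                ("glyph color", 4.2), ("surface color", 4.1)]}

f, p = stats.f_oneway(*techniques.values())                 # omnibus test
print(f"one-way ANOVA: F={f:.2f}, p={p:.4f}")

pairs = list(itertools.combinations(techniques, 2))
alpha = 0.05 / len(pairs)                                    # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(techniques[a], techniques[b])
    print(f"{a} vs {b}: p={p:.4f} {'*' if p < alpha else ''}")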
 
Article
This paper introduces double-sided 2.5D graphics, aiming at enriching the visual appearance when manipulating conventional 2D graphical objects in 2.5D worlds. By attaching a back texture image to a single-sided 2D graphical object, we can enrich the surface and texture detail of 2D graphical objects and improve the visual experience when manipulating and animating them. A family of novel operations on 2.5D graphics, including rolling, twisting, and folding, is proposed in this work, allowing users to efficiently create compelling 2.5D visual effects with very little effort on the user's side. In our experiment, various creative designs on double-sided graphics were worked out by the recruited participants, including a professional artist, which demonstrates the feasibility and applicability of our proposed method.
 
Article
The articles in this special section contain selected papers from the 2010 ACM Virtual Reality Software and Technology Symposium.
 
Article
The papers in this special section include extended versions of four of the best papers presented at the 2012 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, held 9-11 March 2012 in Costa Mesa, California.
 
Article
This paper presents a 2D flow visualization user study that we conducted using new methodologies to increase objectivity. We evaluated grid-based variable-size arrows, evenly spaced streamlines, and line integral convolution (LIC) variants (basic, oriented, and enhanced versions) coupled with a colorwheel and/or rainbow color map, which are representative of many geometry-based and texture-based techniques. To reduce data-related bias, template-based explicit flow synthesis was used to create a wide variety of symmetric flows with similar topological complexity. To suppress task-related bias, pattern-based implicit task design was employed, addressing critical point recognition, critical point classification, and symmetric pattern categorization. In addition, variable-duration and fixed-duration measurement schemes were utilized for lightweight precision-critical and heavyweight judgment-intensive flow analysis tasks, respectively, to record visualization effectiveness. We eliminated outliers and used the Ryan REGWQ post-hoc homogeneous subset tests in statistical analysis to obtain reliable findings. Our study shows that a texture-based dense representation with accentuated flow streaks, such as enhanced LIC, enables intuitive perception of the flow, while a geometry-based integral representation with uniform density control, such as evenly spaced streamlines, may exploit visual interpolation to facilitate mental reconstruction of the flow. It is also shown that inappropriate color mapping (e.g., colorwheel) may add distractions to a flow representation.
 
Article
We present a toolbox for quickly interpreting and illustrating 2D slices of seismic volumetric reflection data. Searching for oil and gas involves creating a structural overview of seismic reflection data to identify hydrocarbon reservoirs. We improve the search of seismic structures by precalculating the horizon structures of the seismic data prior to interpretation. We improve the annotation of seismic structures by applying novel illustrative rendering algorithms tailored to seismic data, such as deformed texturing and line and texture transfer functions. The illustrative rendering results in multi-attribute and scale-invariant visualizations where features are represented clearly in both highly zoomed-in and zoomed-out views. Thumbnail views in combination with interactive appearance control allow for a quick overview of the data before detailed interpretation takes place. These techniques help reduce the work of seismic illustrators and interpreters.
 
Article
We propose the first graphics processing unit (GPU) solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. There are many existing CPU algorithms to solve the CDT problem in computational geometry, yet there has been no prior approach to solve this problem efficiently using the parallel computing power of the GPU. For the special case of the CDT problem where the PSLG consists of just points, which is simply the normal Delaunay triangulation (DT) problem, a hybrid approach using the GPU together with the CPU to partially speed up the computation has already been presented in the literature. Our work, on the other hand, accelerates the entire computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, and runs up to an order of magnitude faster than the best sequential implementations on the CPU. This result is reflected in our experiment with both randomly generated PSLGs and real-world GIS data having millions of points and edges.
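For orientation, the empty-circumcircle property that both DT and CDT maintain can be checked with the classic incircle determinant; the sketch below is a plain CPU version in Python and says nothing about the paper's GPU pipeline or its robust predicates.

import numpy as np

def in_circle(a, b, c, d):
    """> 0 if point d lies inside the circumcircle of triangle (a, b, c),
    assuming (a, b, c) is counter-clockwise; 0 if cocircular; < 0 outside."""
    m = np.array([[a[0]-d[0], a[1]-d[1], (a[0]-d[0])**2 + (a[1]-d[1])**2],
                  [b[0]-d[0], b[1]-d[1], (b[0]-d[0])**2 + (b[1]-d[1])**2],
                  [c[0]-d[0], c[1]-d[1], (c[0]-d[0])**2 + (c[1]-d[1])**2]])
    return np.linalg.det(m)

print(in_circle((0, 0), (1, 0), (0, 1), (0.4, 0.4)))  # inside -> positive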
 
Article
We describe an experiment in which art and illustration experts evaluated six 2D vector visualization methods. We found that these expert critiques mirrored previously recorded experimental results; these findings suggest that using artists, visual designers, and illustrators to critique scientific visualizations can be faster and more productive than quantitative user studies. Our participants successfully evaluated how well the given methods would let users complete a given set of tasks. Our results show a statistically significant correlation with a previous objective study: the designers' subjective predictions of user performance with these methods match users' measured performance. The experts improved the evaluation by providing insights into the reasons for the effectiveness of each visualization method and suggesting specific improvements.
 
Article
Simulators for dynamic systems are now widely used in various application areas and raise the need for effective and accurate flow visualization techniques. Animation allows us to depict direction, orientation, and velocity of a vector field accurately. This paper extends a former proposal for a new approach to producing perfectly cyclic and variable-speed animations for 2D steady vector fields (see [1] and [2]). A complete animation of an arbitrary number of frames is encoded in a single image. The animation can be played using the color table animation technique, which is very effective even on low-end workstations. A cyclic set of textures can be produced as well and then encoded in a common animation format or used for texture mapping on 3D objects. Compared to other approaches, the method presented in this paper produces smoother animations and is more effective, both in the memory required to store the animation and in computation time.
 
GIS data (Scotland and England) depicting polygonal lines that share a number of points.  
Article
Polygonal lines constitute a key graphical primitive in 2D vector graphics data. Thus, the ability to apply a digital watermark to such an entity would enable the watermarking of cartoons, drawings, and Geographical Information Systems (GIS) data in vector graphics format. This paper builds on and extends an existing algorithm that achieves polygonal line watermarking by imperceptibly modifying the magnitudes of the Fourier descriptors. Watermarks embedded by this technique can be detected in rotated, translated, scaled, or reflected polygonal lines. The detection of such watermarks had previously been carried out through a correlator detector. In this paper, analysis of the statistics of the Fourier descriptors is exploited to devise an optimal blind detector. Furthermore, the problem of watermarking multiple lines, as well as other implementation issues, is addressed. Experimental results verify the imperceptibility and robustness of the proposed method.
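A hedged sketch of the basic embedding idea, multiplicatively perturbing the magnitudes of a polyline's Fourier descriptors, is given below in Python; the watermark strength and the test polyline are illustrative placeholders, and neither the paper's optimal blind detector nor its multiple-line handling is shown.

import numpy as np

def embed(vertices, watermark, strength=0.01):
    """vertices: (N, 2) polyline; watermark: length-N array in {-1, +1}."""
    z = vertices[:, 0] + 1j * vertices[:, 1]       # complex vertex representation
    F = np.fft.fft(z)                               # Fourier descriptors
    F *= (1.0 + strength * watermark)               # multiplicative magnitude change
    w = np.fft.ifft(F)
    return np.column_stack([w.real, w.imag])

rng = np.random.default_rng(1)
line = np.column_stack([np.linspace(0, 1, 256), np.sin(np.linspace(0, 6, 256))])
marked = embed(line, rng.choice([-1.0, 1.0], size=256))
print(np.max(np.abs(marked - line)))                # maximum vertex displacement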
 
Article
Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. The key frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation, many of which are difficult or impossible to achieve via prior simulation-based methods.
 
Article
We describe a series of experiments that compare 2D displays, 3D displays, and combined 2D/3D displays (orientation icon, ExoVis, and clip planes) for relative position estimation, orientation, and volume of interest tasks. Our results indicate that 3D displays can be very effective for approximate navigation and relative positioning when appropriate cues, such as shadows, are present. However, 3D displays are not effective for precise navigation and positioning except possibly in specific circumstances, for instance, when good viewing angles or measurement tools are available. For precise tasks in other situations, orientation icon and ExoVis displays were better than strict 2D or 3D displays (displays consisting exclusively of 2D or 3D views). The combined displays had as good or better performance, inspired higher confidence, and allowed natural, integrated navigation. Clip plane displays were not effective for 3D orientation because users could not easily view more than one 2D slice at a time and had to frequently change the visibility of individual slices. Major factors contributing to display preference and usability were task characteristics, orientation cues, occlusion, and spatial proximity of views that were used together.
 
Article
Recent advances in vector field topology make it possible to compute multi-scale graph representations of autonomous 2D vector fields in a robust and efficient manner. One of these representations is the Morse Connection Graph (MCG), a directed graph whose nodes correspond to Morse sets, generalizing stationary points and periodic trajectories, and whose arcs correspond to trajectories connecting them. While useful for simple vector fields, the MCG can be hard to comprehend for topologically rich vector fields containing a large number of features. This paper describes a visual representation of the MCG, inspired by previous work on graph visualization. Our approach aims to preserve the spatial relationships between the MCG arcs and nodes and highlight the coherent behavior of connecting trajectories. Using simulations of ocean flow, we show that it can provide useful information on the flow structure. This paper focuses specifically on MCGs computed for piecewise constant (PC) vector fields. In particular, we describe extensions of the PC framework that make it more flexible and better suited for analysis of data on complex shaped domains with a boundary. We also describe a topology simplification scheme that makes our MCG visualizations less ambiguous. Despite the focus on the PC framework, our approach could also be applied to graph representations or topological skeletons computed using different methods.
 
Article
In uncertain scalar fields where data values vary with a certain probability, the strength of this variability indicates the confidence in the data. It does not, however, allow inferring the effect of uncertainty on differential quantities such as the gradient, which depends on the variability of the rate of change of the data. Analyzing the variability of gradients is nonetheless more complicated, since, unlike scalars, gradients vary in both strength and direction. This requires first the mathematical derivation of their respective value ranges, and then the development of effective analysis techniques for these ranges. This paper takes a first step in this direction: based on the stochastic modeling of uncertainty via multivariate random variables, we start by deriving uncertainty parameters, such as the mean and the covariance matrix, for gradients in uncertain discrete scalar fields. We do not make any assumption about the distribution of the random variables. Then, for the first time to the best of our knowledge, we develop a mathematical framework for computing confidence intervals for both the gradient orientation and the strength of the derivative in any prescribed direction, for instance, the mean gradient direction. While this framework generalizes to 3D uncertain scalar fields, we concentrate on the visualization of the resulting intervals in 2D fields. We propose a novel color diffusion scheme to visualize both the absolute variability of the derivative strength and its magnitude relative to the mean values. A special family of circular glyphs is introduced to convey the uncertainty in gradient orientation. For a number of synthetic and real-world data sets, we demonstrate the use of our approach for analyzing the stability of certain features in uncertain 2D scalar fields, with respect to both local derivatives and feature orientation.
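Because a finite-difference gradient is a linear map of the neighboring samples, its mean and covariance follow directly from the samples' joint statistics. The Python sketch below illustrates only this propagation step for a single 2D grid point under assumed (illustrative) neighbor statistics; the paper's confidence intervals for gradient orientation and directional derivatives are not reproduced.

import numpy as np

h = 1.0
# Neighbors ordered as [left, right, bottom, top]; central differences:
A = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0, 1.0]]) / (2.0 * h)

mu = np.array([2.0, 2.6, 1.9, 2.4])        # mean of the four neighbors (illustrative)
Sigma = 0.05 * np.eye(4)                    # their covariance matrix (illustrative)

grad_mean = A @ mu                          # E[grad]   = A mu
grad_cov = A @ Sigma @ A.T                  # Cov[grad] = A Sigma A^T
print(grad_mean, grad_cov)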
 
Article
We present a new method for converting a photo or image to a synthesized painting following the painting style of an example painting. Treating painting styles of brush strokes as sample textures, we reduce the problem of learning an example painting to a texture synthesis problem. The proposed method uses a hierarchical patch-based approach to the synthesis of directional textures. The key features of our method are: 1) Painting styles are represented as one or more blocks of sample textures selected by the user from the example painting; 2) image segmentation and brush stroke directions defined by the medial axis are used to better represent and communicate shapes and objects present in the synthesized painting; 3) image masks and a hierarchy of texture patches are used to efficiently synthesize high-quality directional textures. The synthesis process is further accelerated through texture direction quantization and the use of Gaussian pyramids. Our method has the following advantages: First, the synthesized stroke textures can follow a direction field determined by the shapes of regions to be painted. Second, the method is very efficient; the generation time of a synthesized painting ranges from a few seconds to about one minute, rather than hours, as required by other existing methods, on a commodity PC. Furthermore, the technique presented here provides a new and efficient solution to the problem of synthesizing a 2D directional texture. We use a number of test examples to demonstrate the efficiency of the proposed method and the high quality of results produced by the method.
 
Article
We propose a method to place streamlines in parallel for 2D flow fields. This method is based on local tracing areas (LTAs). An LTA is a sub-domain enclosed by streamlines and/or field borders, where the tracing of streamlines is localized. Given a flow field, it is initialized as an LTA, which is later recursively partitioned into hierarchical LTAs. Streamlines are placed within LTAs simultaneously and independently. At the same time, to control the density of streamlines, each streamline is associated with an isolation zone and a saturation zone, both of which are center-aligned with the streamline but have different widths. No streamline can trace into another streamline's isolation zone. New streamlines are only seeded within valid seeding areas (VSAs) that are enclosed by saturation zones and/or field borders. To implement the parallel strategy and the density control, a cell-based modeling is devised to describe isolation zones and LTAs as well as saturation zones and VSAs. With these cell-based models, a seeding strategy is proposed to seed streamlines within LTAs, and a cell-marking technique is used to control seeding and tracing of streamlines. Test results show that the placement method can achieve highly parallel performance on shared memory systems without losing placement quality.
 
Tessellation and Smoothing Timing Comparison
Visibility polygons and windows: The visibility polygons V (u) and V (v) for vertices u and v and the windows w u (v) and w v (u) are indicated. The red '+' indicates where our algorithm places the Steiner vertex for this case.
Compatible triangulation torture test: Results of our algorithm on shapes that are difficult to triangulate compatibly due to necessity of k-links, k > 2. Numbers indicate vertex correspondences; colors indicate triangle correspondences. 32 Steiner vertices were added.
Matching comparison: A comparison with a matching result taken from Zabulis et al. [17] (left) vs. our algorithm (right). No manual correspondences were specified. Our algorithm obtains very similar results but in a fraction of the time.
Tessellation timing comparison: Plots of the left two columns of Table 2.
Article
We present new algorithms for the compatible embedding of 2D shapes. Such embeddings offer a convenient way to interpolate shapes having complex, detailed features. Compared to existing techniques, our approach requires less user input, and is faster, more robust, and simpler to implement, making it ideal for interactive use in practical applications. Our new approach consists of three parts. First, our boundary matching algorithm locates salient features using the perceptually motivated principles of scale-space and uses these as automatic correspondences to guide an elastic curve matching algorithm. Second, we simplify boundaries while maintaining their parametric correspondence and the embedding of the original shapes. Finally, we extend the mapping to shapes' interiors via a new compatible triangulation algorithm. The combination of our algorithms allows us to demonstrate 2D shape interpolation with instant feedback. The proposed algorithms exhibit a combination of simplicity, speed, and accuracy that has not been achieved in previous work.
 
Article
This work describes the EL-REP, a new 2D decomposition scheme with interesting properties and applications. The EL-REP can be computed for one or more simple polygons of any kind: convex or non-convex, with or without holes and even with several shells. A method for constructing this decomposition is described in detail, together with several of its main applications: fast point-in-polygon inclusion test, 2D location, triangulation of polygons and collision detection.
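One of the applications listed above, the point-in-polygon inclusion test, can be stated compactly with the classic ray-crossing rule; the Python sketch below shows that baseline test only, not the EL-REP decomposition that accelerates it.

def point_in_polygon(p, poly):
    """poly: list of (x, y) vertices of a simple polygon; p: query point."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)          # edge spans the horizontal ray
        if crosses and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon((1, 1), square), point_in_polygon((3, 1), square))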
 
Article
Research in the field of complex fluids such as polymer solutions, particulate suspensions, and foams studies how the flow of fluids with different material parameters changes as a result of various constraints. Surface Evolver, the standard solver software used to generate foam simulations, provides large, complex, time-dependent data sets with hundreds or thousands of individual bubbles and thousands of time steps. However, this software has limited visualization capabilities, and no foam-specific visualization software exists. We describe the foam research application area where, we believe, visualization has an important role to play. We present a novel application that provides various techniques for visualization, exploration, and analysis of time-dependent 2D foam simulation data. We show new features in foam simulation data and new insights into foam behavior discovered using our application.
 
Article
We present a novel approach for analyzing two-dimensional (2D) flow field data based on the idea of invariant moments. Moment invariants have traditionally been used in computer vision applications, and we have adapted them for the purpose of interactive exploration of flow field data. The new class of moment invariants we have developed allows us to extract and visualize 2D flow patterns, invariant under translation, scaling, and rotation. With our approach one can study arbitrary flow patterns by searching a given 2D flow data set for any type of pattern as specified by a user. Further, our approach supports the computation of moments at multiple scales, facilitating fast pattern extraction and recognition. This can be done for critical point classification, but also for patterns with greater complexity. This multi-scale moment representation is also valuable for the comparative visualization of flow field data. The specific novel contributions of the work presented are the mathematical derivation of the new class of moment invariants, their analysis regarding critical point features, the efficient computation of a novel feature space representation, and based upon this the development of a fast pattern recognition algorithm for complex flow structures.
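For background, raw and central moments of a 2D patch can be computed as below in Python; the paper's contribution is a class of moment invariants adapted to vector-valued flow data and multi-scale pattern matching, which this scalar example does not cover.

import numpy as np

def raw_moment(f, p, q):
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return np.sum((x ** p) * (y ** q) * f)

def central_moment(f, p, q):
    m00, m10, m01 = raw_moment(f, 0, 0), raw_moment(f, 1, 0), raw_moment(f, 0, 1)
    cx, cy = m10 / m00, m01 / m00                    # centroid (translation invariance)
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return np.sum(((x - cx) ** p) * ((y - cy) ** q) * f)

patch = np.random.default_rng(2).random((32, 32))
print(central_moment(patch, 2, 0), central_moment(patch, 0, 2))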
 
Article
We present a 2D feature-based technique for morphing 3D objects represented by light fields. Existing light field morphing methods require the user to specify corresponding 3D feature elements to guide morph computation. Since slight errors in 3D specification can lead to significant morphing artifacts, we propose a scheme based on 2D feature elements that is less sensitive to imprecise marking of features. First, 2D features are specified by the user in a number of key views in the source and target light fields. Then the two light fields are warped view by view as guided by the corresponding 2D features. Finally, the two warped light fields are blended together to yield the desired light field morph. Two key issues in light field morphing are feature specification and warping of light field rays. For feature specification, we introduce a user interface for delineating 2D features in key views of a light field, which are automatically interpolated to other views. For ray warping, we describe a 2D technique that accounts for visibility changes and present a comparison to the ideal morphing of light fields. Light field morphing based on 2D features makes it simple to incorporate previous image morphing techniques such as nonuniform blending, as well as to morph between an image and a light field.
 
Article
Large 2D information spaces, such as maps, images, or abstract visualizations, require views at various levels of detail: close-ups to inspect details, overviews to maintain (literally) an overview. Users often change their view during a session. Smooth animations enable the user to maintain an overview during interactive viewing and to understand the context of separate views. We present a generic model to handle smooth image viewing. The core of the model is a metric on the effect of simultaneous zooming and panning, based on an estimate of the perceived velocity. Using this metric, solutions for various problems are derived, such as the optimal animation between two views, automatic zooming, and the parametrization of arbitrary camera paths. Optimal is defined here as smooth and efficient. Solutions are based on the shortest paths of a virtual camera, given the metric. The model has two free parameters: animation speed and zoom/pan trade-off. A user experiment to find good values for these is described. Finally, it is shown how the model can be extended to deal with rotation and nonuniform scaling.
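A minimal sketch of simultaneous zooming and panning is given below in Python: the pan is interpolated linearly and the viewport width geometrically, which keeps the apparent zoom rate constant. This is a simplification for illustration only; the paper's perceived-velocity metric and shortest-path camera animations are not implemented here.

import numpy as np

def interpolate_view(view0, view1, t):
    """A view is (center_x, center_y, width); t in [0, 1]."""
    (x0, y0, w0), (x1, y1, w1) = view0, view1
    x = (1 - t) * x0 + t * x1
    y = (1 - t) * y0 + t * y1
    w = w0 * (w1 / w0) ** t                      # geometric (log-linear) zoom
    return x, y, w

for t in np.linspace(0.0, 1.0, 5):
    print(interpolate_view((0.0, 0.0, 1.0), (10.0, 5.0, 16.0), t))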
 
Article
We present results from a user study that compared six visualization methods for two-dimensional vector data. Users performed three simple but representative tasks using visualizations from each method: 1) locating all critical points in an image, 2) identifying critical point types, and 3) advecting a particle. Visualization methods included two that used different spatial distributions of short arrow icons, two that used different distributions of integral curves, one that used wedges located to suggest flow lines, and line-integral convolution (LIC). Results show different strengths and weaknesses for each method. We found that users performed these tasks better with methods that: 1) showed the sign of vectors within the vector field, 2) visually represented integral curves, and 3) visually represented the locations of critical points. Expert user performance was not statistically different from nonexpert user performance. We used several methods to analyze the data including omnibus analysis of variance, pairwise t-tests, and graphical analysis using inferential confidence intervals. We concluded that using the inferential confidence intervals for displaying the overall pattern of results for each task measure and for performing subsequent pairwise comparisons of the condition means was the best method for analyzing the data in this study. These results provide quantitative support for some of the anecdotal evidence concerning visualization methods. The tasks and testing framework also provide a basis for comparing other visualization methods, for creating more effective methods and for defining additional tasks to further understand the tradeoffs among the methods. In the future, we also envision extending this work to more ambitious comparisons, such as evaluating two-dimensional vectors on two-dimensional surfaces embedded in three-dimensional space and defining analogous tasks for three-dimensional visualization methods.
 
Article
This paper describes approaches to topologically segmenting 2D time-dependent vector fields. For this class of vector fields, two important classes of lines exist: stream lines and path lines. Because of this, two segmentations are possible: either concerning the behavior of stream lines or of path lines. While topological features based on stream lines are well established, we introduce path line oriented topology as a new visualization approach in this paper. As a contribution to stream line oriented topology, we introduce new methods to detect global bifurcations like saddle connections and cyclic fold bifurcations as well as a method of tracking all isolated closed stream lines. To get the path line oriented topology, we segment the vector field into areas of attracting, repelling, and saddle-like behavior of the path lines. We compare both kinds of topologies and apply them to a number of test data sets.
 
A synthetic test data set f containing uniform and outlier-like noise: a) a height field visualization of f, b) the scale space L of f, c) the pruned merge graph of the minima of L, with thickness given by homological persistence, d)-f) the 38 most important minima and maxima of f as specified by d) persistence, e) scale space lifetime, f) scale space persistence.
Article
This paper introduces a novel importance measure for critical points in 2D scalar fields. This measure is based on a combination of the deep structure of the scale space with the well-known concept of homological persistence. We enhance the noise robust persistence measure by implicitly taking the hill-, ridge- and outlier-like spatial extent of maxima and minima into account. This allows for the distinction between different types of extrema based on their persistence at multiple scales. Our importance measure can be computed efficiently in an out-of-core setting. To demonstrate the practical relevance of our method we apply it to a synthetic and a real-world data set and evaluate its performance and scalability.
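The deep structure of scale space referenced above can be sketched by smoothing a field at exponentially increasing scales and watching extrema disappear; the Python snippet below does only that, with a crude local-minimum count, and does not compute the paper's combination of scale-space lifetime and homological persistence.

import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

rng = np.random.default_rng(3)
f = rng.random((128, 128))                         # a noisy 2D scalar field

sigmas = [0.5 * 2 ** k for k in range(6)]          # exponentially spaced scales
scale_space = np.stack([gaussian_filter(f, s) for s in sigmas])

# Count local minima surviving at each scale (a crude proxy for importance).
for s, level in zip(sigmas, scale_space):
    minima = (level == minimum_filter(level, size=3))
    print(f"sigma={s:5.1f}: {int(minima.sum())} local minima")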
 
Article
In this paper we present a new technique and prototype graph visualization system, stereoscopic highlighting, to help answer accessibility and adjacency queries when interacting with a node-link diagram. Our technique utilizes stereoscopic depth to highlight regions of interest in a 2D graph by projecting these parts onto a plane closer to the viewpoint of the user. This technique aims to isolate and magnify specific portions of the graph that need to be explored in detail without resorting to other highlighting techniques like color or motion, which can then be reserved to encode other data attributes. This mechanism of stereoscopic highlighting also enables focus+context views by juxtaposing a detailed image of a region of interest with the overall graph, which is visualized at a further depth with correspondingly less detail. In order to validate our technique, we ran a controlled experiment with 16 subjects comparing static visual highlighting to stereoscopic highlighting on 2D and 3D graph layouts for a range of tasks. Our results show that while for most tasks the difference in performance between stereoscopic highlighting alone and static visual highlighting is not statistically significant, users performed better when both highlighting methods were used concurrently. In more complicated tasks, 3D layout with static visual highlighting outperformed 2D layouts with a single highlighting method. However, it did not outperform the 2D layout utilizing both highlighting techniques simultaneously. Based on these results, we conclude that stereoscopic highlighting is a promising technique that can significantly enhance graph visualizations for certain use cases.
 
Article
Three-dimensional displays are drawing attention as next-generation devices. Some techniques that can reproduce three-dimensional images prepared in advance have already been developed. However, technology for the transmission of 3D moving pictures in real time is yet to be achieved. In this paper, we present a novel method for 360-degree viewable 3D displays and the Transpost system in which we implement the method. The basic concept of our system is to project multiple images of the object, taken from different angles, onto a spinning screen. The key to the method is projection of the images onto a directionally reflective screen with a limited viewing angle. The images are reconstructed to give the viewer a three-dimensional image of the object displayed on the screen. The display system can present images of computer-graphics pictures, live pictures, and movies. Furthermore, the reverse optical process of that in the display system can be used to record images of the subject from multiple directions. The images can then be transmitted to the display in real time. We have developed prototypes of a 3D display and a 3D human-image transmission system. Our preliminary working prototypes demonstrate new possibilities of expression and forms of communication.
 
Article
We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the L1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.
 
Article
We present an engine for enhancing the geometry of a 3D face mesh model while making the enhanced version share close similarity with the original. After obtaining the feature points of a given scanned 3D face model, we first perform a local and global symmetrization on the key facial features. We then apply an overall proportion optimization to the frontal face based on Neoclassical Canons and golden ratios. A nonlinear least-squares solution is adopted to adjust the feature points so that the face profile complies with the aesthetic criteria, which are derived from the profile cosmetology. Through the above processes, we obtain the optimized feature points, which will lead to a more attractive face. According to the original feature points and the optimized ones, we perform Laplacian deformation to adjust the remaining points of the face in order to preserve the geometric details. The analysis of user study in this paper validates the effectiveness of our 3D face geometry enhancement engine.
 
Article
Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space has been actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.
 
Article
We present a particle system for interactive visualization of steady 3D flow fields on uniform grids. For the amount of particles we target, particle integration needs to be accelerated and the transfer of these sets for rendering must be avoided. To fulfill these requirements, we exploit features of recent graphics accelerators to advect particles in the graphics processing unit (GPU), saving particle positions in graphics memory, and then sending these positions through the GPU again to obtain images in the frame buffer. This approach allows for interactive streaming and rendering of millions of particles and it enables virtual exploration of high resolution fields in a way similar to real-world experiments. The ability to display the dynamics of large particle sets using visualization options like shaded points or oriented texture splats provides an effective means for visual flow analysis that is far beyond existing solutions. For each particle, flow quantities like vorticity magnitude and λ2 are computed and displayed. Built upon a previously published GPU implementation of a sorting network, visibility sorting of transparent particles is implemented. To provide additional visual cues, the GPU constructs and displays visualization geometry like particle lines and stream ribbons.
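The per-particle work the paper moves onto the GPU amounts to sampling the vector field and stepping the position. The following CPU-side Python sketch uses trilinear interpolation on a unit-spaced grid and explicit Euler steps; the grid layout, step size, and uniform test field are illustrative assumptions, not the paper's implementation.

import numpy as np

def sample(field, p):
    """field: (nx, ny, nz, 3) vectors on a unit-spaced grid; p: (3,) position."""
    i = np.clip(np.floor(p).astype(int), 0, np.array(field.shape[:3]) - 2)
    f = p - i                                       # fractional offsets in the cell
    v = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))    # trilinear weight
                v += w * field[i[0] + dx, i[1] + dy, i[2] + dz]
    return v

def advect(field, positions, dt=0.5, steps=10):
    for _ in range(steps):
        positions = [p + dt * sample(field, p) for p in positions]   # Euler step
    return positions

field = np.zeros((8, 8, 8, 3)); field[..., 0] = 1.0       # uniform +x test flow
print(advect(field, [np.array([1.0, 1.0, 1.0])])[0])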
 
Article
This paper presents a sharpness-based method for hole-filling that can repair a 3D model such that its shape conforms to that of the original model. The method involves two processes: interpolation-based hole-filling, which produces an initial repaired model; and post-processing, which adjusts the shape of the initial repaired model to conform to that of the original model. In the interpolation-based hole-filling process, a surface interpolation algorithm based on the radial basis function creates a smooth implicit surface that fills the hole. Then, a regularized marching tetrahedral algorithm is used to triangulate the implicit surface. Finally, a stitching and regulating strategy is applied to the surface patch and its neighboring boundary polygon meshes to produce an initial repaired mesh model, which is a regular mesh model suitable for post-processing. During post-processing, a sharpness dependent filtering algorithm is applied to the initial repaired model. This is an iterative procedure whereby each iteration step adjusts the face normal associated with each meshed polygon to recover the sharp features hidden in the repaired model. The experimental results demonstrate that the method is effective in repairing incomplete 3D mesh models.
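The interpolation step can be sketched with a generic radial basis function fit: solve a small linear system for weights on the samples around the hole, then evaluate inside it. The Python snippet below is a 2D height-field simplification with a thin-plate-spline-like kernel and ad hoc regularization; the paper's implicit 3D surface, marching tetrahedra triangulation, and sharpness-dependent filtering are not shown.

import numpy as np

def fit_rbf(points, values, eps=1e-9):
    """Thin-plate-spline-like kernel phi(r) = r^2 log(r)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d + eps), 0.0)
    return np.linalg.solve(K + eps * np.eye(len(points)), values)

def eval_rbf(points, weights, query, eps=1e-9):
    d = np.linalg.norm(query[None, :] - points, axis=-1)
    phi = np.where(d > 0, d**2 * np.log(d + eps), 0.0)
    return float(phi @ weights)

rng = np.random.default_rng(4)
ring = rng.random((40, 2)) * 2 - 1                 # samples surrounding a "hole"
heights = np.sin(ring[:, 0]) * np.cos(ring[:, 1])
w = fit_rbf(ring, heights)
print(eval_rbf(ring, w, np.array([0.0, 0.0])))     # interpolated height at the center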
 
Overview of our visual attention model architecture. A) feature maps, B) conspicuity maps, C) bottom-up attention (saliency map), D) update of per-surfel data, E) top-down attention, F) computation of final attention on screen, G) computation of the possible next gaze position and H) the gaze pattern simulator computing the final gaze position on screen. Red color emphasizes the novel parts of our visual attention model compared to existing techniques.
Computation of the surfel map, and update of visibility and habituation maps. A) the surfel map containing world space surfel position (XYZ into RGB) B) surfel visibility (red=visible), C) surfel habituation (the greener the surfel, the less habituated the viewer is to it) and D) surfel habituation after waiting 6 seconds. B', C' and D' are views of the surfel map texture mapped on the scene. 
Article
This paper studies the design and application of a novel visual attention model designed to compute a user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach, compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated gaze of the user, and are meant to improve the user's sensations in future virtual reality applications.
 
Article
We developed an autostereoscopic display for distant viewing of three-dimensional (3D) computer graphics (CG) images without using special viewing glasses or tracking devices. The images are created by employing referential-viewing-area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate IP/IV elemental images. The images can be viewed from each viewpoint within a referential viewing area, and the elemental images are reconstructed from rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen that is placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images with an image depth of several meters in front of and behind the display that appear to have three-dimensionality even when viewed from a distance.
 
Our original 3D agents controlled by MPML3D. 
UML sequence diagram to illustrate the runtime behavior of the modules to control agents in virtual worlds with a MPML3D script. 
Example of a visitor avatar (middle) attending to a presentation given by two MPML3D scripted agents in SL. The presentation has been started by an in-world perception triggered by the visitor avatar (user) writing "Hello" into the chat channel. 
MPML3D XML Schema specification. 
Synchronization of speech and gestures in distributed virtual worlds. The crucial part is the network delay which must be estimated. 
Article
The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents ("bots") in the popular three-dimensional (3D) multi-user online world of "Second Life" and the emerging "OpenSimulator" project. While tools for designing avatars and in-world objects in Second Life exist, technology for non-programmer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and non-verbal behavior of interacting animated agents. Second, the paper compares Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms.
 
(a) Stanford Bunny object; (b) after 40% vertex decimation; (e) after 90% vertex decimation.
Article
In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, theta, phi) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles theta is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the theta domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed method to deal with the aforementioned attacks.
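The core embedding step of the first method can be sketched as below in Python: convert centered vertices to spherical coordinates and perturb only the radial coordinate r, so uniform scaling rescales the watermark along with the geometry. The PCA-based alignment, the theta-based vertex selection, and the detector are omitted, and the embedding strength is an illustrative assumption.

import numpy as np

def embed(vertices, watermark, strength=0.002):
    """vertices: (N, 3) centered mesh vertices; watermark: (N,) in {-1, +1}."""
    x, y, z = vertices.T
    r = np.linalg.norm(vertices, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(y, x)
    r = r * (1.0 + strength * watermark)            # deform along r only
    return np.column_stack([r * np.sin(theta) * np.cos(phi),
                            r * np.sin(theta) * np.sin(phi),
                            r * np.cos(theta)])

rng = np.random.default_rng(5)
verts = rng.normal(size=(1000, 3))
marked = embed(verts, rng.choice([-1.0, 1.0], size=1000))
print(np.max(np.linalg.norm(marked - verts, axis=1)))   # small geometric deformation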
 
Article
The Morse-Smale complex is a useful topological data structure for the analysis and visualization of scalar data. This paper describes an algorithm that processes all mesh elements of the domain in parallel to compute the Morse-Smale complex of large two-dimensional data sets at interactive speeds. We employ a reformulation of the Morse-Smale complex using Forman's Discrete Morse Theory and achieve scalability by computing the discrete gradient using local accesses only. We also introduce a novel approach to merge gradient paths that ensures accurate geometry of the computed complex. We demonstrate that our algorithm performs well on both multicore environments and on massively parallel architectures such as the GPU.
 
Article
In this paper we present a novel focus+context zooming technique, which allows users to zoom into a route and its associated landmarks in a 3D urban environment from a 45-degree bird's-eye view. Through the creative utilization of the empty space in an urban environment, our technique can informatively reveal the focus region and minimize distortions to the context buildings. We first create more empty space in the 2D map by broadening the road with an adapted seam carving algorithm. A grid-based zooming technique is then used to enlarge the landmarks to reclaim the created empty space and thus reduce distortions to the other parts. Finally, an occlusion-free route visualization scheme adaptively scales the buildings occluding the route to make the route always visible to users. Our method can be conveniently integrated into Google Earth and Virtual Earth to provide seamless route zooming and help users better explore a city and plan their tours. It can also be used in other applications such as information overlay to a virtual city.
 
Article
This paper presents a new technique, called aura 3D textures, for generating solid textures based on input examples. Our method is fully automatic and requires no user interactions in the process. Given an input texture sample, our method first creates its aura matrix representations and then generates a solid texture by sampling the aura matrices of the input sample constrained in multiple view directions. Once the solid texture is generated, any given object can be textured by the solid texture. We evaluate the results of our method based on extensive user studies. Based on the evaluation results using human subjects, we conclude that our algorithm can generate faithful results of both stochastic and structural textures with an average success rate of 76.4 percent. Our experimental results also show that the new method outperforms Wei and Levoy's method and is comparable to that proposed by Jagnow et al. [21].
 
Article
In this paper we present a novel anisotropic diffusion model targeted for 3D scalar field data. Our model preserves material boundaries as well as fine tubular structures while noise is smoothed out. One of the major novelties is the use of the directional second derivative to define material boundaries instead of the gradient magnitude for thresholding. This results in a diffusion model that has much lower sensitivity to the diffusion parameter and smoothes material boundaries consistently compared to gradient magnitude based techniques. We empirically analyze the stability and convergence of the proposed diffusion and demonstrate its de-noising capabilities for both analytic and real data. We also discuss applications in the context of volume rendering.
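For contrast with the proposed model, a classic gradient-magnitude-driven (Perona-Malik style) diffusion step on a 3D scalar field looks as follows in Python; the paper replaces this edge-stopping criterion with the directional second derivative, which is not implemented in this sketch, and kappa, dt, and the synthetic volume are illustrative assumptions.

import numpy as np

def diffuse(volume, iterations=5, dt=0.1, kappa=0.1):
    v = volume.astype(float).copy()
    for _ in range(iterations):
        update = np.zeros_like(v)
        for axis in range(3):
            # Periodic boundaries via np.roll (acceptable for a sketch).
            fwd = np.roll(v, -1, axis=axis) - v      # forward difference
            bwd = np.roll(v, 1, axis=axis) - v       # backward difference
            # Edge-stopping conductance: small where the local difference is large.
            update += np.exp(-(fwd / kappa) ** 2) * fwd
            update += np.exp(-(bwd / kappa) ** 2) * bwd
        v += dt * update
    return v

noisy = np.random.default_rng(6).normal(size=(32, 32, 32)) * 0.05
noisy[:, :16, :] += 1.0                               # a synthetic material boundary
print(np.std(diffuse(noisy)[:, :8, :]))               # noise inside the slab is reduced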
 
Article
This paper introduces a new streamline placement and selection algorithm for 3D vector fields. Instead of considering the problem as a simple feature search in data space, we base our work on the observation that most streamline fields generate a lot of self-occlusion which prevents proper visualization. In order to avoid this issue, we approach the problem in a view-dependent fashion and dynamically determine a set of streamlines which contributes to data understanding without cluttering the view. Since our technique couples flow characteristic criteria and view-dependent streamline selection we are able to achieve the best of both worlds: relevant flow description and intelligible, uncluttered pictures. We detail an efficient GPU implementation of our algorithm, show comprehensive visual results on multiple datasets and compare our method with existing flow depiction techniques. Our results show that our technique greatly improves the readability of streamline visualizations on different datasets without requiring user intervention.
 
Article
We present Drawing on Air, a haptic-aided input technique for drawing controlled 3D curves through space. Drawing on Air addresses a control problem with current 3D modeling approaches based on sweeping movement of the hands through the air. While artists praise the immediacy and intuitiveness of these systems, a lack of control makes it nearly impossible to create 3D form beyond quick design sketches or gesture drawings. Drawing on Air introduces two new strategies for more controlled 3D drawing: one-handed drag drawing and two-handed tape drawing. Both approaches have advantages for drawing certain types of curves. We describe a tangent preserving method for transitioning between the two techniques while drawing. Haptic-aided redrawing and line weight adjustment while drawing are also supported in both approaches. In a quantitative user study evaluation by illustrators, the one- and two-handed techniques performed at roughly the same level, and both significantly outperformed freehand drawing and freehand drawing augmented with a haptic friction effect. We present the design and results of this experiment as well as user feedback from artists and 3D models created in a style of line illustration for challenging artistic and scientific subjects.
 
Article
Three-dimensional metamorphosis is a powerful technique to produce a 3D shape transformation between two or more existing models. In this paper, we propose a novel 3D morphing technique that avoids creating a merged embedding that contains the faces, edges, and vertices of two given embeddings. This novel 3D morphing technique dynamically adds or removes vertices to gradually transform the connectivity of 3D polyhedrons from a source model into a target model and simultaneously creates the intermediate shapes. In addition, a priority control function provides the animators with control of arising or dissolving of input models' features in a morphing sequence. This is a useful tool to control a morphing sequence more easily and flexibly. Several examples of aesthetically pleasing morphs are demonstrated using the proposed method.
 
Top-cited authors
Hanspeter Pfister
  • Harvard University
Enrico Bertini
  • Northeastern University
Doug A. Bowman
  • Virginia Polytechnic Institute and State University
Catherine Plaisant
  • University of Maryland, College Park
Jessica Hullman
  • Northwestern University