# Computer Graphics Forum

Online ISSN: 1467-8659
Article
This paper presents a novel approach to detecting and preserving fine illumination structure within photon maps. Data derived from each photon's primal trajectory is encoded and used to build a high-dimensional kd-tree. Incorporation of these new parameters allows for precise differentiation between intersecting ray envelopes, thus minimizing detail degradation when combined with photon relaxation. We demonstrate how parameter-aware querying is beneficial in both detecting and removing noise. We also propose a more robust structure descriptor based on principal components analysis that better identifies anisotropic detail at the sub-kernel level. We illustrate the effectiveness of our approach in several example scenes and show significant improvements when rendering complex caustics compared to previous methods.

Article
The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capabilities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods.
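The packing idea can be sketched outside the shader world. The NumPy snippet below is an illustrative sketch (not the paper's code generator): it packs the rows of A into groups of four so that evaluating ȳ = Ax̄ + b̄ proceeds as four-wide multiply-add operations, the way a vec4 GPU instruction would consume them.

```python
import numpy as np

def linexpr_4wide(A, x, b):
    """Evaluate y = A @ x + b by packing row results into 4-wide groups,
    mimicking vec4 multiply-add (MAD) instructions."""
    n, m = A.shape
    pad = (-n) % 4
    Ap = np.vstack([A, np.zeros((pad, m))])        # pad rows to a multiple of 4
    bp = np.concatenate([b, np.zeros(pad)])
    y = bp.reshape(-1, 4).copy()                   # each row plays a vec4 register
    for j in range(m):                             # one 4-wide MAD per column of A
        y = y + Ap[:, j].reshape(-1, 4) * x[j]
    return y.ravel()[:n]
```

The reordering happens in the reshape: row-wise dot products become a sequence of whole-register multiply-adds, one per matrix column.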

Conference Paper
Tracing a ray through a scene and finding the closest intersection with the scene geometry is a fundamental operation in computer graphics. During the last two decades, significant efforts have been made to accelerate this operation, with interactive ray tracing as one of the major driving forces. At the heart of a fast method for intersecting a scene with a ray lies the acceleration structure. Many different acceleration structures exist, but research has focused almost exclusively on a few well-tried and well-established techniques: regular and hierarchical grids, bounding volume hierarchies and kd-trees. Spectacular advances have been made, which have contributed significantly to making interactive ray tracing a possibility. However, despite the success of these acceleration structures, several problems remain open. Handling deforming and dynamic geometry still poses significant challenges, and the local vs. global complexity of acceleration structures is still not entirely understood. One therefore wonders whether other acceleration structures, that leave the beaten path of efficient grids, bounding volume hierarchies and kd-trees, can provide viable alternatives.

Conference Paper
We present an automatic camera placement method for generating image-based models from scenes with known geometry. Our method first approximately determines the set of surfaces visible from a given viewing area and then selects a small set of appropriate camera positions to sample the scene from. We define a quality measure for a surface as seen, or covered, from the given viewing area. Along with each camera position, we store the set of surfaces which are best covered by this camera. Next, one reference view is generated from each camera position; portions of the reference view that do not belong to the selected set of polygons are masked out. The image-based model generated by our method covers every visible surface only once, associating it with a camera position from which it is covered with a quality that exceeds a user-specified threshold. The result is a compact, non-redundant image-based model with controlled quality. The problem of covering every visible surface with a minimum number of cameras (guards) can be regarded as an extension of the well-known Art Gallery Problem. However, since the 3D polygonal model is textured, the camera-polygon visibility relation is not binary; instead, it has a weight: the quality of the polygon's coverage.

Conference Paper
We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, on-line 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do this without prior geometric information or requiring any user interaction, in real time and on line. The heart of our method is using programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e. the most consistent color. In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present results.
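The color-consistency test at the core of the method can be illustrated in miniature. The sketch below is ours, not the authors' shader code: it replaces the 3D plane sweep with a horizontal disparity sweep over two grayscale images, squaring the intensity differences for each hypothesis and keeping, per pixel, the hypothesis with the minimum difference — the most consistent one.

```python
import numpy as np

def disparity_sweep(left, right, max_disp):
    """For each disparity hypothesis d, square the intensity differences
    between the two images; per pixel, keep the most consistent
    (minimum-cost) hypothesis, as the pixel shader does per depth plane."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = left[:, d:] - right[:, :w - d]      # shift-and-compare
        cost[d, :, d:] = diff ** 2                 # squared intensity difference
    return cost.argmin(axis=0)                     # winning hypothesis per pixel
```

In the paper this loop runs per depth plane on the GPU, with the warp done by projective texture mapping rather than an integer shift.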

Conference Paper
Recent improvements in laser rangefinder technology, together with algorithms developed at Stanford for combining multiple range and color images, allow us to reliably and accurately digitize the external shape and reflectance of many physical objects. As an application of this technology, I and a team of 30 faculty, staff, and students from Stanford University and the University of Washington spent the 1998-99 academic year digitizing the sculptures and architecture of Michelangelo. During this time, we scanned 10 statues, including the giant figure of David, and 2 building interiors, including the Medici Chapel, which was designed by Michelangelo. As a side project, we also acquired a high-resolution light field of his statue of Night, in the Medici Chapel. Finally, in another side project, we scanned the 1,163 fragments of the Forma Urbis Romae, the giant marble map of ancient Rome. In the months ahead we will process the data we have collected to create 3D digital models of these objects and, in the case of the Forma Urbis, we will try to assemble the map. The goals of this project are scholarly and educational. Commercial use of the models is not excluded, and many such uses can be imagined, but none is currently planned. I outline the technological underpinnings, logistical challenges, and possible outcomes of this project.

Article
RGBD images with high quality annotations in the form of geometric (i.e., segmentation) and structural (i.e., how the segments are mutually related in 3D) information provide valuable priors to a large number of scene and image manipulation applications. While it is now simple to acquire RGBD images, annotating them, automatically or manually, remains challenging, especially in cluttered noisy environments. We present SmartAnnotator, an interactive system to facilitate annotating RGBD images. The system performs the tedious tasks of grouping pixels, creating potential abstracted cuboids, inferring object interactions in 3D, and comes up with various hypotheses. The user simply has to flip through a list of suggestions for segment labels, finalize a selection, and the system updates the remaining hypotheses. As objects are finalized, the process speeds up with fewer ambiguities to resolve. Further, as more scenes are annotated, the system makes better suggestions based on structural and geometric priors learned from the previous annotation sessions. We test our system on a large number of database scenes and report significant improvements over naive low-level annotation tools.

Article
We present a fast algorithm for global 3D symmetry detection with approximation guarantees. The algorithm is guaranteed to find the best approximate symmetry of a given shape, to within a user-specified threshold, with an overwhelming probability. Our method uses a carefully designed sampling of the transformation space, where each transformation is efficiently evaluated using a property testing technique. We prove that the density of the sampling depends on the total variation of the shape, allowing us to derive formal bounds on the algorithm's complexity and approximation quality. We further investigate different volumetric shape representations (in the form of truncated distance transforms), and in such a way control the total variation of the shape and hence the sampling density and the runtime of the algorithm. A comprehensive set of experiments assesses the proposed method, including an evaluation on the eight categories of the COSEG data-set. This is the first large-scale evaluation of any symmetry detection technique that we are aware of.

Article
The use of Laplacian eigenbases has been shown to be fruitful in many computer graphics applications. Today, state-of-the-art approaches to shape analysis, synthesis, and correspondence rely on these natural harmonic bases that allow using classical tools from harmonic analysis on manifolds. However, many applications involving multiple shapes are hampered by the fact that Laplacian eigenbases computed independently on different shapes are often incompatible with each other. In this paper, we propose the construction of common approximate eigenbases for multiple shapes using approximate joint diagonalization algorithms. We illustrate the benefits of the proposed approach on tasks from shape editing, pose transfer, correspondence, and similarity.
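As a toy illustration of what a common basis buys (and of its limits), one crude stand-in for approximate joint diagonalization is to take the eigenvectors of the averaged operator. This is exact only when the operators commute; the paper optimizes a genuine joint-diagonalization objective instead.

```python
import numpy as np

def joint_basis(laplacians, k):
    """Crude stand-in for approximate joint diagonalization: eigenvectors of
    the averaged operator serve as a common basis. Exact only when the
    Laplacians commute, unlike the optimization-based approach in the paper."""
    L = sum(laplacians) / len(laplacians)
    _, V = np.linalg.eigh(L)      # ascending eigenvalues, orthonormal columns
    return V[:, :k]               # first k basis functions shared by all shapes
```

For commuting operators, projecting each Laplacian into this basis yields a (numerically) diagonal matrix; for real shape pairs the off-diagonal residual measures the incompatibility the paper sets out to reduce.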

Article
Scientists study trajectory data to understand trends in movement patterns, such as human mobility for traffic analysis and urban planning. There is a pressing need for scalable and efficient techniques for analyzing this data and discovering the underlying patterns. In this paper, we introduce a novel technique which we call vector-field $k$-means. The central idea of our approach is to use vector fields to induce a similarity notion between trajectories. Other clustering algorithms seek a representative trajectory that best describes each cluster, much like $k$-means identifies a representative "center" for each cluster. Vector-field $k$-means, on the other hand, recognizes that in all but the simplest examples, no single trajectory adequately describes a cluster. Our approach is based on the premise that movement trends in trajectory data can be modeled as flows within multiple vector fields, and the vector field itself is what defines each of the clusters. We also show how vector-field $k$-means connects techniques for scalar field design on meshes and $k$-means clustering. We present an algorithm that finds a locally optimal clustering of trajectories into vector fields, and demonstrate how vector-field $k$-means can be used to mine patterns from trajectory data. We present experimental evidence of its effectiveness and efficiency using several datasets, including historical hurricane data, GPS tracks of people and vehicles, and anonymous call records from a large phone company. We compare our results to previous trajectory clustering techniques, and find that our algorithm performs faster in practice than the current state-of-the-art in trajectory clustering, in some examples by a large margin.
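The alternation at the heart of vector-field k-means can be sketched in a drastically simplified setting where each cluster's "vector field" is a single constant velocity (the paper fits vector fields on a mesh via scalar field design). The structure is the same: assign each trajectory to the field that best explains its motion, then refit each field by least squares.

```python
import numpy as np

def vf_kmeans(trajectories, k, iters=20):
    """Toy vector-field k-means with constant-velocity 'fields'.
    Alternates (1) assigning each trajectory to the field with the smallest
    velocity residual and (2) refitting each field as the cluster's mean
    velocity (the least-squares constant field)."""
    vels = [np.diff(t, axis=0) for t in trajectories]   # per-step velocities
    mv = np.array([v.mean(axis=0) for v in vels])
    centers = [mv[0]]                                   # farthest-point init
    for _ in range(1, k):
        d = np.min([np.sum((mv - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(mv[int(np.argmax(d))])
    centers = np.array(centers)
    labels = np.zeros(len(vels), dtype=int)
    for _ in range(iters):
        for i, v in enumerate(vels):                    # assignment step
            labels[i] = int(np.argmin([np.sum((v - c) ** 2) for c in centers]))
        for c in range(k):                              # update step
            member = [vels[i] for i in range(len(vels)) if labels[i] == c]
            if member:
                centers[c] = np.concatenate(member).mean(axis=0)
    return labels, centers
```

Replacing the constant velocity with a piecewise-linear field fitted on a mesh recovers the flavor of the actual algorithm, where the field itself defines the cluster.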

Article
This paper presents an approach to a time-dependent variant of the concept of vector field topology for 2-D vector fields. Vector field topology is defined for steady vector fields and aims at discriminating the domain of a vector field into regions of qualitatively different behaviour. The presented approach represents a generalization for saddle-type critical points and their separatrices to unsteady vector fields based on generalized streak lines, with the classical vector field topology as its special case for steady vector fields. The concept is closely related to that of Lagrangian coherent structures obtained as ridges in the finite-time Lyapunov exponent field. The proposed approach is evaluated on both 2-D time-dependent synthetic vector fields and vector fields from computational fluid dynamics.
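The finite-time Lyapunov exponent field mentioned above can be computed on a grid as follows (a self-contained sketch with forward-Euler advection; a production implementation would use a higher-order integrator and richer boundary handling): advect each grid point over the time window, form the flow-map gradient, and take the largest eigenvalue of the Cauchy-Green tensor.

```python
import numpy as np

def ftle(vel, x0, y0, T, steps=100):
    """Finite-time Lyapunov exponent on a 2D grid: advect each grid point
    with forward Euler, then take the largest eigenvalue of the Cauchy-Green
    tensor C = F^T F of the numerical flow map F."""
    X, Y = np.meshgrid(x0, y0)
    px, py = X.astype(float).copy(), Y.astype(float).copy()
    dt = T / steps
    for _ in range(steps):
        u, v = vel(px, py)
        px, py = px + dt * u, py + dt * v
    dx, dy = x0[1] - x0[0], y0[1] - y0[0]
    dpx_dx = np.gradient(px, dx, axis=1); dpx_dy = np.gradient(px, dy, axis=0)
    dpy_dx = np.gradient(py, dx, axis=1); dpy_dy = np.gradient(py, dy, axis=0)
    out = np.empty_like(px)
    for i in range(px.shape[0]):
        for j in range(px.shape[1]):
            F = np.array([[dpx_dx[i, j], dpx_dy[i, j]],
                          [dpy_dx[i, j], dpy_dy[i, j]]])
            lam = np.linalg.eigvalsh(F.T @ F).max()
            out[i, j] = np.log(np.sqrt(lam)) / abs(T)
    return out
```

Ridges of this field are the Lagrangian coherent structures that the paper relates to its generalized separatrices.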

Article
Calculating and categorizing the similarity of curves is a fundamental problem which has generated much recent interest. However, to date there are no implementations of these algorithms for curves on surfaces with provable guarantees on the quality of the measure. In this paper, we present a similarity measure for any two cycles that are homologous, where we calculate the minimum area of any homology (or connected bounding chain) between the two cycles. The minimum area homology exists for broader classes of cycles than previous measures which are based on homotopy. It is also much easier to compute than previously defined measures, yielding an efficient implementation that is based on linear algebra tools. We demonstrate our algorithm on a range of inputs, showing examples which highlight the feasibility of this similarity measure.

Article
We present a novel sparse modeling approach to non-rigid shape matching using only the ability to detect repeatable regions. As the input to our algorithm, we are given only two sets of regions in two shapes; no descriptors are provided, so the correspondence between the regions is not known, nor do we know how many regions correspond in the two shapes. We show that even with such scarce information, it is possible to establish very accurate correspondence between the shapes by using methods from the field of sparse modeling, making this the first non-trivial use of sparse models in shape correspondence. We formulate the problem of permuted sparse coding, in which we solve simultaneously for an unknown permutation ordering the regions on the two shapes and for an unknown correspondence in functional representation. We also propose a robust variant capable of handling incomplete matches. Numerically, the problem is solved efficiently by alternating the solution of a linear assignment and a sparse coding problem. The proposed methods are evaluated qualitatively and quantitatively on standard benchmarks containing both synthetic and scanned objects.
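The alternation the abstract describes — a linear assignment step interleaved with a sparse coding step — can be sketched for the simplified problem min over (perm, X) of ½‖A[perm] − DX‖² + λ‖X‖₁ with a known dictionary D. This is an illustrative stand-in, not the paper's functional-representation formulation; the assignment step uses the Hungarian algorithm and the coding step uses ISTA, so the objective never increases.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def soft(z, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def permuted_sparse_coding(A, D, lam=0.1, outer=10, inner=50):
    """Alternate a linear assignment over row permutations of A with ISTA
    sparse coding against a fixed dictionary D:
        min_{perm, X}  0.5 * ||A[perm] - D X||_F^2 + lam * ||X||_1
    Neither step can increase the objective, so history is non-increasing."""
    X = np.zeros((D.shape[1], A.shape[1]))
    L = np.linalg.norm(D, 2) ** 2                   # Lipschitz const of gradient
    history = []
    for _ in range(outer):
        R = D @ X                                   # current reconstruction
        cost = ((A[:, None, :] - R[None, :, :]) ** 2).sum(-1)
        _, cols = linear_sum_assignment(cost)       # row j of A -> slot cols[j]
        perm = np.argsort(cols)                     # perm[i] = row of A in slot i
        Y = A[perm]
        for _ in range(inner):                      # ISTA with fixed permutation
            X = soft(X - (D.T @ (D @ X - Y)) / L, lam / L)
        history.append(0.5 * np.sum((Y - D @ X) ** 2) + lam * np.abs(X).sum())
    return perm, X, history
```

The monotone objective is the property that makes the alternation in the paper well behaved; the robust variant additionally tolerates rows of A with no match.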

Article
We address the problem of curvature estimation from sampled compact sets. The main contribution is a stability result: we show that the Gaussian, mean, or anisotropic curvature measures of the offset of a compact set K with positive $\mu$-reach can be estimated by the same curvature measures of the offset of a compact set K' close to K in the Hausdorff sense. We show how these curvature measures can be computed for finite unions of balls. The curvature measures of the offset of a compact set with positive $\mu$-reach can thus be approximated by the curvature measures of the offset of a point-cloud sample. These results can also be interpreted as a framework for an effective and robust notion of curvature.

Article
GKS, GKS-3D, and PHIGS are all approved ISO standards for the application programmer interface. How does a system analyst or programmer decide which standard to use for his application? This paper discusses the range of application requirements likely to be encountered, explores the suitability of GKS and PHIGS for satisfying these requirements, and offers guidelines to aid in the decision process.

Article
We present a novel methodology that utilizes 4-Dimensional (4D) space deformation to simulate a magnification lens on versatile volume datasets and textured solid models. Compared with other magnification methods (e.g., geometric optics, mesh editing), 4D differential geometry theory and its practices are much more flexible and powerful for preserving shape features (i.e., minimizing angle distortion), and easier to adapt to versatile solid models. The primary advantage of 4D space lies in the following fact: we can now easily magnify the volume of regions of interest (ROIs) from the additional dimension, while keeping the remaining region unchanged. To achieve this primary goal, we first embed a 3D volumetric input into 4D space and magnify ROIs in the 4th dimension. Then we flatten the 4D shape back into 3D space to accommodate other typical applications in the real 3D world. In order to enforce distortion minimization, in both steps we devise high-dimensional geometry techniques based on rigorous 4D geometry theory for 3D/4D mapping back and forth to amend the distortion. Our system can preserve not only the focus region, but also the context region and the global shape. We demonstrate the effectiveness, robustness, and efficacy of our framework with a variety of models ranging from tetrahedral meshes to volume datasets.

Article
Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

Article
This paper extends a recently proposed robust computational framework for constructing the boundary representation (brep) of the volume swept by a given smooth solid moving along a one-parameter family $h$ of rigid motions. Our extension allows the input solid to have sharp features, i.e., to be of class G0, wherein the unit outward normal to the solid may be discontinuous. In the earlier framework, the solid to be swept was restricted to be G1, and thus this is a significant and useful extension of that work. This naturally requires a precise description of the geometry of the surface generated by the sweep of a sharp edge supported by two intersecting smooth faces. We uncover the geometry along with the related issues like parametrization, self-intersection and singularities via a novel mathematical analysis. Correct trimming of such a surface is achieved by a delicate analysis of the interplay between the cone of normals at a sharp point and its trajectory under $h$. The overall topology is explicated by a key lifting theorem which allows us to compute the adjacency relations amongst entities in the swept volume by relating them to corresponding adjacencies in the input solid. Moreover, global issues related to body-check such as orientation are efficiently resolved. Many examples from a pilot implementation illustrate the efficiency and effectiveness of our framework.

Article
The Conference was held on the three days 24th to 26th March at Glasgow University, the first day devoted to two parallel tutorials, the second day opening the conference proper and including meetings of special interest groups and the final day consisting of submitted and invited papers.

Article
The 2010 Eurographics Workshop on 3D Object Retrieval was held in Norrköping, Sweden on May 2, 2010. The workshop co-chairs were Mohamed Daoudi and Tobias Schreck, while the program co-chairs were Michela Spagnuolo, Ioannis Pratikakis, Remco Veltkamp and Theoharis Theoharis. A program of sessions on 3D shape descriptors, part-based representation for retrieval, 3D face recognition, and learning and benchmarking was formed. Raif M. Rustamov presented a volume-based shape descriptor that is robust with respect to changes in pose and topology. Shape distributions were aggregated throughout the entire volume contained within the shape in this approach, capturing information conveyed by the volumes of shapes. Other presenters described an interest point detector for 3D objects based on Harris Corner Detection, which had been used with good results in computer vision applications.

Article
A mechanism is presented for direct manipulation of 3D objects with a conventional 2D input device, such as a mouse. The user can define and modify a model by graphical interaction on a 3D perspective or parallel projection. A gestural interface technique enables the specification of 3D transformations (translation, rotation and scaling) by 2D pick and drag operations. Interaction is not restricted to single objects but can be applied to compound objects as well. The method described in this paper is an easy-to-understand 3D input technique which does not require any special hardware and is compatible with the designer's mental model of object manipulation.

Article
Interactive 2-D systems have benefited greatly from the improvements in IC technology. Today, the trend is to relieve the host computer from low level tasks through increasing the graphic system's computational power. The introduction of video RAMs has solved the problem of contention for memory cycles between the display generator and the video refresh controller. The improvements in graphic controllers have led from the first fixed-instruction controllers to today's third generation of programmable graphic processors, able to support computer graphic interface standards. This article will present this evolution, and focus on a 2-D graphic processor designed at the Imagery, Instrumentation and Systems Laboratory, based on the separation of graphic generation and memory management functions.

Article
The earliest Web browsers focussed on the display of textual information. When graphics were added, essentially only image graphics and image file formats were supported. For a significant range of applications, image graphics has severe limitations, for example in terms of file size, download time and inability to interact with and modify the graphics client-side. Vector graphics may be more appropriate in these cases, and this has become possible through the introduction of the WebCGM and Scalable Vector Graphics (SVG) formats, both of which are open standards, the former from ISO/IEC and W3C and the latter from W3C. This paper reviews the background to Web graphics, presents the WebCGM file format, and gives a more detailed exposition of the most recent format, SVG. The paper concludes with reflections on the current state of this area and future prospects.

Article
Analytical approaches, based on digitised 2D texture models, for an automatic solid (3D) texture synthesis have been recently introduced to Computer Graphics. However, these approaches cannot provide satisfactory solutions in the usual case of natural anisotropic textures (wood grain for example). Indeed, solid texture synthesis requires particular care, and sometimes external knowledge to "guess" the internal structure of solid textures because only 2D texture models are used for analysis. By making some basic assumptions about the internal structure of solid textures, we propose a very efficient method based on a hybrid analysis (spectral and histogram) for an automatic synthesis of solid textures. This new method allows us to obtain high precision solid textures (closely resembling initial models) in a large number of cases, including the difficult case of anisotropic textures.

Article
We introduce a scheme of control polygons to design topological skeletons for vector fields of arbitrary topology. Based on this we construct piecewise linear vector fields of exactly the topology specified by the control polygons. This way a controlled construction of vector fields of any topology is possible. Finally we apply this method for topology-preserving compression of vector fields consisting of a simple topology.

Article
Topological methods produce simple and meaningful depictions of symmetric, second order two-dimensional tensor fields. Extending previous work dealing with vector fields, we propose here a scheme for the visualization of time-dependent tensor fields. Basic notions of unsteady tensor topology are discussed. Topological changes - known as bifurcations - are precisely detected and identified by our method which permits an accurate tracking of degenerate points and related structures.

Article
The topological structure of scalar, vector, and second-order tensor fields provides an important mathematical basis for data analysis and visualization. In this paper, we extend this framework towards higher-order tensors. First, we establish formal uniqueness properties for a geometrically constrained tensor decomposition. This allows us to define and visualize topological structures in symmetric tensor fields of orders three and four. We clarify that in 2D, degeneracies occur at isolated points, regardless of tensor order. However, for orders higher than two, they are no longer equivalent to isotropic tensors, and their fractional Poincaré index prevents us from deriving continuous vector fields from the tensor decomposition. Instead, sorting the terms by magnitude leads to a new type of feature, lines along which the resulting vector fields are discontinuous. We propose algorithms to extract these features and present results on higher-order derivatives and higher-order structure tensors.

Article
Data sets coming from simulations or sampling of real-world phenomena often contain noise that hinders their processing and analysis. Automatic filtering and denoising can be challenging: when the nature of the noise is unknown, it is difficult to distinguish between noise and actual data features; in addition, the filtering process itself may introduce “artificial” features into the data set that were not originally present. In this paper, we propose a smoothing method for 2D scalar fields that gives the user explicit control over the data features. We define features as critical points of the given scalar function, and the topological structure they induce (i.e., the Morse-Smale complex). Feature significance is rated according to topological persistence. Our method allows filtering out spurious features that arise due to noise by means of topological simplification, providing the user with a simple interface that defines the significance threshold, coupled with immediate visual feedback of the remaining data features. In contrast to previous work, our smoothing method guarantees a C1-continuous output scalar field with the exact specified features and topological structures.
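Topological persistence, which the method uses to rate feature significance, is easiest to see in 1D. The sketch below is ours (the paper works with the 2D Morse-Smale complex): it pairs each local minimum with the saddle at which its sublevel-set component merges into an older, deeper one. Features whose birth-death gap falls below the user's threshold are the ones topological simplification would remove.

```python
def persistence_pairs_1d(f):
    """Sublevel-set persistence of a 1D sequence: a component is born at each
    local minimum and dies at the saddle where it merges into an older
    (deeper) component; the global minimum never dies."""
    n = len(f)
    comp = [None] * n                       # union-find parent, None = unseen

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]         # path halving
            i = comp[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda j: f[j]):   # process by increasing value
        left = find(i - 1) if i > 0 and comp[i - 1] is not None else None
        right = find(i + 1) if i < n - 1 and comp[i + 1] is not None else None
        if left is None and right is None:
            comp[i] = i                     # birth: a new local minimum
        elif left is not None and right is not None:
            lo, hi = (left, right) if f[left] <= f[right] else (right, left)
            pairs.append((f[hi], f[i]))     # death: younger minimum merges here
            comp[hi] = lo
            comp[i] = lo
        else:
            comp[i] = left if left is not None else right
    return pairs                            # (birth, death) per cancelled feature
```

The persistence of a pair is death minus birth; filtering then amounts to cancelling all pairs below the chosen threshold, which is what the paper's C1-continuous smoothing realizes geometrically.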

Article
Computational Morphology is the analysis of form by computational means. This discipline typically uses techniques from Computational Geometry and Computer Aided Geometric Design. The present paper is more specifically about the construction and manipulation of closed object boundaries through a set of scattered points in 3D. Original results are developed in three stages of computational morphology: imposing a geometrical structure on the set of points; constructing a polyhedral boundary surface from this geometrical structure; and building a hierarchy of polyhedral approximations together with localization information. The economic advantage of this approach is that there is no dependency on any specific data source. It can be used for various types of data sources or when the source is unknown.

Article
We present a complete system for designing and manipulating regular or near-regular textures in 2D images. We place emphasis on supporting creative workflows that produce artwork from scratch. As such, our system provides tools to create, arrange, and manipulate textures in images with intuitive controls, and without requiring 3D modeling. Additionally, we ensure continued, non-destructive editability by expressing textures via a fully parametric descriptor. We demonstrate the suitability of our approach with numerous example images, created by an artist using our system, and we compare our proposed workflow with alternative 2D and 3D methods.

Article
This paper presents a new method that combines a medial axis and implicit surfaces in order to reconstruct a 3D solid from an unstructured set of points scattered on the object's surface. The representation produced is based on iso-surfaces generated by skeletons, and is a particularly compact way of defining a smooth free-form solid. The method is based on the minimisation of an energy representing a "distance" between the set of data points and the iso-surface, resembling previous research [9]. Initialisation, however, is more robust and efficient since it relies on computation of the medial axis of the set of points. Instead of subdividing existing skeletons in order to refine the object's surface, a new reconstruction algorithm progressively selects skeleton-points from the precomputed medial axis using a heuristic principle based on a "local energy" criterion. This drastically speeds up the reconstruction process. Moreover, using the medial axis allows reconstruction of objects with complex topology and geometry, like objects that have holes and branches or that are composed of several connected components. This process is fully automatic. The method has been successfully applied to both synthetic and real data.

Article
Interactive visualization of very large volume data has been recognized as a task requiring great effort in a variety of science and engineering fields. In particular, such data usually places considerable demands on run-time memory space. In this paper, we present an effective 3D compression scheme for interactive visualization of very large volume data, that exploits the power of wavelet theory. In designing our method, we have compromised between two important factors: high compression ratio and fast run-time random access ability. Our experimental results on the Visual Human data sets show that our method achieves fairly good compression ratios. In addition, it minimizes the overhead caused during run-time reconstruction of voxel values. This 3D compression scheme will be useful in developing many interactive visualization systems for huge volume data, especially when they are based on personal computers or workstations with limited memory.
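One level of the kind of separable 3D wavelet transform such a scheme builds on can be written compactly with Haar averages and differences. This is an illustrative sketch, not the paper's codec: compression would come from quantizing or zeroing small detail coefficients per block, and fast random access from decoding only the block that contains the queried voxel.

```python
import numpy as np

def haar3d(block):
    """One level of a separable 3D Haar transform (even side lengths):
    averages land in the first half of each axis, details in the second."""
    for ax in range(3):
        a = block.swapaxes(0, ax)
        avg = (a[0::2] + a[1::2]) / 2
        dif = (a[0::2] - a[1::2]) / 2
        block = np.concatenate([avg, dif]).swapaxes(0, ax)
    return block

def ihaar3d(coef):
    """Exact inverse of haar3d (the three axis transforms commute)."""
    for ax in range(3):
        c = coef.swapaxes(0, ax)
        h = c.shape[0] // 2
        a = np.empty_like(c)
        a[0::2] = c[:h] + c[h:]     # even samples = average + detail
        a[1::2] = c[:h] - c[h:]     # odd samples  = average - detail
        coef = a.swapaxes(0, ax)
    return coef
```

Applied blockwise (e.g., to 8³ or 16³ bricks), the transform keeps reconstruction local, which is exactly the random-access property the paper trades off against compression ratio.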

Article
We present the first algorithm for constructing 3D vector fields based on their topological skeleton. The skeleton itself is modeled by interactively moving a number of control polygons. Then a piecewise linear vector field is automatically constructed which has the same topological skeleton as modeled before. This approach is based on a complete segmentation of the areas around critical points into sectors of different flow behavior. Based on this, we present the first approach to visualizing higher order critical points of 3D vector fields.

Article
The Migration Problem of GKS-3D and PHIGS is discussed in this report from the viewpoint of the graphics reference model's description technique: the component and framework concept. To this end, the components of GKS-3D and PHIGS are defined. The common and differing concepts are indicated, and the mapping from PHIGS to GKS-3D and from GKS-3D to PHIGS is carried out. This mapping is performed only on the components, which have been proved a description technique suited for such purposes. The migration has two different aspects. One is the theoretical approach to prove that such a migration is possible. The other one is more practical: it gives guidelines for the use of PHIGS and GKS-3D and shows under which circumstances both can be used together.

Article
In this paper we present a novel method to approximate the force field of a discrete 3D object with a time complexity that is linear in the number of voxels. We define a rule, similar to the distance transform, to propagate forces associated with boundary points into the interior of the object. The result of this propagation depends on the order in which the points of the object are processed, so we analyze how to obtain an order-invariant approximation formula. With the resulting formula it becomes possible to approximate the force field and to use its features for fast, topology-preserving skeletonization. We use a thinning strategy on the body-centered cubic lattice to compute the skeleton and ensure that critical points of the force field are not removed. This leads to improved skeletons with respect to centeredness and rotational invariance.

Article
An algorithm for automatic reconstruction of 3D objects from their orthographic projections is presented in this paper. It improves upon and complements the Wesley-Markowsky algorithm, a typical hierarchical reconstruction algorithm limited to polyhedral objects, and adopts the pattern-recognition idea expressed in the Aldefeld algorithm. It is shown, in theory by analysis and in practice by implementation, that the proposed algorithm successfully rejects pathological cases and finds all solutions consistent with a given set of orthographic views. Compared with existing algorithms, it also covers more complex objects incorporating cylinders.

Article
Gosip is an implementation of a GKS-3D level 2c interface to PHIGS. It allows GKS applications to run on PHIGS platforms, offering performance and portability across a wide range of high-performance 3D workstations. Compatibility of the standards is reviewed. A selection of design solutions is given for the problems of error processing, non-retained primitives and attribute management. The concepts of Workstation Display Session and attribute state are introduced. Some comments are made on implementation dependencies, performance and portability.

Article
This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom-up approach, a saliency map is defined for the image and gives an attention weight to every pixel of the image as a function of its colour, edge or intensity. Our algorithm uses an uncertainty window, defined by the gaze tracker accuracy, and located around the gaze point given by the tracker. Then, using a visual attention model, it searches for the most salient points, or objects, located inside this uncertainty window, and determines a new, potentially more accurate gaze point. This combination of a gaze tracker with a visual attention model is the main contribution of the paper. We demonstrate the promising results of our method by presenting two experiments conducted in two different contexts: (1) a free exploration of a visually rich 3D virtual environment without a specific task, and (2) a video game based on gaze tracking involving a selection task. Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications such as video games or virtual reality applications. The use of a visual attention model can be adapted to any gaze tracker, and the visual attention model can also be adapted to the application in which it is used.
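The core correction step can be sketched in a few lines: given a saliency map and the tracker's uncertainty radius, snap the reported gaze point to the most salient pixel inside the window. This is a simplified stand-in for the paper's method (which can also operate on salient objects rather than single pixels), and the function name is hypothetical:

```python
import numpy as np

def refine_gaze(saliency, gaze_xy, radius):
    """Snap a noisy gaze sample to the most salient pixel inside the
    uncertainty window: a square of half-size `radius` (in pixels, set by
    the tracker's accuracy) around the reported gaze point."""
    h, w = saliency.shape
    gx, gy = gaze_xy
    x0, x1 = max(0, gx - radius), min(w, gx + radius + 1)
    y0, y1 = max(0, gy - radius), min(h, gy + radius + 1)
    win = saliency[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(win), win.shape)
    return (int(x0 + dx), int(y0 + dy))
```

If the window contains no salient structure (a flat saliency map), argmax degenerates to the window corner, so a practical system would fall back to the raw gaze point below some saliency threshold.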

Article
How to render very complex datasets, and yet maintain interactive response times, is a hot topic in computer graphics. The MagicSphere idea originated as a solution to this problem, but its potential goes much further than this original scope. In fact, it has been designed as a very general 3D widget: it defines a spherical volume of interest in the dataset modeling space. Several filters can then be associated with the MagicSphere, which apply different visualization modalities to the data contained in the volume of interest. The visualization of multi-resolution datasets is selected here as a case study, and an ad hoc filter, the MultiRes filter, has been designed for it. Some results of a prototype implementation are presented and discussed.

Article
Feature detection in geometric datasets is a fundamental tool for solving shape matching problems such as partial symmetry detection. Traditional techniques usually employ a priori models such as crease lines that are unspecific to the actual application. Our paper examines the idea of learning geometric features. We introduce a formal model for a class of linear feature constellations based on a Markov chain model and propose a novel, efficient algorithm for detecting a large number of features simultaneously. After a short user-guided training stage, in which one or a few example lines are sketched directly onto the input data, our algorithm automatically finds all pieces of geometry similar to the marked areas. In particular, the algorithm is able to recognize larger classes of semantically similar but geometrically varying features, which is very difficult using unsupervised techniques. In a number of experiments, we apply our technique to point cloud data from 3D scanners. The algorithm is able to detect features with very low rates of false positives and negatives and to recognize broader classes of similar geometry (such as “windows” in a building scan) even from few training examples, thereby significantly improving over previous unsupervised techniques.

Article
An important research area in non-photorealistic rendering is the extraction of silhouettes. There are many methods to do this using 3D models and raster structures, but these are limited in their ability to create stylised silhouettes while maintaining complete flexibility. These limitations do not exist in illustration, as each element is planar and the interaction between elements can be eliminated by placing each one in a different layer. This is the approach presented in this paper: a 3D model is flattened into planar elements ordered in space, which allows the silhouettes to be drawn with total flexibility. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Line and Curve Generation

Article
The support of advanced information technology (IT) for the preservation, restoration and documentation of Cultural Heritage (CH) is becoming a very important goal for the research community. Michelangelo's David was one of the first applications of 3D scanning technology to a highly popular work of art. The subsequent restoration campaign, started in 2002 and concluded in 2004, was also a milestone for the adoption of modern scientific analysis procedures and IT tools in the framework of a restoration process. One focus of this restoration was also methodological, i.e. to plan and adopt innovative ways to document the restoration process. In this paper, we present the results of an integration of different restoration data (2D and 3D datasets) which has been concluded recently. The recent evolution of graphics hardware and software technologies gave us the possibility to interactively visualize an extremely dense 3D model which incorporates the colour information provided by two professional photographic campaigns, made before and after the restoration. Moreover, we present the results concerning the mapping, in this case on the 2D media, of the reliefs produced by restorers to assess and document the status of the marble surface before the restoration took place. This result could lead to new and fascinating applications of computer graphics for the preservation, restoration and documentation of CH.

Article
We describe a new 3D scene streaming approach for remote walkthroughs. In a remote walkthrough, a user on a client machine interactively navigates through a scene that resides on a remote server. Our approach allows a user to walk through a remote 3D scene, without ever having to download the entire scene from the server. Our algorithm achieves this by selectively transmitting only small parts of the scene and lower quality representations of objects, based on the user's viewing parameters and the available connection bandwidth. An online optimization algorithm selects which object representations to send, based on the integral of a benefit measure along the predicted path of movement. The rendering quality at the client depends on the available bandwidth, but practical navigation of the scene is possible even when bandwidth is low.
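A greedy stand-in for the online selection step can illustrate the idea (the paper's benefit measure is an integral along the predicted path of movement; here it is abstracted as a precomputed score per candidate, and all names are hypothetical):

```python
def select_representations(candidates, budget_bytes):
    """Greedy online selection of which object representations to transmit.

    `candidates` are (object_id, level, size_bytes, benefit) tuples, where
    `benefit` stands in for the integral of the benefit measure along the
    predicted path. Pick the highest benefit-per-byte candidates until the
    per-frame bandwidth budget is exhausted."""
    chosen, used = [], 0
    for obj, level, size, benefit in sorted(
            candidates, key=lambda c: c[3] / c[2], reverse=True):
        if used + size <= budget_bytes:
            chosen.append((obj, level))
            used += size
    return chosen
```

A low-bandwidth client simply gets a smaller budget, so lower-quality (smaller, lower-benefit-density) representations win out, matching the abstract's claim that navigation remains possible when bandwidth is low.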
