# Computer Graphics Forum

Published by Wiley

Online ISSN: 1467-8659


Publications

Article

This paper presents a novel approach to detecting and preserving fine illumination structure within photon maps. Data derived from each photon's primal trajectory is encoded and used to build a high-dimensional kd-tree. Incorporation of these new parameters allows for precise differentiation between intersecting ray envelopes, thus minimizing detail degradation when combined with photon relaxation. We demonstrate how parameter-aware querying is beneficial in both detecting and removing noise. We also propose a more robust structure descriptor based on principal components analysis that better identifies anisotropic detail at the sub-kernel level. We illustrate the effectiveness of our approach in several example scenes and show significant improvements when rendering complex caustics compared to previous methods.
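The parameter-aware query can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the two extra coordinates standing in for the encoded trajectory data, the distance weights, and the brute-force search (in place of the paper's high-dimensional kd-tree) are all assumptions.

```python
import math

def trajectory_distance(p, q, w_pos=1.0, w_dir=0.5):
    # Hypothetical metric mixing 3-D position with two extra coordinates
    # derived from the photon's primal trajectory (weights are assumptions).
    d_pos = sum((a - b) ** 2 for a, b in zip(p[:3], q[:3]))
    d_dir = sum((a - b) ** 2 for a, b in zip(p[3:], q[3:]))
    return math.sqrt(w_pos * d_pos + w_dir * d_dir)

def query_knn(photons, target, k):
    # Brute-force k-nearest query standing in for a kd-tree search.
    return sorted(photons, key=lambda ph: trajectory_distance(ph, target))[:k]

# Two caustic sheets that intersect spatially but arrive from opposite sides:
photon_a = (0.0, 0.0, 0.0, 1.0, 0.0)
photon_b = (0.0, 0.0, 0.0, -1.0, 0.0)
nearest = query_knn([photon_a, photon_b], (0.01, 0.0, 0.0, 1.0, 0.0), 1)
```

With a spatial-only metric (w_dir = 0) the two photons are indistinguishable; the trajectory-derived coordinates are what separate the intersecting ray envelopes.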

…

Article

The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods.
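The packing idea can be sketched for the 4x4 case. This plain-Python toy is an assumption-laden stand-in for generated shader code: `mad4` models one four-wide multiply-add instruction, so evaluating ȳ = Ax̄ + b̄ column by column packs sixteen scalar multiply-adds into four SIMD instructions.

```python
def mad4(a, s, c):
    # One four-wide multiply-add: a*s + c on a 4-vector, the unit the
    # optimizer packs scalar operations into (like a shader 'mad').
    return [a[i] * s + c[i] for i in range(4)]

def eval_linear(A_cols, x, b):
    # Evaluate y = A*x + b for a 4x4 matrix A stored as four column
    # vectors: four mad instructions instead of sixteen scalar ops.
    y = list(b)
    for j in range(4):
        y = mad4(A_cols[j], x[j], y)  # y += A[:, j] * x[j]
    return y
```

Storing A column-major is what makes each step a single four-wide instruction; the same reordering generalizes to larger expressions.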

…

Conference Paper

Tracing a ray through a scene and finding the closest intersection with the scene geometry is a fundamental operation in computer graphics. During the last two decades, significant efforts have been made to accelerate this operation, with interactive ray tracing as one of the major driving forces. At the heart of a fast method for intersecting a scene with a ray lies the acceleration structure. Many different acceleration structures exist, but research has focused almost exclusively on a few well-tried and well-established techniques: regular and hierarchical grids, bounding volume hierarchies and kd-trees. Spectacular advances have been made, which have contributed significantly to making interactive ray tracing a possibility. However, despite the success of these acceleration structures, several problems remain open. Handling deforming and dynamic geometry still poses significant challenges, and the local vs. global complexity of acceleration structures is still not entirely understood. One therefore wonders whether other acceleration structures, that leave the beaten path of efficient grids, bounding volume hierarchies and kd-trees, can provide viable alternatives.

…

Conference Paper

We present an automatic camera placement method for generating
image-based models from scenes with known geometry. Our method first
approximately determines the set of surfaces visible from a given
viewing area and then selects a small set of appropriate camera
positions to sample the scene from. We define a quality measure for a
surface as seen, or covered, from the given viewing area. Along with
each camera position, we store the set of surfaces which are best
covered by this camera. Next, one reference view is generated from each
selected camera position; parts of the reference view that do not belong
to the selected set of polygons are masked out. The image-based model
generated by our method covers every visible surface only once,
associating it with a camera position from which it is covered with
quality that exceeds a user-specified quality
threshold. The result is a compact non-redundant image-based model with
controlled quality. The problem of covering every visible surface with a
minimum number of cameras (guards) can be regarded as an extension to
the well-known Art Gallery Problem. However, since the 3D polygonal
model is textured, the camera-polygon visibility relation is not binary;
instead, it has a weight: the quality of the polygon's coverage.

…

Conference Paper

We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, on-line 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do this without prior geometric information or requiring any user interaction, in real time and on line. The heart of our method is using programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e. the most consistent color. In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present results.
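The color-consistency test at the heart of the method can be sketched with a 1-D toy. This is a sketch, not the pixel-shader implementation: the identity depth-to-shift mapping and the two-view squared difference are simplifying assumptions.

```python
def plane_sweep_depth(ref_a, ref_b, x, depths, shift_per_depth):
    # For pixel x, test each depth hypothesis: a depth implies a
    # disparity (shift) between the two reference images; the hypothesis
    # whose intensities agree best (minimum squared difference) wins.
    best_d, best_cost = None, float('inf')
    for d in depths:
        xb = x + shift_per_depth(d)
        if 0 <= xb < len(ref_b):
            cost = (ref_a[x] - ref_b[xb]) ** 2
            if cost < best_cost:
                best_cost, best_d = cost, d
    return best_d

# Toy 1-D 'images': the point seen at pixel x in view A appears shifted
# by 2 pixels in view B, so the true hypothesis is depth 2 under the
# identity depth-to-shift mapping assumed here.
ref_a = [1, 5, 9, 13, 17, 21]
ref_b = [99, 98, 1, 5, 9, 13]
depth = plane_sweep_depth(ref_a, ref_b, 1, [0, 1, 2, 3], lambda d: d)
```

On the GPU the same per-pixel minimum over depth hypotheses is what the programmable pixel shader evaluates in parallel for every pixel.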

…

Conference Paper

Recent improvements in laser rangefinder technology, together with
algorithms developed at Stanford for combining multiple range and color
images, allow us to reliably and accurately digitize the external shape
and reflectance of many physical objects. As an application of this
technology, I and a team of 30 faculty, staff, and students from
Stanford University and the University of Washington spent the 1998-99
academic year digitizing the sculptures and architecture of
Michelangelo. During this time, we scanned 10 statues, including the
giant figure of David, and 2 building interiors, including the Medici
Chapel, which was designed by Michelangelo. As a side project, we also
acquired a high-resolution light field of his statue of Night, in the
Medici Chapel. Finally, in another side project, we scanned the 1,163
fragments of the Forma Urbis Romae, the giant marble map of ancient
Rome. In the months ahead we will process the data we have collected to
create 3D digital models of these objects and, in the case of the Forma
Urbis, we will try to assemble the map. The goals of this project are
scholarly and educational. Commercial use of the models is not excluded,
and many such uses can be imagined, but none is currently planned. I
outline the technological underpinnings, logistical challenges, and
possible outcomes of this project.

…

Article

RGBD images with high quality annotations in the form of geometric (i.e.,
segmentation) and structural (i.e., how the segments are mutually related in
3D) information provide valuable priors to a large number of scene and image
manipulation applications. While it is now simple to acquire RGBD images,
annotating them, automatically or manually, remains challenging especially in
cluttered noisy environments. We present SmartAnnotator, an interactive system
to facilitate annotating RGBD images. The system performs the tedious tasks of
grouping pixels, creating potential abstracted cuboids, inferring object
interactions in 3D, and comes up with various hypotheses. The user simply has
to flip through a list of suggestions for segment labels, finalize a selection,
and the system updates the remaining hypotheses. As objects are finalized, the
process speeds up with fewer ambiguities to resolve. Further, as more scenes
are annotated, the system makes better suggestions based on structural and
geometric priors learned from previous annotation sessions. We test our
system on a large number of database scenes and report significant improvements
over naive low-level annotation tools.

…

Article

We present a fast algorithm for global 3D symmetry detection with
approximation guarantees. The algorithm is guaranteed to find the best
approximate symmetry of a given shape, to within a user-specified threshold,
with an overwhelming probability. Our method uses a carefully designed sampling
of the transformation space, where each transformation is efficiently evaluated
using a property testing technique. We prove that the density of the sampling
depends on the total variation of the shape, allowing us to derive formal
bounds on the algorithm's complexity and approximation quality. We further
investigate different volumetric shape representations (in the form of
truncated distance transforms), which allow us to control the total variation
of the shape and hence the sampling density and runtime of the algorithm. A
comprehensive set of experiments assesses the proposed method, including an
evaluation on the eight categories of the COSEG data-set. This is the first
large-scale evaluation of any symmetry detection technique that we are aware
of.
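A heavily simplified sketch of the sampling idea, restricted to rotations of a 2-D point set about the origin (the paper treats general transformations of volumetric shapes, evaluates each candidate with property testing, and derives the required sampling density; all of that is omitted here):

```python
import math

def symmetry_score(points, angle):
    # Evaluate one sampled transformation: rotate the point set and
    # measure the worst nearest-neighbour distance to the original.
    rot = [(x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle)) for x, y in points]
    return max(min(math.dist(p, q) for q in points) for p in rot)

def best_symmetry(points, n_samples=360):
    # Dense, uniform sampling of the (here rotation-only) transformation
    # space; the sample count is an arbitrary choice in this toy.
    angles = [2 * math.pi * k / n_samples for k in range(1, n_samples)]
    return min(angles, key=lambda a: symmetry_score(points, a))

square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
```

For the four-fold symmetric `square`, a quarter turn scores essentially zero while an eighth turn does not, so the sampled search recovers an (approximate) symmetry.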

…

Article

The use of Laplacian eigenbases has been shown to be fruitful in many
computer graphics applications. Today, state-of-the-art approaches to shape
analysis, synthesis, and correspondence rely on these natural harmonic bases
that allow using classical tools from harmonic analysis on manifolds. However,
many applications involving multiple shapes are hindered by the fact that
Laplacian eigenbases computed independently on different shapes are often
incompatible with each other. In this paper, we propose the construction of
common approximate eigenbases for multiple shapes using approximate joint
diagonalization algorithms. We illustrate the benefits of the proposed approach
on tasks from shape editing, pose transfer, correspondence, and similarity.

…

Article

Scientists study trajectory data to understand trends in movement patterns,
such as human mobility for traffic analysis and urban planning. There is a
pressing need for scalable and efficient techniques for analyzing this data and
discovering the underlying patterns. In this paper, we introduce a novel
technique which we call vector-field $k$-means.
The central idea of our approach is to use vector fields to induce a
similarity notion between trajectories. Other clustering algorithms seek a
representative trajectory that best describes each cluster, much like $k$-means
identifies a representative "center" for each cluster. Vector-field $k$-means,
on the other hand, recognizes that in all but the simplest examples, no single
trajectory adequately describes a cluster. Our approach is based on the premise
that movement trends in trajectory data can be modeled as flows within multiple
vector fields, and the vector field itself is what defines each of the
clusters. We also show how vector-field $k$-means connects techniques for
scalar field design on meshes and $k$-means clustering.
We present an algorithm that finds a locally optimal clustering of
trajectories into vector fields, and demonstrate how vector-field $k$-means can
be used to mine patterns from trajectory data. We present experimental evidence
of its effectiveness and efficiency using several datasets, including
historical hurricane data, GPS tracks of people and vehicles, and anonymous
call records from a large phone company. We compare our results to previous
trajectory clustering techniques, and find that our algorithm performs faster
in practice than the current state-of-the-art in trajectory clustering, in some
examples by a large margin.
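The alternation behind vector-field k-means can be illustrated with a deliberately tiny toy in which each cluster's "vector field" is a single constant vector (the paper fits genuine vector fields over meshes; this constant-field reduction is only meant to show the assignment/update loop):

```python
def velocities(traj):
    # Finite-difference velocities along a trajectory of (x, y) samples.
    return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(traj, traj[1:])]

def fit_cost(traj, field):
    # How poorly a (constant) field explains a trajectory's motion.
    fx, fy = field
    return sum((vx - fx) ** 2 + (vy - fy) ** 2 for vx, vy in velocities(traj))

def vf_kmeans(trajs, fields, iters=10):
    for _ in range(iters):
        # Assignment: each trajectory joins the field that best fits it.
        labels = [min(range(len(fields)), key=lambda k: fit_cost(t, fields[k]))
                  for t in trajs]
        # Update: refit each field to its trajectories' velocities.
        for k in range(len(fields)):
            vs = [v for t, l in zip(trajs, labels) if l == k
                  for v in velocities(t)]
            if vs:
                fields[k] = (sum(v[0] for v in vs) / len(vs),
                             sum(v[1] for v in vs) / len(vs))
    return labels, fields
```

Two eastward trajectories and one northward trajectory separate into two clusters, and the fitted "fields" converge to the eastward and northward unit vectors.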

…

Article

This paper presents an approach to a time-dependent variant of the concept of
vector field topology for 2-D vector fields. Vector field topology is defined
for steady vector fields and aims at partitioning the domain of a vector
field into regions of qualitatively different behaviour. The presented approach
represents a generalization of saddle-type critical points and their
separatrices to unsteady vector fields based on generalized streak lines, with
the classical vector field topology as its special case for steady vector
fields. The concept is closely related to that of Lagrangian coherent
structures obtained as ridges in the finite-time Lyapunov exponent field. The
proposed approach is evaluated on both synthetic 2-D time-dependent vector
fields and vector fields from computational fluid dynamics.

…

Article

Calculating and categorizing the similarity of curves is a fundamental
problem which has generated much recent interest. However, to date there are no
implementations of these algorithms for curves on surfaces with provable
guarantees on the quality of the measure. In this paper, we present a
similarity measure for any two cycles that are homologous, where we calculate
the minimum area of any homology (or connected bounding chain) between the two
cycles. The minimum area homology exists for broader classes of cycles than
previous measures which are based on homotopy. It is also much easier to
compute than previously defined measures, yielding an efficient implementation
that is based on linear algebra tools. We demonstrate our algorithm on a range
of inputs, showing examples which highlight the feasibility of this similarity
measure.

…

Article

We present a novel sparse modeling approach to non-rigid shape matching using
only the ability to detect repeatable regions. As the input to our algorithm,
we are given only two sets of regions in two shapes; no descriptors are
provided, so the correspondence between the regions is not known, nor do we know how
many regions correspond in the two shapes. We show that even with such scarce
information, it is possible to establish very accurate correspondence between
the shapes by using methods from the field of sparse modeling; to our knowledge, this is the
first non-trivial use of sparse models in shape correspondence. We formulate
the problem of permuted sparse coding, in which we solve simultaneously for an
unknown permutation ordering the regions on two shapes and for an unknown
correspondence in functional representation. We also propose a robust variant
capable of handling incomplete matches. Numerically, the problem is solved
efficiently by alternating the solution of a linear assignment and a sparse
coding problem. The proposed methods are evaluated qualitatively and
quantitatively on standard benchmarks containing both synthetic and scanned
objects.

…

Article

We address the problem of curvature estimation from sampled compact sets. The main contribution is a stability result: we show that the Gaussian, mean, or anisotropic curvature measures of the offset of a compact set K with positive $\mu$-reach can be estimated by the same curvature measures of the offset of a compact set K' close to K in the Hausdorff sense. We show how these curvature measures can be computed for finite unions of balls. The curvature measures of the offset of a compact set with positive $\mu$-reach can thus be approximated by the curvature measures of the offset of a point-cloud sample. These results can also be interpreted as a framework for an effective and robust notion of curvature.

…

Article

GKS, GKS-3D, and PHIGS are all approved ISO standards for the application programmer interface. How does a system analyst or programmer decide which standard to use for his application? This paper discusses the range of application requirements likely to be encountered, explores the suitability of GKS and PHIGS for satisfying these requirements, and offers guidelines to aid in the decision process.

…

Article

We present a novel methodology that utilizes 4-Dimensional (4D) space
deformation to simulate a magnification lens on versatile volume datasets and
textured solid models. Compared with other magnification methods (e.g.,
geometric optics, mesh editing), 4D differential geometry theory and its
practices are much more flexible and powerful for preserving shape features
(i.e., minimizing angle distortion), and easier to adapt to versatile solid
models. The primary advantage of 4D space lies in the following fact: we can
now easily magnify the volume of regions of interest (ROIs) along the additional
dimension, while keeping the remaining region unchanged. To achieve this primary
goal, we first embed a 3D volumetric input into 4D space and magnify ROIs in
the 4th dimension. Then we flatten the 4D shape back into 3D space to
accommodate other typical applications in the real 3D world. In order to
enforce distortion minimization, in both steps we devise high-dimensional
geometry techniques based on rigorous 4D geometry theory for 3D/4D mapping back
and forth to amend the distortion. Our system preserves not only the focus
region, but also the context region and global shape. We demonstrate the
effectiveness, robustness, and efficacy of our framework with a variety of
models ranging from tetrahedral meshes to volume datasets.

…

Article

Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.

…

Article

This paper extends a recently proposed robust computational framework for
constructing the boundary representation (brep) of the volume swept by a given
smooth solid moving along a one parameter family $h$ of rigid motions. Our
extension allows the input solid to have sharp features, i.e., to be of class
G0, wherein the unit outward normal to the solid may be discontinuous. In the
earlier framework, the solid to be swept was restricted to be G1, and thus this
is a significant and useful extension of that work. This naturally requires a
precise description of the geometry of the surface generated by the sweep of a
sharp edge supported by two intersecting smooth faces. We uncover the geometry
along with the related issues like parametrization, self-intersection and
singularities via a novel mathematical analysis. Correct trimming of such a
surface is achieved by a delicate analysis of the interplay between the cone of
normals at a sharp point and its trajectory under $h$. The overall topology is
explicated by a key lifting theorem which allows us to compute the adjacency
relations amongst entities in the swept volume by relating them to
corresponding adjacencies in the input solid. Moreover, global issues related
to body-check such as orientation are efficiently resolved. Many examples from
a pilot implementation illustrate the efficiency and effectiveness of our
framework.

…

Article

The Conference was held on the three days 24th to 26th March at Glasgow University, the first day devoted to two parallel tutorials, the second day opening the conference proper and including meetings of special interest groups and the final day consisting of submitted and invited papers.

…

Article

The 2010 Eurographics Workshop on 3D Object Retrieval was held in Norrköping, Sweden on May 2, 2010. The workshop co-chairs were Mohamed Daoudi and Tobias Schreck, while the program co-chairs were Michela Spagnuolo, Ioannis Pratikakis, Remco Veltkamp and Theoharis Theoharis. A program of sessions on 3D shape descriptors, part-based representation for retrieval, 3D face recognition, and learning and benchmarking was formed. Raif M. Rustamov presented a volume-based shape descriptor that is robust with respect to changes in pose and topology. Shape distributions were aggregated throughout the entire volume contained within the shape in this approach, capturing information conveyed by the volumes of shapes. Other presenters described an interest point detector for 3D objects based on Harris corner detection, which had been used with good results in computer vision applications.

…

Article

Analytical approaches, based on digitised 2D texture models, for an automatic solid (3D) texture synthesis have been recently introduced to Computer Graphics. However, these approaches cannot provide satisfactory solutions in the usual case of natural anisotropic textures (wood grain for example). Indeed, solid texture synthesis requires particular care, and sometimes external knowledge to "guess" the internal structure of solid textures because only 2D texture models are used for analysis. By making some basic assumptions about the internal structure of solid textures, we propose a very efficient method based on a hybrid analysis (spectral and histogram) for an automatic synthesis of solid textures. This new method allows us to obtain high precision solid textures (closely resembling initial models) in a large number of cases, including the difficult case of anisotropic textures.

…

Article

We introduce a scheme of control polygons to design topological skeletons for vector fields of arbitrary topology. Based on this we construct piecewise linear vector fields of exactly the topology specified by the control polygons. This way a controlled construction of vector fields of any topology is possible. Finally we apply this method for topology-preserving compression of vector fields consisting of a simple topology.
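The piecewise linear construction can be sketched for a single triangle: vectors prescribed at the vertices (playing the role of the control data) are blended barycentrically, so the field is linear inside the triangle and its zeros, and hence its topology, can be placed by choosing the vertex vectors. A minimal sketch with hypothetical names:

```python
def barycentric(p, a, b, c):
    # Barycentric coordinates of point p in triangle (a, b, c).
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def pl_vector(p, tri, vecs):
    # Piecewise linear field: the vector at p is the barycentric blend
    # of the vectors prescribed at the triangle's vertices.
    l1, l2, l3 = barycentric(p, *tri)
    return tuple(l1 * u + l2 * v + l3 * w for u, v, w in zip(*vecs))
```

Because the blend is linear, each triangle contributes at most one isolated zero, which is what makes the topology of the assembled field controllable.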

…

Article

The earliest Web browsers focussed on the display of textual information. When graphics were added, essentially only image graphics and image file formats were supported. For a significant range of applications, image graphics has severe limitations, for example in terms of file size, download time and inability to interact with and modify the graphics client-side. Vector graphics may be more appropriate in these cases, and this has become possible through the introduction of the WebCGM and Scalable Vector Graphics (SVG) formats, both of which are open standards, the former from ISO/IEC and W3C and the latter from W3C. This paper reviews the background to Web graphics, presents the WebCGM file format, and gives a more detailed exposition of the most recent format, SVG. The paper concludes with reflections on the current state of this area and future prospects.

…

Article

We present a complete system for designing and manipulating regular or near-regular textures in 2D images. We place emphasis on supporting creative workflows that produce artwork from scratch. As such, our system provides tools to create, arrange, and manipulate textures in images with intuitive controls, and without requiring 3D modeling. Additionally, we ensure continued, non-destructive editability by expressing textures via a fully parametric descriptor. We demonstrate the suitability of our approach with numerous example images, created by an artist using our system, and we compare our proposed workflow with alternative 2D and 3D methods.

…

Article

Interactive 2-D systems have benefited greatly from the improvements in IC technology. Today, the trend is to relieve the host computer from low-level tasks by increasing the graphic system's computational power. The introduction of video RAMs has solved the problem of contention for memory cycles between the display generator and the video refresh controller. The improvements in graphic controllers have led from the first fixed-instruction controllers to today's third generation of programmable graphic processors, able to support computer graphic interface standards. This article will present this evolution, and focus on a 2-D graphic processor designed at the Imagery, Instrumentation and Systems Laboratory, based on the separation of graphic generation and memory management functions.

…

Article

A mechanism is presented for direct manipulation of 3D objects with a conventional 2D input device, such as a mouse. The user can define and modify a model by graphical interaction on a 3D perspective or parallel projection. A gestural interface technique enables the specification of 3D transformations (translation, rotation and scaling) by 2D pick and drag operations. Interaction is not restricted to single objects but can be applied to compound objects as well. The method described in this paper is an easy-to-understand 3D input technique which does not require any special hardware and is compatible with the designer's mental model of object manipulation.

…

Article

Topological methods produce simple and meaningful depictions of symmetric, second order two-dimensional tensor fields. Extending previous work dealing with vector fields, we propose here a scheme for the visualization of time-dependent tensor fields. Basic notions of unsteady tensor topology are discussed. Topological changes - known as bifurcations - are precisely detected and identified by our method which permits an accurate tracking of degenerate points and related structures.

…

Article

The topological structure of scalar, vector, and second-order tensor fields provides an important mathematical basis for data analysis and visualization. In this paper, we extend this framework towards higher-order tensors. First, we establish formal uniqueness properties for a geometrically constrained tensor decomposition. This allows us to define and visualize topological structures in symmetric tensor fields of orders three and four. We clarify that in 2D, degeneracies occur at isolated points, regardless of tensor order. However, for orders higher than two, they are no longer equivalent to isotropic tensors, and their fractional Poincaré index prevents us from deriving continuous vector fields from the tensor decomposition. Instead, sorting the terms by magnitude leads to a new type of feature, lines along which the resulting vector fields are discontinuous. We propose algorithms to extract these features and present results on higher-order derivatives and higher-order structure tensors.

…

Article

Data sets coming from simulations or sampling of real-world phenomena often contain noise that hinders their processing and analysis. Automatic filtering and denoising can be challenging: when the nature of the noise is unknown, it is difficult to distinguish between noise and actual data features; in addition, the filtering process itself may introduce “artificial” features into the data set that were not originally present. In this paper, we propose a smoothing method for 2D scalar fields that gives the user explicit control over the data features. We define features as critical points of the given scalar function, and the topological structure they induce (i.e., the Morse-Smale complex). Feature significance is rated according to topological persistence. Our method allows filtering out spurious features that arise due to noise by means of topological simplification, providing the user with a simple interface that defines the significance threshold, coupled with immediate visual feedback of the remaining data features. In contrast to previous work, our smoothing method guarantees a C1-continuous output scalar field with the exact specified features and topological structures.
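Persistence-based rating of features can be sketched in 1-D, where the Morse-Smale machinery reduces to pairing local minima with the values at which their sublevel-set components merge (a toy analogue of the paper's 2-D setting; `significant` is a hypothetical helper for the thresholding step):

```python
def persistence_1d(values):
    # 0-dimensional persistence of the sublevel sets of a 1-D scalar
    # field: each local minimum is born as a component; when two
    # components merge, the one with the higher (younger) minimum dies.
    n = len(values)
    comp = [None] * n                      # union-find over processed samples

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]
            i = comp[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda i: values[i]):
        comp[i] = i
        for j in (i - 1, i + 1):
            if 0 <= j < n and comp[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = (ri, rj) if values[ri] > values[rj] else (rj, ri)
                if values[young] < values[i]:    # skip zero-persistence pairs
                    pairs.append((values[young], values[i]))
                comp[young] = old
    return pairs  # (birth, death) per feature; the global minimum survives

def significant(pairs, threshold):
    # Rate features by persistence, keeping only those above threshold.
    return [p for p in pairs if p[1] - p[0] >= threshold]
```

For [0, 3, 1, 4, 2, 5] the minima at values 1 and 2 each persist for 2 units while the global minimum never dies; a threshold above 2 would filter both as noise, which is the decision the paper's interface exposes to the user.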

…

Article

Computational Morphology is the analysis of form by computational means. This discipline typically uses techniques from Computational Geometry and Computer Aided Geometric Design. The present paper is more specifically about the construction and manipulation of closed object boundaries through a set of scattered points in 3D. Original results are developed in three stages of computational morphology:
impose a geometrical structure on the set of points;
construct a polyhedral boundary surface from this geometrical structure;
build a hierarchy of polyhedral approximations together with localization information.
The economic advantage of this approach is that there is no dependency on any specific data source. It can be used for various types of data sources or when the source is unknown.

…

Article

An important research area in non-photorealistic rendering is obtaining silhouettes. There are many methods to do this using 3D models and raster structures, but these are limited in their ability to create stylised silhouettes while maintaining complete flexibility. These limitations do not exist in illustration, as each element is planar and the interaction between them can be eliminated by locating each one in a different layer. This is the approach presented in this paper: a 3D model is flattened into planar elements ordered in space, which allows the silhouettes to be drawn with total flexibility.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Line and Curve Generation

…

Article

This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom-up approach, a saliency map is defined for the image and gives an attention weight to every pixel of the image as a function of its colour, edge or intensity.
Our algorithm uses an uncertainty window, defined by the gaze tracker accuracy, and located around the gaze point given by the tracker. Then, using a visual attention model, it searches for the most salient points, or objects, located inside this uncertainty window, and determines a novel, and hopefully, better gaze point. This combination of a gaze tracker together with a visual attention model is considered as the main contribution of the paper.
We demonstrate the promising results of our method by presenting two experiments conducted in two different contexts: (1) a free exploration of a visually rich 3D virtual environment without a specific task, and (2) a video game based on gaze tracking involving a selection task.
Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications such as video games or virtual reality applications. The use of a visual attention model can be adapted to any gaze tracker and the visual attention model can also be adapted to the application in which it is used.
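The window search can be sketched directly. This is a toy: the square window and the precomputed saliency grid are assumptions, and a real system would derive the window from the tracker's measured accuracy.

```python
def refine_gaze(saliency, gaze, radius):
    # Search the uncertainty window (here a square of half-width
    # `radius`) around the tracker's gaze point and snap the gaze to
    # the most salient pixel inside it.
    gx, gy = gaze
    h, w = len(saliency), len(saliency[0])
    best, best_s = gaze, float('-inf')
    for y in range(max(0, gy - radius), min(h, gy + radius + 1)):
        for x in range(max(0, gx - radius), min(w, gx + radius + 1)):
            if saliency[y][x] > best_s:
                best_s, best = saliency[y][x], (x, y)
    return best
```

Any bottom-up saliency model can supply the grid; the refinement itself is independent of the tracker.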

…

Article

Feature detection in geometric datasets is a fundamental tool for solving shape matching problems such as partial symmetry detection. Traditional techniques usually employ a priori models such as crease lines that are unspecific to the actual application. Our paper examines the idea of learning geometric features. We introduce a formal model for a class of linear feature constellations based on a Markov chain model and propose a novel, efficient algorithm for detecting a large number of features simultaneously. After a short user-guided training stage, in which one or a few example lines are sketched directly onto the input data, our algorithm automatically finds all pieces of geometry similar to the marked areas. In particular, the algorithm is able to recognize larger classes of semantically similar but geometrically varying features, which is very difficult using unsupervised techniques. In a number of experiments, we apply our technique to point cloud data from 3D scanners. The algorithm is able to detect features with very low rates of false positives and negatives and to recognize broader classes of similar geometry (such as “windows” in a building scan) even from few training examples, thereby significantly improving over previous unsupervised techniques.

…

Article

Gosip is an implementation of a GKS-3D level 2c interface to PHIGS. It allows GKS applications to run on PHIGS platforms, offering performance and portability across a wide range of high-performance 3D workstations. Compatibility of the standards is reviewed. A selection of design solutions is given for the problems of error processing, non-retained primitives and attribute management. The concepts of Workstation Display Session and attribute state are introduced. Some comments are made on implementation dependencies, performance and portability.

…

Article

How to render very complex datasets while maintaining interactive response times is a hot topic in computer graphics. The MagicSphere idea originated as a solution to this problem, but its potential goes much further than this original scope. In fact, it has been designed as a very general 3D widget: it defines a spherical volume of interest in the dataset modeling space. Several filters can then be associated with the MagicSphere, applying different visualization modalities to the data contained in the volume of interest. The visualization of multi-resolution datasets is selected here as a case study, and an ad hoc filter, the MultiRes filter, has been designed for it. Some results of a prototype implementation are presented and discussed.
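A minimal sketch of the volume-of-interest idea, assuming a point dataset and caller-supplied filter callbacks (all names are hypothetical; the actual MagicSphere filters operate on full visualization modalities, not per-point callbacks):

```python
def magic_sphere(points, center, radius, inside_filter, outside_filter):
    """Apply one visualization filter to data inside a spherical volume
    of interest and another outside it.
    points: iterable of (x, y, z); filters: callables taking a point."""
    r2 = radius ** 2
    return [
        (inside_filter if sum((p[i] - center[i]) ** 2 for i in range(3)) <= r2
         else outside_filter)(p)
        for p in points
    ]
```

For the multi-resolution case study, `inside_filter` would select a high-resolution representation and `outside_filter` a coarse one.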

…

Article

Visualization of high-dimensional data requires a mapping to a visual space. Whenever the goal is to preserve similarity relations, a frequent strategy is to use 2D projections, which afford intuitive interactive exploration, e.g., by users locating and selecting groups and gradually drilling down to individual objects. In this paper, we propose a framework for projecting high-dimensional data to 3D visual spaces, based on a generalization of the Least-Square Projection (LSP). We compare projections to 2D and 3D visual spaces both quantitatively and through a user study considering certain exploration tasks. The quantitative analysis confirms that 3D projections outperform 2D projections in terms of precision. The user study indicates that certain tasks can be more reliably and confidently answered with 3D projections. Nonetheless, as 3D projections are displayed on 2D screens, interaction is more difficult. Therefore, we incorporate suitable interaction functionalities into a framework that supports 3D transformations, predefined optimal 2D views, coordinated 2D and 3D views, and hierarchical 3D cluster definition and exploration. For visually encoding data clusters in a 3D setup, we employ color coding of projected data points as well as four types of surface renderings. A second user study evaluates the suitability of these visual encodings. Several examples illustrate the framework's applicability both for visual exploration of multidimensional abstract (non-spatial) data and for the feature space of multi-variate spatial data.

…

Article

The availability of powerful and affordable 3D PC graphics boards has made the rendering of rich immersive environments possible at interactive speeds. The scene update rate and the appropriate behaviour of objects within the world are central to this immersive feeling. This paper is concerned with the behaviour computations involved in the flocking algorithm, which has been used extensively to emulate the flocking behaviour of creatures found in nature. The main contribution of this paper is a new method for hierarchically combining portions of the flocks into groups to reduce the cost of the behavioural computation, allowing far larger flocks to be updated in real-time in the world.

ACM CSS: I.3.7 Three-Dimensional Graphics and Realism—Animation
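The grouping idea can be illustrated with a much-simplified sketch: boids are hashed into spatial cells, and the cohesion behaviour is evaluated per group against the group centroid instead of per boid pair. This is a stand-in for the paper's hierarchical method, not its actual algorithm; the uniform-grid grouping and function names are assumptions:

```python
from collections import defaultdict

def group_flock(positions, cell):
    """Hash 2D boid positions into spatial cells so behaviour can be
    evaluated per group rather than per pair, avoiding the O(n^2)
    all-pairs neighbour cost."""
    groups = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        groups[(int(x // cell), int(y // cell))].append(i)
    return groups

def cohesion_steering(positions, groups):
    """Steer each boid toward its group centroid (cohesion only)."""
    steer = {}
    for members in groups.values():
        cx = sum(positions[i][0] for i in members) / len(members)
        cy = sum(positions[i][1] for i in members) / len(members)
        for i in members:
            steer[i] = (cx - positions[i][0], cy - positions[i][1])
    return steer
```

Separation and alignment terms would be grouped the same way; the paper's hierarchy additionally merges groups across levels.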

…

Article

As the efficiency of computer graphics rendering methods increases, generating realistic models is becoming a limiting factor. In this paper we present a new technique to enhance existing geometry models of real-world objects with textures reconstructed from a sparse set of unregistered still photographs. The aim of the proposed technique is the generation of nearly photo-realistic models of arbitrarily shaped objects with minimal effort. Our approach requires neither a prior calibration of the camera nor high precision in the user's interaction. Two main problems have to be addressed. The first is the recovery of the unknown positions and parameters of the camera: an initial estimate of the orientation is calculated from interactively selected point correspondences, and the unknown parameters are then accurately calculated by minimising a blend of objective functions in a 3D-2D projective registration approach. The key point of the proposed registration method is a novel filtering approach that utilises the spatial information provided by the geometry model. The second problem is combining the individual images into a set of consistent texture maps. We present a robust method to recover the texture from the photographs that preserves high spatial frequencies and eliminates artifacts, particularly specular highlights. Parts of the object not seen in any of the photographs are interpolated in the textured model. Results are shown for three complex example objects with different materials and numerous self-occlusions.

…

Article

using straight-ahead actions as well as pose-to-pose techniques. Our approach seeks to bring the expressiveness of real-time motion capture systems into a general-purpose multi-track system running on a graphics workstation. We emphasize the use of high-bandwidth interaction with 3D objects together with specific data reduction techniques for the automatic construction of editable representations of interactively sketched continuous parameter evolution. In this paper, we concentrate on applying data reduction techniques in an animation context. The requirements that must be fulfilled by the data reduction algorithm are analyzed. From the Lyche and Mørken knot removal strategy, we derive an incremental algorithm that computes a B-spline approximation to the original curve by considering only a small piece of the total curve at any time. This algorithm allows the user's captured motion to be processed in parallel with its specification, and guarantees constant latency and memory needs for input motions composed of any number of samples. After showing the results obtained by applying our incremental algorithm to 3D animation paths, we describe an integrated environment to visually construct 3D animations, where all interaction is done directly in three dimensions. By recording the effects of the user's manipulations and taking into account the temporal aspect of the interaction, straight-ahead animations can be defined. Our algorithm is automatically applied to continuous parameter evolution in order to obtain editable representations. The paper concludes with a presentation of future work.
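The streaming character of the reduction can be illustrated with a far simpler stand-in: a greedy filter that keeps a sample only when dropping it would perturb a linear interpolant beyond a tolerance. This is not the Lyche-Mørken knot-removal algorithm (which yields a B-spline approximation), but it shares the incremental, constant-memory behaviour described above; it assumes samples with strictly increasing time stamps:

```python
def reduce_samples(samples, tol):
    """Greedy streaming data reduction on (time, value) samples.
    A sample is kept only when interpolating it linearly between the
    last kept sample and the next incoming one errs by more than tol."""
    if len(samples) < 3:
        return list(samples)
    kept = [samples[0]]
    anchor = samples[0]  # last kept sample
    for prev, cur in zip(samples[1:], samples[2:]):
        # error of reconstructing prev from the segment anchor -> cur
        t = (prev[0] - anchor[0]) / (cur[0] - anchor[0])
        interp = anchor[1] + t * (cur[1] - anchor[1])
        if abs(interp - prev[1]) > tol:
            kept.append(prev)
            anchor = prev
    kept.append(samples[-1])
    return kept
```

Each incoming sample is examined once against a bounded window, so latency and memory stay constant regardless of the motion's length, which is the property the abstract emphasizes.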

…

Article

The paper deals with the parallelization of Delaunay triangulation algorithms, giving more emphasis to practical issues and implementation than to theoretical complexity. Two parallel implementations are presented. The first one is built on DeWall, an E^d triangulator based on an original interpretation of the divide & conquer paradigm. The second is based on an incremental construction algorithm. The parallelization strategies are presented and evaluated. The target parallel machine is a distributed computing environment composed of coarse-grain processing nodes. Results of the first implementations are reported and compared with the performance of the serial versions running on a Unix workstation.

…

Article

An algorithm for the automatic reconstruction of 3D objects from their orthographic projections is presented in this paper. It improves upon and complements the Wesley-Markowsky algorithm, a typical hierarchical reconstruction algorithm limited to polyhedral objects, and adopts the pattern-recognition idea expressed in the Aldefeld algorithm. It is shown, in theory by analysis and in practice by implementation, that the proposed algorithm rejects pathological cases and finds all solutions consistent with a given set of orthographic views. Compared with existing algorithms in the literature, this algorithm covers more complex cases of objects incorporating cylinders.

…

Article

In this paper we present a novel method to approximate the force field of a discrete 3D object with a time complexity that is linear in the number of voxels. We define a rule, similar to the distance transform, that propagates forces associated with boundary points into the interior of the object. The result of this propagation depends on the order in which the points of the object are processed, so we analyze how to obtain an order-invariant approximation formula. With the resulting formula it becomes possible to approximate the force field and to use its features for a fast, topology-preserving skeletonization. We use a thinning strategy on the body-centered cubic lattice to compute the skeleton and ensure that critical points of the force field are not removed. This leads to improved skeletons with respect to centeredness and rotational invariance.
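A rough 2D sketch of distance-transform-style force propagation (the paper works in 3D on the body-centered cubic lattice with an order-invariant formula; this simplified two-raster-sweep version is order-dependent, which is exactly the issue the paper addresses):

```python
def propagate(grid):
    """Propagate boundary-point positions into the interior of a 2D
    object (grid[y][x] == 1) by two raster sweeps, then derive a force
    vector at each object cell pointing away from its nearest boundary."""
    h, w = len(grid), len(grid[0])
    nearest = [[None] * w for _ in range(h)]

    def d2(p, y, x):  # squared distance to stored boundary point
        return float("inf") if p is None else (p[0]-y)**2 + (p[1]-x)**2

    # seed: object cells touching background are their own boundary point
    for y in range(h):
        for x in range(w):
            if grid[y][x] and any(0 <= y+a < h and 0 <= x+b < w
                                  and not grid[y+a][x+b]
                                  for a, b in ((1,0),(-1,0),(0,1),(0,-1))):
                nearest[y][x] = (y, x)

    # forward then backward sweep, adopting a better neighbour estimate
    for ys, xs in ((range(h), range(w)),
                   (range(h-1, -1, -1), range(w-1, -1, -1))):
        for y in ys:
            for x in xs:
                if not grid[y][x]:
                    continue
                for a, b in ((1,0),(-1,0),(0,1),(0,-1)):
                    ny, nx = y + a, x + b
                    if 0 <= ny < h and 0 <= nx < w and \
                            d2(nearest[ny][nx], y, x) < d2(nearest[y][x], y, x):
                        nearest[y][x] = nearest[ny][nx]

    # force = vector from nearest boundary point to the cell
    return [[None if nearest[y][x] is None
             else (y - nearest[y][x][0], x - nearest[y][x][1])
             for x in range(w)] for y in range(h)]
```

Note that ties between equally distant boundary points are resolved by processing order, illustrating why the paper needs an order-invariant formulation.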

…

Article

In this paper, a new method for deformable 3D shape registration is proposed. The algorithm computes shape transitions based on local similarity transforms, which makes it possible to model not only as-rigid-as-possible deformations but also local and global scale. We formulate an ordinary differential equation (ODE) that describes the transition of a source shape towards a target shape, assuming that both shapes are roughly pre-aligned (e.g., frames of a motion sequence). The ODE consists of two terms. The first causes the deformation by pulling the source shape points towards corresponding points on the target shape; initial correspondences are estimated by closest-point search and then refined by an efficient smoothing scheme. The second term regularizes the deformation by drawing the points towards locally defined rest positions. These are given by the optimal similarity transform that matches the initial (undeformed) neighborhood of a source point to its current (deformed) neighborhood. The proposed ODE allows for very efficient explicit numerical integration, which avoids the repeated solution of large linear systems usually required when the registration problem is solved within general-purpose non-linear optimization frameworks. We experimentally validate the proposed method on a variety of real data and compare it with several state-of-the-art approaches.
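A minimal sketch of one explicit integration step of such a two-term ODE, in 2D with correspondences and rest positions taken as given (the weights `alpha`, `beta` and step size `dt` are hypothetical; in the paper the rest positions come from per-point optimal similarity transforms):

```python
def euler_step(points, targets, rest, alpha=0.5, beta=0.5, dt=0.1):
    """One explicit Euler step of a two-term deformation ODE:
    a data term pulling each point toward its corresponding target
    and a regularization term pulling it toward its rest position."""
    out = []
    for p, t, r in zip(points, targets, rest):
        vx = alpha * (t[0] - p[0]) + beta * (r[0] - p[0])
        vy = alpha * (t[1] - p[1]) + beta * (r[1] - p[1])
        out.append((p[0] + dt * vx, p[1] + dt * vy))
    return out
```

Because each step is a cheap per-point update, no large linear system is solved, mirroring the efficiency argument made in the abstract.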

…

Top-cited authors