Computer Graphics Forum

Published by Wiley

Online ISSN: 1467-8659

·

Print ISSN: 0167-7055

Articles


Figure 1: The discontinuity problem. (a) A sample PDF (inverted for clarity) of flux density, B. (b) Plotting B along a slice through the PDF reveals its discontinuous structure. (c) An analytical differential is analogous to a series of delta functions at the discontinuities. Here, the magnitude of the gradient is infinite, indicating the need to maximally constrain photons at these points. (d) The estimate of B reconstructed from the photon distribution is biased, yielding finite values of ∂B̂ proportional to the change in flux density. These estimates do not reliably indicate discontinuities. (e–f) Mapping |∇B̂| to photon constraints and relaxing the distribution results in the interior structure of the PDF becoming severely degraded.
Figure 2: Cognac Glass. (a) An unmodified photon map rendered with 15 photons in the radiance estimate exhibits high levels of noise. (b) Photon relaxation as proposed by Spencer and Jones virtually eliminates noise; however, overlapping filaments of the caustic are also degraded. (c) Our approach separates photons in a parametrised domain, correctly inhibiting migration even across interlaced boundaries.
Figure 3: Our method seeks to address the discontinuity problem by storing additional information at each photon – in this case, whether it belongs to set A or B. (c) This effectively separates gradient estimation into two different domains, one for each set. (d) When applied to the photon map, we can see that the boundary of each set is now properly constrained. (e) Parameter-aware relaxation rapidly removes noise but generates a sub-optimal distribution where the sets overlap. (f) Collapsing the extra dimensions and relaxing again results in the correct blue noise signature.
Figure 4: Light passes from a vacuum into a transparent dielectric before being diffusely reflected. (b) Photons are focussed toward the centre of the ground plane producing a prominent discontinuity in flux density at the point marked in red. Querying the gradient of the photon map at this point results in photons outside of the envelope also being included, weakening the boundary constraint. (c) Parameterising the incident ray origin means we can restrict which photons are included in the gradient estimate to those that are local to the queried photon in parameter space, thus preventing non-local photons from interfering with the structure extrapolation. (d) Furthermore, photons lying near but not on features are not affected. (e) Collimated emitters use the same parameterisation but over the 2-dimensional space of photon origins, [θ,ϕ].


Photon Parameterisation for Robust Relaxation Constraints

May 2013

·

353 Reads

B Spencer

·

This paper presents a novel approach to detecting and preserving fine illumination structure within photon maps. Data derived from each photon's primal trajectory is encoded and used to build a high-dimensional kd-tree. Incorporation of these new parameters allows for precise differentiation between intersecting ray envelopes, thus minimizing detail degradation when combined with photon relaxation. We demonstrate how parameter-aware querying is beneficial in both detecting and removing noise. We also propose a more robust structure descriptor based on principal components analysis that better identifies anisotropic detail at the sub-kernel level. We illustrate the effectiveness of our approach in several example scenes and show significant improvements when rendering complex caustics compared to previous methods.
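The PCA-based structure descriptor mentioned above can be illustrated in miniature. The sketch below is not the authors' implementation; the `anisotropy` function and the filament data are hypothetical. It estimates how line-like a 2-D photon neighbourhood is from the eigenvalues of its covariance matrix:

```python
import math

def anisotropy(points):
    """Estimate local anisotropy of a 2-D photon neighbourhood via PCA.
    Returns a value in [0, 1]: 0 = isotropic disc, 1 = perfectly linear feature."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # 2x2 covariance matrix of the centred points
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # closed-form eigenvalues of [[cxx, cxy], [cxy, cyy]]
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc   # l1 >= l2 >= 0
    return 0.0 if l1 == 0 else 1.0 - l2 / l1

# photons along a thin filament -> strongly anisotropic
filament = [(t, 0.01 * t) for t in range(10)]
print(anisotropy(filament))  # close to 1
```

In this spirit, a high anisotropy score at the sub-kernel level would flag a caustic filament that relaxation should not blur.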

Accelerating Ray Tracing using Constrained Tetrahedralizations

September 2008

·

104 Reads

Tracing a ray through a scene and finding the closest intersection with the scene geometry is a fundamental operation in computer graphics. During the last two decades, significant efforts have been made to accelerate this operation, with interactive ray tracing as one of the major driving forces. At the heart of a fast method for intersecting a scene with a ray lies the acceleration structure. Many different acceleration structures exist, but research has focused almost exclusively on a few well-tried and well-established techniques: regular and hierarchical grids, bounding volume hierarchies and kd-trees. Spectacular advances have been made, which have contributed significantly to making interactive ray tracing a possibility. However, despite the success of these acceleration structures, several problems remain open. Handling deforming and dynamic geometry still poses significant challenges, and the local vs. global complexity of acceleration structures is still not entirely understood. One therefore wonders whether other acceleration structures, that leave the beaten path of efficient grids, bounding volume hierarchies and kd-trees, can provide viable alternatives.
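As an illustration of why mesh-based structures such as tetrahedralizations are attractive, here is a minimal, heavily abstracted sketch of the cell-to-cell traversal idea. The mesh, `exit_neighbour`, and `is_hit` are toy stand-ins, not the paper's data structures; the point is that each step costs constant time, with no tree descent:

```python
def walk(start_cell, exit_neighbour, is_hit, max_steps=1000):
    """Walk cell-to-cell until geometry is hit or the ray leaves the mesh."""
    cell = start_cell
    for _ in range(max_steps):
        if is_hit(cell):
            return cell          # first cell whose embedded geometry is hit
        cell = exit_neighbour(cell)
        if cell is None:         # exited the tetrahedralization
            return None
    return None

# toy 1-D "mesh": cells 0..9, ray steps forward, a triangle sits in cell 6
hit = walk(0, lambda c: c + 1 if c < 9 else None, lambda c: c == 6)
print(hit)  # 6
```

In a real constrained tetrahedralization, `exit_neighbour` would compute the face through which the ray leaves the current tetrahedron and return the adjacent cell sharing that face.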


SmartAnnotator: An Interactive Tool for Annotating RGBD Indoor Images

March 2014

·

180 Reads

RGBD images with high-quality annotations in the form of geometric (i.e., segmentation) and structural (i.e., how the segments are mutually related in 3D) information provide valuable priors for a large number of scene and image manipulation applications. While it is now simple to acquire RGBD images, annotating them, automatically or manually, remains challenging, especially in cluttered, noisy environments. We present SmartAnnotator, an interactive system to facilitate annotating RGBD images. The system performs the tedious tasks of grouping pixels, creating potential abstracted cuboids, and inferring object interactions in 3D, and comes up with various hypotheses. The user simply has to flip through a list of suggestions for segment labels, finalize a selection, and the system updates the remaining hypotheses. As objects are finalized, the process speeds up with fewer ambiguities to resolve. Further, as more scenes are annotated, the system makes better suggestions based on structural and geometric priors learned from previous annotation sessions. We test our system on a large number of database scenes and report significant improvements over naive low-level annotation tools.

Probably Approximately Symmetric: Fast Rigid Symmetry Detection With Global Guarantees

March 2014

·

65 Reads

We present a fast algorithm for global 3D symmetry detection with approximation guarantees. The algorithm is guaranteed to find the best approximate symmetry of a given shape, to within a user-specified threshold, with an overwhelming probability. Our method uses a carefully designed sampling of the transformation space, where each transformation is efficiently evaluated using a property testing technique. We prove that the density of the sampling depends on the total variation of the shape, allowing us to derive formal bounds on the algorithm's complexity and approximation quality. We further investigate different volumetric shape representations (in the form of truncated distance transforms), and in such a way control the total variation of the shape and hence the sampling density and the runtime of the algorithm. A comprehensive set of experiments assesses the proposed method, including an evaluation on the eight categories of the COSEG data-set. This is the first large-scale evaluation of any symmetry detection technique that we are aware of.
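The sampling-plus-property-testing idea can be sketched in a toy 2-D point-set setting. The function below is a hypothetical, exact-match-only stand-in for the paper's volumetric evaluation: it scores a candidate rotation by the fraction of sampled points whose rotated image lies back in the set:

```python
import math, random

def symmetry_score(points, angle, samples=64, tol=1e-6):
    """Property-testing style check: fraction of sampled points whose
    rotated image lies back in the point set (exact match, toy setting)."""
    pts = set(points)
    picks = random.sample(sorted(pts), min(samples, len(pts)))
    c, s = math.cos(angle), math.sin(angle)
    ok = 0
    for x, y in picks:
        rx, ry = c * x - s * y, s * x + c * y
        if any(abs(rx - px) < tol and abs(ry - py) < tol for px, py in pts):
            ok += 1
    return ok / len(picks)

# square corners: exactly symmetric under a 90-degree rotation
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
best = max((symmetry_score(square, a), a) for a in
           [math.pi / 6, math.pi / 4, math.pi / 2])
print(best[1])  # pi/2 wins with score 1.0
```

The paper's guarantees come from choosing the sampling density of the transformation space as a function of the shape's total variation, which this toy version does not attempt.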

Computing Minimum Area Homologies

October 2014

·

106 Reads

Calculating and categorizing the similarity of curves is a fundamental problem which has generated much recent interest. However, to date there are no implementations of these algorithms for curves on surfaces with provable guarantees on the quality of the measure. In this paper, we present a similarity measure for any two cycles that are homologous, where we calculate the minimum area of any homology (or connected bounding chain) between the two cycles. The minimum area homology exists for broader classes of cycles than previous measures which are based on homotopy. It is also much easier to compute than previously defined measures, yielding an efficient implementation that is based on linear algebra tools. We demonstrate our algorithm on a range of inputs, showing examples which highlight the feasibility of this similarity measure.

Sparse Modeling of Intrinsic Correspondences

September 2012

·

58 Reads

We present a novel sparse modeling approach to non-rigid shape matching using only the ability to detect repeatable regions. As the input to our algorithm, we are given only two sets of regions in two shapes; no descriptors are provided, so the correspondence between the regions is not known, nor do we know how many regions correspond in the two shapes. We show that even with such scarce information, it is possible to establish very accurate correspondence between the shapes by using methods from the field of sparse modeling, this being the first non-trivial use of sparse models in shape correspondence. We formulate the problem of permuted sparse coding, in which we solve simultaneously for an unknown permutation ordering the regions on the two shapes and for an unknown correspondence in functional representation. We also propose a robust variant capable of handling incomplete matches. Numerically, the problem is solved efficiently by alternating the solution of a linear assignment and a sparse coding problem. The proposed methods are evaluated qualitatively and quantitatively on standard benchmarks containing both synthetic and scanned objects.
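One half of the alternation, the linear assignment step, can be illustrated on toy scalar region features. The brute-force solver below is a hypothetical stand-in for the efficient assignment solver that would alternate with the sparse coding step; it is only viable at toy sizes:

```python
from itertools import permutations

def best_assignment(feats_a, feats_b):
    """Brute-force linear assignment (fine for toy sizes): find the
    permutation of B's regions minimizing total feature distance to A."""
    n = len(feats_a)
    def cost(perm):
        return sum(abs(feats_a[i] - feats_b[perm[i]]) for i in range(n))
    return min(permutations(range(n)), key=cost)

# regions of shape B are a shuffled copy of shape A's regions
a = [0.1, 0.5, 0.9, 0.3]
b = [0.9, 0.3, 0.1, 0.5]   # b[perm[i]] should equal a[i]
print(best_assignment(a, b))  # (2, 3, 0, 1)
```

At realistic problem sizes one would substitute a polynomial-time assignment algorithm (e.g. the Hungarian method) for the factorial enumeration.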

Guidelines for determining when to use GKS and when to use PHIGS

December 1989

·

18 Reads

GKS, GKS-3D, and PHIGS are all approved ISO standards for the application programmer interface. How does a system analyst or programmer decide which standard to use for his application? This paper discusses the range of application requirements likely to be encountered, explores the suitability of GKS and PHIGS for satisfying these requirements, and offers guidelines to aid in the decision process.

Four-Dimensional Geometry Lens: A Novel Volumetric Magnification Approach

June 2013

·

77 Reads

We present a novel methodology that utilizes 4-Dimensional (4D) space deformation to simulate a magnification lens on versatile volume datasets and textured solid models. Compared with other magnification methods (e.g., geometric optics, mesh editing), 4D differential geometry theory and its practices are much more flexible and powerful for preserving shape features (i.e., minimizing angle distortion), and easier to adapt to versatile solid models. The primary advantage of 4D space lies in the following fact: we can now easily magnify the volume of regions of interest (ROIs) in the additional dimension, while keeping the rest of the region unchanged. To achieve this primary goal, we first embed a 3D volumetric input into 4D space and magnify ROIs in the 4th dimension. Then we flatten the 4D shape back into 3D space to accommodate other typical applications in the real 3D world. In order to enforce distortion minimization, in both steps we devise high-dimensional geometry techniques based on rigorous 4D geometry theory for mapping back and forth between 3D and 4D to amend the distortion. Our system can preserve not only the focus region, but also the context region and the global shape. We demonstrate the effectiveness, robustness, and efficacy of our framework with a variety of models ranging from tetrahedral meshes to volume datasets.
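A one-dimensional analogue conveys the lift-magnify-flatten idea: adding height in an extra dimension and then flattening by arc length stretches the lifted region while leaving the rest untouched. Everything here (`lift_and_flatten`, the step-shaped lift) is a toy construction, not the paper's 4D machinery:

```python
import math

def lift_and_flatten(xs, roi, height):
    """1-D analogue of the 4-D lens: lift samples inside the region of
    interest into an extra dimension, then flatten by arc length, which
    stretches segments crossing the lifted boundary while leaving the
    rest of the parameterisation intact."""
    ys = [height if roi[0] <= x <= roi[1] else 0.0 for x in xs]
    out, acc = [xs[0]], xs[0]
    for i in range(1, len(xs)):
        acc += math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1])
        out.append(acc)
    return out

xs = list(range(11))           # unit-spaced samples
flat = lift_and_flatten(xs, (4, 6), 1.0)
# segments crossing the lift boundary lengthen; spacing elsewhere stays 1
print(flat[4] - flat[3] > 1.0, flat[2] - flat[1] == 1.0)  # True True
```

A smooth bump instead of this step lift would spread the stretch across the whole ROI, which is closer to the behaviour of an actual lens.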

Incorporating Sharp Features in the General Solid Sweep Framework

May 2014

·

108 Reads

This paper extends a recently proposed robust computational framework for constructing the boundary representation (brep) of the volume swept by a given smooth solid moving along a one parameter family $h$ of rigid motions. Our extension allows the input solid to have sharp features, i.e., to be of class G0 wherein, the unit outward normal to the solid may be discontinuous. In the earlier framework, the solid to be swept was restricted to be G1, and thus this is a significant and useful extension of that work. This naturally requires a precise description of the geometry of the surface generated by the sweep of a sharp edge supported by two intersecting smooth faces. We uncover the geometry along with the related issues like parametrization, self-intersection and singularities via a novel mathematical analysis. Correct trimming of such a surface is achieved by a delicate analysis of the interplay between the cone of normals at a sharp point and its trajectory under $h$. The overall topology is explicated by a key lifting theorem which allows us to compute the adjacency relations amongst entities in the swept volume by relating them to corresponding adjacencies in the input solid. Moreover, global issues related to body-check such as orientation are efficiently resolved. Many examples from a pilot implementation illustrate the efficiency and effectiveness of our framework.





EUROGRAPHICS UK 1986: Glasgow Conference

October 2007

·

5 Reads

The Conference was held on the three days 24th to 26th March at Glasgow University, the first day devoted to two parallel tutorials, the second day opening the conference proper and including meetings of special interest groups and the final day consisting of submitted and invited papers.



Eurographics 2010 Workshop on 3D Object Retrieval (EG 3DOR’10) in cooperation with ACM SIGGRAPH

February 2011

·

16 Reads

The 2010 Eurographics Workshop on 3D Object Retrieval was held in Norrköping, Sweden on May 2, 2010. The workshop co-chairs were Mohamed Daoudi and Tobias Schreck, while the program co-chairs were Michela Spagnuolo, Ioannis Pratikakis, Remco Veltkamp and Theoharis Theoharis. A program of sessions on 3D shape descriptors, part-based representation for retrieval, 3D face recognition, and learning and benchmarking was formed. Raif M. Rustamov presented a volume-based shape descriptor that is robust with respect to changes in pose and topology. In this approach, shape distributions are aggregated throughout the entire volume contained within the shape, capturing information conveyed by the volumes of shapes. Other presenters described an interest point detector for 3D objects based on Harris corner detection, which had been used with good results in computer vision applications.

Anisotropic Solid Texture Synthesis Using Orthogonal 2D Views

December 2001

·

65 Reads

Analytical approaches, based on digitised 2D texture models, for an automatic solid (3D) texture synthesis have been recently introduced to Computer Graphics. However, these approaches cannot provide satisfactory solutions in the usual case of natural anisotropic textures (wood grain for example). Indeed, solid texture synthesis requires particular care, and sometimes external knowledge to "guess" the internal structure of solid textures because only 2D texture models are used for analysis. By making some basic assumptions about the internal structure of solid textures, we propose a very efficient method based on a hybrid analysis (spectral and histogram) for an automatic synthesis of solid textures. This new method allows us to obtain high precision solid textures (closely resembling initial models) in a large number of cases, including the difficult case of anisotropic textures.

Designing 2D Vector Fields of Arbitrary Topology

September 2002

·

45 Reads

We introduce a scheme of control polygons to design topological skeletons for vector fields of arbitrary topology. Based on this we construct piecewise linear vector fields of exactly the topology specified by the control polygons. This way a controlled construction of vector fields of any topology is possible. Finally we apply this method for topology-preserving compression of vector fields consisting of a simple topology.

Architectures of Graphic Processors for Interactive 2D Graphics

October 2007

·

19 Reads

Interactive 2-D systems have benefited greatly from the improvements in IC technology. Today, the trend is to relieve the host computer from low-level tasks by increasing the graphic system's computational power. The introduction of video RAMs has solved the problem of contention for memory cycles between the display generator and the video refresh controller. The improvements in graphic controllers have led from the first fixed-instruction controllers to today's third generation of programmable graphic processors, able to support computer graphics interface standards. This article presents this evolution, and focuses on a 2-D graphic processor designed at the Imagery, Instrumentation and Systems Laboratory, based on the separation of graphic generation and memory management functions.

A Direct Manipulation Technique for Specifying 3D Object Transformations with a 2D Input Device

December 1990

·

15 Reads

A mechanism is presented for direct manipulation of 3D objects with a conventional 2D input device, such as a mouse. The user can define and modify a model by graphical interaction on a 3D perspective or parallel projection. A gestural interface technique enables the specification of 3D transformations (translation, rotation and scaling) by 2D pick and drag operations. Interaction is not restricted to single objects but can be applied to compound objects as well. The method described in this paper is an easy-to-understand 3D input technique which does not require any special hardware and is compatible with the designer's mental model of object manipulation.

Tensor Topology Tracking: A Visualization Method For Time-Dependent 2D Symmetric Tensor Fields.

September 2001

·

72 Reads

Topological methods produce simple and meaningful depictions of symmetric, second order two-dimensional tensor fields. Extending previous work dealing with vector fields, we propose here a scheme for the visualization of time-dependent tensor fields. Basic notions of unsteady tensor topology are discussed. Topological changes - known as bifurcations - are precisely detected and identified by our method which permits an accurate tracking of degenerate points and related structures.
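For a 2-D symmetric second-order tensor [[a, b], [b, c]], a degenerate point is where the two eigenvalues coincide, i.e. a = c and b = 0. A minimal grid-scan detector (a toy sketch, not the paper's tracking algorithm; `field` and the tolerance are hypothetical) might look like:

```python
import math

def degenerate_points(field, xs, ys, tol=1e-9):
    """Scan a grid for degenerate points of a 2-D symmetric tensor field.
    field(x, y) returns (a, b, c) for the tensor [[a, b], [b, c]]; the
    tensor is degenerate (equal eigenvalues) iff a == c and b == 0."""
    pts = []
    for x in xs:
        for y in ys:
            a, b, c = field(x, y)
            if math.hypot(a - c, 2 * b) < tol:   # deviator magnitude
                pts.append((x, y))
    return pts

# toy linear field with a single degenerate point at the origin
field = lambda x, y: (x, y, -x)   # a - c = 2x, b = y
grid = [i - 2 for i in range(5)]  # -2..2
print(degenerate_points(field, grid, grid))  # [(0, 0)]
```

Tracking such points over time, as the paper does, amounts to following these zero sets of the deviator as the field evolves and detecting the bifurcations where they merge, split, appear or vanish.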

Topological Features in 2D Symmetric Higher‐Order Tensor Fields

June 2011

·

22 Reads

The topological structure of scalar, vector, and second-order tensor fields provides an important mathematical basis for data analysis and visualization. In this paper, we extend this framework towards higher-order tensors. First, we establish formal uniqueness properties for a geometrically constrained tensor decomposition. This allows us to define and visualize topological structures in symmetric tensor fields of orders three and four. We clarify that in 2D, degeneracies occur at isolated points, regardless of tensor order. However, for orders higher than two, they are no longer equivalent to isotropic tensors, and their fractional Poincaré index prevents us from deriving continuous vector fields from the tensor decomposition. Instead, sorting the terms by magnitude leads to a new type of feature, lines along which the resulting vector fields are discontinuous. We propose algorithms to extract these features and present results on higher-order derivatives and higher-order structure tensors.

Texture Design and Draping in 2D Images

August 2009

·

179 Reads

We present a complete system for designing and manipulating regular or near-regular textures in 2D images. We place emphasis on supporting creative workflows that produce artwork from scratch. As such, our system provides tools to create, arrange, and manipulate textures in images with intuitive controls, and without requiring 3D modeling. Additionally, we ensure continued, non-destructive editability by expressing textures via a fully parametric descriptor. We demonstrate the suitability of our approach with numerous example images, created by an artist using our system, and we compare our proposed workflow with alternative 2D and 3D methods.

Using a Visual Attention Model to Improve Gaze Tracking Systems in Interactive 3D Applications

September 2010

·

71 Reads

This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom-up approach, a saliency map is defined for the image and gives an attention weight to every pixel of the image as a function of its colour, edge or intensity. Our algorithm uses an uncertainty window, defined by the gaze tracker accuracy, and located around the gaze point given by the tracker. Then, using a visual attention model, it searches for the most salient points, or objects, located inside this uncertainty window, and determines a novel, and hopefully, better gaze point. This combination of a gaze tracker together with a visual attention model is considered as the main contribution of the paper. We demonstrate the promising results of our method by presenting two experiments conducted in two different contexts: (1) a free exploration of a visually rich 3D virtual environment without a specific task, and (2) a video game based on gaze tracking involving a selection task. Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications such as video games or virtual reality applications. The use of a visual attention model can be adapted to any gaze tracker and the visual attention model can also be adapted to the application in which it is used.
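The core correction step can be sketched in a few lines: given a saliency map and a raw gaze estimate, pick the most salient pixel inside the uncertainty window. The code below is a hypothetical simplification (it snaps to a single pixel rather than to salient objects, and uses a square window):

```python
def corrected_gaze(saliency, gaze, radius):
    """Snap a raw gaze estimate to the most salient pixel inside the
    tracker's uncertainty window (a square of half-size `radius`)."""
    gx, gy = gaze
    h, w = len(saliency), len(saliency[0])
    best, best_pt = -1.0, gaze
    for y in range(max(0, gy - radius), min(h, gy + radius + 1)):
        for x in range(max(0, gx - radius), min(w, gx + radius + 1)):
            if saliency[y][x] > best:
                best, best_pt = saliency[y][x], (x, y)
    return best_pt

# 5x5 saliency map with a single bright spot near the noisy gaze point
sal = [[0.0] * 5 for _ in range(5)]
sal[1][3] = 0.9
print(corrected_gaze(sal, (2, 2), 2))  # (3, 1)
```

In practice the radius would be set from the measured accuracy of the gaze tracker, and the saliency map computed per frame by the bottom-up attention model.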

MagicSphere: an insight tool for 3D data visualization

February 2003

·

245 Reads

How to render very complex datasets, and yet maintain interactive response times, is a hot topic in computer graphics. The MagicSphere idea originated as a solution to this problem, but its potential goes much further than this original scope. In fact, it has been designed as a very general 3D widget: it defines a spherical volume of interest in the dataset modeling space. Then, several filters can be associated with the MagicSphere, which apply different visualization modalities to the data contained in the volume of interest. The visualization of multi-resolution datasets is selected here as a case study and an ad hoc filter has been designed, the MultiRes filter. Some results of a prototype implementation are presented and discussed.

GosiP: a GKS-3D shell for PHIGS

October 2007

·

27 Reads

GosiP is an implementation of a GKS-3D level 2c interface to PHIGS. It allows GKS applications to run on PHIGS platforms, offering performance and portability across a wide range of high-performance 3D workstations. Compatibility of the standards is reviewed. A selection of design solutions is given for the problems of error processing, non-retained primitives and attribute management. The concepts of Workstation Display Session and attribute state are introduced. Some comments are made on implementation dependencies, performance and portability.

Wavelet-Based 3D Compression Scheme for Interactive Visualization of Very Large Volume Data

February 2000

·

140 Reads

Interactive visualization of very large volume data has been recognized as a task requiring great effort in a variety of science and engineering fields. In particular, such data usually places considerable demands on run-time memory space. In this paper, we present an effective 3D compression scheme for interactive visualization of very large volume data that exploits the power of wavelet theory. In designing our method, we have compromised between two important factors: high compression ratio and fast run-time random access. Our experimental results on the Visible Human data sets show that our method achieves fairly good compression ratios. In addition, it minimizes the overhead caused during run-time reconstruction of voxel values. This 3D compression scheme will be useful in developing many interactive visualization systems for huge volume data, especially when they are based on personal computers or workstations with limited memory.
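Although the paper's scheme is 3-D and block-based, the underlying wavelet idea is easy to show in one dimension. A single level of a Haar-style transform (this average/difference variant is an illustrative choice, not the paper's filter) splits a signal into pairwise averages and differences, and is exactly invertible:

```python
def haar1d(signal):
    """One level of a 1-D Haar-style transform: pairwise averages
    (low band) followed by pairwise differences (high band)."""
    lo = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    hi = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return lo + hi

def ihaar1d(coeffs):
    """Inverse of haar1d, reconstructing the signal exactly."""
    n = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:n], coeffs[n:]):
        out += [a + d, a - d]
    return out

data = [8, 6, 7, 3, 1, 1, 2, 4]
c = haar1d(data)
print(c)                      # [7.0, 5.0, 1.0, 3.0, 1.0, 2.0, 0.0, -1.0]
print(ihaar1d(c) == data)     # True
```

Compression comes from quantizing or discarding the small high-band coefficients; fast random access comes from applying such transforms blockwise, so a voxel can be reconstructed from one small block.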

Migration of GKS/GKS-3D and PHIGS discussed under the view of the computer graphics reference model

October 2007

·

108 Reads

The migration problem of GKS-3D and PHIGS is discussed in this report under the view of the graphics reference model's description technique: the component and framework concept. To this end, the components of GKS-3D and PHIGS are defined. The common and differing concepts are indicated, and the mapping from PHIGS to GKS-3D and from GKS-3D to PHIGS is carried out. This mapping is performed only on the components, which have proved to be a description technique suited to such purposes. The migration has two different aspects. One is the theoretical approach of proving that such a migration is possible. The other is more practical: it gives guidelines for the use of PHIGS and GKS-3D and shows under which circumstances both can be used together.

Reconstruction of 3D Objects from Orthographic Projections

October 2007

·

72 Reads

An algorithm for automatic reconstruction of 3D objects from their orthographic projections is presented in this paper. It improves on and complements the Wesley–Markowsky algorithm, which is a typical hierarchical reconstruction algorithm limited to polyhedral objects, and borrows the idea of pattern recognition expressed in the Aldefeld algorithm. It is shown in theory by analysis and in practice by implementation that the proposed algorithm successfully rejects pathological cases and finds all solutions consistent with the same set of orthographic views. Compared with the existing algorithms presented in the references, this algorithm covers some more complex cases of objects incorporating cylinders.

Fast Force Field Approximation and its Application to Skeletonization of Discrete 3D Objects

April 2008

·

44 Reads

In this paper we present a novel method to approximate the force field of a discrete 3D object with a time complexity that is linear in the number of voxels. We define a rule, similar to the distance transform, to propagate forces associated with boundary points into the interior of the object. The result of this propagation depends on the order in which the points of the object are processed. Therefore we analyze how to obtain an order-invariant approximation formula. With the resulting formula it becomes possible to approximate the force field and to use its features for a fast and topology-preserving skeletonization. We use a thinning strategy on the body-centered cubic lattice to compute the skeleton and ensure that critical points of the force field are not removed. This leads to improved skeletons with respect to the properties of centeredness and rotational invariance.
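The propagation rule the authors liken to the distance transform can be illustrated by the classic two-pass chamfer sweep, shown here in 2-D with the city-block metric. This is an analogy only: the paper propagates force vectors rather than distances, and works on the body-centered cubic lattice in 3-D:

```python
def distance_transform(grid):
    """Two-pass chamfer distance transform on a 2-D 0/1 grid (1 = boundary).
    The same forward/backward sweep idea underlies propagating
    boundary-point information into an object's interior in linear time."""
    INF = 10 ** 9
    h, w = len(grid), len(grid[0])
    d = [[0 if grid[y][x] else INF for x in range(w)] for y in range(h)]
    for y in range(h):              # forward sweep: top-left to bottom-right
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):  # backward sweep: bottom-right to top-left
        for x in range(w - 1, -1, -1):
            if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

g = [[1, 0, 0],
     [0, 0, 0],
     [0, 0, 1]]
print(distance_transform(g))  # [[0, 1, 2], [1, 2, 1], [2, 1, 0]]
```

Note the order dependence the abstract mentions: a single sweep alone gives different answers depending on traversal direction; the pair of opposed sweeps is what makes the result well defined.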

Streaming of Complex 3D Scenes for Remote Walkthroughs

September 2001

·

103 Reads

We describe a new 3D scene streaming approach for remote walkthroughs. In a remote walkthrough, a user on a client machine interactively navigates through a scene that resides on a remote server. Our approach allows a user to walk through a remote 3D scene, without ever having to download the entire scene from the server. Our algorithm achieves this by selectively transmitting only small parts of the scene and lower quality representations of objects, based on the user's viewing parameters and the available connection bandwidth. An online optimization algorithm selects which object representations to send, based on the integral of a benefit measure along the predicted path of movement. The rendering quality at the client depends on the available bandwidth, but practical navigation of the scene is possible even when bandwidth is low.

Mapping Highly Detailed Colour Information on Extremely Dense 3D Models: The Case of David's Restoration

November 2008

·

73 Reads

The support of advanced information technology (IT) to preservation, restoration and documentation of Cultural Heritage (CH) is becoming a very important goal for the research community. Michelangelo's David was one of the first applications of 3D scanning technology on a highly popular work of art. The subsequent restoration campaign, started in 2002 and concluded in 2004, was also a milestone for the adoption of modern scientific analysis procedures and IT tools in the framework of a restoration process. One of the focuses in this restoration was also methodological, i.e. to plan and adopt innovative ways to document the restoration process. In this paper, we present the results of an integration of different restoration data (2D and 3D datasets) which has been concluded recently. The recent evolution of HW and SW graphics technologies gave us the possibility to interactively visualize an extremely dense 3D model which incorporates the colour information provided by two professional photographic campaigns, made before and after the restoration. Moreover, we present the results concerning the mapping, in this case on the 2D media, of the reliefs produced by restorers to assess and document the status of the marble surface before the restoration took place. This result could lead to new and fascinating applications of computer graphics for preservation, restoration and documentation of CH.

Figure 1: A single landscaping sketch, which can also be seen as an annotation of an existing 3D model; the two different views are automatically generated by our system. 
Figure 9: Example of artistic illustration. Three views of the same sketch area are shown. 
Figure 10: A second example of artistic illustration. Three views of the same sketch area are shown. The fourth view is the finished drawing. 
Figure 11: An example of annotation in 3D: annotating a heart model during an anatomy course. The text displayed is view-dependent. An unimplemented solution would consist in drawing it on a billboard, or using more sophisticated schemes [16]. 
Figure 12: Using a 3D model as a guide can be useful in fashion applications. 
Drawing for Illustration and Annotation in 3D

May 2001

·

1,521 Reads

We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model.

3D icons and architectural CAD

October 2007

·

60 Reads

3D computer input has been a recurring challenge to engineers developing effective CAD systems. The approach adopted in this paper attempts to address a specific type of 3D input which is applicable to architecture and some engineering design tasks. In these processes, the object being designed is often an assembly of defined components. In a conventional graphics-based CAD system these components are usually represented by graphical icons which are displayed on the graphics screen and are arranged by the user. The system described here consists of 3D modelling elements which the user physically assembles to form his design. Each modelling element contains an element processor consisting of a machine-readable label, data paths and control logic. The CAD system interrogates the elements. The logic within the element processors and the data paths are then used to interrogate other adjacent elements in the model. This system can therefore be considered a “user generated”, “machine readable” modelling system. In an architectural application this provides the user with a system of 3D icons with which to model and evaluate the built environment.

Figure 2: The correspondence field computed with a closest-point search (a) and with our smoothing procedure (b) for the input shapes on the right. The black rectangle on the right marks the part of the little finger magnified in (a) and (b). Our procedure clearly estimates the correspondences much more accurately.
Figure 3: A deformation graph computed for a range scan of a hand. The magnified part on the right shows the nodes as yellow spheres. Neighboring nodes are connected with black lines.
Figure 4: Registering a doll head (upper left) to a head of a girl and a boy (lower left). The landmarks used for these registration tests are shown as red dots. Note that there is a significant difference in scale between the source and the target shapes. Our algorithm successfully performs the registration as shown on the right. The models were downloaded from the AIM@SHAPE Repository – http://shapes.aim-at-shape.net/
Figure 5: Registering two range images representing the front part of the same hand in two different poses. The data sets were obtained with a 3D geometry scanner [WLG07] and are publicly available on the authors' webpage.
Figure 8: Registration of a bending arm.
Deformable 3D Shape Registration Based on Local Similarity Transforms

August 2011

·

399 Reads

In this paper, a new method for deformable 3D shape registration is proposed. The algorithm computes shape transitions based on local similarity transforms, which makes it possible to model not only as-rigid-as-possible deformations but also local and global scale. We formulate an ordinary differential equation (ODE) which describes the transition of a source shape towards a target shape. We assume that both shapes are roughly pre-aligned (e.g., frames of a motion sequence). The ODE consists of two terms. The first one causes the deformation by pulling the source shape points towards corresponding points on the target shape. Initial correspondences are estimated by closest-point search and then refined by an efficient smoothing scheme. The second term regularizes the deformation by drawing the points towards locally defined rest positions. These are given by the optimal similarity transform which matches the initial (undeformed) neighborhood of a source point to its current (deformed) neighborhood. The proposed ODE allows for very efficient explicit numerical integration. This avoids the repeated solution of large linear systems usually required when solving the registration problem within general-purpose non-linear optimization frameworks. We experimentally validate the proposed method on a variety of real data and perform a comparison with several state-of-the-art approaches.
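The two-term transition described in the abstract can be sketched as a single explicit Euler step; the function name, the weights `alpha`/`beta`, and the step size `dt` below are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def registration_step(X, closest_on_target, rest_positions,
                      alpha=1.0, beta=0.5, dt=0.1):
    """One explicit Euler step of a two-term registration ODE (sketch).

    dX/dt = alpha * (C(X) - X) + beta * (R(X) - X),
    where C(X) are (smoothed) closest points on the target and R(X)
    are rest positions from locally fitted similarity transforms.
    """
    data_term = alpha * (closest_on_target - X)   # pulls towards the target
    reg_term = beta * (rest_positions - X)        # pulls towards rest positions
    return X + dt * (data_term + reg_term)
```

Because the integration is explicit, no linear system has to be solved per iteration; stability is controlled through the step size `dt`.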

Stylized Vector Art from 3D Models with Region Support

June 2008

·

124 Reads

We describe a rendering system that converts a 3D meshed model into the stylized 2D filled-region vector-art commonly found in clip-art libraries. To properly define filled regions, we analyze and combine accurate but jagged face-normal contours with smooth but inaccurate interpolated vertex normal contours, and construct a new smooth shadow contour that properly surrounds the actual jagged shadow contour. We decompose region definition into geometric and topological components, using machine precision for geometry processing and raster precision to accelerate topological queries. We extend programmable stylization to simplify, smooth and stylize filled regions. The result renders 10K-face meshes into custom clip-art in seconds.
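The "accurate but jagged" face-normal contour mentioned in the abstract can be sketched as a per-edge front/back test; the helper name and the mesh representation below are illustrative assumptions:

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Edges shared by a front-facing and a back-facing triangle,
    judged by exact face normals (sketch; assumes a closed triangle
    mesh with consistent outward winding)."""
    v = np.asarray(vertices, dtype=float)
    d = np.asarray(view_dir, dtype=float)
    edge_faces = {}
    front = []
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(v[b] - v[a], v[c] - v[a])   # unnormalized face normal
        front.append(np.dot(n, d) > 0.0)
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]
```

Per-face normals make this contour exact for the faceted mesh, which is why it is jagged; the paper combines it with smooth vertex-normal contours to define clean regions.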

Automatic Detection and Visualization of Distinctive Structures in 3D Unsteady Multi‐fields

May 2008

·

30 Reads

Current unsteady multi-field simulation data-sets consist of millions of data-points. To efficiently reduce this enormous amount of information, local statistical complexity was recently introduced as a method that identifies distinctive structures using concepts from information theory. Due to high computational costs this method was so far limited to 2D data. In this paper we propose a new strategy for the computation that is substantially faster and allows for a more precise analysis. The bottleneck of the original method is the division of spatio-temporal configurations in the field (light-cones) into different classes of behavior. The new algorithm uses a density-driven Voronoi tessellation for this task that more accurately captures the distribution of configurations in the sparsely sampled high-dimensional space. The efficient computation is achieved using structures and algorithms from graph theory. The ability of the method to detect distinctive regions in 3D is illustrated using flow and weather simulations.

Realizing 3D Visual Programming Environments within a Virtual Environment

July 2008

·

74 Reads

In the visual programming community, many interesting graphical metaphors have been reported upon for representing computer programs graphically. Most of them have a 2D or 2.5D appearance on the screen in order to reflect the inherent multi-dimensionality of the programming constructs being represented. By going into a three-dimensional representation, this reflection can go a step further. With ever-increasing 3D graphics rendering capabilities on today's computers, it moreover becomes feasible to extend the dimensionality of the program (and data structure) depiction. We follow this approach by realizing 3D graphical programming techniques within CAEL, our interactive Computer Animation Environment Language. The paper elucidates how several concepts, traditionally found within the Virtual Environments area, can be utilized in the realization of three-dimensional Programming Environments.

Head-Tracked Stereo Viewing with Two-Handed 3D Interaction for Animated Character Construction

November 1997

·

28 Reads

In this paper, we demonstrate how a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing can be applied to the task of constructing animated characters. In our configuration, a six degree-of-freedom head-tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space which the user may view from different angles by moving his head. To construct 3D animated characters, the user interacts with the simulated environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate through a virtual tool metaphor the objects appearing in front of the screen. In this way, both incremental and absolute interactive input techniques are provided by the system. Hand-eye coo...

A low cost 3D scanner based on structured light

November 2001

·

6,168 Reads

Automatic 3D acquisition devices (often called 3D scanners) make it possible to build highly accurate models of real 3D objects in a cost- and time-effective manner. We have experimented with this technology in a particular application context: the acquisition of Cultural Heritage artefacts. Specific needs of this domain are: medium-high accuracy, ease of use, affordable cost of the scanning device, self-registered acquisition of shape and color data, and finally operational safety for both the operator and the scanned artefacts. According to these requirements, we designed a low-cost 3D scanner based on structured light which adopts a new, versatile colored stripe pattern approach. We present the scanner architecture, the software technologies adopted, and the first results of its use in a project regarding the 3D acquisition of an archeological statue.
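Once a stripe of the projected pattern has been decoded, depth recovery in a structured-light scanner reduces to intersecting a camera ray with the projector light plane that the stripe identifies. A minimal sketch, with idealized calibration and illustrative names:

```python
import numpy as np

def triangulate(ray_origin, ray_dir, plane_point, plane_normal):
    """Ray-plane intersection (sketch): the 3D surface point is where
    the camera ray for a pixel meets the decoded projector stripe plane."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    t = (np.dot(np.asarray(plane_point) - ray_origin, plane_normal)
         / np.dot(ray_dir, plane_normal))
    return ray_origin + t * ray_dir
```

A real scanner additionally needs calibrated camera and projector poses and a robust decoding of the colored stripe pattern; the geometry above is only the final step.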

A Declarative Design Method for 3D Scene Sketch Modeling

August 1993

·

42 Reads

In this paper, we present a dynamic model associated with an intelligent CAD system aiming at the modeling of an architectural scene sketch. Our design methodology has been developed to simulate the process of a user who tries to give a description of a scene from a set of mental images. The scene creation is based on a script which describes the environment from the point of view of an observer who moves across the scene. The system is based on a declarative method viewed as a stepwise refinement process. For the scene representation, a qualitative model is used to describe the objects in terms of attributes, functions, methods and components. The links between objects and their components are expressed by a hierarchical structure, and a description of spatial configurations is given by using locative relations. The set of solutions consistent with the description is usually infinite. So, either one scene consistent with this description is calculated and visualized, or reasons of inconsistency are notified to the user. The resolution process consists of two steps: firstly a logical inference checks the consistency of the topological description, and secondly an optimization algorithm deals with the global description and provides a solution. Two examples illustrate our design methodology and the calculation of a scene model.

Large‐Scale Integer Linear Programming for Orientation Preserving 3D Shape Matching

August 2011

·

14 Reads

We study an algorithmic framework for computing an elastic orientation-preserving matching of non-rigid 3D shapes. We outline an Integer Linear Programming formulation whose relaxed version can be minimized globally in polynomial time. Because of the high number of optimization variables, the key algorithmic challenge lies in efficiently solving the linear program. We present a performance analysis of several Linear Programming algorithms on our problem. Furthermore, we introduce a multiresolution strategy which allows the matching of higher resolution models.
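To illustrate the kind of relaxation the abstract refers to, the plain one-to-one assignment special case can be written down directly. The function below and the use of SciPy's `linprog` are our assumptions for the sketch; the paper's formulation carries additional orientation-preserving constraints and far more variables:

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_assignment(cost):
    """LP relaxation of a one-to-one matching ILP (sketch).

    Variables x[i, j] in [0, 1]; each row and each column must sum to 1.
    For plain assignment the relaxation is integral (Birkhoff-von
    Neumann), so the LP already returns a permutation matrix."""
    n = cost.shape[0]
    c = cost.ravel()                       # row-major: x[i, j] -> c[i*n + j]
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1     # row i sums to 1
        A_eq[n + i, i::n] = 1              # column i sums to 1
    res = linprog(c, A_eq=A_eq, b_eq=np.ones(2 * n), bounds=(0, 1))
    return res.x.reshape(n, n)
```

With the extra constraints of the full problem, integrality is no longer guaranteed, which is why the paper focuses on solving the large relaxed LP efficiently.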

A Direct Manipulation Interface for 3D Computer Animation

August 1995

·

46 Reads

We present a new set of interface techniques for visualizing and editing animation directly in a single three‐dimensional scene. Motion is edited using direct‐manipulation tools which satisfy high‐level goals such as “reach this point at this time” or “go faster at this moment”. These tools can be applied over an arbitrary temporal range and maintain arbitrary degrees of spatial and temporal continuity. We separate spatial and temporal control of position by using two curves for each animated object: the motion path which describes the 3D spatial path along which an object travels, and the motion graph, a function describing the distance traveled along this curve over time. Our direct‐manipulation tools are implemented using displacement functions, a straightforward and scalable technique for satisfying motion constraints by composition of the displacement function with the motion graph or motion path. This paper will focus on applying displacement functions to positional change. However, the techniques presented are applicable to the animation of orientation, color, or any other attribute that varies over time.
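A displacement function for a goal like "reach this value at this time" can be sketched as a compactly supported bump added to the motion graph. The smoothstep ramp and function names below are our assumptions; the paper supports arbitrary degrees of continuity:

```python
import numpy as np

def displacement(t, t0, tc, t1, delta):
    """Zero outside [t0, t1], equal to delta at tc, C1-continuous
    at the boundaries via smoothstep ramps (sketch)."""
    t = np.asarray(t, dtype=float)
    up = np.clip((t - t0) / (tc - t0), 0.0, 1.0)
    down = np.clip((t1 - t) / (t1 - tc), 0.0, 1.0)
    ramp = np.where(t <= tc, 3 * up**2 - 2 * up**3, 3 * down**2 - 2 * down**3)
    return delta * ramp

def edited_motion_graph(g, t, t0, tc, t1, target):
    # Compose the original motion graph g with a displacement chosen
    # so that the edited graph hits `target` exactly at time tc.
    return g(t) + displacement(t, t0, tc, t1, target - g(tc))
```

Outside `[t0, t1]` the original motion is untouched, which is what lets such edits be applied over an arbitrary temporal range.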

A Video‐Based 3D‐Reconstruction of Soccer Games

September 2000

·

568 Reads

In this paper we present SoccerMan, a reconstruction system designed to generate animated, virtual 3D views from two synchronous video sequences of a short part of a given soccer game. After the reconstruction process, which also requires some manual interaction, the virtual 3D scene can be examined and ‘replayed’ from any viewpoint. Players are modeled as so-called animated texture objects, i.e. 2D player shapes are extracted from video and texture-mapped onto rectangles in 3D space. Animated texture objects have proven very appropriate as a 3D representation of soccer players in motion, as the visual nature of the original human motion is preserved. The trajectories of the players and the ball in 3D space are reconstructed accurately. In order to create a 3D reconstruction of a given soccer scene, the following steps have to be executed: 1) Camera parameters of all frames of both sequences are computed (camera calibration). 2) The playground texture is extracted from the video sequences. 3) Trajectories of the ball and the players' heads are computed after manually specifying their image positions in a few key frames. 4) Player textures are extracted automatically from video. 5) The shapes of colliding or occluding players are separated automatically. 6) For visualization, player shapes are texture-mapped onto appropriately placed rectangles in virtual space. SoccerMan is a novel experimental sports analysis system with fairly ambitious objectives. Its design decisions, in particular to start from two synchronous video sequences and to model players by texture objects, have already proven promising.

A Novel Approach for Delaunay 3D Reconstruction with a Comparative Analysis in the Light of Applications

June 2001

·

35 Reads

This paper presents a novel algorithm for volumetric reconstruction of objects from planar sections using Delaunay triangulation, which solves the main problems posed to models defined by reconstruction, particularly from the viewpoint of producing meshes that are suitable for interaction and simulation tasks. The requirements for these applications are discussed here and the results of the method are presented. Additionally, it is compared to another commonly used reconstruction algorithm based on Delaunay triangulation, showing the advantages of the reconstructions obtained by our technique.

Feature preserving Delaunay mesh generation from 3D multi‐material images

July 2009

·

164 Reads

Generating realistic geometric models from 3D segmented images is an important task in many biomedical applications. Segmented 3D images impose particular challenges for meshing algorithms because they contain multi-material junctions forming features such as surface patches, edges and corners. The resulting meshes should preserve these features to ensure the visual quality and the mechanical soundness of the models. We present a feature preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps and overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for the purpose of realistic visualization or finite element simulations.
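The explicit edge sampling step described in the abstract can be sketched as placing protecting sample points along a feature polyline; the function name and the uniform-spacing rule are illustrative assumptions:

```python
import numpy as np

def sample_feature_edge(polyline, spacing):
    """Place sample points along a feature polyline at roughly uniform
    spacing (sketch of the explicit edge sampling that constrains the
    Delaunay refinement to preserve the feature)."""
    pts = [np.asarray(polyline[0], dtype=float)]
    for a, b in zip(polyline[:-1], polyline[1:]):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        seg = b - a
        n = max(1, int(np.ceil(np.linalg.norm(seg) / spacing)))
        for k in range(1, n + 1):
            pts.append(a + seg * (k / n))   # n points per segment, ending at b
    return np.array(pts)
```

In the actual algorithm these samples (together with explicitly sampled corners) are kept in the triangulation, so the multi-material junctions survive refinement.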

A Shape Descriptor for 3D Objects Based on Rotational Symmetry

December 2010

·

81 Reads

The ability to extract spatial features from 3D objects is essential for applications such as shape matching and object classification. However, designing an effective feature vector which is invariant with respect to rotation, translation and scaling is a challenging task and is often solved by normalization techniques such as PCA, which can give rise to poor object alignment. In this paper, we introduce a novel method to extract robust and invariant 3D features based on rotational symmetry. By applying a rotation-variant similarity function on two instances of the same 3D object, we can define an autocorrelation on the object in the space of rotations. We use a special representation of SO(3) and determine significant rotation axes for an object by means of optimization techniques. By sampling the similarity function via rotations around these axes, we obtain robust and invariant features, which are descriptive for the underlying geometry. The resulting feature vector can not only be used to characterize an object with respect to rotational symmetry but also to define a distance between 3D models. Because the features are compact and pre-computable, our method is suitable to perform similarity searches in large 3D databases.
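Sampling the similarity function around a candidate axis can be sketched as follows; the brute-force closest-point similarity and the function names are illustrative assumptions (the paper works with a representation of SO(3) and optimization rather than exhaustive evaluation):

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula for a rotation about `axis` by `angle`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rotational_similarity(points, axis, angle):
    """Crude rotation-variant similarity: negative mean closest-point
    distance between the rotated and the original point set."""
    rotated = points @ rotation_matrix(axis, angle).T
    d = np.linalg.norm(rotated[:, None, :] - points[None, :, :], axis=-1)
    return -d.min(axis=1).mean()

def symmetry_features(points, axis, n_samples=8):
    """Sample the similarity at evenly spaced angles around one axis."""
    angles = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    return np.array([rotational_similarity(points, axis, a) for a in angles])
```

For an object with k-fold symmetry about the axis, the sampled values peak at multiples of 2π/k, which is the signal the descriptor captures.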

