August 2008

We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis.
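
As a toy illustration of the transformation-space idea, the sketch below recovers the generating translation of a noisy 1D pattern with a missing element: consecutive differences cluster at integer multiples of the generator, which is then re-fit in the least-squares sense. This is a hypothetical 1D analogue, not the paper's full analysis of pairwise similarity transformations.

```python
import numpy as np

# Hypothetical 1D pattern: elements at integer multiples of an unknown
# generator g, with measurement noise and one missing element (index 3).
rng = np.random.default_rng(0)
g_true = 1.7
idx = np.array([0, 1, 2, 4, 5])
pos = idx * g_true + rng.normal(0.0, 0.02, idx.size)

# Consecutive gaps cluster at multiples of the generator: ~[g, g, 2g, g].
gaps = np.diff(np.sort(pos))
g = np.median(gaps)                 # coarse, outlier-tolerant estimate

# Refine: snap each gap to its nearest multiple of g, then re-fit g in the
# least-squares sense (loosely analogous to the paper's grid fitting).
mult = np.round(gaps / g)
g = np.sum(gaps * mult) / np.sum(mult ** 2)
print(f"estimated generator: {g:.4f}  (true: {g_true})")
```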

March 2010

We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image's contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene's contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model's predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently.
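
A simplified geometric core of such a model is the thin-lens relation between defocus blur and absolute distance. The sketch below evaluates the standard blur-circle formula for a hypothetical 50 mm f/2 camera focused at 2 m; the paper's model additionally combines this cue with relative depth cues and with assumptions about viewers.

```python
import numpy as np

def blur_circle_diameter(d, d_focus, f, aperture):
    """Thin-lens blur-circle diameter (same units as f) for an object at
    distance d when the lens is focused at d_focus. This is the standard
    geometric-optics relation, not the paper's full probabilistic model."""
    return aperture * f * np.abs(d - d_focus) / (d * (d_focus - f))

f, A, d0 = 0.050, 0.025, 2.0        # 50 mm lens, f/2 aperture, focus at 2 m
for d in (0.5, 1.0, 2.0, 4.0, 8.0):
    c_mm = 1e3 * blur_circle_diameter(d, d0, f, A)
    print(f"object at {d:4.1f} m -> blur circle {c_mm:5.2f} mm")
```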

July 2005

We present a new meshless animation framework for elastic and plastic materials that fracture. Central to our method is a highly dynamic surface and volume sampling method that supports arbitrary crack initiation, propagation, and termination, while avoiding many of the stability problems of traditional mesh-based techniques. We explicitly model advancing crack fronts and associated fracture surfaces embedded in the simulation volume. When cutting through the material, crack fronts directly affect the coupling between simulation nodes, requiring a dynamic adaptation of the nodal shape functions. We show how local visibility tests and dynamic caching lead to an efficient implementation of these effects based on point collocation. Complex fracture patterns of interacting and branching cracks are handled using a small set of topological operations for splitting, merging, and terminating crack fronts. This allows continuous propagation of cracks with highly detailed fracture surfaces, independent of the spatial resolution of the simulation nodes, and provides effective mechanisms for controlling fracture paths. We demonstrate our method for a wide range of materials, from stiff elastic to highly plastic objects that exhibit brittle and/or ductile fracture.

November 2014

We present fast algorithms to perform accurate continuous collision detection (CCD) queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bézier curves and reduces the problem to evaluating the signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. Our algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits.
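
A key ingredient named above is sign evaluation in the Bernstein basis. The toy sketch below shows the underlying convex-hull culling idea for a univariate polynomial on [0, 1]: if all Bernstein coefficients share a strict sign, the polynomial has no root there, so the corresponding contact test can be culled. The paper's algorithm performs exact, fast sign evaluations for the specific CCD polynomials rather than this generic test.

```python
from math import comb

def bernstein_coeffs(a):
    """Convert monomial coefficients a[k] (p(t) = sum a[k] t^k) to
    Bernstein coefficients of the same polynomial on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

def may_have_root(a):
    """Conservative test: if all Bernstein coefficients have one strict
    sign, p has no root in [0, 1] (convex-hull property). Otherwise a
    root is possible and an exact test would be applied."""
    b = bernstein_coeffs(a)
    return not (all(c > 0 for c in b) or all(c < 0 for c in b))

print(may_have_root([2.0, -1.0]))        # p = 2 - t: False, culled
print(may_have_root([-0.25, 0.0, 1.0]))  # p = t^2 - 1/4: True (root at 0.5)
```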

December 2009

We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r^2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration. CR Categories: I.6.8 [Simulation and Modeling]: Types of Simulation - Animation; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Physically based modeling; G.1.4 [Mathematics of Computing]: Numerical Analysis - Quadrature and Numerical Differentiation.
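
One way to realize such an optimized scheme is to fit nonnegative weights at candidate points so that known training integrals are reproduced, via nonnegative least squares. The sketch below does this for a hypothetical 1D family of densities; the paper also selects the cubature points themselves and works with subspace force densities rather than this toy family.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Candidate quadrature points and a training set of densities drawn from a
# low-dimensional family (standing in for force densities across poses).
n_pts, n_train = 40, 60
x = np.linspace(0.0, 1.0, n_pts)
freqs = rng.uniform(0.5, 3.0, n_train)
A = np.sin(np.outer(freqs, x))            # A[t, i] = g_t(x_i)
b = (1.0 - np.cos(freqs)) / freqs         # exact integral of sin(f x) on [0,1]

# Nonnegative weights w with  sum_i w_i g_t(x_i) ~ integral(g_t)  for all t.
# NNLS tends to produce sparse weights, i.e., few active cubature points.
w, resid = nnls(A, b)
print(f"active points: {np.count_nonzero(w > 1e-10)} / {n_pts}, "
      f"training residual: {resid:.2e}")

# Generalization to an unseen density from the same family.
f_test = 1.234
print(f"test integral: {w @ np.sin(f_test * x):.6f} "
      f"vs exact {(1.0 - np.cos(f_test)) / f_test:.6f}")
```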

August 2008

3D shape and scene layout are often misperceived when viewing stereoscopic displays. For example, viewing from the wrong distance alters an object's perceived size and shape. It is crucial to understand the causes of such misperceptions so one can determine the best approaches for minimizing them. The standard model of misperception is geometric. The retinal images are calculated by projecting from the stereo images to the viewer's eyes. Rays are back-projected from corresponding retinal-image points into space and the ray intersections are determined. The intersections yield the coordinates of the predicted percept. We develop the mathematics of this model. In many cases its predictions are close to what viewers perceive. There are three important cases, however, in which the model fails: 1) when the viewer's head is rotated about a vertical axis relative to the stereo display (yaw rotation); 2) when the head is rotated about a forward axis (roll rotation); 3) when there is a mismatch between the camera convergence and the way in which the stereo images are displayed. In these cases, most rays from corresponding retinal-image points do not intersect, so the standard model cannot provide an estimate for the 3D percept. Nonetheless, viewers in these situations have coherent 3D percepts, so the visual system must use another method to estimate 3D structure. We show that the non-intersecting rays generate vertical disparities in the retinal images that do not arise otherwise. Findings in vision science show that such disparities are crucial signals in the visual system's interpretation of stereo images. We show that a model that incorporates vertical disparities predicts the percepts associated with improper viewing of stereoscopic displays. Improving the model of misperceptions will aid the design and presentation of 3D displays.
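
The standard geometric model described above can be sketched directly: back-project a ray from each eye through the corresponding retinal point and take the midpoint of the rays' common perpendicular as the predicted percept. In the failure cases listed above the rays miss each other, which the returned miss distance makes explicit. All numbers below are hypothetical.

```python
import numpy as np

def backproject(eye_l, eye_r, dir_l, dir_r):
    """Predicted 3D percept as the midpoint of the common perpendicular of
    the two back-projected rays (equal to the intersection point when the
    rays meet exactly). Returns the point and the rays' miss distance."""
    d1 = dir_l / np.linalg.norm(dir_l)
    d2 = dir_r / np.linalg.norm(dir_r)
    w0 = eye_l - eye_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # ~0 for (nearly) parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1, p2 = eye_l + t * d1, eye_r + s * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# Eyes 6.4 cm apart, target 1 m ahead; the right ray is nudged vertically
# to mimic the vertical disparity produced by improper viewing.
eye_l, eye_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
p, miss = backproject(eye_l, eye_r, target - eye_l,
                      target - eye_r + np.array([0.0, 0.002, 0.0]))
print(f"predicted percept: {p.round(4)}, ray miss distance: {miss:.4f} m")
```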

July 2013

Image-based rendering (IBR) creates realistic images by enriching simple geometries with photographs, e.g., mapping the photograph of a building façade onto a plane. However, as soon as the viewer moves away from the correct viewpoint, the image on the retina becomes distorted, sometimes leading to gross misperceptions of the original geometry. Two hypotheses from vision science state how viewers perceive such image distortions, one claiming that they can compensate for them (and therefore perceive scene geometry reasonably correctly), and one claiming that they cannot compensate (and therefore can perceive rather significant distortions). We modified the latter hypothesis so that it extends to street-level IBR. We then conducted a rigorous experiment that measured the magnitude of perceptual distortions that occur with IBR for façade viewing. We also conducted a rating experiment that assessed the acceptability of the distortions. The results of the two experiments were consistent with one another. They showed that viewers' percepts are indeed distorted, but not as severely as predicted by the modified vision science hypothesis. From our experimental results, we develop a predictive model of distortion for street-level IBR, which we use to provide guidelines for acceptability of virtual views and for capture camera density. We perform a confirmatory study to validate our predictions, and illustrate their use with an application that guides users in IBR navigation to stay in regions where virtual views yield acceptable perceptual distortions.

June 2004

We describe how to use machine learning techniques to create a generative, videorealistic speech animation module. A human subject is first recorded with a video camera as he or she utters a predetermined speech corpus. After the corpus is processed automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not recorded in the original video. The synthesized utterance is re-composited onto a background sequence containing natural head and eye movement. The final output is videorealistic in the sense that it looks like a video camera recording of the subject. At run time, the input to the system can be either real audio sequences or synthetic audio produced by a text-to-speech system, as long as it has been phonetically aligned.

July 2005

We present a novel algorithm based on least-squares minimization to approximate point cloud data in the 2D plane with a smooth B-spline curve. The point cloud data may represent an open curve with self-intersections and sharp corners. Unlike existing methods such as the moving least-squares method and the principal curve method, our algorithm does not need a thinning process. The idea of our algorithm is intuitive and simple: we make a B-spline curve grow along the tangential directions at its two endpoints, following the local geometry of the point cloud. Our algorithm generates appropriate control points of the fitting B-spline curve in the least-squares sense. Although presented for the 2D case, our method extends in a straightforward manner to fitting data points with a B-spline curve in higher dimensions.
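
Once parameters have been assigned to the points, the least-squares step reduces to a linear system for the control points. Below is a minimal sketch assuming an already-ordered point cloud, a fixed clamped knot vector, and SciPy's BSpline.design_matrix; the paper's actual contribution, growing the curve from its endpoints so that no such ordering or thinning is needed, is not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)

# Noisy samples of an open 2D curve, already parameterized on [0, 1]
# (a stand-in for a point cloud ordered by chord length).
u = np.linspace(0.0, 1.0, 200)
pts = np.c_[u, np.sin(2 * np.pi * u)] + rng.normal(0.0, 0.01, (200, 2))

# Cubic clamped knot vector for n_ctrl control points.
k, n_ctrl = 3, 12
t = np.r_[np.zeros(k + 1),
          np.linspace(0, 1, n_ctrl - k + 1)[1:-1],
          np.ones(k + 1)]

# Least-squares control points: minimize || N @ P - pts ||^2.
N = BSpline.design_matrix(u, t, k).toarray()
P, *_ = np.linalg.lstsq(N, pts, rcond=None)

curve = BSpline(t, P, k)
rms = np.sqrt(np.mean(np.sum((curve(u) - pts) ** 2, axis=1)))
print(f"fit RMS error: {rms:.4f}")
```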

November 2013

In this paper, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets; this analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and to update them when disks are inserted, deleted, moved, or have their radii changed, building on the concepts of the regular triangulation and the power diagram. Third, we show how our analysis contributes to the state of the art in surface remeshing.
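
For contrast with the gap-based method, the naive baseline it improves on is dart throwing with a varying-radius conflict check (here max(r_i, r_j), one common convention). Rejection sampling only approximates maximality and stalls as the remaining gaps shrink, which is exactly what the gap detection above is designed to fix.

```python
import numpy as np

rng = np.random.default_rng(3)

def radius(p):
    # Hypothetical sizing function: finer disks near the left edge.
    return 0.02 + 0.06 * p[0]

samples, radii = [], []
misses = 0
while misses < 2000:                  # stop after many consecutive failures
    p = rng.random(2)                 # candidate in [0, 1]^2
    r = radius(p)
    if all(np.linalg.norm(p - q) >= max(r, rq)
           for q, rq in zip(samples, radii)):
        samples.append(p); radii.append(r)
        misses = 0
    else:
        misses += 1

print(f"accepted {len(samples)} disks (approximately, not provably, maximal)")
```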

May 2016

Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. An automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics, and these adjustments are often spatially varying. Because of these characteristics, existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep machine learning has shown unique abilities to address hard problems that have long resisted traditional algorithms, which motivated us to explore the use of deep learning in the context of photo editing. In this paper, we explain how to formulate the automatic photo adjustment problem in a way suitable for this approach. We also introduce an image descriptor that accounts for the local semantics of an image. Our experiments demonstrate that our deep learning formulation, applied using these descriptors, successfully captures sophisticated photographic styles. In particular, and unlike previous techniques, it can model local adjustments that depend on image semantics. We show on several examples that this yields results that are qualitatively and quantitatively better than previous work.
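
At its simplest, the formulation regresses an output color from per-pixel features that include semantics. The sketch below trains a tiny two-layer network in NumPy on a synthetic, content-dependent "style" driven by a hypothetical one-bit semantic flag; the paper uses a far richer multiscale descriptor and deeper models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Features: RGB plus a one-bit stand-in for a semantic label ("sky" or not).
# The synthetic style blue-boosts sky pixels and darkens everything else,
# i.e., an adjustment that depends on semantics, not just color.
X = rng.random((4096, 4))
X[:, 3] = (X[:, 3] > 0.5).astype(float)
Y = np.where(X[:, 3:4] > 0.5,
             np.clip(X[:, :3] + np.array([0.0, 0.0, 0.2]), 0.0, 1.0),
             0.7 * X[:, :3])

# Two-layer MLP, full-batch gradient descent on mean squared error.
W1 = rng.normal(0.0, 0.3, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.3, (32, 3)); b2 = np.zeros(3)
lr = 0.2
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    G = 2.0 * (P - Y) / len(X)            # dLoss/dP
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
print(f"final MSE: {mse:.5f}")            # should fall well below Var(Y)
```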

October 1994

This paper describes a general-purpose programming technique, called Simulation of Simplicity, which can be used to cope with degenerate input data for geometric algorithms. It relieves the programmer of the task of providing a consistent treatment for every single special case that can occur. Programs that use the technique tend to be considerably smaller and more robust than those that do not. We believe that this technique will become a standard tool in writing geometric software.
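
The idea can be illustrated by offsetting each input coordinate with a symbolically infinitesimal power of epsilon and taking the sign of the lowest-order nonvanishing term of the predicate's determinant. The sketch below does this for the 2D orientation predicate with SymPy; it is a simplified rendition, since the paper derives compact per-predicate case tables (and a particular perturbation indexing) instead of expanding symbolically.

```python
import sympy as sp

def sos_orient2d(p, q, r):
    """Orientation of 2D points with rational coordinates; collinear inputs
    are resolved by perturbing coordinate j of point i with eps**(2**(2i+j))
    and taking the sign of the lowest-order surviving term (eps -> 0+)."""
    eps = sp.Symbol('epsilon', positive=True)
    pts = [p, q, r]
    pert = [[sp.Rational(c) + eps ** (2 ** (2 * i + j))
             for j, c in enumerate(pt)] for i, pt in enumerate(pts)]
    det = sp.expand((pert[1][0] - pert[0][0]) * (pert[2][1] - pert[0][1])
                    - (pert[1][1] - pert[0][1]) * (pert[2][0] - pert[0][0]))
    for exps, coeff in sorted(sp.Poly(det, eps).terms(),
                              key=lambda term: term[0][0]):
        if coeff != 0:
            return int(sp.sign(coeff))
    return 0  # unreachable: the perturbation leaves no identically-zero case

print(sos_orient2d((0, 0), (1, 1), (2, 2)))  # collinear, resolved to -1
print(sos_orient2d((0, 0), (1, 0), (0, 1)))  # genuinely counterclockwise: +1
```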

October 1994

Frequently, data in scientific computing is in its abstract form a finite point set in space, and it is sometimes useful or required to compute what one might call the "shape" of the set. For that purpose, this paper introduces the formal notion of the family of $\alpha$-shapes of a finite point set in $\mathbb{R}^3$. Each shape is a well-defined polytope, derived from the Delaunay triangulation of the point set, with a parameter $\alpha \in \mathbb{R}$ controlling the desired level of detail. An algorithm is presented that constructs the entire family of shapes for a given set of size $n$ in time $O(n^2)$, worst case. A robust implementation of the algorithm is discussed and several applications in the area of scientific computing are mentioned.
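
In the plane the construction is easy to sketch from the description above: keep the Delaunay triangles whose circumradius is at most alpha (reading alpha as a radius), and take the shape's boundary from edges used by exactly one kept triangle. This is a simplified 2D analogue of the paper's 3D algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay
from collections import Counter

def alpha_shape_edges(points, alpha):
    """Boundary edges of a 2D alpha-shape: Delaunay triangles with
    circumradius <= alpha, then edges incident to exactly one kept triangle."""
    tri = Delaunay(points)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(c - a)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        # Circumradius R = la * lb * lc / (4 * area).
        if area > 1e-14 and la * lb * lc / (4.0 * area) <= alpha:
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                edge_count[tuple(sorted(e))] += 1
    return [e for e, n in edge_count.items() if n == 1]

# Noisy annulus: a moderate alpha recovers both boundary loops, whereas
# alpha -> infinity would give just the convex hull.
rng = np.random.default_rng(5)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
rad = rng.uniform(0.8, 1.0, 400)
pts = np.c_[rad * np.cos(theta), rad * np.sin(theta)]
print(f"alpha=0.25: {len(alpha_shape_edges(pts, 0.25))} boundary edges")
```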

June 2010

The control polygon of a Bézier curve is well-defined and has geometric significance: there is a sequence of weights under which the limiting position of the curve is the control polygon. For a Bézier surface patch, there are many possible polyhedral control structures, and none is canonical. We propose a not-necessarily-polyhedral control structure for surface patches, regular control surfaces, which are certain C^0 spline surfaces. While not unique, regular control surfaces are exactly the possible limiting positions of a Bézier patch when the weights are allowed to vary.
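
The curve-case fact quoted above can be checked numerically: with concave weight exponents c_i = i(n - i), the rational Bézier curve approaches its control polygon as the weight base grows. A small sketch with hypothetical control points:

```python
import numpy as np
from math import comb

def rational_bezier(P, w, ts):
    n = len(P) - 1
    B = np.array([[comb(n, i) * t ** i * (1 - t) ** (n - i)
                   for i in range(n + 1)] for t in ts])   # Bernstein basis
    return (B * w) @ P / (B * w).sum(axis=1, keepdims=True)

def max_dist_to_polygon(pts, P):
    """Largest distance from sampled curve points to the control polygon."""
    d = np.full(len(pts), np.inf)
    for a, b in zip(P[:-1], P[1:]):
        ab = b - a
        t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)
        d = np.minimum(d, np.linalg.norm(pts - (a + t[:, None] * ab), axis=1))
    return d.max()

P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.0]])
ts = np.linspace(0.0, 1.0, 400)
n = len(P) - 1
for M in (1.0, 10.0, 100.0, 1000.0):
    w = np.array([M ** (i * (n - i)) for i in range(n + 1)])  # concave c_i
    print(f"M={M:7.1f}  max distance to polygon: "
          f"{max_dist_to_polygon(rational_bezier(P, w, ts), P):.4f}")
```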

August 2014

Blue noise refers to sample distributions that are random and well-spaced, with a variety of applications in graphics, geometry, and optimization. However, prior blue noise sampling algorithms typically suffer from the curse of dimensionality, especially when striving to cover a domain maximally; this hampers their applicability to high-dimensional domains.

We present a blue noise sampling method that achieves high quality and performance across different dimensions. Our key idea is spoke-dart sampling: sampling locally from hyper-annuli centered at prior point samples, using lines, planes, or, more generally, hyperplanes. Spoke-dart sampling is more efficient in high dimensions than the state-of-the-art alternatives, global sampling and advancing-front point sampling, and it achieves good quality as measured by differential domain spectrum and spatial coverage. In particular, it probabilistically guarantees that each coverage gap is small, whereas global sampling can only guarantee that the sum of gaps is not large. We demonstrate the advantages of our method through empirical analysis and applications across dimensions 8 to 23 in Delaunay graphs, global optimization, and motion planning.
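
A stripped-down 2D version of a line spoke conveys the local character of the method: throw a chord of the annulus around a prior sample and try points on it. The paper trims spokes against existing disks and generalizes to planes and hyperplanes; the rejection-based sketch below (fixed radius, hypothetical termination rule) is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
r = 0.05                                  # disk radius (fixed for simplicity)
samples = [np.array([0.5, 0.5])]
failures = 0

while failures < 500:                     # crude stand-in for maximality
    base = samples[rng.integers(len(samples))]
    ang = rng.uniform(0.0, 2.0 * np.pi)
    u = np.array([np.cos(ang), np.sin(ang)])
    p = base + rng.uniform(r, 2.0 * r) * u    # point on a spoke in the annulus
    if (np.all(np.abs(p - 0.5) <= 0.5)        # stay inside [0, 1]^2
            and min(np.linalg.norm(p - q) for q in samples) >= r):
        samples.append(p)
        failures = 0
    else:
        failures += 1

print(f"{len(samples)} samples at disk radius {r}")
```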

May 2013

In many graphics applications, the computation of exact geodesic distance is very important. However, the high computational cost of existing geodesic algorithms means that they are not practical for large-scale models or time-critical applications. To tackle this challenge, we propose the parallel Chen-Han (PCH) algorithm, which extends the classic Chen-Han (CH) discrete geodesic algorithm to the parallel setting. The original CH algorithm and its variants lack a parallel solution because the windows (a key data structure that carries the shortest distance in the wavefront propagation) are maintained in a strict order or a tightly coupled manner, so only one window is processed at a time. We propose dividing the sequential CH algorithm into four phases (window selection, window propagation, data organization, and event processing) so that there are no data dependences or conflicts within each phase and the operations in each phase can be carried out in parallel. The proposed PCH algorithm is able to propagate a large number of windows simultaneously and independently. We also adopt a simple yet effective strategy to control the total number of windows. We implement the PCH algorithm on modern GPUs (such as the Nvidia GTX 580) and analyze its performance in detail. The performance improvement (compared to the sequential algorithms) is highly consistent with GPU double-precision performance (GFLOPS). Extensive experiments on real-world models demonstrate an order-of-magnitude improvement in execution time compared to the state of the art.

February 2013

We formalize sampling a function using k-d darts. A k-d dart is a set of independent, mutually orthogonal, k-dimensional hyperplanes called k-d flats. A dart has d-choose-k flats, aligned with the coordinate axes for efficiency. We show that k-d darts are useful for exploring a function's properties, such as estimating its integral or finding an exemplar above a threshold. We describe a recipe for converting some algorithms from point sampling to k-d dart sampling, provided the function can be evaluated along a k-d flat.

We demonstrate that k-d darts are more efficient than point-wise samples in high dimensions, depending on the characteristics of the domain: for example, when the subregion of interest has small volume and evaluating the function along a flat is not too expensive. We present three concrete applications using line darts (1-d darts): relaxed maximal Poisson-disk sampling, high-quality rasterization of depth-of-field blur, and estimation of the probability of failure from a response surface for uncertainty quantification. Line darts achieve the same output fidelity as point sampling in less time. For Poisson-disk sampling, we use less memory, enabling the generation of larger point distributions in higher dimensions. Higher-dimensional darts provide greater accuracy for a particular volume estimation problem.
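
A line dart's advantage is easy to see on a toy volume-estimation problem: each dart integrates the indicator function along an entire axis-aligned line (here analytically, as a chord length), so every dart is by itself an unbiased area estimate. Hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
cx, cy, R = 0.5, 0.5, 0.3                 # disk inside the unit square
n = 2000

# Point darts: ordinary Monte Carlo with n random points.
pts = rng.random((n, 2))
point_est = np.mean((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2 <= R * R)

# Line darts (1-d darts): fix a random x, integrate the disk's indicator
# along the whole vertical line; the chord length is that 1D integral.
xs = rng.random(n)
h2 = R * R - (xs - cx) ** 2
chords = np.where(h2 > 0.0, 2.0 * np.sqrt(np.maximum(h2, 0.0)), 0.0)
line_est = chords.mean()

print(f"true {np.pi * R * R:.5f} | point darts {point_est:.5f} "
      f"| line darts {line_est:.5f}")
```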

August 2014

In this paper, we address the following research problem: how can we generate a meaningful split grammar that explains a given facade layout? To evaluate whether a grammar is meaningful, we propose a cost function based on description length and minimize this cost using an approximate dynamic programming framework. Our evaluation indicates that our framework extracts meaningful split grammars that are competitive with those of expert users, while some users and all competing automatic solutions are less successful.

June 2004

Valuable 3D graphical models, such as high-resolution digital scans of cultural heritage objects, may require protection to prevent piracy or misuse, while still allowing for interactive display and manipulation by a widespread audience. We have investigated techniques for protecting 3D graphics content, and we have developed a remote rendering system suitable for sharing archives of 3D models while protecting the 3D geometry from unauthorized extraction. The system consists of a 3D viewer client that includes low-resolution versions of the 3D models, and a rendering server that renders and returns images of high-resolution models according to client requests. The server implements a number of defenses to guard against 3D reconstruction attacks, such as monitoring and limiting request streams, and slightly perturbing and distorting the rendered images. We consider several possible types of reconstruction attacks on such a rendering server, and we examine how these attacks can be defended against without excessively compromising the interactive experience for non-malicious users.

May 2002

The digitization of the 3D shape of real objects is a rapidly expanding field, with applications in entertainment, design, and archaeology. We propose a new 3D model acquisition system that permits the user to rotate an object by hand and see a continuously updated model as the object is scanned. This tight feedback loop allows the user to find and fill holes in the model in real time and to determine when the object has been completely covered. Our system is based on a 60 Hz structured-light rangefinder, a real-time variant of ICP (iterative closest point) for alignment, and point-based merging and rendering algorithms. We demonstrate the ability of our prototype to scan objects faster and with greater ease than conventional model acquisition pipelines.
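
The alignment step can be sketched with the textbook point-to-point ICP iteration: match each point to its nearest neighbor, then solve for the best rigid motion in closed form (Kabsch/SVD). The paper's real-time variant is faster but follows the same match-then-align structure; the toy example below recovers a small synthetic motion.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst, dst_tree):
    """One point-to-point ICP iteration: nearest-neighbor matching followed
    by the closed-form (SVD/Kabsch) rigid alignment of the matched pairs."""
    _, idx = dst_tree.query(src)
    matched = dst[idx]
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # proper rotation, det(R) = +1
    t = mu_d - R @ mu_s
    return src @ R.T + t

rng = np.random.default_rng(8)
dst = rng.random((500, 3))
ang = 0.1                                 # small synthetic motion
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.03])

tree = cKDTree(dst)
for _ in range(20):
    src = icp_step(src, dst, tree)
print(f"residual RMS: {np.sqrt(np.mean(np.sum((src - dst) ** 2, 1))):.2e}")
```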

March 2000

This paper presents a new 3D RGB image compression scheme designed for interactive real-time applications. In designing our compression method, we have sought a compromise between two important goals, high compression ratio and fast random access, while minimizing the overhead incurred during runtime reconstruction. Our compression technique is suitable for applications wherein data are accessed in a somewhat unpredictable fashion and real-time decompression performance is necessary. The experimental results on three different kinds of 3D images from medical imaging, image-based rendering, and solid texture mapping suggest that the compression method can be used effectively in developing real-time applications that must handle large volume data made of color samples taken in three- or higher-dimensional space. Keywords: 3D volume data, data compression, Haar wavelets, random access, interactive real-time applications, medical imaging, image-based rendering, 3D text...
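
The random access the scheme targets can be illustrated with the Haar wavelets it builds on: after a Haar transform, any single sample can be reconstructed from one coefficient per level, in logarithmic time, without decoding its neighbors. A 1D sketch; the paper works on 3D blocks with quantized, sparse coefficients.

```python
import numpy as np

def haar_encode(x):
    """Full 1D Haar pyramid: root average plus per-level detail
    coefficients, coarsest level first. len(x) must be a power of two."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        details.append(0.5 * (x[0::2] - x[1::2]))
        x = 0.5 * (x[0::2] + x[1::2])
    return x[0], details[::-1]

def haar_sample(root, details, i):
    """Random access: rebuild sample i by walking root-to-leaf, touching
    one detail coefficient per level (O(log n), no full decode)."""
    v = root
    nbits = len(details)
    for level, d in enumerate(details):
        bit = (i >> (nbits - 1 - level)) & 1
        j = i >> (nbits - level)          # node index at this level
        v = v + d[j] if bit == 0 else v - d[j]
    return v

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
root, details = haar_encode(x)
print([round(haar_sample(root, details, i), 6) for i in range(8)])  # == x
```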

April 2003

Parametrization of 3D mesh data is important for many graphics applications, in particular for texture mapping, remeshing and morphing. Closed manifold genus-0 meshes are topologically equivalent to a sphere, hence this is the natural parameter domain for them. Parametrizing a triangle mesh onto the sphere means assigning a 3D position on the unit sphere to each of the mesh vertices, such that the spherical triangles induced by the mesh connectivity do not overlap. Satisfying the non-overlapping requirement is the most difficult and critical component of this process. We present a generalization of the method of barycentric coordinates for planar parametrization which solves the spherical parametrization problem, prove its correctness by establishing a connection to spectral graph theory and describe efficient numerical methods for computing these parametrizations.

December 2000

In this paper we apply perturbation methods to the problem of computing specular reflections in curved surfaces. The key idea is to generate families of closely related optical paths by expanding a given path into a high-dimensional Taylor series. Our path perturbation method is based on closed-form expressions for linear and higher-order approximations of ray paths, which are derived using Fermat's variational principle and the implicit function theorem. The perturbation formula presented here holds for general multiple-bounce reflection paths and provides a mathematical foundation for exploiting path coherence in ray tracing acceleration techniques and incremental rendering. To illustrate its use, we describe an algorithm for fast approximation of specular reflections on curved surfaces; the resulting images are of high accuracy and nearly indistinguishable from ray traced images. Keywords: perturbation theory, implicit surfaces, optics, ray tracing, specular reflection.

November 2000

Ray tracers, which sample radiance, are usually regarded as offline rendering algorithms that are too slow for interactive use. In this article we present a system that exploits object-space, ray-space, image-space, and temporal coherence to accelerate ray tracing. Our system uses per-surface interpolants to approximate radiance while conservatively bounding error. The techniques introduced in this article should enhance both interactive and batch ray tracers. Our approach explicitly decouples the two primary operations of a ray tracer (shading and visibility determination) and accelerates each of them independently. Shading is accelerated by quadrilinearly interpolating lazily acquired radiance samples. Interpolation error does not exceed a user-specified bound, allowing the user to control performance/quality tradeoffs. Error is bounded by adaptive sampling at discontinuities and radiance nonlinearities. Visibility determination at pixels is accelerated by reprojecting interpolants as the user's viewpoint changes. A fast scan-line algorithm then achieves high performance without sacrificing image quality. For a smoothly varying viewpoint, the combination of lazy interpolants and reprojection substantially accelerates the ray tracer. Additionally, an efficient cache management algorithm keeps the memory footprint of the system small with negligible overhead.

June 2003

Good character animation requires convincing skin deformations, including subtleties and details like muscle bulges. Such effects are typically created in commercial animation packages, which provide very general and powerful tools. While these systems are convenient and flexible for artists, the generality often leads to characters that are slow to compute or that require a substantial amount of memory, and thus cannot be used in interactive systems. Instead, interactive systems restrict artists to a specific character deformation model which is fast and memory efficient but is notoriously difficult to author and can suffer from many deformation artifacts. This paper presents an automated framework that allows character artists to use the full complement of tools in high-end systems to create characters for interactive systems. Our method starts with an arbitrarily rigged character in an animation system. A set of examples is exported, consisting of skeleton configurations paired with the deformed geometry as static meshes. Using these examples, we fit the parameters of a deformation model that best approximates the original data yet remains fast to compute and compact in memory. Keywords: interactive, skin, approximation.
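
The fitting step can be sketched in its simplest form: if the deformation model were plain linear blend skinning, the per-vertex blend weights would solve a nonnegative least-squares problem over the exported example pairs. The sketch below recovers known weights from synthetic rotation-only examples; the paper's deformation model and fitting procedure are richer than this.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical exported examples: one vertex, two bones, six poses. The
# observations are generated by ground-truth blend weights (0.3, 0.7),
# which the linear-blend-skinning fit should recover.
v_rest = np.array([1.0, 0.2, 0.0])
w_true = np.array([0.3, 0.7])
rows, rhs = [], []
for _ in range(6):
    T = [rot_z(rng.uniform(-1.0, 1.0)), rot_z(rng.uniform(-1.0, 1.0))]
    v_def = sum(w * (Tb @ v_rest) for w, Tb in zip(w_true, T))
    rows.append(np.column_stack([Tb @ v_rest for Tb in T]))   # 3 x 2 block
    rhs.append(v_def)

A, b = np.vstack(rows), np.concatenate(rhs)
w, _ = nnls(A, b)                      # nonnegative blend weights
print(f"recovered weights: {w.round(4)}  (true: {w_true})")
```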