## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

This paper discusses a method for displaying surfaces that exhibit sparkling and depth effects. Sparkling effects are usually caused by metallic flakes diffused in a paint or coating. The novelty of the approach is to explicitly model the sparkle normal vectors for rendering on an embedded device, which allows us to perceive depth effects in virtual-reality-like applications. Light redirected by the flakes in miscellaneous directions produces a random twinkling-particle effect. Since each eye receives light from a different direction, the two eyes perceive two distinct images with different random particles. This causes the particles to be perceived at a certain depth. We have created an application that renders the sparkling effect with arbitrary distributions of sparkles.
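As an illustration of the idea in this abstract (not the paper's actual renderer), the sparkle effect can be sketched by instantiating random flake normals and shading each eye from its own view direction; all names and parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flake_normals(n, roughness=0.3):
    """Sample flake normals as random perturbations of the surface normal (0, 0, 1)."""
    xy = rng.normal(scale=roughness, size=(n, 2))
    normals = np.column_stack([xy, np.ones(n)])
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

def sparkle_intensity(normals, light_dir, view_dir, shininess=200.0):
    """Specular glint: flakes whose normal nearly bisects light and view light up."""
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)
    return np.clip(normals @ half, 0.0, 1.0) ** shininess

flakes = random_flake_normals(10000)
light = np.array([0.0, 0.0, 1.0])
left_eye = np.array([-0.05, 0.0, 1.0]); left_eye /= np.linalg.norm(left_eye)
right_eye = np.array([0.05, 0.0, 1.0]); right_eye /= np.linalg.norm(right_eye)

# Each eye sees a different random subset of flakes light up, which is
# what produces the perceived depth of the sparkles in stereo viewing.
left_img = sparkle_intensity(flakes, light, left_eye)
right_img = sparkle_intensity(flakes, light, right_eye)
```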


This paper presents a novel framework for simulating the stretching and wiggling of liquids. We demonstrate that complex phase-interface dynamics can be effectively simulated by introducing the Eulerian vortex sheet method, which focuses on the vorticity at the interface (rather than the whole domain). We extend this model to provide user control for the production of visual effects. Then, the generated fluid flow creates complex surface details, such as thin and wiggling fluid sheets. To capture such high-frequency features efficiently, this work employs a denser grid for surface tracking in addition to the (coarser) simulation grid. In this context, the paper proposes a filter, called the liquid-biased filter, which is able to downsample the surface in the high-resolution grid into the coarse grid without unrealistic volume loss resulting from aliasing error. The proposed method, which runs on a single PC, realistically reproduces complex fluid scenes.

Current linear modal sound models are tightly coupled with their frequency content. Both the modal vibration of object surfaces and the resulting sound radiation depend on the vibration frequency. Whenever the user tweaks modal parameters to adjust frequencies, the modal sound model changes completely, necessitating expensive recomputation of modal vibration and sound radiation.
We propose a new method for interactive and continuous editing as well as exploration of modal sound parameters. We start by sampling a number of key points around a vibrating object, and then devise a compact, low-memory representation of frequency-varying acoustic transfer values at each key point using Prony series. We efficiently precompute these series using an adaptive frequency sweeping algorithm and volume-velocity-preserving mesh simplification. At runtime, we approximate acoustic transfer values using standard multipole expansions. Given user-specified modal frequencies, we solve a small least-squares system to estimate the expansion coefficients, and thereby quickly compute the resulting sound pressure value at arbitrary listening locations. We demonstrate the numerical accuracy, the runtime performance of our method on a set of comparisons and examples, and evaluate sound quality with user perception studies.
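The representation described above is built on Prony series, i.e. sums of complex exponentials. A textbook Prony fit to uniformly sampled data, not the paper's adaptive frequency-sweeping variant, might look like:

```python
import numpy as np

def prony_fit(x, p):
    """Fit x[n] ~ sum_k c_k * z_k**n with p exponential terms (classic Prony)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # 1. Linear prediction: x[n] = -a_0 x[n-1] - ... - a_{p-1} x[n-p]
    A = np.column_stack([x[p - m - 1:N - m - 1] for m in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    # 2. The exponential bases are the roots of the prediction polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3. Amplitudes by Vandermonde least squares
    V = np.vander(z, N, increasing=True).T          # V[n, k] = z_k**n
    c, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, c

def prony_eval(z, c, n):
    """Evaluate the fitted series at samples 0..n-1."""
    return np.vander(z, n, increasing=True).T @ c
```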

Micro-appearance models explicitly model the interaction of light with microgeometry at the fiber scale to produce realistic appearance. To effectively match them to real fabrics, we introduce a new appearance matching framework to determine their parameters. Given a micro-appearance model and photographs of the fabric under many different lighting conditions, we optimize for parameters that best match the photographs using a method based on calculating derivatives during rendering. This highly applicable framework, we believe, is a useful research tool because it simplifies development and testing of new models.
Using the framework, we systematically compare several types of micro-appearance models. We acquired computed microtomography (micro CT) scans of several fabrics, photographed the fabrics under many viewing/illumination conditions, and matched several appearance models to this data. We compare a new fiber-based light scattering model to the previously used microflake model. We also compare representing cloth microgeometry using volumes derived directly from the micro CT data to using explicit fibers reconstructed from the volumes. From our comparisons, we make the following conclusions: (1) given a fiber-based scattering model, volume- and fiber-based microgeometry representations are capable of very similar quality, and (2) using a fiber-specific scattering model is crucial to good results as it achieves considerably higher accuracy than prior work.

Recent digital fabrication tools have opened up access to personalized rapid prototyping; however, such tools are limited to product-scale objects. The materials currently available for 3D printing are too fine for large-scale objects, and CNC gantry sizes limit the scope of printable objects. In this paper, we propose a new method for printing architecture-scale objects. Our proposal includes three developments: (i) a construction material consisting of chopsticks and glue, (ii) a handheld chopstick dispenser, and (iii) a printing guidance system that uses projection mapping. The proposed chopstick-glue material is cost effective, environmentally sustainable, and can be printed more quickly than conventional materials. The developed handheld dispenser enables consistent feeding of the chopstick-glue composite. The printing guidance system - consisting of a depth camera and a projector - evaluates a given shape in real time and indicates where humans should deposit chopsticks by projecting a simple color code onto the form under construction. Given the mechanical specifications of the stick-glue composite, an experimental pavilion was designed as a case study of the proposed method and built without scaffolding or formwork. The case study also revealed several fundamental limitations, such as the projector not working in daylight, which require future investigation.

Metallophones such as glockenspiels produce sounds in response to contact. Building these instruments is a complicated process, limiting their shapes to well-understood designs such as bars. We automatically optimize the shape of arbitrary 2D and 3D objects through deformation and perforation to produce sounds when struck which match user-supplied frequency and amplitude spectra. This optimization requires navigating a complex energy landscape, for which we develop Latin Complement Sampling to both speed up finding minima and provide probabilistic bounds on landscape exploration. Our method produces instruments which perform similarly to those that have been professionally-manufactured, while also expanding the scope of shape and sound that can be realized, e.g., single object chords. Furthermore, we can optimize sound spectra to create overtones and to dampen specific frequencies. Thus our technique allows even novices to design metallophones with unique sound and appearance.

We introduce the Symmetric GGX (SGGX) distribution to represent spatially-varying properties of anisotropic microflake participating media. Our key theoretical insight is to represent a microflake distribution by the projected area of the microflakes. We use the projected area to parameterize the shape of an ellipsoid, from which we recover a distribution of normals. The representation based on the projected area allows for robust linear interpolation and prefiltering, and thanks to its geometric interpretation, we derive closed form expressions for all operations used in the microflake framework. We also incorporate microflakes with diffuse reflectance in our theoretical framework.
This allows us to model the appearance of rough diffuse materials in addition to rough specular materials. Finally, we use the idea of sampling the distribution of visible normals to design a perfect importance sampling technique for our SGGX microflake phase functions. It is analytic, deterministic, simple to implement, and one order of magnitude faster than previous work.
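For reference, the SGGX distribution of normals admits the closed form D(w) = 1 / (pi * sqrt(det S) * (w^T S^-1 w)^2), with projected area sigma(w) = sqrt(w^T S w); a direct transcription:

```python
import numpy as np

def sggx_ndf(w, S):
    """SGGX distribution of normals D(w) for a unit direction w and SPD matrix S:
        D(w) = 1 / (pi * sqrt(det S) * (w^T S^-1 w)^2)
    """
    q = w @ np.linalg.inv(S) @ w
    return 1.0 / (np.pi * np.sqrt(np.linalg.det(S)) * q * q)

def sggx_projected_area(w, S):
    """sigma(w) = sqrt(w^T S w): projected area of the microflakes along w."""
    return np.sqrt(w @ S @ w)
```

For the isotropic case S = I this reduces to D(w) = 1/pi and sigma(w) = 1, which is a convenient sanity check when implementing the ellipsoid parameterization.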

We present a triangle mesh-based technique for tracking the evolution of three-dimensional multimaterial interfaces undergoing complex deformations. It is the first non-manifold triangle mesh tracking method to simultaneously maintain intersection-free meshes and support the proposed broad set of multimaterial remeshing and topological operations. We represent the interface as a non-manifold triangle mesh with material labels assigned to each half-face to distinguish volumetric regions. Starting from proposed application-dependent vertex velocities, we deform the mesh, seeking a non-intersecting, watertight solution. This goal necessitates development of various collision-safe, label-aware non-manifold mesh operations: multimaterial mesh improvement; T1 and T2 processes, topological transitions arising in foam dynamics and multiphase flows; and multimaterial merging, in which a new interface is created between colliding materials. We demonstrate the robustness and effectiveness of our approach on a range of scenarios including geometric flows and multiphase fluid animation.

In computer graphics, rendering visually detailed scenes is often achieved through texturing. We propose a method for on-the-fly non-periodic infinite texturing of surfaces based on a single image. Pattern repetition is avoided by defining patches within each texture whose content can be changed at runtime. In addition, we manage multiple scales consistently, using one input image per represented scale. Undersampling artifacts are avoided by accounting for fine-scale features while colors are transferred between scales. Finally, we allow for relief-enhanced rendering and provide a tool for intuitive creation of height maps. This is done using an ad-hoc local descriptor that measures feature self-similarity in order to propagate height values provided by the user for only a few selected texels.
Thanks to the patch-based system, the manipulated data are compact and our texturing approach is easy to implement on the GPU. The multi-scale extension is capable of rendering finely detailed textures in real time.

Humans recognize objects visually on the basis of material composition as well as shape. To achieve a certain level of photorealism, it is necessary to analyze how materials scatter the incident light. The key quantity for expressing the directional optical effect of a material on the incident radiance is the bidirectional reflectance distribution function (BRDF). Our work is devoted to BRDF measurements, mostly of metallic paints, for rendering synthetic images. We measured the spectral reflectance of multiple paint samples and then fit an analytical BRDF model to the measured data in order to acquire its parameters. In this paper we describe the methodology of image synthesis from measured data. Materials such as metallic paints exhibit a sparkling effect caused by the metallic particles scattered within the paint volume. Our analysis of the sparkling effect is based on processing multiple photographs. The results of the analysis and the measurements were incorporated into the rendering process of car paint.
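A toy version of the fitting step, least-squares estimation of analytic BRDF parameters from reflectance samples, can be sketched as follows; a simple Lambert-plus-Phong lobe with a fixed exponent stands in for the paper's metallic-paint model, and all names are hypothetical:

```python
import numpy as np

def fit_lambert_phong(cos_i, cos_h, measured, n=50.0):
    """Least-squares fit of kd, ks in  r = kd*cos_i + ks*cos_h**n .
    cos_i: cosine of incidence angle; cos_h: cosine of the half-angle."""
    A = np.column_stack([cos_i, cos_h ** n])
    (kd, ks), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return kd, ks

# Synthetic "measurements" from known parameters, to exercise the fit.
rng = np.random.default_rng(1)
cos_i = rng.uniform(0.1, 1.0, 200)
cos_h = rng.uniform(0.1, 1.0, 200)
truth = 0.4 * cos_i + 0.8 * cos_h ** 50.0
kd, ks = fit_lambert_phong(cos_i, cos_h, truth)
```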

We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object surface. These air pockets are easily produced during the fabrication process of the object, without any additional material or postprocessing. Meanwhile, the air pockets affect only the scattering light transport under the surface, and are thus hard to notice with the naked eye. However, using a computational imaging method, the tags become detectable. We present a tool that automates the design of air pockets for the user to encode information. The AirCode system also allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications in metadata embedding, robotic grasping, and conveying object affordances.

The diverse interactions between hair and liquid are complex and span multiple length scales, yet are central to the appearance of humans and animals in many situations. We therefore propose a novel multi-component simulation framework that treats many of the key physical mechanisms governing the dynamics of wet hair. The foundations of our approach are a discrete rod model for hair and a particle-in-cell model for fluids. To treat the thin layer of liquid that clings to the hair, we augment each hair strand with a height field representation. Our contribution is to develop the necessary physical and numerical models to evolve this new system and the interactions among its components. We develop a new reduced-dimensional liquid model to solve the motion of the liquid along the length of each hair, while accounting for its moving reference frame and influence on the hair dynamics. We derive a faithful model for surface tension-induced cohesion effects between adjacent hairs, based on the geometry of the liquid bridges that connect them. We adopt an empirically-validated drag model to treat the effects of coarse-scale interactions between hair and surrounding fluid, and propose new volume-conserving dripping and absorption strategies to transfer liquid between the reduced and particle-in-cell liquid representations. The synthesis of these techniques yields an effective wet hair simulator, which we use to animate hair flipping, an animal shaking itself dry, a spinning car wash roller brush dunked in liquid, and intricate hair coalescence effects, among several additional scenarios.

Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. Through a method known as 'literate programming', the authors combine human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, users will learn to design and employ a fully-featured rendering system for creating stunning imagery. This completely updated and revised edition includes new coverage on ray-tracing hair and curves primitives, numerical precision issues with ray tracing, LBVHs, realistic camera models, the measurement equation, and much more. It is a must-have, full color resource on physically-based rendering. Presents up-to-date revisions of the seminal reference on rendering, including new sections on bidirectional path tracing, numerical robustness issues in ray tracing, realistic camera models, and subsurface scattering. Provides the source code for a complete rendering system allowing readers to get up and running fast. Includes a unique indexing feature, literate programming, that lists the locations of each function, variable, and method on the page where they are first described. Serves as an essential resource on physically-based rendering.

The third edition of this classic tutorial and reference on procedural texturing and modeling is thoroughly updated to meet the needs of today's 3D graphics professionals and students. New for this edition are chapters devoted to real-time issues, cellular texturing, geometric instancing, hardware acceleration, futuristic environments, and virtual universes. In addition, the familiar authoritative chapters on which readers have come to rely contain all-new material covering L-systems, particle systems, scene graphs, spot geometry, bump mapping, cloud modeling, and noise improvements. There are many new spectacular color images to enjoy, especially in this edition's full-color format. As in the previous editions, the authors, who are the creators of the methods they discuss, provide extensive, practical explanations of widely accepted techniques as well as insights into designing new ones. New to the third edition are chapters by two well-known contributors: Bill Mark of NVIDIA and John Hart of the University of Illinois at Urbana-Champaign on state-of-the-art topics not covered in former editions. An accompanying Web site (www.texturingandmodeling.com) contains all of the book's sample code in C code segments (all updated to the ANSI C Standard) or in RenderMan shading language, plus files of many magnificent full-color illustrations. No other book on the market contains the breadth of theoretical and practical information necessary for applying procedural methods. More than ever, Texturing & Modeling remains the chosen resource for professionals and advanced students in computer graphics and animation.

An interactive method for segmentation and isosurface extraction of medical volume data is proposed. In conventional methods, users decompose a volume into multiple regions iteratively, segment each region using a threshold, and then manually clean the segmentation result by removing clutter in each region. However, this is tedious and requires many mouse operations from different camera views. We propose an alternative approach whereby the user simply applies painting operations to the volume using tools commonly seen in painting systems, such as flood fill and brushes. This significantly reduces the number of mouse and camera control operations. Our technical contribution is in the introduction of the threshold field, which assigns spatially-varying threshold values to individual voxels. This generalizes discrete decomposition of a volume into regions and segmentation using a constant threshold in each region, thereby offering a much more flexible and efficient workflow. This paper describes the details of the user interaction and its implementation. Furthermore, the results of a user study are discussed. The results indicate that the proposed method can be a few times faster than a conventional method.
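The threshold-field idea can be illustrated with a 2D flood fill that consults a per-voxel threshold instead of a single global one; this is a sketch, not the paper's implementation:

```python
from collections import deque
import numpy as np

def flood_fill_threshold_field(volume, threshold_field, seed):
    """Flood fill that accepts a voxel when its value exceeds the *local*
    threshold, so the acceptance criterion can vary across the volume."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        if mask[i, j] or volume[i, j] < threshold_field[i, j]:
            continue
        mask[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < volume.shape[0] and 0 <= nj < volume.shape[1] and not mask[ni, nj]:
                queue.append((ni, nj))
    return mask
```

Painting with a brush would then amount to locally editing `threshold_field` and re-running the fill, rather than decomposing the volume into discrete regions.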

This paper explores methods for synthesizing physics-based bubble sounds directly from two-phase incompressible simulations of bubbly water flows. By tracking fluid-air interface geometry, we identify bubble geometry and topological changes due to splitting, merging and popping. A novel capacitance-based method is proposed that can estimate volume-mode bubble frequency changes due to bubble size, shape, and proximity to solid and air interfaces. Our acoustic transfer model is able to capture cavity resonance effects due to near-field geometry, and we also propose a fast precomputed bubble-plane model for cheap transfer evaluation. In addition, we consider a bubble forcing model that better accounts for bubble entrainment, splitting, and merging events, as well as a Helmholtz resonator model for bubble popping sounds. To overcome frequency bandwidth limitations associated with coarse resolution fluid grids, we simulate micro-bubbles in the audio domain using a power-law model of bubble populations. Finally, we present several detailed examples of audiovisual water simulations and physical experiments to validate our frequency model.
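The classical free-field baseline that such frequency models refine is the Minnaert resonance of a spherical bubble:

```python
import math

def minnaert_frequency(radius, p0=101325.0, rho=998.0, gamma=1.4):
    """Volume-mode ('breathing') resonance of a spherical air bubble in water:
        f0 = (1 / (2*pi*r)) * sqrt(3*gamma*p0 / rho)   (Minnaert, 1933)
    radius in meters, ambient pressure p0 in Pa, water density rho in kg/m^3.
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius)
```

A 1 mm bubble resonates near 3.3 kHz, which is why small bubbles dominate the familiar high-pitched sound of splashing water; the paper's capacitance-based estimator corrects this free-field value for bubble shape and nearby interfaces.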

Acoustic filters have a wide range of applications, yet customizing them with desired properties is difficult. Motivated by recent progress in additive manufacturing that allows for fast prototyping of complex shapes, we present a computational approach that automates the design of acoustic filters with complex geometries. In our approach, we construct an acoustic filter comprised of a set of parameterized shape primitives, whose transmission matrices can be precomputed. Using an efficient method of simulating the transmission matrix of an assembly built from these underlying primitives, our method is able to optimize both the arrangement and the parameters of the acoustic shape primitives in order to satisfy target acoustic properties of the filter. We validate our results against industrial laboratory measurements and high-quality off-line simulations. We demonstrate that our method enables a wide range of applications including muffler design, musical wind instrument prototyping, and encoding imperceptible acoustic information into everyday objects.
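A common way to precompute and combine transmission matrices, which the abstract alludes to, is the plane-wave transfer-matrix method: each duct segment gets a 2x2 matrix relating pressure and volume velocity at its two ends, and a series assembly is their product. A sketch under that assumption (not the paper's primitive set):

```python
import numpy as np

RHO_C = 413.3  # characteristic impedance of air (kg / m^2 s), approx. at 20 C

def duct_segment(length, area, k):
    """Plane-wave transmission matrix of a uniform duct segment, relating
    (pressure, volume velocity) at the inlet to those at the outlet."""
    Z = RHO_C / area
    kl = k * length
    return np.array([[np.cos(kl),           1j * Z * np.sin(kl)],
                     [1j * np.sin(kl) / Z,  np.cos(kl)]])

def cascade(segments):
    """Overall transmission matrix of primitives connected in series."""
    T = np.eye(2, dtype=complex)
    for seg in segments:
        T = T @ seg
    return T
```

A useful consistency check: two half-length segments cascaded must equal one full segment, and each segment matrix has unit determinant (reciprocity).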

We propose a novel surface-only technique for simulating incompressible, inviscid and uniform-density liquids with surface tension in three dimensions. The liquid surface is captured by a triangle mesh on which a Lagrangian velocity field is stored. Because advection of the velocity field may violate the incompressibility condition, we devise an orthogonal projection technique to remove the divergence while requiring the evaluation of only two boundary integrals. The forces of surface tension, gravity, and solid contact are all treated by a boundary element solve, allowing us to perform detailed simulations of a wide range of liquid phenomena, including waterbells, droplet and jet collisions, fluid chains, and crown splashes.

Fabrics play a significant role in many applications in design, prototyping, and entertainment. Recent fiber-based models capture the rich visual appearance of fabrics, but are too onerous to design and edit. Yarn-based procedural models are powerful and convenient, but too regular and not realistic enough in appearance. In this paper, we introduce an automatic fitting approach to create high-quality procedural yarn models of fabrics with fiber-level details. We fit CT data to procedural models to automatically recover a full range of parameters, and augment the models with a measurement-based model of flyaway fibers. We validate our fabric models against CT measurements and photographs, and demonstrate the utility of this approach for fabric modeling and editing.
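The geometric core of a procedural yarn model, fibers following helices around the yarn centerline, can be sketched as below (flyaway fibers and cross-section migration, which the paper also fits, are omitted; all parameter names are illustrative):

```python
import numpy as np

def yarn_fibers(n_fibers=32, n_points=200, yarn_radius=0.1,
                twist=8.0, length=1.0, seed=0):
    """Generate fiber centerlines as helices twisting around a straight
    yarn center along the z axis; returns a list of (n_points, 3) arrays."""
    rng = np.random.default_rng(seed)
    z = np.linspace(0.0, length, n_points)
    fibers = []
    for _ in range(n_fibers):
        r = yarn_radius * np.sqrt(rng.uniform())  # uniform over the cross-section disk
        phase = rng.uniform(0.0, 2.0 * np.pi)
        theta = 2.0 * np.pi * twist * z + phase   # 'twist' turns per unit length
        fibers.append(np.column_stack([r * np.cos(theta), r * np.sin(theta), z]))
    return fibers
```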

Photorealistic rendering of real-world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games, and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, viewer perception of objects depends on the lighting and on object/material/surface characteristics: the way a surface interacts with light, how light is reflected, scattered, or absorbed by the surface, and the impact these characteristics have on material appearance. In order to reproduce this, it is necessary to understand how materials interact with light, which is why the representation and acquisition of material models has become such an active research area. This survey of the state of the art in BRDF representation and acquisition presents an overview of BRDF (bidirectional reflectance distribution function) models used to represent surface/material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.

The fast multipole method is one of the most important algorithms in computing developed in the 20th century. Along with the fast multipole method, the boundary element method (BEM) has also emerged, as a powerful method for modeling large-scale problems. BEM models with millions of unknowns on the boundary can now be solved on desktop computers using the fast multipole BEM. This is the first book on the fast multipole BEM, which brings together the classical theories in BEM formulations and the recent development of the fast multipole method. Two- and three-dimensional potential, elastostatic, Stokes flow, and acoustic wave problems are covered, supplemented with exercise problems and computer source codes. Applications in modeling nanocomposite materials, bio-materials, fuel cells, acoustic waves, and image-based simulations are demonstrated to show the potential of the fast multipole BEM. This book will help students, researchers, and engineers to learn the BEM and fast multipole method from a single source.

A model is presented for the structure of staple fibre yarns, which includes some random characteristics. A number of properties of the yarn are then evaluated for these structures. The results are compared with experimental values for yarns made from Tencel fibres.

Solving the N-body problem, i.e. the Poisson problem with point sources, is a common task in graphics and simulation. The naive direct summation of the kernel function over all particles scales quadratically, rendering it too slow for large problems, while the optimal Fast Multipole Method has drastic implementation complexity and can sometimes carry too high an overhead to be practical. We present a new Particle-Particle Particle-Mesh (PPPM) algorithm which is fast, accurate, and easy to implement even in parallel on a GPU. We capture long-range interactions with a fast multigrid solver on a background grid with a novel boundary condition, while short-range interactions are calculated directly with a new error compensation to avoid error from the background grid. We demonstrate the power of PPPM with a new vortex particle smoke solver, which features a vortex-segment approach to the stretching term, potential flow to enforce no-stick boundary conditions on arbitrary moving solids, and a new mechanism for vortex shedding from boundary layers. Comparison against a simpler Vortex-in-Cell approach shows PPPM can produce significantly more detailed results with less computation. In addition, we use our PPPM solver for a Poisson surface reconstruction problem to show its potential as a general-purpose Poisson solver.
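The quadratically-scaling direct summation that PPPM and the fast multipole method accelerate is compact to write down, which makes it a handy reference implementation for accuracy checks:

```python
import numpy as np

def direct_sum_potential(positions, charges, eps=1e-9):
    """O(N^2) direct evaluation of the 1/r kernel at every particle.
    This is the brute-force baseline that PPPM / FMM accelerate."""
    d = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
    r = np.sqrt((d * d).sum(-1))
    np.fill_diagonal(r, np.inf)                         # exclude self-interaction
    return (charges[None, :] / np.maximum(r, eps)).sum(-1)
```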

The author presents some straightforward algorithms for the generation and display in 3-D of fractal shapes. These techniques are very general and particularly adapted to shapes which are much more costly to generate than to display, such as those fractal surfaces defined by iteration of algebraic transformations. In order to deal with the large space and time requirements of calculating these shapes, the author introduces a boundary-tracking algorithm particularly adapted for array-processor implementation. The resulting surfaces are then shaded and displayed using z-buffer type algorithms. A new class of displayable geometric objects, with great diversity of form and texture, is introduced by these techniques.

We present a novel approach for wave-based sound propagation suitable for large, open spaces spanning hundreds of meters, with a small memory footprint. The scene is decomposed into disjoint rigid objects. The free-field acoustic behavior of each object is captured by a compact per-object transfer function relating the amplitudes of a set of incoming equivalent sources to outgoing equivalent sources. Pairwise acoustic interactions between objects are computed analytically to yield compact inter-object transfer functions. The global sound field accounting for all orders of interaction is computed using these transfer functions. The runtime system uses fast summation over the outgoing equivalent source amplitudes for all objects to auralize the sound field for a moving listener in real time. We demonstrate realistic acoustic effects such as diffraction, low-passed sound behind obstructions, focusing, scattering, high-order reflections, and echoes on a variety of scenes.

We present a method to increase the apparent resolution of particle-based liquid simulations. Our method first outputs a dense, temporally coherent, regularized point set from a coarse particle-based liquid simulation. We then apply a surface-only Lagrangian wave simulation to this high-resolution point set. We develop novel methods for seeding and simulating waves over surface points, and use them to generate high-resolution details. We avoid error-prone surface mesh processing, and robustly propagate waves without the need for explicit connectivity information. Our seeding strategy combines a robust curvature evaluation with multiple bands of seeding oscillators, injects waves with arbitrarily fine-scale structures, and properly handles obstacle boundaries. We generate detailed fluid surfaces from coarse simulations as an independent post-process that can be applied to most particle-based fluid solvers.

Most visual effects fluid solvers use a time-splitting approach where velocity is first advected in the flow, then projected to be incompressible with pressure. Even if a highly accurate advection scheme is used, the self-advection step typically transfers some kinetic energy from divergence-free modes into divergent modes, which are then projected out by pressure, losing energy noticeably for large time steps. Instead of taking smaller time steps or using significantly more complex time integration, we propose a new scheme called IVOCK (Integrated Vorticity of Convective Kinematics) which cheaply captures much of what is lost in self-advection by identifying it as a violation of the vorticity equation. We measure vorticity on the grid before and after advection, taking into account vortex stretching, and use a cheap multigrid V-cycle approximation to a vector potential whose curl will correct the vorticity error. IVOCK works independently of the advection scheme (we present examples with various semi-Lagrangian methods and FLIP), of how boundary conditions are applied (it just corrects error in advection, leaving pressure etc. to take care of boundaries and other forces), and of other solver parameters (we provide smoke, fire, and water examples). For 10-25% extra computation time per step, much larger steps can be used, while producing detailed vortical structures and convincing turbulence that are lost without the correction.
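Measuring vorticity on the grid, the quantity IVOCK compares before and after advection, reduces to finite differences of the velocity field; in 2D it is a single scalar per cell:

```python
import numpy as np

def vorticity_2d(u, v, dx):
    """Scalar vorticity w = dv/dx - du/dy on a uniform 2D grid via central
    differences (axis 0 is y, axis 1 is x). This is the per-cell quantity a
    scheme like IVOCK would compare before and after advection."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dx, axis=0)
    return dv_dx - du_dy
```

For a rigid rotation (u, v) = (-y, x) this returns the expected constant vorticity of 2, a quick correctness check.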

Material appearance acquisition usually makes a trade-off between acquisition effort and richness of reflectance representation. In this paper, we instead aim for both a light-weight acquisition procedure and a rich reflectance representation simultaneously, by restricting ourselves to one, but very important, class of appearance phenomena: texture-like materials. While such materials' reflectance is generally spatially varying, they exhibit self-similarity in the sense that for any point on the texture there exist many others with similar reflectance properties. We show that the texturedness assumption allows reflectance capture using only two images of a planar sample, taken with and without a headlight flash. Our reconstruction pipeline starts with redistributing reflectance observations across the image, followed by a regularized texture statistics transfer and a nonlinear optimization to fit a spatially-varying BRDF (SVBRDF) to the resulting data. The final result describes the material as spatially-varying, diffuse and specular, anisotropic reflectance over a detailed normal map. We validate the method by side-by-side and novel-view comparisons to photographs, comparing normal map resolution to sub-micron ground truth scans, as well as simulated results. Our method is robust enough to use handheld, JPEG-compressed photographs taken with a mobile phone camera and built-in flash.

Many graph drawing methods apply node clustering techniques based on density of edges to find tightly connected subgraphs, and then hierarchically visualize the clustered graphs. On the other hand, in some applications users may want to focus on important nodes (called key nodes in this paper) and their connections to groups of other nodes. It is not always preferable to apply common graph clustering techniques for this purpose, because such key nodes are often hidden in large clusters. For this requirement, it is effective to separately visualize the key nodes detected based on adjacency and attributes of the nodes. This paper presents a graph visualization technique for attribute-embedded graphs that applies a graph clustering algorithm taking both connections and attributes into account. The graph clustering step divides the nodes according to the commonality of connected nodes and the similarity of feature value vectors. It then calculates the distances between arbitrary pairs of clusters according to the number of connecting edges and the similarity of feature value vectors, and finally places the clusters based on the distances. Consequently, the technique separates important nodes which have connections to multiple large clusters, and improves the visibility of the connections of such nodes.

In this survey we overview the definitions and methods for graph clustering, that is, finding sets of ''related'' vertices in graphs. We review the many definitions for what is a cluster in a graph and measures of cluster quality. Then we present global algorithms for producing a clustering for the entire vertex set of an input graph, after which we discuss the task of identifying a cluster for a specific seed vertex by local computation. Some ideas on the application areas of graph clustering algorithms are given. We also address the problematics of evaluating clusterings and benchmarking cluster algorithms.

We present the first 3D algorithm capable of answering the question: what would a Mandelbrot-like set in the shape of a bunny look like? More concretely, can we find an iterated quaternion rational map whose potential field contains an isocontour with a desired shape? We show that it is possible to answer this question by casting it as a shape optimization that discovers novel, highly complex shapes. The problem can be written as an energy minimization, the optimization can be made practical by using an efficient method for gradient evaluation, and convergence can be accelerated by using a variety of multi-resolution strategies. The resulting shapes are not invariant under common operations such as translation, and instead undergo intricate, non-linear transformations.
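
The iterated quaternion maps at the heart of this optimization generalize the familiar complex quadratic iteration. A minimal escape-time sketch (the paper optimizes the coefficients of rational maps rather than using this fixed quadratic, and the bailout and iteration cap here are conventional assumptions):

```python
def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def escapes(q, c, max_iter=64, bailout=4.0):
    # Escape-time test for the quaternion map q -> q*q + c:
    # report whether the orbit leaves the bailout radius.
    for _ in range(max_iter):
        q = tuple(x + y for x, y in zip(qmul(q, q), c))
        if sum(x * x for x in q) > bailout:
            return True
    return False
```

The potential field whose isocontours the paper optimizes is, roughly, a smooth version of this binary escape test.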

Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.

Multispectral and polarized light reflectance measurements are very useful for characterizing materials such as paint coatings. This article presents an overview of an automated, high-angular-resolution, in-plane multispectral polarized reflectometer and its calibration process. A comprehensive study based on multispectral BRDF and DOLP measurements is conducted on paint coatings with different colours and gloss characteristics. An original inverse method based on in-plane measurements is used to model the out-of-plane BRDF and to investigate the roles of surface and subsurface scattering phenomena in its components.
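
The degree of linear polarization (DOLP) used in such studies is computed from Stokes parameters, which are themselves recoverable from four measurements behind a rotating linear polarizer. A minimal sketch (the instrument's actual calibration and measurement pipeline is far more involved; function names are assumptions):

```python
import math

def stokes_from_polarizer(i0, i45, i90, i135):
    # Stokes I, Q, U from intensities measured through a linear
    # polarizer at 0, 45, 90, and 135 degrees.
    I = 0.5 * (i0 + i45 + i90 + i135)
    Q = i0 - i90
    U = i45 - i135
    return I, Q, U

def degree_of_linear_polarization(I, Q, U):
    # DOLP = sqrt(Q^2 + U^2) / I, in [0, 1].
    return math.sqrt(Q * Q + U * U) / I if I else 0.0
```

A perfectly specular (polarized) return gives DOLP near 1, while diffuse subsurface scattering depolarizes the light and drives DOLP toward 0, which is what lets such measurements separate the two phenomena.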

We present two approaches for acquiring spatially varying reflectance of planar samples using a mobile device. For samples with rough specular BRDF, we propose to employ the back camera and flash pair on any typical mobile device for freeform handheld reflectance acquisition using dense backscattering measurements under flash illumination. For samples with highly specular BRDF, we instead employ a 10" tablet for illuminating the sample with extended illumination while employing the front camera for reflectance acquisition. With this setup, we also exploit the tablet's LCD screen polarization for diffuse-specular separation.
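
The LCD-polarization trick works because a crossed polarizer blocks the (still polarized) specular reflection while passing half of the depolarized diffuse light. A minimal per-pixel separation sketch (illustrative only; function and variable names are assumptions):

```python
def separate_diffuse_specular(i_parallel, i_cross):
    # The cross-polarized image blocks specular reflection but also
    # half of the depolarized diffuse light, so:
    #   diffuse  = 2 * cross
    #   specular = parallel - cross
    diffuse = 2.0 * i_cross
    specular = i_parallel - i_cross
    return diffuse, specular
```

Applying this per pixel to a parallel/cross image pair yields separate diffuse and specular maps, the starting point for fitting the two BRDF components independently.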

Computer displays play an important role in connecting the information world and the real world. In the era of ubiquitous computing, it is essential to be able to access information fluidly, and the non-obtrusive integration of displays into our living environment is a basic requirement for achieving this. Here, we propose a display technology that exploits the phenomenon whereby the shading properties of fur change as its fibers are raised or flattened. One can erase drawings by flattening the fibers, sweeping the surface by hand in the direction of fiber growth, and draw lines by raising the fibers with a finger moving in the opposite direction. These material properties can be found in various items in our living environment, such as carpets and plush toys. Our technology can turn these ordinary objects into displays without requiring or creating any irreversible modifications to them. It can be used to make large-scale displays, and the drawings it creates incur no running costs.

Polynomial Julia sets have emerged as the most studied examples of fractal sets generated by a dynamical system. Apart from the beautiful mathematics, one of the reasons for their popularity is the beauty of the computer-generated images of such sets. The algorithms used to draw these pictures vary; the most naïve work by iterating the center of a pixel to determine whether it lies in the Julia set. Milnor's distance-estimator algorithm [Mil] uses classical complex analysis to give a one-pixel estimate of the Julia set. This algorithm and its modifications work quite well for many examples, but it is well known that in some particular cases computation time grows very rapidly as the resolution increases. Moreover, there are examples, even in the family of quadratic polynomials, for which no satisfactory pictures of the Julia set exist. In this paper we study computability properties of Julia sets of quadratic polynomials. Under the definition we use, a set is computable if, roughly speaking, its image can be generated by a computer with arbitrary precision. Under this notion of computability we show: Main Theorem. There exists a parameter value c ∈ ℂ such that the Julia set of the quadratic polynomial f_c(z) = z² + c is not computable. The structure of the paper is as follows. In the Introduction we discuss the question of computability of real sets and make the relevant definitions. Further in this section we briefly introduce the reader to the main concepts of Complex Dynamics and discuss the properties of Julia sets relevant to us. At the end of the Introduction, we outline the conceptual idea of the proof of the Main Theorem. Section 3 contains the technical lemmas on which the argument is based. In §4 we complete the proof.
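
The naïve pixel-iteration algorithm mentioned above can be sketched in a few lines (the bailout radius and iteration cap are conventional choices, not from the paper):

```python
def in_julia(z, c, max_iter=256, bailout=2.0):
    # Naive membership test for the filled Julia set of f_c(z) = z^2 + c:
    # iterate from the pixel center and report whether the orbit stays
    # within the bailout radius for max_iter steps.
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return False
    return True
```

The paper's point is that for some parameters c no such finite procedure, however refined, can render the Julia set to arbitrary precision.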

We propose a method of increasing the apparent spatial resolution of an existing liquid simulation. Previous approaches to this “up-resing” problem have focused on increasing the turbulence of the underlying velocity field. Motivated by measurements in the free surface turbulence literature, we observe that past certain frequencies, it is sufficient to perform a wave simulation directly on the liquid surface, and construct a reduced-dimensional surface-only simulation. We sidestep the considerable problem of generating a surface parameterization by employing an embedding technique known as the Closest Point Method (CPM) that operates directly on a 3D extension field. The CPM requires 3D operators, and we show that for surface operators with no natural 3D generalization, it is possible to construct a viable operator using the inverse Abel transform. We additionally propose a fast, frozen core closest point transform, and an advection method for the extension field that reduces smearing considerably. Finally, we propose two turbulence coupling methods that seed the high-resolution wave simulation in visually expected regions.
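
The Closest Point Method embeds a surface computation in 3D by making every field constant along surface normals: the value at a grid point is looked up at its closest surface point. A minimal sketch using a sphere, where the closest-point projection is analytic (the paper handles general liquid surfaces and adds a frozen-core transform; names here are assumptions):

```python
import math

def closest_point_on_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    # Analytic closest-point projection onto a sphere.
    d = [pi - ci for pi, ci in zip(p, center)]
    n = math.sqrt(sum(x * x for x in d))
    if n == 0.0:
        return (center[0] + radius, center[1], center[2])
    return tuple(ci + radius * x / n for ci, x in zip(center, d))

def extend_field(p, surface_field):
    # CPM extension: the embedding field at p takes the surface value
    # at the closest surface point, so it is constant along normals.
    return surface_field(closest_point_on_sphere(p))
```

Because the extended field is constant in the normal direction, standard 3D differential operators applied to it agree with their intrinsic surface counterparts at the surface, which is what lets the wave simulation run without a surface parameterization.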

We present an efficient grid structure that extends a uniform grid to create a significantly larger far-field grid by dynamically extending the cells surrounding a fine uniform grid while still maintaining fine resolution about the regions of interest. The far-field grid preserves almost every computational advantage of uniform grids including cache coherency, regular subdivisions for parallelization, simple data layout, the existence of efficient numerical discretizations and algorithms for solving partial differential equations, etc. This allows fluid simulations to cover large domains that are often infeasible to enclose with sufficient resolution using a uniform grid, while still effectively capturing fine scale details in regions of interest using dynamic adaptivity.
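
The essential idea, a fine uniform core padded by geometrically growing far-field cells, can be sketched in 1D (the growth factor and function names are illustrative assumptions, not the paper's construction):

```python
def farfield_edges(n_fine, dx, n_far, growth=1.5):
    # 1D cell edges: a uniform core of n_fine cells of size dx, padded
    # on each side by n_far cells whose sizes grow geometrically, so a
    # few extra cells cover a domain many times the core's width.
    edges = [i * dx for i in range(n_fine + 1)]
    step = dx
    for _ in range(n_far):
        step *= growth
        edges.append(edges[-1] + step)
        edges.insert(0, edges[0] - step)
    return edges
```

Taking the tensor product of three such 1D layouts keeps the regular indexing, cache coherency, and simple data layout of a uniform grid while the covered domain grows exponentially with the number of far-field cells.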

Existing hair capture systems fail to produce strands that reflect the structures of real-world hairstyles. We introduce a system that reconstructs coherent and plausible wisps aware of the underlying hair structures from a set of still images without any special lighting. Our system first discovers locally coherent wisp structures in the reconstructed point cloud and the 3D orientation field, and then uses a novel graph data structure to reason about both the connectivity and directions of the local wisp structures in a global optimization. The wisps are then completed and used to synthesize hair strands which are robust against occlusion and missing data and plausible for animation and simulation. We show reconstruction results for a variety of complex hairstyles including curly, wispy, and messy hair.

We present an efficient approach for performing smoke simulation on curvilinear grids. Our technique is based on a fast unconditionally-stable advection algorithm and on a new and efficient solution to enforce mass conservation. It uses a staggered-grid variable arrangement, and has linear cost in the number of grid cells. Our method naturally integrates itself with overlapping-grid techniques, lending itself to an efficient way of producing highly-realistic animations of dynamic scenes. Compared to approaches based on regular grids traditionally used in computer graphics, our method allows for better representation of boundary conditions, with just a small increment in computational cost. Thus, it can be used to evaluate aerodynamic properties, possibly enabling unexplored applications in computer graphics, such as interactive computation of lifting forces on complex objects. We demonstrate the effectiveness of our approach, both in 2-D and 3-D, through a variety of high-quality smoke animations.
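
Unconditionally stable advection of the kind referred to above is typically semi-Lagrangian: each sample is traced backwards through the velocity field and the advected quantity is interpolated at the back-traced position. A minimal 1D uniform-grid sketch (the paper works on staggered curvilinear grids; names are assumptions):

```python
def advect_semi_lagrangian(field, velocity, dt, dx):
    # Semi-Lagrangian advection on a 1D uniform grid: trace each sample
    # back along the local velocity, then interpolate linearly. Stable
    # for any dt because interpolation never amplifies the field.
    n = len(field)
    out = []
    for i in range(n):
        x = i - velocity[i] * dt / dx      # back-traced position, in cells
        x = min(max(x, 0.0), n - 1.0)      # clamp to the domain
        i0 = int(x)
        i1 = min(i0 + 1, n - 1)
        t = x - i0
        out.append((1 - t) * field[i0] + t * field[i1])
    return out
```

On a curvilinear grid the back-trace happens in physical space and the interpolation in the grid's logical coordinates, but the structure of the scheme is the same.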

Procedural texturing is a well known method to synthesize details onto virtual surfaces directly during rendering. But the creation of such textures is often a long and painstaking task. This paper introduces a new noise function, called multiple kernels noise. It is characterized by an arbitrary energy distribution in the spectral domain. Multiple kernels noise is obtained by adaptively decomposing a user-defined power spectral density (PSD) into rectangular regions. These are then associated with kernel functions used to compute noise values by sparse convolution. We show how multiple kernels noise (1) increases the variety of noisy procedural textures that can be modeled and (2) helps create structured procedural textures by automatic extraction of noise characteristics from user-supplied samples.
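
Sparse convolution noise sums randomly weighted kernels centered at random impulse points. A minimal 2D sketch with Gabor-like kernels (the paper derives its kernels by decomposing a user-supplied PSD into rectangular spectral regions, which this toy version does not do; all names and parameters are assumptions):

```python
import math
import random

def sparse_convolution_noise(x, y, kernels, seed=0):
    # Sparse convolution: sum randomly weighted kernel copies centered
    # on random impulse points near the evaluation point. Each kernel
    # is a Gaussian envelope modulated by a cosine (Gabor-like), whose
    # frequency and bandwidth control where its energy sits in the PSD.
    rng = random.Random(seed)
    points = [(rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0),
               rng.uniform(-1.0, 1.0)) for _ in range(32)]
    value = 0.0
    for px, py, w in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        for freq, bandwidth in kernels:
            value += w * math.exp(-bandwidth * d2) * math.cos(freq * (x - px))
    return value
```

Summing several kernels with different frequencies and bandwidths is what approximates an arbitrary target PSD, one rectangular spectral region per kernel.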