Marc Stamminger

Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany

Publications (121) · 31.92 Total Impact Points

  •
    ABSTRACT: Scene environments in modern games include a wealth of moving and animated objects, which is key to creating vivid virtual worlds. An essential aspect in dynamic scenes is the interaction between scene objects. Unfortunately, many real-time applications only support rigid body collisions due to tight time budgets. In order to facilitate visual feedback of collisions, residuals such as scratches or impacts with soft materials like snow or sand are realized by dynamic decal texture placements. However, decals are not able to modify the underlying surface geometry which would be highly desired to improve upon realism. In this chapter, we present a novel real-time technique to overcome this limitation by enabling fully automated fine-scale surface deformations resulting from object collisions. That is, we propose an efficient method to incorporate high-frequency deformations upon physical contact into dynamic displacement maps directly on the GPU. Overall, we can handle large dynamic scene environments with many objects at minimal runtime overhead.
    03/2015;
  • Conference Paper: Enhanced Sphere Tracing
    ABSTRACT: In this paper we present several performance and quality enhancements to classical sphere tracing: First, we propose a safe, over-relaxation-based method for accelerating sphere tracing. Second, a method for dynamically preventing self-intersections upon converting signed distance bounds enables controlling precision and rendering performance. In addition, we present a method for significantly accelerating the sphere tracing intersection test for convex objects that are enclosed in convex bounding volumes. We also propose a screen-space metric for the retrieval of a good intersection point candidate, in case sphere tracing does not converge thus increasing rendering quality without sacrificing performance. Finally, discontinuity artifacts common in sphere tracing are reduced using a fixed-point iteration algorithm. We demonstrate complex scenes rendered in real-time with our method. The methods presented in this paper have more universal applicability beyond rendering procedurally generated scenes in real-time and can also be combined with path-tracing-based global illumination solutions.
    Smart Tools and Apps for Graphics, Cagliari, Italy; 09/2014
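    The over-relaxation idea from the entry above can be pictured with a small CPU-side sketch: march along the ray by an inflated multiple of the distance bound, and fall back to the classical conservative step whenever consecutive bounding spheres no longer overlap. The function below is an illustrative stand-in, not the paper's code; the relaxation factor, iteration limit and screen-space hit tolerance are placeholder choices, and the ray is assumed to start outside the geometry with a normalized direction.

```cpp
#include <functional>

// Sketch of over-relaxed sphere tracing. `sdf` is a user-supplied signed
// distance bound; `dir` is assumed normalized and the ray to start outside
// the geometry. All constants are illustrative.
struct Vec3 { float x, y, z; };

static Vec3 along(Vec3 o, Vec3 d, float t) { return { o.x + t * d.x, o.y + t * d.y, o.z + t * d.z }; }

float sphereTraceRelaxed(const std::function<float(Vec3)>& sdf,
                         Vec3 origin, Vec3 dir, float tMax, float pixelRadius)
{
    float omega = 1.6f;         // over-relaxation factor, 1 < omega < 2
    float t = 0.0f;
    float prevRadius = 0.0f;    // radius of the previous bounding sphere
    float stepLength = 0.0f;
    for (int i = 0; i < 256 && t < tMax; ++i) {
        float radius = sdf(along(origin, dir, t));
        // Relaxed stepping is only valid while consecutive bounding spheres
        // overlap; otherwise undo the speculative part of the step and
        // continue with classical, guaranteed-safe sphere tracing.
        bool relaxFailed = omega > 1.0f && (radius + prevRadius) < stepLength;
        if (relaxFailed) {
            stepLength -= omega * stepLength;   // back up onto safe ground
            omega = 1.0f;                       // disable relaxation from here on
        } else {
            stepLength = omega * radius;
        }
        prevRadius = radius;
        if (!relaxFailed && radius < pixelRadius * t)
            return t;                           // hit within screen-space tolerance
        t += stepLength;
    }
    return -1.0f;                               // no intersection found
}
```

    Disabling relaxation after the first failed step is a simplification made here for clarity; backing up and keeping relaxation enabled is an equally valid variant.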
  •
    ABSTRACT: We present a novel method to adaptively apply modifications to scene data stored in GPU memory. Such modifications may include interactive painting and sculpting operations in an authoring tool, or deformations resulting from collisions between scene objects detected by a physics engine. We only allocate GPU memory for the faces affected by these modifications to store fine-scale color or displacement values. This requires dynamic GPU memory management in order to assign and adaptively apply edits to individual faces at runtime. We present such a memory management technique based on a scan-operation that is efficiently parallelizable. Since our approach runs entirely on the GPU, we avoid costly CPU-GPU memory transfer and eliminate typical bandwidth limitations. This minimizes runtime overhead to under a millisecond and makes our method ideally suited to many real-time applications such as video games and interactive authoring tools. In addition, our algorithm significantly reduces storage requirements and allows for much higher-resolution content compared to traditional global texturing approaches. Our technique can be applied to various mesh representations, including Catmull-Clark subdivision surfaces, as well as standard triangle and quad meshes. In this paper, we demonstrate several scenarios for these mesh types where our algorithm enables adaptive mesh refinement, local surface deformations, and interactive on-mesh painting and sculpting.
    Computer Graphics Forum 08/2014 · 1.64 Impact Factor
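    The allocation step sketched in the entry above boils down to a prefix sum: every edited face requests a number of samples, and an exclusive scan over those requests yields non-overlapping offsets into one pooled buffer. The serial version below is only a stand-in for the parallel GPU scan; all names are illustrative.

```cpp
#include <cstdint>
#include <vector>

// Serial stand-in for scan-based per-face allocation: faces untouched by
// edits request 0 samples, edited faces request a tile of color or
// displacement samples, and an exclusive prefix sum turns the sizes into
// offsets into one large sample pool.
struct FaceAllocation {
    std::vector<uint32_t> offset;   // per-face start index into the sample pool
    uint32_t totalSamples = 0;      // total pool size to reserve on the GPU
};

FaceAllocation allocateFaceTiles(const std::vector<uint32_t>& samplesPerFace)
{
    FaceAllocation out;
    out.offset.resize(samplesPerFace.size());
    uint32_t running = 0;
    for (size_t f = 0; f < samplesPerFace.size(); ++f) {
        out.offset[f] = running;       // exclusive scan: offset excludes own size
        running += samplesPerFace[f];
    }
    out.totalSamples = running;
    return out;
}
```

    On the GPU the same computation runs as a work-efficient parallel prefix sum, so each face can write its samples starting at its offset without atomics.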
  •
    ABSTRACT: We present a novel real-time approach for fine-scale surface deformations resulting from collisions. Deformations are represented by a high-resolution displacement function. When two objects collide, these offsets are updated directly on the GPU based on a dynamically generated binary voxelization of the overlap region. Consequently, we can handle collisions with arbitrary animated geometry. Our approach runs entirely on the GPU, avoiding costly CPU-GPU memory transfer and exploiting the GPU’s computational power. Surfaces are rendered with the hardware tessellation unit, allowing for adaptively-rendered, high-frequency surface detail. Ultimately, our algorithm enables fine-scale surface deformations from geometry impact with very little computational overhead, running well below a millisecond even in complex scenes. As our results demonstrate, our approach is ideally suited to many real-time applications such as video games and authoring tools.
    Proceedings of the 6th ACM SIGGRAPH/ Eurographics Conference on High-Performance Graphics, Lyon, France; 06/2014
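    A rough way to picture the displacement update described above: the overlap region between the two colliding objects is voxelized into a binary grid, and each displacement sample is pushed inward until it leaves the occupied region. The grid layout, the march along the negative normal, and all names below are simplifying assumptions; the paper performs this per texel in a GPU pass.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Binary voxelization of the overlap region between two colliding objects.
struct BinaryGrid {
    int nx, ny, nz;
    float cellSize;
    float originX, originY, originZ;
    std::vector<uint8_t> occupied;   // 1 = cell lies inside the overlap region
    bool at(float x, float y, float z) const {
        int i = int(std::floor((x - originX) / cellSize));
        int j = int(std::floor((y - originY) / cellSize));
        int k = int(std::floor((z - originZ) / cellSize));
        if (i < 0 || j < 0 || k < 0 || i >= nx || j >= ny || k >= nz) return false;
        return occupied[(size_t(k) * ny + j) * nx + i] != 0;
    }
};

// Lower one displacement sample until the surface point leaves the voxelized
// overlap region. (px,py,pz) is the undisplaced sample position, (nx,ny,nz)
// the unit surface normal, currentOffset the stored displacement value.
float updateDisplacement(const BinaryGrid& grid, float currentOffset,
                         float px, float py, float pz,
                         float nx, float ny, float nz)
{
    float depth = 0.0f;
    const float step = 0.5f * grid.cellSize;             // march resolution (assumption)
    while (grid.at(px - nx * depth, py - ny * depth, pz - nz * depth))
        depth += step;                                    // penetration depth along -n
    return std::min(currentOffset, -depth);               // keep the deepest dent
}
```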
  • Source
    Jan Kretschmar, Bernhard Preim, Marc Stamminger
    EuroVis Short Paper; 06/2014
  •
    ABSTRACT: We present a novel method for the interactive markerless reconstruction of human heads using a single commodity RGB-D sensor. Our entire reconstruction pipeline is implemented on the graphics processing unit and allows us to obtain high-quality reconstructions of the human head using an interactive and intuitive reconstruction paradigm. The core of our method is a fast graphics processing unit-based nonlinear quasi-Newton solver that allows us to leverage all information of the RGB-D stream and fit a statistical head model to the observations at interactive frame rates. By jointly solving for shape, albedo and illumination parameters, we are able to reconstruct high-quality models including illumination-corrected textures. All obtained reconstructions have a common topology and can be directly used as assets for games, films and various virtual reality applications. We show motion retargeting, retexturing and relighting examples. The accuracy of the presented algorithm is evaluated by a comparison against ground truth data.
    Computer Animation and Virtual Worlds 05/2014; 25(3-4). · 0.44 Impact Factor
  • Source
    ABSTRACT: For a long time, GPUs have primarily been optimized to render more and more triangles with increasingly flexible shading. However, scene data itself has typically been generated on the CPU and then uploaded to GPU memory. Therefore, widely used techniques that generate geometry at render time on demand for the rendering of smooth and displaced surfaces were not applicable to interactive applications. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit's ability to overcome this limitation, complex geometry can now be generated within the GPU's rendering pipeline on the fly. GPU hardware tessellation enables the generation of smooth parametric surfaces or application of displacement mapping in real-time applications. However, many well-established approaches in offline rendering are not directly transferable, due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this state of the art report, we provide an overview of recent work and challenges in this topic by summarizing, discussing and comparing methods for the rendering of smooth and highly detailed surfaces in real-time.
    Eurographics 2014; 04/2014
  • Source
    ABSTRACT: The precise modeling of vascular structures plays a key role in medical imaging applications, such as diagnosis, therapy planning and blood flow simulations. For the simulation of blood flow in particular, high-precision models are required to produce accurate results. It is thus common practice to perform extensive manual data polishing on vascular segmentations prior to simulation. This usually involves a complex tool chain which is highly impractical for clinical on-site application. To close this gap in current blood flow simulation pipelines, we present a novel technique for interactive vascular modeling which is based on implicit sweep surfaces. Our method is able to generate and correct smooth high-quality models based on geometric centerline descriptions on the fly. It supports complex vascular free-form contours and consequently allows for an accurate and fast modeling of pathological structures such as aneurysms or stenoses. We extend the concept of implicit sweep surfaces to achieve increased robustness and applicability as required in the medical field. We finally compare our method to existing techniques and provide case studies that confirm its contribution to current simulation pipelines.
    IEEE Transactions on Visualization and Computer Graphics 12/2013; 19(12):2828-2837.
  •
    ABSTRACT: Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.
    ACM Transactions on Graphics (TOG). 11/2013; 32(6).
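    The hashed voxel-block layout described in the entry above can be sketched with a few structs: space is split into small dense blocks that are allocated only where depth data has been observed, and a hash over the integer block coordinates maps each block into a flat table. The block size, the use of std::unordered_map and the struct layout below are CPU-side simplifications of the paper's GPU hash table; the coordinate hash mimics a common prime-multiply-and-XOR scheme.

```cpp
#include <cmath>
#include <cstddef>
#include <unordered_map>

// One voxel of the truncated signed distance field (TSDF).
struct Voxel {
    float sdf = 0.0f;      // truncated signed distance to the surface
    float weight = 0.0f;   // integration weight for running-average fusion
};

struct BlockCoord {
    int x, y, z;
    bool operator==(const BlockCoord& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct BlockHash {
    size_t operator()(const BlockCoord& c) const {
        // Large primes, XOR-combined: a common coordinate hash.
        return (size_t(c.x) * 73856093u) ^ (size_t(c.y) * 19349669u) ^ (size_t(c.z) * 83492791u);
    }
};

constexpr int kBlockDim = 8;   // 8x8x8 voxels per block (illustrative choice)

struct VoxelBlock {
    Voxel voxels[kBlockDim * kBlockDim * kBlockDim];
};

struct SparseVolume {
    float voxelSize = 0.005f;  // 5 mm voxels, for illustration
    std::unordered_map<BlockCoord, VoxelBlock, BlockHash> blocks;

    // Allocate the block containing a world-space point only when first touched;
    // space without observed surface simply never gets a block.
    VoxelBlock& blockAt(float wx, float wy, float wz) {
        const float extent = voxelSize * kBlockDim;
        BlockCoord c{ int(std::floor(wx / extent)),
                      int(std::floor(wy / extent)),
                      int(std::floor(wz / extent)) };
        return blocks[c];      // inserts an empty block on demand
    }
};
```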
  •
    ABSTRACT: Hardware tessellation is one of the latest GPU features. Triangle or quad meshes are tessellated on-the-fly, where the tessellation level is chosen adaptively in a separate shader. The hardware tessellator only generates topology; attributes such as positions or texture coordinates of the newly generated vertices are determined in a domain shader. Typical applications of hardware tessellation are view-dependent tessellation of parametric surfaces and displacement mapping. Often, the attributes for the newly generated vertices are stored in textures, which requires uv unwrapping, chartification, and atlas generation of the input mesh, a process that is time-consuming and often requires manual intervention. In this paper, we present an alternative representation that directly stores optimized attribute values for typical hardware tessellation patterns and simply assigns these attributes to the generated vertices at render time. Using a multilevel fitting approach, the attribute values are optimized for several resolutions. Thereby, we require no parameterization, save memory by adapting the density of the samples to the content, and avoid discontinuities by construction. Our representation is optimally suited for displacement mapping: it automatically generates seamless, view-dependent displacement mapped models. The multilevel fitting approach generates better low-resolution displacement maps than simple downfiltering. By properly blending levels, we avoid artifacts such as popping or swimming surfaces. We also show other possible applications such as signal-optimized texturing or light baking. Our representation can be evaluated in a pixel shader, resulting in signal-adaptive, parameterization-free texturing, comparable to PTex or Mesh Colors. Performance evaluation shows that our representation is on par with standard texture mapping and can be updated in real time, allowing for applications such as interactive sculpting.
    IEEE Transactions on Visualization and Computer Graphics 09/2013; 19(9):1488-1498.
  •
    ABSTRACT: We present a novel representation for storing sub-triangle signals, such as colors, normals, or displacements directly with the triangle mesh. Signal samples are stored as guided by hardware-tessellation patterns. Thus, we can directly render from our representation by assigning signal samples to attributes of vertices generated by the hardware tessellator. Contrary to texture mapping, our approach does not require any atlas generation, chartification, or uv-unwrapping. Thus, it does not suffer from texture-related artifacts, such as discontinuities across chart boundaries or distortion. Moreover, our approach allows specifying the optimal sampling rate adaptively on a per triangle basis, resulting in significant memory savings for most signal types. We propose a signal optimal approach for converting arbitrary signals, including existing assets with textures or mesh colors, into our representation. Further, we provide efficient algorithms for mip-mapping, bi- and tri-linear interpolation directly in our representation. Our approach is optimally suited for displacement mapping: it automatically generates crack-free, view-dependent displacement mapped models enabling continuous level-of-detail.
    IEEE Transactions on Visualization and Computer Graphics 08/2013; 19(9):1488-1498. · 1.90 Impact Factor
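    The core trick of the two tessellation-attribute entries above is that samples are stored per generated vertex rather than in a texture atlas. For a uniformly tessellated triangle this reduces to a per-triangle array plus an index function over the vertex grid; the sketch below assumes such a uniform integer pattern, whereas real hardware patterns (separate inner/outer levels, fractional spacing) are more involved, so treat it as an illustration only.

```cpp
#include <cassert>
#include <vector>

// Per-triangle attribute storage for a uniform integer tessellation level.
struct TriangleAttributes {
    int level = 0;                // tessellation level the samples were fitted for
    std::vector<float> samples;   // one scalar (e.g. a displacement) per pattern vertex
};

// Number of vertices generated for a triangle tessellated at `level`.
inline int patternVertexCount(int level) { return (level + 1) * (level + 2) / 2; }

// Map the grid coordinates (i, j) of a generated vertex, with i + j <= level,
// to its position in the per-triangle sample array (row-by-row layout).
inline int patternIndex(int level, int i, int j)
{
    assert(i >= 0 && j >= 0 && i + j <= level);
    int rowOffset = i * (level + 1) - i * (i - 1) / 2;   // vertices in rows 0..i-1
    return rowOffset + j;
}

// What a domain shader would do per generated vertex: fetch the fitted value
// directly, with no uv atlas, chartification or seams involved.
float fetchDisplacement(const TriangleAttributes& tri, int i, int j)
{
    return tri.samples[patternIndex(tri.level, i, j)];
}
```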
  • Henry Schäfer, Benjamin Keinert, Marc Stamminger
    ABSTRACT: We propose a novel method for local displacement events in large scenes, such as scratches, footsteps, or sculpting operations. Deformations are stored as displacements for vertices generated by hardware tessellation. Adaptive mesh refinement, application of the displacement and all involved memory management happen completely on the GPU. We show various extensions to our approach, such as on-the-fly normal computation and multi-resolution editing. In typical game scenes we perform local deformations at arbitrary positions in far less than one millisecond. This makes the method particularly suited for games and interactive sculpting applications.
    Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, Anaheim, CA; 07/2013
  • F. Bauer, M. Stamminger
    ABSTRACT: Creating content is a vital task in computer graphics. In this paper we evaluate a constraint-based scene description using a multi-agent system as known from artificial intelligence. By using agents, we separate the modeling process into small, easy-to-understand tasks. The parameters of each agent can be changed at any time; re-evaluating the agent system then yields a consistently updated scene, which allows artists to experiment until they find the desired result while still leveraging the power of constraint-based modeling. Since only modified agents need to be evaluated when updating the scene, this description can even be used to perform modeling tasks on mobile devices.
    Proceedings of the 28th Spring Conference on Computer Graphics; 03/2013
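    The incremental re-evaluation mentioned above can be illustrated with a tiny dirty-flag scheme: each agent owns its parameters and a rule that (re)places its objects, and only agents whose parameters changed are re-run when the scene is updated. Structure and names are illustrative assumptions, not the system from the paper.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

struct Scene {
    // Placed geometry per agent, kept deliberately abstract here.
    std::unordered_map<std::string, std::vector<float>> objects;
};

struct Agent {
    std::string id;
    std::unordered_map<std::string, float> params;
    // Rule that turns the agent's parameters into scene content.
    std::function<std::vector<float>(const std::unordered_map<std::string, float>&)> evaluate;
    bool dirty = true;
};

void setParam(Agent& a, const std::string& name, float value)
{
    a.params[name] = value;
    a.dirty = true;                      // only this agent needs re-evaluation
}

void updateScene(std::vector<Agent>& agents, Scene& scene)
{
    for (Agent& a : agents) {
        if (!a.dirty) continue;          // untouched agents keep their previous output
        scene.objects[a.id] = a.evaluate(a.params);
        a.dirty = false;
    }
}
```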
  •
    ABSTRACT: Existing GPU antialiasing techniques, such as MSAA or MLAA, focus on reducing aliasing artifacts along silhouette boundaries or edges in image space. However, they neglect aliasing from shading in the presence of high-frequency geometric detail. This may lead to a shading aliasing artifact that resembles the Baily's beads phenomenon: the degradation of continuous specular highlights into a string of pearls. These artifacts are particularly striking on high-quality surfaces. So far, the only way to remove aliasing from shading has been to globally supersample the entire image with a large number of samples, which is slow and significantly increases bandwidth consumption. We propose three adaptive approaches that locally supersample triangles on the GPU only where necessary. Thereby, we efficiently remove artifacts from shading, while aliasing along silhouettes is reduced by efficient hardware MSAA.
    Computers & Graphics 01/2013; 37(8):955–962. · 0.79 Impact Factor
  •
    ABSTRACT: We think that state-of-the-art techniques in computer graphics and geometry processing can be leveraged in training and entertainment to make the topic of cultural heritage more accessible to a wider audience. In cooperation with the “Antikensammlung” in Erlangen, we produced five unique applications, all centered around the emperor Augustus, to visualize different scientific aspects for a large event aimed at the general public. The applied methods include blending, geometric fitting, animation transfer and visualization techniques. Besides being entertaining, some of the presented applications form the foundation for more substantial research. (For our results video please visit http://lgdv.cs.fau.de/uploads/video/ISPA2013.mov).
    8th International Symposium on Image and Signal Processing and Analysis (ISPA 2013); 01/2013
  •
    ABSTRACT: Abdominal aortic aneurysms are a common disease of the aorta, which is treated minimally invasively in about 33% of cases. Treatment is done by placing a stent graft in the aorta to prevent the aneurysm from growing. Guidance during the procedure is facilitated by fluoroscopic imaging. Unfortunately, due to the low soft-tissue contrast in X-ray images, the aorta itself is not visible without the application of contrast agent. To overcome this issue, advanced techniques allow the aorta to be segmented from pre-operative data, such as CT or MRI. Overlay images are then rendered from a mesh representation of the segmentation and fused with the live fluoroscopic images, with the aim of improving the visibility of the aorta during the procedure. Current overlay images typically use forward projections of the mesh representation. This fusion technique shows deficiencies in both the 3-D information of the overlay and the visibility of the fluoroscopic image underneath. We present a novel approach that improves the visualization of the overlay images using non-photorealistic rendering techniques. Our method preserves the visibility of the devices in the fluoroscopic images while, at the same time, providing 3-D information of the fused volume. The evaluation by clinical experts shows that our method is preferred over current state-of-the-art overlay techniques: of the three visualization techniques we compared against the standard visualization, our silhouette approach was chosen in 67% of cases, clearly showing the superiority of the new approach.
    SPIE Medical Imaging; 01/2013
  •
    ABSTRACT: We propose a lossless, single-rate triangle mesh topology codec tailored for fast data-parallel GPU decompression. Our compression scheme coherently orders generalized triangle strips in memory. To unpack generalized triangle strips efficiently, we propose a novel parallel and scalable algorithm. We order vertices coherently to further improve our compression scheme. We use a variable bit-length code for additional compression benefits, for which we propose a scalable data-parallel decompression algorithm. For a set of standard benchmark models, we obtain (min: 3.7, med: 4.6, max: 7.6) bits per triangle. Our CUDA decompression requires only about 15% of the time it takes to render the model even with a simple shader. © 2012 Wiley Periodicals, Inc.
    Computer Graphics Forum 12/2012; 31(8):2541-2553. · 1.64 Impact Factor
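    The unpacking step in the entry above can be pictured with a serial sketch: a (generalized) triangle strip emits one triangle per new index, alternating the winding, and a restart marker begins a new strip. The paper does this with a scalable data-parallel algorithm and a variable bit-length index code; the fixed 32-bit indices and the restart sentinel below are simplifying assumptions.

```cpp
#include <cstdint>
#include <vector>

constexpr uint32_t kRestart = 0xFFFFFFFFu;   // sentinel that starts a new strip

// Expand a strip index stream into a flat triangle index list (three indices
// per triangle), flipping the winding of every other triangle.
std::vector<uint32_t> unpackStrips(const std::vector<uint32_t>& strip)
{
    std::vector<uint32_t> tris;
    uint32_t a = 0, b = 0;
    int have = 0;          // vertices accumulated in the current strip
    bool flip = false;     // winding alternates along the strip
    for (uint32_t idx : strip) {
        if (idx == kRestart) { have = 0; flip = false; continue; }
        if (have >= 2) {
            if (!flip) { tris.push_back(a); tris.push_back(b); tris.push_back(idx); }
            else       { tris.push_back(b); tris.push_back(a); tris.push_back(idx); }
            flip = !flip;
        } else {
            ++have;
        }
        a = b;
        b = idx;
    }
    return tris;
}
```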
  •
    ABSTRACT: In real-time rendering, global lighting information that is too expensive to be computed on the fly is typically pre-computed and baked as vertex attributes or into textures. Prominent examples are view independent effects, such as ambient occlusion, shadows, indirect lighting, or radiance transfer coefficients. Vertex baking usually requires less memory, but exhibits artifacts on large triangles. These artifacts are avoided by baking lighting information into textures, but at the expense of significant memory consumption and additional work to obtain a parameterization. In this paper, we propose a memory efficient and performant hybrid approach that combines texture- and vertex-based baking. Cheap vertex baking is applied by default and textures are used only where vertex baking is insufficient to represent the signal. Seams at transitions between both representations are hidden using a simple shader which smoothly blends between vertex- and texture-based shading. With our fully automatic approach, we can significantly reduce memory requirements without negative impact on rendering quality or performance.
    Computers & Graphics 05/2012; 36(3):193-200. · 0.79 Impact Factor
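    The per-face decision behind the hybrid scheme above can be sketched as a simple error test: every face is baked into vertex attributes by default, and a face is promoted to a small texture tile only where interpolating the three vertex values cannot reproduce the reference signal. The error metric, the sampling and the threshold below are illustrative assumptions, not the paper's exact criterion.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct FaceBakeDecision {
    bool useTexture = false;   // false: per-vertex baking suffices for this face
    float maxError = 0.0f;     // worst deviation observed over the face
};

// referenceSamples: densely sampled target signal over the face (e.g. ambient
// occlusion); interpolatedSamples: the same sample locations reconstructed by
// interpolating the three baked vertex values. Both arrays have equal length.
FaceBakeDecision decideFaceRepresentation(const std::vector<float>& referenceSamples,
                                          const std::vector<float>& interpolatedSamples,
                                          float errorThreshold)
{
    FaceBakeDecision d;
    for (size_t i = 0; i < referenceSamples.size(); ++i)
        d.maxError = std::max(d.maxError,
                              std::fabs(referenceSamples[i] - interpolatedSamples[i]));
    d.useTexture = d.maxError > errorThreshold;   // texture tile only where needed
    return d;
}
```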
  • Source
    ABSTRACT: Accurate visualizations of complex vascular structures are essential for medical applications, such as diagnosis, therapy planning and medical education. Vascular trees are usually described using centerlines, since they capture both the topology and the geometry of the vasculature in an intuitive manner. State-of-the-art vessel segmentation algorithms deliver vascular outlines as free-form contours along the centerline, since this allows capturing anatomical pathologies. However, existing methods for generating surface representations from centerlines can only cope with circular outlines. We present a novel model-based technique that is capable of generating intersection-free surfaces from centerlines with complex outlines. Vascular segments are described by local signed distance functions and combined using Boolean operations. An octree-based surface generation strategy automatically computes watertight, scale-adaptive meshes with a controllable quality. In contrast to other approaches, our method generates a reliable representation that guarantees to capture all vessels regardless of their size. © 2012 Wiley Periodicals, Inc.
    Computer Graphics Forum 01/2012; 31(3):1055-1064. · 1.64 Impact Factor
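    The Boolean combination of per-segment distance functions mentioned above has a compact form: the distance to the whole vessel tree is the pointwise minimum over all segment distances. The capsule distance used below (straight centerline pieces with a constant radius) is a strong simplification of the paper's free-form contours and octree-based surface extraction; names and the brute-force loop are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Segment { Vec3 a, b; float radius; };   // one straight centerline piece

static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Signed distance to a capsule around the segment (a, b): negative inside.
float segmentDistance(const Segment& s, Vec3 p)
{
    Vec3 ab{ s.b.x - s.a.x, s.b.y - s.a.y, s.b.z - s.a.z };
    Vec3 ap{ p.x - s.a.x,  p.y - s.a.y,  p.z - s.a.z };
    float t = (ap.x * ab.x + ap.y * ab.y + ap.z * ab.z) /
              (ab.x * ab.x + ab.y * ab.y + ab.z * ab.z);
    t = std::clamp(t, 0.0f, 1.0f);                       // clamp to the segment
    Vec3 closest{ s.a.x + t * ab.x, s.a.y + t * ab.y, s.a.z + t * ab.z };
    return length({ p.x - closest.x, p.y - closest.y, p.z - closest.z }) - s.radius;
}

// Boolean union of all segments: the tree's distance is the minimum, which a
// surface extractor (e.g. an octree-guided mesher) can then contour at zero.
float vesselTreeDistance(const std::vector<Segment>& tree, Vec3 p)
{
    float d = 1e30f;
    for (const Segment& s : tree)
        d = std::min(d, segmentDistance(s, p));
    return d;
}
```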

Publication Stats

1k Citations
31.92 Total Impact Points

Institutions

  • 2002–2013
    • Friedrich-Alexander Universität Erlangen-Nürnberg
      Erlangen, Bavaria, Germany
  • 1997–2012
    • Universitätsklinikum Erlangen
      Erlangen, Bavaria, Germany
  • 1997–2007
    • Friedrich-Alexander Universität Erlangen-Nürnberg
      • Department of Computer Science
      Erlangen, Bavaria, Germany
  • 2002–2003
    • Bauhaus Universität Weimar
      Weimar, Thuringia, Germany
  • 2000
    • Max Planck Institute for Informatics
      Saarbrücken, Saarland, Germany