Marc Stamminger

Nuremberg University of Music, Nuremberg, Bavaria, Germany


Publications (127) · 57.03 Total Impact Points

  •
    ABSTRACT: Scene environments in modern games include a wealth of moving and animated objects, which is key to creating vivid virtual worlds. An essential aspect in dynamic scenes is the interaction between scene objects. Unfortunately, many real-time applications only support rigid body collisions due to tight time budgets. In order to facilitate visual feedback of collisions, residuals such as scratches or impacts with soft materials like snow or sand are realized by dynamic decal texture placements. However, decals are not able to modify the underlying surface geometry which would be highly desired to improve upon realism. In this chapter, we present a novel real-time technique to overcome this limitation by enabling fully automated fine-scale surface deformations resulting from object collisions. That is, we propose an efficient method to incorporate high-frequency deformations upon physical contact into dynamic displacement maps directly on the GPU. Overall, we can handle large dynamic scene environments with many objects at minimal runtime overhead.
    GPU Pro 6 (edited by Wolfgang Engel), chapter "Real-Time Deformation of Subdivision Surfaces on Object Collisions"; A K Peters/CRC Press, 03/2015.
  •
    [Teaser figure: the Frog model rendered with feature-adaptive subdivision [Nießner et al. 2012a] in 0.68 ms versus 0.36 ms with locally adaptive subdivision; colors denote different subdivision levels.]
    ABSTRACT: Feature-adaptive subdivision (FAS) is one of the state-of-the-art real-time rendering methods for subdivision surfaces on modern GPUs. It enables efficient and accurate rendering of subdivision surfaces in many interactive applications, such as video games or authoring tools. In this paper, we present dynamic feature-adaptive subdivision (DFAS), which improves upon FAS by enabling an independent subdivision depth for every irregularity. Our subdivision kernels fill a dynamic patch buffer on-the-fly with the appropriate number of patches corresponding to the chosen level-of-detail scheme. By reducing the number of generated and processed patches, DFAS significantly improves upon the performance of static FAS.
    I3D 2015; 02/2015
  •
    ABSTRACT: Dedicated visualization methods are among the most important tools of modern computer-aided medical applications. Reformation methods such as Multiplanar Reformation or Curved Planar Reformation have evolved as useful tools that facilitate diagnostic and therapeutic work. In this paper, we present a novel approach that can be seen as a generalization of Multiplanar Reformation to curved surfaces. The main concept is to generate reformatted medical volumes driven by the individual anatomical geometry of a specific patient. This process generates flat views of anatomical structures that facilitate many tasks such as diagnosis, navigation and annotation. Our reformation framework is based on a non-linear as-rigid-as-possible volumetric deformation scheme that uses generic triangular surface meshes as input. To manage inevitable distortions during reformation, we introduce importance maps which allow controlling the error distribution and improving the overall visual quality in areas of elevated interest. Our method seamlessly integrates with well-established concepts such as the slice-based inspection of medical datasets and we believe it can improve the overall efficiency of many medical workflows. To demonstrate this, we additionally present an integrated visualization system and discuss several use cases that substantiate its benefits.
    IEEE Transactions on Visualization and Computer Graphics 11/2014; 20(5). DOI:10.1109/TVCG.2014.2346405 · 1.92 Impact Factor
  •
    ABSTRACT: In this paper, we introduce a novel technique for pre-filtering multi-layer shadow maps. The occluders in the scene are stored as variable-length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre-filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre-filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real-time even for complex scenes and difficult setups. Our results demonstrate that our pre-filtered maps are general and particularly scalable.
    Computer Graphics Forum 10/2014; 34(1). DOI:10.1111/cgf.12506 · 1.60 Impact Factor
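The progressive list-merging idea behind this pre-filtering can be illustrated with a small sketch (Python for brevity; the collapse rule shown here, averaging the closest pair of depths, is an illustrative assumption and not the paper's exact merge operator):

```python
def merge_occluder_lists(a, b, max_len=8):
    """Progressive-merging sketch: combine two per-texel occluder depth
    lists, then collapse the closest adjacent pair into its mean until
    at most max_len depths remain, approximating the merged depth
    distribution with a bounded amount of storage."""
    merged = sorted(a + b)
    while len(merged) > max_len:
        # index of the adjacent pair with the smallest depth gap
        i = min(range(len(merged) - 1),
                key=lambda j: merged[j + 1] - merged[j])
        merged[i:i + 2] = [(merged[i] + merged[i + 1]) / 2.0]
    return merged
```

Merging texel lists pairwise in this fashion builds a filtered map level by level, which is what keeps the representation bounded even for many overlapping occluders.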
  • Conference Paper: Enhanced Sphere Tracing
    ABSTRACT: In this paper we present several performance and quality enhancements to classical sphere tracing. First, we propose a safe, over-relaxation-based method for accelerating sphere tracing. Second, a method for dynamically preventing self-intersections when converting signed distance bounds enables control over precision and rendering performance. In addition, we present a method for significantly accelerating the sphere tracing intersection test for convex objects that are enclosed in convex bounding volumes. We also propose a screen-space metric for retrieving a good intersection point candidate in case sphere tracing does not converge, thus increasing rendering quality without sacrificing performance. Finally, discontinuity artifacts common in sphere tracing are reduced using a fixed-point iteration algorithm. We demonstrate complex scenes rendered in real-time with our method. The methods presented in this paper apply beyond rendering procedurally generated scenes in real time and can also be combined with path-tracing-based global illumination solutions.
    Smart Tools and Apps for Graphics, Cagliari, Italy; 09/2014
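The over-relaxation acceleration can be sketched as follows (an illustrative Python transcription of the idea; the relaxation factor of 1.6 and the parameter names are assumptions, not prescriptive values):

```python
import math

def sphere_trace(sdf, origin, direction, omega=1.6,
                 eps=1e-4, t_max=100.0, max_steps=256):
    """Over-relaxed sphere tracing: march by omega * d instead of d,
    falling back to a plain step when the relaxed step overshoots."""
    t, prev_radius, step = 0.0, 0.0, 0.0
    for _ in range(max_steps):
        if t >= t_max:
            break
        p = tuple(o + t * d for o, d in zip(origin, direction))
        radius = sdf(p)
        # The relaxed step overshot if the current and previous
        # unbounding spheres no longer overlap: back up and disable
        # relaxation for the rest of the march.
        fail = omega > 1.0 and (abs(radius) + prev_radius) < step
        if fail:
            step -= omega * step
            omega = 1.0
        else:
            step = omega * radius
        prev_radius = abs(radius)
        if not fail and radius < eps:
            return t        # converged: hit at distance t
        t += step
    return None             # miss, or step budget exhausted

# Example: a unit sphere at the origin, hit from z = -3 at t = 2.
def sphere_sdf(p):
    return math.sqrt(sum(c * c for c in p)) - 1.0
```

With omega > 1 the tracer takes fewer steps through mostly empty space, and the overlap test guarantees that any skipped surface is recovered by the fallback step.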
  •
    ABSTRACT: We present a novel method to adaptively apply modifications to scene data stored in GPU memory. Such modifications may include interactive painting and sculpting operations in an authoring tool, or deformations resulting from collisions between scene objects detected by a physics engine. We only allocate GPU memory for the faces affected by these modifications to store fine-scale color or displacement values. This requires dynamic GPU memory management in order to assign and adaptively apply edits to individual faces at runtime. We present such a memory management technique based on a scan-operation that is efficiently parallelizable. Since our approach runs entirely on the GPU, we avoid costly CPU-GPU memory transfer and eliminate typical bandwidth limitations. This minimizes runtime overhead to under a millisecond and makes our method ideally suited to many real-time applications such as video games and interactive authoring tools. In addition, our algorithm significantly reduces storage requirements and allows for much higher-resolution content compared to traditional global texturing approaches. Our technique can be applied to various mesh representations, including Catmull-Clark subdivision surfaces, as well as standard triangle and quad meshes. In this paper, we demonstrate several scenarios for these mesh types where our algorithm enables adaptive mesh refinement, local surface deformations, and interactive on-mesh painting and sculpting.
    Computer Graphics Forum 08/2014; DOI:10.1111/cgf.12456 · 1.60 Impact Factor
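The scan-based memory management can be shown in miniature (a sequential Python sketch; on the GPU the prefix sum runs as a parallel scan, and the function and variable names here are assumptions for illustration):

```python
from itertools import accumulate

def allocate_tiles(requests):
    """Exclusive-scan allocation: each edited face requests some number
    of texels; the scan turns these counts into offsets into one shared
    buffer, so only affected faces consume storage. Faces requesting 0
    texels receive no space."""
    if not requests:
        return [], 0
    inclusive = list(accumulate(requests))   # inclusive prefix sum
    offsets = [0] + inclusive[:-1]           # shift right -> exclusive scan
    return offsets, inclusive[-1]            # per-face offsets, total size
```

For example, per-face requests [4, 0, 9, 1] yield offsets [0, 4, 4, 13] and a total buffer size of 14; because the scan is a standard data-parallel primitive, the same allocation runs efficiently on the GPU without any CPU round trip.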
  •
    ABSTRACT: We present a combined hardware and software solution for marker-less reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.
    ACM Transactions on Graphics 07/2014; 33(4):1-12. DOI:10.1145/2601097.2601165 · 3.73 Impact Factor
  •
    ABSTRACT: We present a novel real-time approach for fine-scale surface deformations resulting from collisions. Deformations are represented by a high-resolution displacement function. When two objects collide, these offsets are updated directly on the GPU based on a dynamically generated binary voxelization of the overlap region. Consequently, we can handle collisions with arbitrary animated geometry. Our approach runs entirely on the GPU, avoiding costly CPU-GPU memory transfer and exploiting the GPU’s computational power. Surfaces are rendered with the hardware tessellation unit, allowing for adaptively-rendered, high-frequency surface detail. Ultimately, our algorithm enables fine-scale surface deformations from geometry impact with very little computational overhead, running well below a millisecond even in complex scenes. As our results demonstrate, our approach is ideally suited to many real-time applications such as video games and authoring tools.
    Proceedings of the 6th ACM SIGGRAPH/ Eurographics Conference on High-Performance Graphics, Lyon, France; 06/2014
  • Jan Kretschmer, Bernhard Preim, Marc Stamminger
    EuroVis Short Paper; 06/2014
  •
    ABSTRACT: We present a novel method for the interactive markerless reconstruction of human heads using a single commodity RGB-D sensor. Our entire reconstruction pipeline is implemented on the graphics processing unit and allows us to obtain high-quality reconstructions of the human head using an interactive and intuitive reconstruction paradigm. The core of our method is a fast GPU-based nonlinear quasi-Newton solver that allows us to leverage all information of the RGB-D stream and fit a statistical head model to the observations at interactive frame rates. By jointly solving for shape, albedo and illumination parameters, we are able to reconstruct high-quality models including illumination-corrected textures. All obtained reconstructions have a common topology and can be directly used as assets for games, films and various virtual reality applications. We show motion retargeting, retexturing and relighting examples. The accuracy of the presented algorithm is evaluated by a comparison against ground truth data.
    Computer Animation and Virtual Worlds 05/2014; 25(3-4). DOI:10.1002/cav.1584 · 0.44 Impact Factor
  •
    ABSTRACT: For a long time, GPUs have primarily been optimized to render more and more triangles with increasingly flexible shading. Scene data itself, however, has typically been generated on the CPU and then uploaded to GPU memory. Therefore, widely used techniques that generate geometry at render time on demand for the rendering of smooth and displaced surfaces were not applicable to interactive applications. Recent advances in graphics hardware, in particular the GPU tessellation unit, overcome this limitation: complex geometry can now be generated within the GPU's rendering pipeline on the fly. GPU hardware tessellation enables the generation of smooth parametric surfaces and the application of displacement mapping in real-time applications. However, many well-established approaches from offline rendering are not directly transferable, due to the limited tessellation patterns and the parallel execution model of the tessellation stage. In this state-of-the-art report, we provide an overview of recent work and challenges in this area by summarizing, discussing and comparing methods for the rendering of smooth and highly detailed surfaces in real-time.
    Eurographics 2014; 04/2014
  •
    ABSTRACT: The precise modeling of vascular structures plays a key role in medical imaging applications, such as diagnosis, therapy planning and blood flow simulations. For the simulation of blood flow in particular, high-precision models are required to produce accurate results. It is thus common practice to perform extensive manual data polishing on vascular segmentations prior to simulation. This usually involves a complex tool chain which is highly impractical for clinical on-site application. To close this gap in current blood flow simulation pipelines, we present a novel technique for interactive vascular modeling which is based on implicit sweep surfaces. Our method is able to generate and correct smooth high-quality models based on geometric centerline descriptions on the fly. It supports complex vascular free-form contours and consequently allows for an accurate and fast modeling of pathological structures such as aneurysms or stenoses. We extend the concept of implicit sweep surfaces to achieve increased robustness and applicability as required in the medical field. We finally compare our method to existing techniques and provide case studies that confirm its contribution to current simulation pipelines.
    IEEE Transactions on Visualization and Computer Graphics 12/2013; 19(12):2828-37. DOI:10.1109/TVCG.2013.169 · 1.92 Impact Factor
  •
    ABSTRACT: Existing GPU antialiasing techniques, such as MSAA or MLAA, focus on reducing aliasing artifacts along silhouette boundaries or edges in image space. However, they neglect aliasing from shading in the case of high-frequency geometric detail. This may lead to a shading aliasing artifact that resembles Baily's bead phenomenon: the degradation of continuous specular highlights into a string of pearls. These types of artifacts are particularly striking on high-quality surfaces. So far, the only way of removing aliasing from shading is to globally supersample the entire image with a large number of samples. However, globally supersampling the image is slow and significantly increases bandwidth consumption. We propose three adaptive approaches that locally supersample triangles on the GPU only where necessary. Thereby, we efficiently remove artifacts from shading, while aliasing along silhouettes is reduced by efficient hardware MSAA.
    Computers & Graphics 12/2013; 37(8):955–962. DOI:10.1016/j.cag.2013.08.002 · 1.03 Impact Factor
  •
    ABSTRACT: Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.
    ACM Transactions on Graphics 11/2013; 32(6). DOI:10.1145/2508363.2508374 · 3.73 Impact Factor
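The spatial hashing scheme can be sketched as follows (a Python sketch; the three large primes follow the widely used hash of Teschner et al., and the 4 mm voxel size, 8³ block dimension, and table size are assumed defaults, not values taken from the paper):

```python
import math

# Primes for the hash H(x, y, z) = (x*P1 XOR y*P2 XOR z*P3) mod n,
# a common choice for spatial hashing (after Teschner et al.).
P1, P2, P3 = 73856093, 19349669, 83492791

def world_to_block(p, voxel_size=0.004, block_dim=8):
    # Quantize a world-space point to the integer coordinates of the
    # voxel block (block_dim^3 voxels) that contains it.
    extent = voxel_size * block_dim
    return tuple(int(math.floor(c / extent)) for c in p)

def block_hash(block, table_size=1 << 20):
    # Map unbounded block coordinates into a finite hash table; only
    # blocks near observed surfaces are ever inserted, which is what
    # compresses space. Collisions are resolved per bucket.
    x, y, z = block
    return ((x * P1) ^ (y * P2) ^ (z * P3)) % table_size
```

Because the table stores only occupied blocks, lookup and update cost is independent of scene extent, unlike a regular or hierarchical grid over the full volume.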
  •
    ABSTRACT: Hardware tessellation is one of the latest GPU features. Triangle or quad meshes are tessellated on-the-fly, where the tessellation level is chosen adaptively in a separate shader. The hardware tessellator only generates topology; attributes such as positions or texture coordinates of the newly generated vertices are determined in a domain shader. Typical applications of hardware tessellation are view dependent tessellation of parametric surfaces and displacement mapping. Often, the attributes for the newly generated vertices are stored in textures, which requires uv unwrapping, chartification, and atlas generation of the input mesh—a process that is time consuming and often requires manual intervention. In this paper, we present an alternative representation that directly stores optimized attribute values for typical hardware tessellation patterns and simply assigns these attributes to the generated vertices at render time. Using a multilevel fitting approach, the attribute values are optimized for several resolutions. Thereby, we require no parameterization, save memory by adapting the density of the samples to the content, and avoid discontinuities by construction. Our representation is optimally suited for displacement mapping: it automatically generates seamless, view-dependent displacement mapped models. The multilevel fitting approach generates better low-resolution displacement maps than simple downfiltering. By properly blending levels, we avoid artifacts such as popping or swimming surfaces. We also show other possible applications such as signal-optimized texturing or light baking. Our representation can be evaluated in a pixel shader, resulting in signal adaptive, parameterization-free texturing, comparable to PTex or Mesh Colors. Performance evaluation shows that our representation is on par with standard texture mapping and can be updated in real time, allowing for applications such as interactive sculpting.
    IEEE Transactions on Visualization and Computer Graphics 09/2013; 19(9):1488-1498. DOI:10.1109/TVCG.2013.44 · 1.92 Impact Factor
  •
    ABSTRACT: We present a novel representation for storing sub-triangle signals, such as colors, normals, or displacements directly with the triangle mesh. Signal samples are stored as guided by hardware-tessellation patterns. Thus, we can directly render from our representation by assigning signal samples to attributes of vertices generated by the hardware tessellator. Contrary to texture mapping, our approach does not require any atlas generation, chartification, or uv-unwrapping. Thus, it does not suffer from texture-related artifacts, such as discontinuities across chart boundaries or distortion. Moreover, our approach allows specifying the optimal sampling rate adaptively on a per triangle basis, resulting in significant memory savings for most signal types. We propose a signal optimal approach for converting arbitrary signals, including existing assets with textures or mesh colors, into our representation. Further, we provide efficient algorithms for mip-mapping, bi- and tri-linear interpolation directly in our representation. Our approach is optimally suited for displacement mapping: it automatically generates crack-free, view-dependent displacement mapped models enabling continuous level-of-detail.
    IEEE Transactions on Visualization and Computer Graphics 08/2013; 19(9):1488-1498. DOI:10.1145/2159616.2159645 · 1.92 Impact Factor
  • Henry Schäfer, Benjamin Keinert, Marc Stamminger
    ABSTRACT: We propose a novel method for local displacement events in large scenes, such as scratches, footsteps, or sculpting operations. Deformations are stored as displacements for vertices generated by hardware tessellation. Adaptive mesh refinement, application of the displacement and all involved memory management happen completely on the GPU. We show various extensions to our approach, such as on-the-fly normal computation and multi-resolution editing. In typical game scenes we perform local deformations at arbitrary positions in far less than one millisecond. This makes the method particularly suited for games and interactive sculpting applications.
    Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, Anaheim, CA; 07/2013
  • F. Bauer, M. Stamminger
    ABSTRACT: Creating content is a vital task in computer graphics. In this paper we evaluate a constraint-based scene description using a multi-agent system known from artificial intelligence. By using agents we separate the modeling process into small and easy-to-understand tasks. The parameters for each agent can be changed at any time. Re-evaluating the agent system results in a consistently updated scene, a process that allows artists to experiment until they find the desired result while still leveraging the power of constraint-based modeling. Since we only need to evaluate modified agents when updating the scene, we can even use this description to perform modeling tasks on mobile devices.
    Proceedings of the 28th Spring Conference on Computer Graphics; 03/2013
  •
    ABSTRACT: Abdominal aortic aneurysms are a common disease of the aorta and are treated minimally invasively in about 33 % of cases. Treatment is done by placing a stent graft in the aorta to prevent the aneurysm from growing. Guidance during the procedure is facilitated by fluoroscopic imaging. Unfortunately, due to the low soft-tissue contrast in X-ray images, the aorta itself is not visible without the application of contrast agent. To overcome this issue, advanced techniques allow segmenting the aorta from pre-operative data, such as CT or MRI. Overlay images are then rendered from a mesh representation of the segmentation and fused with the live fluoroscopic images, with the aim of improving the visibility of the aorta during the procedure. Current overlay images typically use forward projections of the mesh representation. This fusion technique shows deficiencies in both the 3-D information of the overlay and the visibility of the fluoroscopic image underneath. We present a novel approach to improve the visualization of the overlay images using non-photorealistic rendering techniques. Our method preserves the visibility of the devices in the fluoroscopic images while, at the same time, providing 3-D information of the fused volume. An evaluation by clinical experts shows that our method is preferred over current state-of-the-art overlay techniques. We compared three visualization techniques to the standard visualization; our silhouette approach was chosen by 67 % of the clinical experts, clearly showing the superiority of the new approach.
    SPIE Medical Imaging; 01/2013

Publication Stats

2k Citations
57.03 Total Impact Points

Institutions

  • 2002–2015
    • Nuremberg University of Music
      Nuremberg, Bavaria, Germany
  • 1997–2014
    • Universitätsklinikum Erlangen
      Erlangen, Bavaria, Germany
  • 1997–2007
    • Friedrich-Alexander Universität Erlangen-Nürnberg
      • Department of Computer Science
      Erlangen, Bavaria, Germany
  • 2003
    • University of Nice-Sophia Antipolis
      Nice, Provence-Alpes-Côte d'Azur, France
  • 2002–2003
    • Bauhaus Universität Weimar
      Weimar, Thuringia, Germany
  • 2000
    • Max Planck Institute for Informatics
      Saarbrücken, Saarland, Germany