Marc Stamminger

Friedrich-Alexander-University of Erlangen-Nürnberg, Erlangen, Bavaria, Germany

Publications (137) · 82.42 Total Impact Points

  • ABSTRACT: The 3D reconstruction of archeological sites is still an expensive and time-consuming task. In this article, we present a novel interactive, low-cost approach to 3D reconstruction and compare it to a standard photogrammetry pipeline based on high-resolution photographs. Our novel real-time reconstruction pipeline is based on a low-cost, consumer-level hand-held RGB-D sensor. While scanning, the user sees a live view of the current reconstruction, allowing them to intervene immediately and adapt the sensor path to the current scanning result. After a raw reconstruction has been acquired, the digital model is interactively warped to fit a geo-referenced map using a handle-based deformation paradigm. Even large sites can be scanned within a few minutes, and no costly postprocessing is required. The quality of the acquired digitized raw 3D models is evaluated by comparing them to actual imagery, a geo-referenced map of the excavation site, and a photogrammetry-based reconstruction. We performed extensive tests under real-world conditions on an archeological excavation in Metropolis, Ionia, Turkey. We found that the reconstruction quality of our approach is comparable to that of photogrammetry. Yet, both approaches have advantages and shortcomings in specific setups, which we analyze and discuss.
    Article · Nov 2015 · Journal on Computing and Cultural Heritage
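    A note on the map-fitting step: at its core it is a small alignment problem. The sketch below is a minimal stand-in, not the paper's handle-based non-rigid warp: it fits a 2D similarity transform from hypothetical handle-to-map correspondences via the Umeyama method.

    import numpy as np

    def fit_similarity_2d(src, dst):
        # Least-squares 2D similarity (scale s, rotation R, translation t)
        # mapping handle points `src` onto map points `dst` (Umeyama;
        # the rare reflection case is handled only approximately here).
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(0), dst.mean(0)
        S, D = src - mu_s, dst - mu_d
        U, sig, Vt = np.linalg.svd(D.T @ S)   # cross-covariance of the point sets
        R = U @ Vt
        if np.linalg.det(R) < 0:              # guard against reflections
            U[:, -1] *= -1
            R = U @ Vt
        s = sig.sum() / (S ** 2).sum()        # optimal uniform scale
        t = mu_d - s * (R @ mu_s)
        return s, R, t

    # hypothetical handles dragged onto a geo-referenced map
    s, R, t = fit_similarity_2d([(0, 0), (1, 0), (0, 2)],
                                [(10, 5), (10, 7), (6, 5)])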
  • ABSTRACT: We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.
    Article · Oct 2015 · ACM Transactions on Graphics
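    The parameter-space transfer reduces to simple vector arithmetic on the expression coefficients of the face model. A minimal sketch, assuming a generic blendshape rig (all names here are hypothetical, not the paper's actual model):

    import numpy as np

    def transfer_expression(src_expr, tgt_expr, alpha=1.0):
        # Move the target's expression coefficients toward the source's;
        # expressions are offsets from the model's neutral pose.
        return (1.0 - alpha) * tgt_expr + alpha * src_expr

    def synthesize(neutral, blendshapes, params):
        # neutral: (V, 3) vertices; blendshapes: (V, 3, K); params: (K,)
        return neutral + blendshapes @ params

    rng = np.random.default_rng(0)
    V, K = 5, 4
    neutral, B = rng.normal(size=(V, 3)), rng.normal(size=(V, 3, K))
    src_expr = rng.normal(size=K)                 # tracked source expression
    mesh = synthesize(neutral, B, transfer_expression(src_expr, np.zeros(K)))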
  • ABSTRACT: Spherical Fibonacci point sets yield nearly uniform point distributions on the unit sphere S² ⊂ ℝ³. The forward generation of these point sets has been widely researched and is easy to implement, such that they have been used in various applications. Unfortunately, the lack of an efficient mapping from points on the unit sphere to their closest spherical Fibonacci point set neighbors rendered them impractical for a wide range of applications, especially in computer graphics. Therefore, we introduce an inverse mapping from points on the unit sphere which yields the nearest neighbor in an arbitrarily sized spherical Fibonacci point set in constant time, without requiring any precomputations or table lookups. We show how to implement this inverse mapping on GPUs while addressing arising floating point precision problems. Further, we demonstrate the use of this mapping and its variants, and show how to apply it to fast unit vector quantization. Finally, we illustrate the means by which to modify this inverse mapping for texture mapping with smooth filter kernels and showcase its use in the field of procedural modeling.
    Article · Oct 2015 · ACM Transactions on Graphics
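    For reference, the forward generation the abstract calls easy to implement looks roughly as follows (the standard construction, not code from the paper; the constant-time inverse mapping is the paper's contribution and is not reproduced here):

    import numpy as np

    def spherical_fibonacci(n):
        # n nearly uniform points on the unit sphere: golden-angle
        # increments in azimuth, uniform strips in z.
        golden = (1.0 + np.sqrt(5.0)) / 2.0
        i = np.arange(n)
        phi = 2.0 * np.pi * i / golden
        z = 1.0 - (2.0 * i + 1.0) / n
        r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
        return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    pts = spherical_fibonacci(1024)
    assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)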
  • ABSTRACT: Using projection mapping enables us to bring virtual worlds into shared physical spaces. In this paper, we present a novel, adaptable and real-time projection mapping system, which supports multiple projectors and high quality rendering of dynamic content on surfaces of complex geometrical shape. Our system allows for smooth blending across multiple projectors using a new optimization framework that simulates the diffuse direct light transport of the physical world to continuously adapt the color output of each projector pixel. We present a real-time solution to this optimization problem using off-the-shelf graphics hardware, depth cameras and projectors. Our approach enables us to move projectors, depth cameras or objects while maintaining the correct illumination, in real-time, without the need for markers on the object. It also allows for projectors to be removed or dynamically added, and provides compelling results with only commodity hardware.
    Article · Oct 2015 · ACM Transactions on Graphics
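    The per-pixel color adaptation can be pictured with a much-reduced toy problem. The paper solves a full diffuse light-transport optimization; the sketch below only splits a desired intensity across overlapping projectors as a minimum-norm solution:

    import numpy as np

    def blend_weights(w):
        # w: (P,) attenuation of each projector at one surface point.
        # Returns per-projector output scales so that sum(w * out) == 1,
        # splitting the load proportionally to each projector's strength.
        w = np.asarray(w, float)
        total = (w * w).sum()
        return w / total if total > 0 else np.zeros_like(w)

    target = 0.8                       # desired intensity at the point
    w = np.array([0.9, 0.4])           # two overlapping projectors
    out = target * blend_weights(w)    # per-projector pixel values
    assert np.isclose((w * out).sum(), target)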
  • Randolf Schärfig · Marc Stamminger · Kai Hormann
    ABSTRACT: Indirect illumination is an essential part of realistically rendering virtual scenes. In this paper we present a new method for computing multi-bounce indirect illumination for diffuse surfaces which is particularly well-suited for indoor scenes with complex occlusion, where an appropriate simulation of the indirect illumination is extremely important. The technique presented in this paper combines the benefits of shooting methods with the concept of photon mapping to compute a convincing light map for the scene with full diffuse lighting effects. The main idea is to carry out a multi-bounce light distribution almost entirely on the GPU using a shooting approach with virtual point lights. The final result is then stored in a texture atlas by projecting the energy from each virtual point light into the texels visible from its perspective. The technique uses only a few resources on the graphics card and is flexible in the sense that it can easily be adjusted for either quality or speed, allowing the user to create convincing results in a matter of seconds or minutes, depending on the scene complexity.
    Article · Oct 2015 · Computers & Graphics
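    The transport between virtual point lights (VPLs) and texels follows the usual VPL form factor. A toy single-bounce gather in Python (names hypothetical, visibility omitted; the paper shoots from each VPL into the texels it sees rather than gathering):

    import numpy as np

    def gather_vpls(texel_pos, texel_nrm, vpl_pos, vpl_nrm, vpl_flux):
        # texel_pos, texel_nrm: (N, 3); vpl_pos, vpl_nrm: (M, 3); vpl_flux: (M,)
        out = np.zeros(len(texel_pos))
        for p, n, flux in zip(vpl_pos, vpl_nrm, vpl_flux):
            d = texel_pos - p
            dist2 = (d * d).sum(axis=1) + 1e-4          # clamp the singularity
            wi = d / np.sqrt(dist2)[:, None]            # VPL-to-texel direction
            cos_r = np.clip(-(texel_nrm * wi).sum(axis=1), 0.0, 1.0)
            cos_s = np.clip(wi @ n, 0.0, 1.0)
            out += flux * cos_s * cos_r / dist2         # no visibility term
        return out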
  • M. Nießner · B. Keinert · M. Fisher · M. Stamminger · C. Loop · H. Schäfer
    ABSTRACT: Graphics hardware has progressively been optimized to render more triangles with increasingly flexible shading. For highly detailed geometry, interactive applications restricted themselves to performing transforms on fixed geometry, since they could not incur the cost required to generate and transfer smooth or displaced geometry to the GPU at render time. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit, complex geometry can now be generated on the fly within the GPU's rendering pipeline. This has enabled the generation and displacement of smooth parametric surfaces in real-time applications. However, many well-established approaches in offline rendering are not directly transferable due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this survey, we provide an overview of recent work and challenges in this topic by summarizing, discussing, and comparing methods for the rendering of smooth and highly detailed surfaces in real time.
    Article · Sep 2015 · Computer Graphics Forum
  • ABSTRACT: We present a novel method to obtain fine-scale detail in 3D reconstructions generated with low-budget RGB-D cameras or other commodity scanning devices. As the depth data of these sensors is noisy, truncated signed distance fields are typically used to regularize out the noise, which unfortunately leads to over-smoothed results. In our approach, we leverage RGB data to refine these reconstructions through shading cues, as color input is typically of much higher resolution than the depth data. As a result, we obtain reconstructions with high geometric detail, far beyond the depth resolution of the camera itself. Our core contribution is shading-based refinement directly on the implicit surface representation, which is generated from globally-aligned RGB-D images. We formulate the inverse shading problem on the volumetric distance field, and present a novel objective function which jointly optimizes for fine-scale surface geometry and spatially-varying surface reflectance. In order to enable the efficient reconstruction of sub-millimeter detail, we store and process our surface using a sparse voxel hashing scheme which we augment by introducing a grid hierarchy. A tailored GPU-based Gauss-Newton solver enables us to refine large shape models to previously unseen resolution within only a few seconds.
    Article · Jul 2015 · ACM Transactions on Graphics
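    The data term in this line of work is typically Lambertian shading under low-order spherical harmonics lighting. A minimal sketch of such a per-sample residual (an assumption for illustration; the paper's actual objective also carries reflectance and regularization terms):

    import numpy as np

    def sh_basis(normals):
        # First-order (4-coefficient) real SH basis of unit normals (N, 3).
        c0, c1 = 0.282095, 0.488603
        n = np.asarray(normals, float)
        return np.column_stack([np.full(len(n), c0),
                                c1 * n[:, 1], c1 * n[:, 2], c1 * n[:, 0]])

    def shading_residual(albedo, normals, light, observed):
        # Per-sample data term a Gauss-Newton solver would minimize:
        # rendered intensity minus observed intensity.
        return albedo * (sh_basis(normals) @ light) - observed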
  • ABSTRACT: Scene environments in modern games include a wealth of moving and animated objects, which is key to creating vivid virtual worlds. An essential aspect in dynamic scenes is the interaction between scene objects. Unfortunately, many real-time applications only support rigid body collisions due to tight time budgets. In order to facilitate visual feedback of collisions, residuals such as scratches or impacts with soft materials like snow or sand are realized by dynamic decal texture placements. However, decals cannot modify the underlying surface geometry, which would be highly desirable for improved realism. In this chapter, we present a novel real-time technique to overcome this limitation by enabling fully automated fine-scale surface deformations resulting from object collisions. That is, we propose an efficient method to incorporate high-frequency deformations upon physical contact into dynamic displacement maps directly on the GPU. Overall, we can handle large dynamic scene environments with many objects at minimal runtime overhead.
    Chapter · Mar 2015
  • H Schäfer · J Raab · B Keinert · M Meyer · M Stamminger · M Nießner
    [Teaser figure: rendering of the Frog model; feature-adaptive subdivision [Nießner et al. 2012a] takes 0.68 ms, our locally adaptive subdivision 0.36 ms; colors denote different subdivision levels.]
    ABSTRACT: Feature-adaptive subdivision (FAS) is one of the state-of-the-art real-time rendering methods for subdivision surfaces on modern GPUs. It enables efficient and accurate rendering of subdivision surfaces in many interactive applications, such as video games or authoring tools. In this paper, we present dynamic feature-adaptive subdivision (DFAS), which improves upon FAS by enabling an independent subdivision depth for every irregularity. Our subdivision kernels fill a dynamic patch buffer on the fly with the appropriate number of patches corresponding to the chosen level-of-detail scheme. By reducing the number of generated and processed patches, DFAS significantly improves upon the performance of static FAS.
    Conference Paper · Feb 2015
  • K. Selgrad · C. Reintges · D. Penk · P. Wagner · M. Stamminger
    ABSTRACT: We present a novel technique for rendering depth of field that addresses difficult overlap cases, such as close, but out-of-focus, geometry in the near-field. Such scene configurations are not managed well by state-of-the-art post-processing approaches since essential information is missing due to occlusion. Our proposed algorithm renders the scene from a single camera position and computes a layered image in a single pass by constructing per-pixel lists. These lists can be filtered progressively to generate differently blurred representations of the scene. We show how this structure can be exploited to generate depth of field in real-time, even in complex scene configurations.
    Article · Feb 2015
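    The layered image amounts to an A-buffer: every pixel keeps a list of all fragments rasterized onto it, so near-field occludees survive. A CPU stand-in for the structure (on the GPU it is built in a single pass with atomics; names hypothetical):

    from collections import defaultdict

    def build_pixel_lists(fragments):
        # fragments: iterable of (x, y, depth, color).
        # Returns dict (x, y) -> fragments sorted near-to-far.
        lists = defaultdict(list)
        for x, y, depth, color in fragments:
            lists[(x, y)].append((depth, color))
        for key in lists:
            lists[key].sort()                    # near-to-far per pixel
        return lists

    frags = [(0, 0, 2.0, "wall"), (0, 0, 0.5, "leaf"), (1, 0, 2.0, "wall")]
    layers = build_pixel_lists(frags)
    assert layers[(0, 0)][0] == (0.5, "leaf")    # out-of-focus near geometry kept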
  • ABSTRACT: We present the first real-time method for refinement of depth data using shape-from-shading in general uncontrolled scenes. Per frame, our real-time algorithm takes raw noisy depth data and an aligned RGB image as input, and approximates the time-varying incident lighting, which is then used for geometry refinement. This leads to dramatically enhanced depth maps at 30 Hz. Our algorithm makes few scene assumptions, handling arbitrary scene objects even under motion. To enable this type of real-time depth map enhancement, we contribute a new highly parallel algorithm that reformulates the inverse rendering optimization problem in prior work, allowing us to estimate lighting and shape in a temporally coherent way at video frame-rates. Our optimization problem is minimized using a new regular grid Gauss-Newton solver implemented fully on the GPU. We demonstrate results showing enhanced depth maps, which are comparable to offline methods but are computed orders of magnitude faster, as well as baseline comparisons with online filtering-based methods. We conclude with applications of our higher quality depth maps for improved real-time surface reconstruction and performance capture.
    Article · Nov 2014 · ACM Transactions on Graphics
  • ABSTRACT: Dedicated visualization methods are among the most important tools of modern computer-aided medical applications. Reformation methods such as Multiplanar Reformation or Curved Planar Reformation have evolved as useful tools that facilitate diagnostic and therapeutic work. In this paper, we present a novel approach that can be seen as a generalization of Multiplanar Reformation to curved surfaces. The main concept is to generate reformatted medical volumes driven by the individual anatomical geometry of a specific patient. This process generates flat views of anatomical structures that facilitate many tasks such as diagnosis, navigation and annotation. Our reformation framework is based on a non-linear as-rigid-as-possible volumetric deformation scheme that uses generic triangular surface meshes as input. To manage inevitable distortions during reformation, we introduce importance maps which allow controlling the error distribution and improving the overall visual quality in areas of elevated interest. Our method seamlessly integrates with well-established concepts such as the slice-based inspection of medical datasets and we believe it can improve the overall efficiency of many medical workflows. To demonstrate this, we additionally present an integrated visualization system and discuss several use cases that substantiate its benefits.
    Article · Nov 2014 · IEEE Transactions on Visualization and Computer Graphics
  • K. Selgrad · C. Dachsbacher · Q. Meyer · M. Stamminger
    ABSTRACT: In this paper, we introduce a novel technique for pre-filtering multi-layer shadow maps. The occluders in the scene are stored as variable-length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre-filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre-filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real-time even for complex scenes and difficult setups. Our results demonstrate that our pre-filtered maps are general and particularly scalable.
    Article · Oct 2014 · Computer Graphics Forum
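    The progressive filtering can be pictured as repeatedly merging the sorted occluder lists of neighboring texels, plus a fractional shadow test against the merged list. A toy sketch (the truncation policy here is a guess, not the paper's):

    import heapq

    def merge_texels(lists, max_len=8):
        # Coarser pre-filter levels hold the merged occluder depth
        # lists of their child texels, truncated to bound memory.
        return list(heapq.merge(*lists))[:max_len]

    def shadow_test(occluders, receiver_depth):
        # Fraction of stored occluder samples in front of the receiver.
        return sum(d < receiver_depth for d in occluders) / max(len(occluders), 1)

    merged = merge_texels([[0.2, 0.9], [0.4], [0.3, 0.8]])
    assert shadow_test(merged, 0.5) == 3 / 5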
  • Conference Paper: Enhanced Sphere Tracing
    ABSTRACT: In this paper we present several performance and quality enhancements to classical sphere tracing: First, we propose a safe, over-relaxation-based method for accelerating sphere tracing. Second, a method for dynamically preventing self-intersections when converting signed distance bounds enables control over precision and rendering performance. In addition, we present a method for significantly accelerating the sphere tracing intersection test for convex objects that are enclosed in convex bounding volumes. We also propose a screen-space metric for the retrieval of a good intersection point candidate in case sphere tracing does not converge, thus increasing rendering quality without sacrificing performance. Finally, discontinuity artifacts common in sphere tracing are reduced using a fixed-point iteration algorithm. We demonstrate complex scenes rendered in real-time with our method. The methods presented in this paper have applicability beyond the real-time rendering of procedurally generated scenes and can also be combined with path-tracing-based global illumination solutions.
    Conference Paper · Sep 2014
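    The over-relaxation idea is compact enough to sketch: step by omega times the distance bound and roll back whenever consecutive bounding spheres stop overlapping. A simplified Python rendition of that idea (the paper's actual listing carries more machinery, e.g. intersection point candidates):

    def sphere_trace(sdf, o, d, omega=1.6, eps=1e-4, t_max=100.0, max_iter=256):
        # Over-relaxed sphere tracing: the relaxed step is valid as long
        # as consecutive bounding spheres overlap; otherwise roll back
        # and continue with plain (omega = 1) sphere tracing.
        t, step, prev_radius = 0.0, 0.0, 0.0
        for _ in range(max_iter):
            r = sdf([oi + t * di for oi, di in zip(o, d)])
            radius = abs(r)
            if omega > 1.0 and radius + prev_radius < step:
                t -= step                 # overshoot: undo the relaxed step
                step, omega = 0.0, 1.0
                continue
            if radius < eps:
                return t                  # hit
            prev_radius = radius
            step = omega * r
            t += step
            if t > t_max:
                return None               # miss
        return None

    unit_sphere = lambda p: sum(x * x for x in p) ** 0.5 - 1.0
    assert abs(sphere_trace(unit_sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0)) - 2.0) < 1e-3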
  • ABSTRACT: We present a novel method to adaptively apply modifications to scene data stored in GPU memory. Such modifications may include interactive painting and sculpting operations in an authoring tool, or deformations resulting from collisions between scene objects detected by a physics engine. We only allocate GPU memory for the faces affected by these modifications to store fine-scale color or displacement values. This requires dynamic GPU memory management in order to assign and adaptively apply edits to individual faces at runtime. We present such a memory management technique based on a scan operation that is efficiently parallelizable. Since our approach runs entirely on the GPU, we avoid costly CPU-GPU memory transfer and eliminate typical bandwidth limitations. This minimizes runtime overhead to under a millisecond and makes our method ideally suited to many real-time applications such as video games and interactive authoring tools. In addition, our algorithm significantly reduces storage requirements and allows for much higher-resolution content compared to traditional global texturing approaches. Our technique can be applied to various mesh representations, including Catmull-Clark subdivision surfaces, as well as standard triangle and quad meshes. In this paper, we demonstrate several scenarios for these mesh types where our algorithm enables adaptive mesh refinement, local surface deformations, and interactive on-mesh painting and sculpting.
    Article · Aug 2014 · Computer Graphics Forum
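    The scan-based allocation at the heart of such a memory manager is easy to illustrate: an exclusive prefix sum over per-face tile requests yields each face's offset into one flat buffer. A CPU sketch, with numpy's cumsum standing in for the parallel GPU scan (names hypothetical):

    import numpy as np

    def allocate(requests):
        # requests: tiles needed per face (0 = untouched face).
        requests = np.asarray(requests)
        offsets = np.concatenate([[0], np.cumsum(requests)[:-1]])  # exclusive scan
        return offsets, int(requests.sum())

    sizes = [0, 4, 0, 1, 2]
    offsets, total = allocate(sizes)
    # face 1 writes to buffer[offsets[1] : offsets[1] + 4], and so on
    assert list(offsets) == [0, 0, 4, 4, 5] and total == 7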
  • ABSTRACT: We present a combined hardware and software solution for marker-less reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.
    Article · Jul 2014 · ACM Transactions on Graphics
  • ABSTRACT: We present a novel real-time approach for fine-scale surface deformations resulting from collisions. Deformations are represented by a high-resolution displacement function. When two objects collide, these offsets are updated directly on the GPU based on a dynamically generated binary voxelization of the overlap region. Consequently, we can handle collisions with arbitrary animated geometry. Our approach runs entirely on the GPU, avoiding costly CPU-GPU memory transfer and exploiting the GPU’s computational power. Surfaces are rendered with the hardware tessellation unit, allowing for adaptively-rendered, high-frequency surface detail. Ultimately, our algorithm enables fine-scale surface deformations from geometry impact with very little computational overhead, running well below a millisecond even in complex scenes. As our results demonstrate, our approach is ideally suited to many real-time applications such as video games and authoring tools.
    Conference Paper · Jun 2014
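    Conceptually, the displacement update is a per-texel minimum against the collider's penetration depth wherever the voxelization flags an overlap. A deliberately tiny toy version (the real method runs per-texel on the GPU against a binary voxelization):

    import numpy as np

    def imprint(displacement, penetration):
        # Keep the deeper of the stored offset and the new penetration
        # (negative displacement = dented inward); zero penetration
        # outside the overlap region leaves texels untouched.
        return np.minimum(displacement, -np.asarray(penetration, float))

    disp = np.zeros((2, 2))
    pen = np.array([[0.0, 0.3], [0.0, 0.1]])   # from the overlap voxelization
    assert (imprint(disp, pen) == [[0.0, -0.3], [0.0, -0.1]]).all()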
  • Jan Kretschmar · Bernhard Preim · Marc Stamminger
    Conference Paper · Jun 2014
  • ABSTRACT: We present a novel method for the interactive markerless reconstruction of human heads using a single commodity RGB-D sensor. Our entire reconstruction pipeline is implemented on the graphics processing unit and allows us to obtain high-quality reconstructions of the human head using an interactive and intuitive reconstruction paradigm. The core of our method is a fast graphics processing unit-based nonlinear quasi-Newton solver that allows us to leverage all information of the RGB-D stream and fit a statistical head model to the observations at interactive frame rates. By jointly solving for shape, albedo and illumination parameters, we are able to reconstruct high-quality models including illumination-corrected textures. All obtained reconstructions have a common topology and can be directly used as assets for games, films and various virtual reality applications. We show motion retargeting, retexturing and relighting examples. The accuracy of the presented algorithm is evaluated by a comparison against ground truth data.
    Article · May 2014 · Computer Animation and Virtual Worlds
  • ABSTRACT: For a long time, GPUs have primarily been optimized to render more and more triangles with increasingly flexible shading. However, scene data itself has typically been generated on the CPU and then uploaded to GPU memory. Therefore, widely used techniques that generate geometry at render time on demand for the rendering of smooth and displaced surfaces were not applicable to interactive applications. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit, this limitation can now be overcome: complex geometry can be generated within the GPU's rendering pipeline on the fly. GPU hardware tessellation enables the generation of smooth parametric surfaces or the application of displacement mapping in real-time applications. However, many well-established approaches in offline rendering are not directly transferable, due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this state-of-the-art report, we provide an overview of recent work and challenges in this topic by summarizing, discussing and comparing methods for the rendering of smooth and highly detailed surfaces in real-time.
    Conference Paper · Apr 2014

Publication Stats

2k Citations
82.42 Total Impact Points

Institutions

  • 1997-2015
    • Friedrich-Alexander-University of Erlangen-Nürnberg
      • Pattern Recognition Lab
      • Department of Computer Science
      Erlangen, Bavaria, Germany
    • Universitätsklinikum Erlangen
      • Department of Neurosurgery
      Erlangen, Bavaria, Germany
  • 2002-2013
    • Nuremberg University of Music
      Nuremberg, Bavaria, Germany
  • 2003
    • University of Nice-Sophia Antipolis
      Nice, Provence-Alpes-Côte d'Azur, France
  • 2002-2003
    • Bauhaus Universität Weimar
      Weimar, Thuringia, Germany
  • 2000
    • Max Planck Institute for Informatics
      Saarbrücken, Saarland, Germany