Figure 2 - uploaded by Jay Busch

(left) Seven of the cameras used to capture the performance. (right) The array of 216 video projectors used to display the subject.
Context in source publication
Context 1
... present a system for capturing and rendering life-size 3D human subjects on an automultiscopic display. Automultiscopic 3D displays allow a large number of viewers to experience 3D content simultaneously without the hassle of special glasses or head gear. Such displays are ideal for human subjects, as they allow for natural personal interactions with 3D cues such as eye gaze and complex hand gestures. In this talk, we focus on a case study in which our system was used to digitize television host Morgan Spurlock for his documentary show "Inside Man" on CNN. Automultiscopic displays work by generating many simultaneous views with high angular density over a wide field of view. The angular spacing between views must be small enough that each eye perceives a distinct and different view. As the user moves around the display, the eye smoothly transitions from one view to the next. We generate multiple views using a dense horizontal array of video projectors. As video projectors continue to shrink in size, power consumption, and cost, it is now possible to closely stack hundreds of projectors so that their lenses are almost continuous. However, this display presents a new challenge for content acquisition: it would require hundreds of cameras to directly measure every projector ray. We achieve similar quality with a new view interpolation algorithm suitable for dense automultiscopic displays. Our interpolation algorithm builds on Einarsson et al. [2006], who used optical flow to resample a sparse light field. While Einarsson et al. were limited to cyclical motions on a rotating turntable, we use an array of 30 unsynchronized Panasonic X900MK 60p consumer cameras spaced over 180 degrees to capture unconstrained motion. We first synchronize our videos to within 1/120 of a second by aligning their corresponding sound waveforms. We then compute pair-wise spatial flow correspondences between cameras using GPU optical flow.
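The audio-based synchronization step described above can be sketched as a cross-correlation of the cameras' soundtracks: the lag that maximizes the correlation is the time offset between two recordings. This is a minimal illustration, not the paper's implementation; all names are illustrative.

```python
import numpy as np

def estimate_offset(audio_a, audio_b, sample_rate):
    """Return the offset (in seconds) of audio_a relative to audio_b,
    found as the peak of their FFT-based circular cross-correlation."""
    n = len(audio_a)
    fa = np.fft.rfft(audio_a, n)
    fb = np.fft.rfft(audio_b, n)
    xcorr = np.fft.irfft(fa * np.conj(fb), n)
    lag = int(np.argmax(xcorr))
    if lag > n // 2:          # wrap-around corresponds to a negative offset
        lag -= n
    return lag / sample_rate

# Synthetic check: two impulse "claps", the second delayed by 100 samples
rate = 48000
click = np.zeros(1000); click[10] = 1.0
delayed = np.zeros(1000); delayed[110] = 1.0
print(estimate_offset(delayed, click, rate))  # 100/48000 ≈ 0.00208 s
```

In practice one would correlate band-passed, normalized waveforms over a sliding window to be robust to noise, but the peak-of-correlation principle is the same.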
As each camera pair is processed independently, the pipeline can be highly parallelized. As a result, we achieve much shorter processing times than traditional multi-camera stereo reconstructions. Our view interpolation algorithm maps images directly from the original video sequences to all the projectors in real time, and could easily scale to handle additional cameras or projectors. For the "Inside Man" documentary we recorded a 54-minute interview with Morgan Spurlock and processed 7 minutes of 3D video for the final show. Our projector array consists of 216 video projectors mounted in a semicircle with a 3.4 m radius. The narrow 0.625° spacing between projectors provides a large display depth of field with minimal aliasing. We use LED-powered Qumi v3 projectors in a portrait orientation (Fig. 2). At this distance the projected pixels fill a 2 m tall anisotropic screen with a life-size human body (Fig. 1). The screen material is a vertically anisotropic light-shaping diffuser manufactured by Luminit Co. The material scatters light vertically (60°) so that each pixel can be seen at multiple viewing heights, while maintaining a narrow horizontal blur (1°) that smoothly fills in the gaps between projectors with adjacent pixels. More details on the screen material can be found in Jones et al. [2014]. We use six computers to render the projector images. Each computer contains two ATI Eyefinity 7800 graphics cards with 12 total video outputs. Each video signal is then divided three ways using a Matrox TripleHead2Go HDMI video splitter. In the future, we plan on capturing longer-format interviews and other dynamic performances. We are working to incorporate natural language processing to allow true interactive conversations with realistic 3D humans.
Einarsson, P., Chabert, C.-F., Jones, A., Ma, W.-C., Lamond, B., Hawkins, T., Bolas, M., Sylwan, S., and Debevec, P. 2006. Relighting human locomotion with flowed reflectance fields. In Rendering Techniques 2006: 17th Eurographics Symposium on Rendering, 183–194.
Jones, A., Nagano, K., Liu, J., Busch, J., Yu, X., Bolas, M., and Debevec, P. 2014. Interpolating vertical parallax for an autostereoscopic 3d projector ...
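The array geometry quoted above (216 projectors at 0.625° spacing on a 3.4 m semicircular arc, fed by 6 computers × 12 outputs × 3-way splitters = 216 signals) can be worked through in a few lines. This is a hypothetical sketch of the implied arithmetic, not the authors' code; the centred-arc layout and the nearest-view mapping are assumptions.

```python
import math

NUM_PROJECTORS = 216
SPACING_DEG = 0.625   # angular spacing between adjacent projectors
RADIUS_M = 3.4        # radius of the semicircular mount

# Total angular coverage: (N - 1) gaps of 0.625 degrees each
coverage_deg = (NUM_PROJECTORS - 1) * SPACING_DEG   # 134.375

def projector_position(i):
    """(x, z) position of projector i on the arc, centred on the screen."""
    theta = math.radians(-coverage_deg / 2 + i * SPACING_DEG)
    return (RADIUS_M * math.sin(theta), RADIUS_M * math.cos(theta))

def nearest_view(viewer_angle_deg):
    """Index of the projector whose ray best matches a viewer direction,
    clamped to the ends of the array."""
    i = round((viewer_angle_deg + coverage_deg / 2) / SPACING_DEG)
    return max(0, min(NUM_PROJECTORS - 1, i))

print(coverage_deg)        # 134.375 degrees of coverage
print(nearest_view(0.0))   # 108 (middle of the array)
```

The 0.625° spacing is well under the roughly 1° horizontal diffusion of the screen, which is why adjacent projector pixels blend smoothly rather than showing visible gaps.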
Similar publications
In mathematics lessons, explaining is learned not through instruction or pure construction, but in the interactions between all participants. But what exactly is actually learned about this in each class? Is the object of learning even the same in all classes? If not, how does it differ?
The interdisciplinary vide...
The ongoing research project is divided into three areas of investigation. This publication presents the results of the first qualitative survey, already completed, of design managers, designers, and development specialists. The respondents included both companies and design agencies that, for these...
A strand of research and theory that has developed in parallel to adult-education research on learners is so-called interest research. As far as I can tell, it has hardly been taken up in research on adult and continuing education, apart from a few exceptions (Schmidt 2006; Kade 1979). Unlike Schmidt, K...
We give an elementary introduction to the Khovanov-Lipshitz-Sarkar stable homotopy type.
We present the author's result obtained with Kauffman, and the author's result obtained with Kauffman and Nikonov.
We pose some open questions in Section 4.
Citations
... Bonding a spherical lenslet array [24], cylindrical lenticular array [22], or parallax barrier [15] onto a conventional high-resolution 2D display is a popular approach. Another option is to combine multiple projectors using a reflective or transmissive screen that has a very narrow scattering profile [2,22,17]. Xia et al. [37] use light field generation to achieve a 360-degree surround-viewable volumetric display with proper occlusion. Jones et al. [16] combine a fast-spinning slanted anisotropic mirror with a synchronized projector to reproduce a light field that can be viewed from any angle. ...
This paper describes a simple 3D display that can be built from a tablet computer and a plastic sheet folded into a cone. This display allows viewing a three-dimensional object from any direction over a 360-degree path of travel without the use of special glasses. Inspired by the classic Pepper's Ghost illusion, our approach uses a curved transparent surface to reflect the image displayed on a 2D display. By properly pre-distorting the displayed image, our system can produce a perspective-correct image to the viewer that appears to be suspended inside the reflector. We use the gyroscope integrated into modern tablets to adjust the rendered image based on the relative orientation of the viewer. Our particular reflector geometry was determined by analyzing the optical performance and stereo-compatibility of a space of rotationally-symmetric conic surfaces. We present several prototypes along with side-by-side comparisons with reference images.
... One of the best-known camera arrays for capturing light fields is the Stanford Multi-Camera Array [21], consisting of 128 video cameras that can be arranged in various layouts, such as a linear array of parallel cameras or a converging array of cameras with horizontal and/or vertical parallax. Numerous other multi-camera setups have been built since then for both research and commercial purposes, e.g. the 100-camera system at Nagoya University [22], the 27-camera system at Holografika [23] (discussed later in this paper), or the 30-camera system from the USC Institute for Creative Technologies [24]. These camera systems capture a light field that is sufficiently dense (in terms of angular resolution) and wide (in terms of baseline) that the captured data can be visualized on a light field display without synthesizing additional views beforehand. ...
Light field 3D displays represent a major step forward in visual realism, providing glasses-free spatial vision of real or virtual scenes. Applications that capture and process live imagery have to handle data captured by potentially tens to hundreds of cameras and control tens to hundreds of projection engines that make up the human-perceivable 3D light field, using a distributed processing system. The associated massive data processing is difficult to scale beyond a specific number and resolution of images, limited by the capabilities of the individual computing nodes. The authors therefore analyze the bottlenecks and data flow of the light field conversion process and identify possibilities for better scalability. Based on this analysis they propose two different architectures for distributed light field processing. To avoid using uncompressed video data all along the processing chain, the authors also analyze how the operation of the proposed architectures can be supported by existing image/video codecs.
We propose a compact multi-projection system based on the integral floating method with waveguide projection. Waveguide projection can reduce the projection distance by folding the optical path multiple times inside the waveguide. The proposed system is composed of a wedge prism, which is used as a waveguide, multiple projection units, and an anisotropic screen made of a floating lens combined with a vertical diffuser. As the projected image propagates through the wedge prism, it is reflected at the surfaces of the prism by total internal reflection, and the final view image is created by the floating lens at the viewpoints. The position of each viewpoint is determined by the lens equation, and the interval between viewpoints is calculated from the magnification of the collimating lens and the interval between projection units. We believe that the proposed method can be useful for implementing a large-scale autostereoscopic 3D system with high-quality 3D images using projection optics. In addition, the reduced volume of the system will relax installation constraints and widen the applications of multi-projection 3D displays.
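The viewpoint arithmetic in the abstract above can be made concrete with the thin-lens equation. The following is a hedged sketch under assumed example values (the 0.5 m focal length and 0.75 m object distance are illustrative, not from the paper):

```python
def image_distance(focal_length, object_distance):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i (the viewpoint position formed by the floating lens)."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def viewpoint_interval(unit_interval, magnification):
    """Viewpoint spacing ~ projection-unit interval scaled by the
    collimating-lens magnification, as described in the abstract."""
    return unit_interval * magnification

# Example: floating lens f = 0.5 m, projected image 0.75 m in front of it
print(image_distance(0.5, 0.75))   # 1.5 m viewpoint distance
print(viewpoint_interval(0.02, 5)) # 0.1 m between adjacent viewpoints
```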