Fig. 4. Function of prism sheet.
Source publication
Article
One of the key techniques for vision-based communication is omnidirectional stereo (omnistereo) imaging, in which stereoscopic images for an arbitrary horizontal direction are captured and presented according to the viewing direction of the observer. Although omnistereo models have been surveyed in several studies, few omnistereo sensors have actua...

Context in source publication

Context 1
... prism sheet is an optical element that bends incident light rays by a fixed angle (Fig. 4). The incidence angle for a light ray to exit perpendicularly from the prism (uneven) side of our prism sheet is 23 degrees, as calculated in Appendix A. We can replace the impeller-shaped mirrors with prism sheets. Light rays for the left and right eyes are delivered to the center of the rotator with the prism sheets for their ...
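For context, the 23-degree value follows from applying Snell's law at both faces of the sheet. The sketch below is a rough illustration only, not the paper's Appendix A derivation: the 45-degree facet angle and refractive index of 1.45 are assumed values, chosen because they happen to reproduce an incidence angle near 23 degrees.

```python
import math

def perpendicular_exit_incidence(facet_deg: float, n: float) -> float:
    """Incidence angle (degrees, on the flat side) for which the ray
    leaves the prism facet perpendicular to the sheet plane.

    Assumed geometry: flat back face, sawtooth facets inclined at
    `facet_deg` to the sheet plane; Snell's law at both interfaces.
    """
    a = math.radians(facet_deg)
    # Refraction at the exiting facet: the outgoing ray lies along the
    # sheet normal, i.e. at angle `a` from the facet normal.
    inside_facet = math.asin(math.sin(a) / n)
    # Angle of the internal ray measured from the sheet normal.
    inside_flat = a - inside_facet
    # Refraction at the flat entry face.
    return math.degrees(math.asin(n * math.sin(inside_flat)))

# Illustrative values only: a 45-degree facet and n = 1.45 give roughly
# the 23-degree incidence angle quoted above.
print(round(perpendicular_exit_incidence(45.0, 1.45), 1))  # ~23.3
```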

Similar publications

Article
Back-focal-plane detection of micrometer-sized beads offers subnanometer resolution for single-molecule, optical trapping experiments. However, laser beam-pointing instability and mechanical drift of the microscope limit the resolution of optical-trapping experiments. By combining two infrared lasers with improved differential beam-pointing stabili...

Citations

... To retrieve depth information from these multiple cameras, sufficient bandwidth and computing power to calculate stereo pairs are required. Although they do not capture a full spherical view, curved mirrors and reflective optics have also been used to capture wide-angle or 360-degree depth and environment information [16][11][17]. Broxton et al. [5] used 46 cameras on the surface of a hemispherical dome to reconstruct immersive light field video. ...
Preprint
In this paper, we describe a method to capture nearly fully spherical (360-degree) depth information using two adjacent frames from a single spherical video with motion parallax. After illustrating spherical depth retrieval using two spherical cameras, we demonstrate monocular spherical stereo using stabilized first-person video footage. Experiments demonstrated that depth information was retrieved over up to 97% of the sphere in solid angle. At a speed of 30 km/h, we were able to estimate the depth of an object located over 30 m from the camera. We also reconstructed 3D structures (point clouds) from the obtained depth data and confirmed that the structures can be clearly observed. This method can be applied to 3D structure retrieval of surrounding environments, such as (1) previsualization and location scouting/planning for film, (2) real-scene/computer-graphics synthesis, and (3) motion capture. Thanks to its simplicity, the method applies to a wide range of videos: as there is no precondition other than being a 360-degree video with motion parallax, any 360-degree video, including those on the Internet, can be used to reconstruct the surrounding environment. The cameras can be lightweight enough to be mounted on a drone. We also demonstrated such applications.
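The 30 km/h and 30 m figures imply a concrete baseline and disparity budget for the frame pair. A back-of-envelope sketch, assuming 30 fps video (the frame rate is not stated in this excerpt):

```python
import math

# Back-of-envelope numbers matching the abstract's figures, assuming
# 30 fps video (the frame rate is an assumption, not stated above).
speed_mps = 30.0 * 1000.0 / 3600.0   # 30 km/h -> ~8.33 m/s
baseline = speed_mps / 30.0          # motion between adjacent frames, ~0.28 m

def depth_from_disparity(baseline_m: float, disparity_rad: float) -> float:
    """Small-angle stereo: depth ~ baseline / angular disparity, valid
    for points roughly perpendicular to the direction of motion."""
    return baseline_m / disparity_rad

# An object 30 m away corresponds to an angular disparity of only
# ~0.0093 rad (~0.53 degrees) between the two frames.
disparity = baseline / 30.0
print(math.degrees(disparity))                    # ~0.53
print(depth_from_disparity(baseline, disparity))  # 30.0
```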
... Some attempts have been made to capture ODS video directly. The system of Tanaka and Tachi uses a rotating prism sheet to capture the relevant rays, but this requires a complex setup and the resulting video is of low quality [Tanaka and Tachi 2005]. The mirror-based system of Weissig et al. has the advantage of no moving parts and significantly higher video quality, but the vertical field of view is limited to 60 degrees [Weissig et al. 2012]. ...
... Directly capturing the rays necessary to build an ODS panorama is difficult for time-varying scenes. While this has been attempted [Tanaka and Tachi 2005], the quality of such approaches is currently below that of computational approaches to ODS capture. Most previous approaches [Ishiguro et al. 1990; Peleg et al. 2001; Richardt et al. 2013] capture ODS panoramas by rotating a camera on a circle of diameter greater than that of the ODS viewing circle, as shown in Figure 6. ...
Article
We present Jump, a practical system for capturing high resolution, omnidirectional stereo (ODS) video suitable for wide scale consumption in currently available virtual reality (VR) headsets. Our system consists of a video camera built using off-the-shelf components and a fully automatic stitching pipeline capable of capturing video content in the ODS format. We have discovered and analyzed the distortions inherent to ODS when used for VR display as well as those introduced by our capture method and show that they are small enough to make this approach suitable for capturing a wide variety of scenes. Our stitching algorithm produces robust results by reducing the problem to one of pairwise image interpolation followed by compositing. We introduce novel optical flow and compositing methods designed specifically for this task. Our algorithm is temporally coherent and efficient, is currently running at scale on a distributed computing platform, and is capable of processing hours of footage each day.
... Also, to reduce depth distortions, the number of cameras needs to be increased, which in turn makes the system bulkier. Tanaka and Tachi [22] also proposed a method to capture omnistereo video sequences. Their rotating optics system consisted of prism sheets, circular or linear polarizing films, and a hyperboloidal mirror. ...
Conference Paper
We present a practical solution for generating 360-degree stereo panoramic videos using a single camera. Current approaches either use a moving camera that captures multiple images of a scene, which are then stitched together to form the final panorama, or use multiple synchronized cameras. A moving camera limits the solution to static scenes, while multi-camera solutions require dedicated calibrated setups. Our approach improves upon existing solutions in two significant ways: it solves the problem using a single camera, thus minimizing the calibration problem and providing the ability to convert any digital camera into a panoramic stereo capture device; and it captures all the light rays required for stereo panoramas in a single frame using a compact, custom-designed mirror, thus making the design practical to manufacture and easier to use. We analyze several properties of the design and present panoramic stereo and depth estimation results.
... Omnistereo imaging of static scenes was introduced to allow a robot to discover its environment [11]. Standard methods typically capture several vertical slit images, i.e., images with a small horizontal field of view captured from cameras rotating off-axis [10,11,14,16,19]. All captured rays are tangent to a viewing circle, but in opposite directions for the left and right views (see Fig. 2). ...
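This tangent-ray model is easy to state concretely. A minimal sketch follows; the 64 mm interpupillary distance is an illustrative assumption, and the left/right sign convention varies between authors:

```python
import math
from typing import Tuple

Vec2 = Tuple[float, float]

def ods_ray(theta: float, eye: str, ipd: float = 0.064) -> Tuple[Vec2, Vec2]:
    """Origin and direction of the ray stored at panorama column angle
    `theta` (radians). All rays are tangent to a viewing circle of
    radius ipd/2; the two eyes use tangents of opposite orientation."""
    r = ipd / 2.0
    sign = -1.0 if eye == "left" else 1.0
    # The origin sits on the viewing circle, 90 degrees away from the
    # viewing direction; the ray direction is the tangent at that point.
    phi = theta + sign * math.pi / 2.0
    origin = (r * math.cos(phi), r * math.sin(phi))
    direction = (math.cos(theta), math.sin(theta))
    return origin, direction
```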
Conference Paper
We introduce in this paper a camera setup for stereo immersive (omnistereo) capture. An omnistereo pair of images gives stereo information up to 360 degrees around a central observer. Previous methods to produce omnistereo images assume a static scene in order to stitch together multiple images captured by a stereo camera rotating on a fixed tripod. Our omnipolar camera setup uses a minimum of 3 cameras with fisheye lenses. The multiple epipoles are used as locations to stitch the images together and produce omnistereo images with no horizontal misalignments due to parallax. We show results of using 3 cameras to capture an unconstrained dynamic scene while the camera is travelling. The produced omnistereo videos are formatted to be displayed on a cylindrical screen or dome.
Article
Surrounding vehicle detection is one of the most important modules for a vision-based driver assistance system (VB-DAS) or an autonomous vehicle. In this paper, we put forward a wireless panoramic camera system for real-time, seamless imaging of the 360-degree driving scene. Using an embedded FPGA design, the proposed panoramic camera system can perform fast image stitching and produce panoramic videos in real time, which greatly relieves the computation and storage burden of a traditional multi-camera-based panoramic system. For surrounding vehicle detection, we present a novel deep convolutional neural network, EZ-Net, which detects potential vehicles using 13 convolutional layers and localizes them with a local non-maximum suppression process. Experimental results demonstrate that the proposed EZ-Net performs vehicle detection on panoramic video at 140 fps while achieving accuracy competitive with state-of-the-art detectors.
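EZ-Net's exact suppression step is not detailed in this excerpt; generically, local non-maximum suppression keeps a detection cell only if its score is the maximum within a small neighborhood. A sketch under that assumption (the window size and threshold are illustrative):

```python
import numpy as np

def local_nms(scores: np.ndarray, k: int = 3, thresh: float = 0.5):
    """Keep a grid cell only if its score exceeds `thresh` and is the
    maximum of its k x k neighborhood; returns (row, col) peak indices."""
    pad = k // 2
    padded = np.pad(scores, pad, mode="constant", constant_values=-np.inf)
    peaks = []
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            window = padded[r:r + k, c:c + k]  # neighborhood centered on (r, c)
            if scores[r, c] >= thresh and scores[r, c] == window.max():
                peaks.append((r, c))
    return peaks

# Example: two nearby responses collapse to the single stronger peak.
m = np.zeros((5, 5)); m[2, 2] = 0.9; m[2, 3] = 0.8
print(local_nms(m))  # [(2, 2)]
```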
Article
This paper proposes a low-cost, portable polycamera system and accompanying methods for capturing and synthesizing stereoscopic 360° panoramas. The polycamera consists of only four cameras with fisheye lenses. Synthesizing panoramas from only four views is challenging because the camera viewpoints differ greatly and the captured images exhibit significant distortion and color degradation, including vignetting, contrast loss, and blurriness. To cope with these challenges, this paper proposes methods for rectifying the polyview images, estimating the depth of the scene, and synthesizing stereoscopic panoramas. The proposed camera is compact, lightweight, and inexpensive. The proposed methods allow the synthesis of visually pleasing stereoscopic 360° panoramas from images captured with the proposed polycamera. We have built a prototype of the polycamera and tested it on a set of scenes with different depth ranges and depth variations. The experiments show that the proposed camera and methods are effective in generating stereoscopic 360° panoramas that can be viewed on popular virtual reality displays.
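Rectifying fisheye views starts from a lens model that maps pixels to rays. The paper's calibration is not given here; as a simple stand-in, the equidistant model (r = f·θ) is a common idealization for fisheye lenses:

```python
import math
from typing import Tuple

def fisheye_pixel_to_ray(u: float, v: float, cx: float, cy: float,
                         f: float) -> Tuple[float, float, float]:
    """Unit viewing ray for pixel (u, v) under an ideal equidistant
    fisheye model (image radius r = f * theta). A real lens needs
    calibrated distortion terms on top of this."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (0.0, 0.0, 1.0)  # pixel on the optical axis
    theta = r / f               # angle from the optical axis
    s = math.sin(theta) / r
    return (dx * s, dy * s, math.cos(theta))
```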
Article
Streaming of 360° content is gaining attention as an immersive way to remotely experience live events. However, live capture is presently limited to 2D content due to the prohibitive computational cost associated with multi-camera rigs. In this work we present Vortex, a system that directly captures streaming 3D virtual reality content. Our approach does not suffer from spatial or temporal seams and natively handles phenomena that are challenging for existing systems, including refraction, reflection, transparency, and specular highlights. Vortex natively captures in the omni-directional stereo (ODS) format, which is widely supported by VR displays and streaming pipelines. We identify an important source of distortion inherent to the ODS format and demonstrate a simple means of correcting it. We include a detailed analysis of the design space, including tradeoffs between noise, frame rate, resolution, and hardware complexity. Processing is minimal, enabling live transmission of immersive, 3D, 360° content. We construct a prototype, demonstrate capture of 360° scenes at up to 8192 × 4096 pixels at 5 fps, and establish the viability of operation up to 32 fps.