Article

Fuwa-Vision: an auto-stereoscopic floating-image display

Authors:
  • VIVITA INC. JAPAN

Abstract

We propose Fuwa-Vision, a simple interactive auto-stereoscopic display for multiple users that requires neither glasses nor mechanical moving components. The system can project auto-stereoscopic images in midair with a simple structure, providing a proper view of stereoscopic objects combined with real objects. The setup can detect the location of a real object and allow interaction with the 3D image: for example, it can detect the position of a candle and render a virtual flame on it as if it were burning with real fire. The display consists of four parts: a backlit liquid crystal display (LCD), a transparent LCD, a Fresnel convex lens on the optical axis, and a head-tracking system that follows the viewer's position. Together, these control the direction of light precisely.
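The floating image position implied by the Fresnel convex lens can be sketched with the thin-lens equation, 1/f = 1/d_o + 1/d_i. The following is a minimal illustrative sketch; the focal length and object distance are assumed values, not figures from the paper.

```python
# Illustrative sketch (assumed values, not the authors' parameters):
# a Fresnel convex lens forms a real mid-air image of the LCD at the
# conjugate distance given by the thin-lens equation 1/f = 1/d_o + 1/d_i.

def image_distance(f_mm: float, object_distance_mm: float) -> float:
    """Distance from the lens to the real (floating) image."""
    if object_distance_mm <= f_mm:
        raise ValueError("object inside focal length: no real image forms")
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

def magnification(object_distance_mm: float, image_distance_mm: float) -> float:
    """Lateral magnification of the floating image."""
    return image_distance_mm / object_distance_mm

# Assumed example: f = 300 mm lens, LCD placed 450 mm behind it.
d_i = image_distance(f_mm=300.0, object_distance_mm=450.0)  # 900.0 mm in front
m = magnification(450.0, d_i)                               # 2.0x magnified
```

With these assumed numbers the LCD's image floats 900 mm in front of the lens at twice its size, which is why the lens geometry fixes where the mid-air image appears.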


... Several methods are available for forming mid-air images; these include those that use a Fresnel lens [1,2], a concave mirror [3], a dihedral corner reflector array (DCRA) [4,5], an ASKA3D plate [6], a roof mirror array (RMA) [7,8], and aerial imaging by retro-reflection (AIRR) [9]. ...
... FLOATS V [1] and FuwaVision [2] are available as techniques using Fresnel lenses. These methods can display three-dimensional (3D) images by presenting different images to the left and right eyes. ...
Article
Full-text available
The mid-air image is a very powerful method for presenting computer graphics in a real environment, but it cannot be used in bright locations owing to the decrease in brightness during the imaging process. Therefore, to form a mid-air image with a high-brightness light source, a square pyramidal mirror structure was investigated, and the sunlight concentration was simulated. We simulated the tilt angle and combination angle of the condenser as parameters to calculate the luminance of the surface of a transparent liquid crystal display. The light collector was installed at 55° from the horizontal plane and mirror. A high level of illumination was obtained when these were laminated together at an angle of 70°. To select a suitable diffuser, we prototyped and measured the brightness of the mid-air image with an LED lamp to simulate sunlight in three settings: summer solstice, autumnal equinox, and winter solstice. The maximum luminance of the mid-air image displayed by collecting actual sunlight was estimated to be 998.6 cd/m². This is considerably higher than the maximum smartphone brightness to allow for outdoor viewing, and it can ensure fully compatible visibility.
... Optical systems that produce a mid-air image on a glossy reflective surface are a promising approach to fusing public spaces and mid-air images [12,13,14]. HaptoMIRAGE [15] and Fuwa-Vision [16] can display an autostereoscopic mid-air image using a Fresnel lens. These optics have the advantage of a wide visible range and the ability to represent a three-dimensional mid-air image; however, linear movement in the mid-air image is difficult because the image position is determined by the lens. ...
Article
Full-text available
To develop mid-air imaging optical systems, several simulation methods have been used that track the ray paths generating a mid-air image as well as undesired images. Although these methods can simulate the viewing space and the undesired images of mid-air imaging optical systems, the simulated appearance is lacking in terms of chipping artifacts, the shape of the undesired images, and luminance. In this study, we found that the appearance of a mid-air image and the characteristics of micro-mirror array plates (MMAPs) can be reproduced by computer graphics-based simulation using ray tracing with appropriate modeling parameters for the MMAPs. To simulate the appearance of the mid-air image, we investigated appropriate modeling methods for MMAPs. We also evaluated the validity of the simulation by comparing the characteristics of the rendered mid-air images with actual mid-air images. Furthermore, we demonstrated several prototypes by employing our method in the actual design of optical systems.
... Smoot [18] demonstrates a volumetric display that consists of a large-aperture, rim-driven, adjustable-resonance, varifocal beamsplitter. FuwaVision [19] and HaptoMirage [20] use a Fresnel lens and a transparent LCD to form 3D mid-air images. fVisiOn uses multiple projectors and a special screen to form a 360-degree multi-view 3D image on a table, but it requires the use of 280 projectors [21]. ...
Conference Paper
For glass-free mixed reality (MR), mid-air imaging is a promising way of superimposing a virtual image onto a real object. We focus on attaching virtual images to non-static real life objects. In previous work, moving the real object causes latency in the superimposing system, and the virtual image seems to follow the object with a delay. This is caused by delays due to sensors, displays and computational devices for position sensing, and occasionally actuators for moving the image generation source. In order to avoid this problem, this paper proposes to separate the object-anchored imaging effect from the position sensing. Our proposal is a retro-reflective system called "SkyAnchor," which consists of only optical devices: two mirrors and an aerial-imaging plate. The system reflects light from a light source anchored under the physical object itself, and forms an image anchored around the object. This optical solution does not cause any latency in principle and is effective for high-quality mixed reality applications. We consider two types of light sources to be attached to physical objects: reflecting content from a touch table on which the object rests, or attaching the source directly on the object. As for position sensing, we utilize a capacitive marker on the bottom of the object, tracked on a touch table. We have implemented a prototype, where mid-air images move with the object, and whose content may change based on its position.
Article
Full-text available
Mid-air imaging technology enables computer graphics (CG) images to move around in the real world. However, the big form-factor of large-format mid-air displays makes them bulky and limits their applicability. To reduce the size of such displays, we propose an optical design that realizes mid-air image movement by rotating the reflecting surface and the light source with motors and thus moving the virtual image of the light source. We made a prototype based on the calculation of the mirror and display angles and the distance between them and evaluated the luminance, sharpness, and position of the mid-air image of this prototype. We confirmed that the size of the prototype was smaller than that produced by the previous method, which will allow a smaller form-factor for creating mid-air images for different applications.
Article
We propose a virtual camera that can pass through a small hole in an obstruction to capture an image on the other side. Recently, free-viewpoint television technology has enabled the generation of video in which viewpoint images are captured from locations where cameras are not actually placed. However, capturing the images of objects concealed behind obstructions or beyond a camera’s field of view is a challenge. We designed an optical transformation system that utilizes a conventional camera, concave lens, and transmissive mirror device (TMD); this system enables the capture of images through small holes in walls or other obstructions. Our experimental prototype demonstrated that it is possible to capture images of the area on the other side of a wall through a 5 mm hole. In this paper, modulation transfer function (MTF) comparison is used to show that a combination of a concave lens and TMD is an effective optical design for a midair camera.
Conference Paper
Humans use letters, which are two-dimensional static symbols, for communication. Writing these letters requires body movement as well as spending a certain amount of time; therefore, it can be demonstrated that a letter is a trajectory of movement and time. Based on this notion, the author conducted studies regarding multidimensional kinetic typography, primarily using robots to display a letter and visualize its time and movement simultaneously. This paper describes the project background and design of the three types of robotic displays that were developed and discusses possible expressions using robotic displays.
Conference Paper
We propose a mid-air imaging technique that is visible under sunlight and that passively reacts to light conditions in a bright space. Optical imaging is used to form a mid-air image through the reflection and refraction of a light source. It seamlessly connects a virtual world and the real world by superimposing visual images onto the real world. Previous research introduced light emitting displays as a light source. However, attenuation of the brightness under a strong light environment presents a problem. We designed a mid-air imaging optical system that captures ambient light using a transparent LCD (liquid crystal display) and a diffuser. We built a prototype to confirm our design principles in sunlight and evaluated several diffusers. Our contribution is three-fold. First, we confirmed the principle of the mid-air imaging optical system in sunlight. Second, we chose an appropriate diffuser in an evaluation. Third, we proposed a practical design which can remove disturbance light for outdoor use.
Conference Paper
We present a two-part exploration on mobile true-3D displays, i.e. displaying volumetric 3D content in mid-air. We first identify and study the parameters of a mobile true-3D projection, in terms of the projection's distance to the phone, angle to the phone, display volume and position within the display. We identify suitable parameters and constraints, which we propose as requirements for developing mobile true-3D systems. We build on the first outcomes to explore methods for coordinating the display configurations of the mobile true-3D setup. We explore the resulting design space through two applications: 3D map navigation and 3D interior design. We discuss the implications of our results for the future design of mobile true-3D displays.
Chapter
This project aims to construct an information environment that is based on our proposed theory of haptic primary colors and is both visible and tangible. This environment will be one where communication in real space, human interfaces, and media processing are integrated. We have succeeded in transmitting fine haptic sensations, such as material texture and temperature, from an avatar robot’s fingers to a human user’s fingers by using a telexistence anthropomorphic robot dubbed TELESAR V. Other results of this research project include RePro3D, a full-parallax, autostereoscopic 3D (three-dimensional) display; TECHTILE Toolkit, a prototyping tool for the design and improvement of haptic media; and HaptoMIRAGE, a 180°-field-of-view autostereoscopic 3D display that up to three users can enjoy at the same time.
Article
A seamless connection between a game space and the real world is a long-sought goal of the entertainment computing field. Recent games have intuitive interactions through gestures and motion-based control. However, their images are still confined to the displays. In this paper, we propose a novel interaction experience in which mid-air images interact with physical objects. Our “Mid-air Augmented Reality Interaction with Objects” (MARIO) system enables images to be displayed in 3D spaces beyond screens. By creating a spatial link between images appearing in mid-air and physical objects, we extend video games into the real world. For entertainment purposes, a game character appears in mid-air and jumps over real blocks that users have arranged with their hands. A user study was conducted at public exhibitions, and the results showed the popularity and effectiveness of the system as an entertainment application.
Article
HaptoMIRAGE is a simultaneous multi-user autostereoscopic display for seamless interaction with mixed reality environments. The system can project 3D content in mid-air and enables up to three participants to observe the same content with a 150-degree wide-angle view. The system can be used in several scenarios, such as superimposing 3D content onto real objects and multi-user collaborative drawing in the real world. Further, users can interact with the 3D content, such as rotating it and viewing it from different angles, allowing them to easily create and feel the mixed reality world via tangible objects with multi-modal sensations.
Conference Paper
HaptoMIRAGE is a visuo-haptic display that provides a wide-angle autostereoscopic 3D image on a content-adjustable haptic display, enabling engaging interaction with the virtual world via tangible objects with multi-modal sensation, not only for a single user but also for multiple users.
Conference Paper
In the field of glass-free autostereoscopic displays, the working area of the display is an important research topic. It is desirable to have a display that provides a projected image with a large working area. The working area is defined by two parameters: the distance and the direction from the display to the target user. We have previously proposed a glass-free autostereoscopic display using Active-shuttered Real Image Autostereoscopy (ARIA) technology [Nii et al. 2012]. To increase the working distance of this system, we propose a double-layer active-shutter control method. ARIA is a simple technology for building a glass-free auto-stereoscopic display without any mechanical moving components. A display made with this technology consists of two LCD devices and a large projection lens with an eye-tracking system. One of the LCD devices acts as the image display while the other serves as an active-shutter panel made of a set of pixels. This panel controls the light source by changing the transparency of its pixels, steering the direction of the projection area according to the eye location. As shown in Fig. 1(a), the image is projected to the left eye while a shutter blocks the light travelling to the right eye; the next image is then projected to the right eye in the next frame. This mechanism enables time division of the stereo images. The shutter is installed at a fixed distance from the lens. When the user moves from the designated position, changing the distance to the display, the image intended for the right eye may be received by the left eye. Observe the line at the "Near point" in Fig. 1(a): when the user moves to the near point of the real image, the right eye views the image intended for the left eye. To avoid this problem, the image size is decreased when the user is closer than the designated distance. The estimation method was described for a specific distance.
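The shutter-steering geometry described above can be sketched with thin-lens conjugate planes: a point on the shutter plane images to a point in the viewing plane, so opening the shutter pixel whose image lands on one eye routes light to that eye only. This is an illustrative sketch under assumed distances; the focal length, shutter distance, and eye offset are not values from the paper.

```python
# Illustrative sketch (assumed values, not the ARIA system's parameters):
# with the shutter plane and viewing plane as thin-lens conjugates, a
# shutter point at lateral position x_s images to x_e = -x_s * d_e / d_s
# (inverted, magnified). Inverting this gives the shutter pixel to open
# so that light reaches a tracked eye position.

def conjugate_distance(f_mm: float, d_shutter_mm: float) -> float:
    """Image-side distance of the plane conjugate to the shutter plane."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_shutter_mm)

def shutter_opening_x(eye_x_mm: float, f_mm: float, d_shutter_mm: float) -> float:
    """Lateral shutter position whose image lands on the tracked eye."""
    d_eye = conjugate_distance(f_mm, d_shutter_mm)
    return -eye_x_mm * d_shutter_mm / d_eye

# Assumed example: f = 300 mm, shutter 450 mm behind the lens,
# left eye 32.5 mm left of the optical axis in the conjugate plane.
x_open = shutter_opening_x(eye_x_mm=-32.5, f_mm=300.0, d_shutter_mm=450.0)
# conjugate plane at 900 mm; opening sits at +16.25 mm (opposite side of axis)
```

Alternating which opening is transparent each frame, synchronized with left/right images on the display LCD, gives the time-division stereo described in the abstract.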
Article
Full-text available
Autostereoscopic 3D displays using a static parallax barrier or lenticular lens are commercially available nowadays. In these methods, however, the number of viewpoints is fixed at the time of manufacturing. An active parallax barrier [Perlin, 2000] and a dynamic parallax barrier [Peterka, 2008] have been proposed to improve the resolution and number of viewpoints by moving the position of the parallax barriers. To implement an autostereoscopic display with dynamic parallax barriers, a dual-layered LCD panel is a common approach, as mentioned by [Hirsch, 2010], and a method of optimizing the parallax barriers [Lanman, 2010] has been proposed to represent more perceptually acceptable imagery. Remaining limitations of the dual-layered LCD method are the refresh rate and brightness of the LCD panel. Commonly used high-speed LCDs have a 120 Hz refresh rate and a brightness of around 500 cd/m², so conventional systems did not have sufficient quality for public use. To solve this problem, we propose an adaptive-parallax autostereoscopic display composed of a high-density/high-frequency LED panel and a high-speed/high-contrast LCD panel.
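As background for the parallax-barrier methods above, the standard two-view barrier geometry can be sketched in a few lines: a slit at gap g in front of the panel must steer adjacent pixel columns (pitch p) to the two eyes (separation e) at viewing distance D. The numbers below are illustrative assumptions, not values from any of the cited papers.

```python
# Sketch of classic two-view parallax-barrier geometry (illustrative
# values; not from the cited works). Similar triangles through a slit:
#   p / g = e / D          =>  gap g = p * D / e
#   slit pitch b = 2p * D / (D + g)   (slightly under two pixel columns,
#   so all slits converge on the same pair of viewing zones)

def barrier_gap(pixel_pitch_mm: float, eye_sep_mm: float, view_dist_mm: float) -> float:
    """Panel-to-barrier gap that separates adjacent columns into two eyes."""
    return pixel_pitch_mm * view_dist_mm / eye_sep_mm

def barrier_pitch(pixel_pitch_mm: float, gap_mm: float, view_dist_mm: float) -> float:
    """Slit-to-slit pitch of the barrier."""
    return 2.0 * pixel_pitch_mm * view_dist_mm / (view_dist_mm + gap_mm)

# Assumed example: 0.1 mm pixel pitch, 65 mm eye separation, 600 mm viewing distance.
g = barrier_gap(0.1, 65.0, 600.0)      # about 0.92 mm
b = barrier_pitch(0.1, g, 600.0)       # just under 0.2 mm (two pixel columns)
```

A dynamic barrier, as in the cited work, recomputes these positions on the second LCD layer as the tracked viewer moves, instead of fixing them at manufacturing time.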
Conference Paper
Full-text available
We present a display device which solves a long-standing problem: to give a true stereoscopic view of simulated objects, without artifacts, to a single unencumbered observer, while allowing the observer to freely change position and head rotation.Based on a novel combination of temporal and spatial multiplexing, this technique will enable artifact-free stereo to become a standard feature of display screens, without requiring the use of special eyewear. The availability of this technology may significantly impact CAD and CHI applications, as well as entertainment graphics. The underlying algorithms and system architecture are described, as well as hardware and software aspects of the implementation.
Article
Full-text available
We optimize automultiscopic displays built by stacking a pair of modified LCD panels. To date, such dual-stacked LCDs have used heuristic parallax barriers for view-dependent imagery: the front LCD shows a fixed array of slits or pinholes, independent of the multi-view content. While prior works adapt the spacing between slits or pinholes, depending on viewer position, we show both layers can also be adapted to the multi-view content, increasing brightness and refresh rate. Unlike conventional barriers, both masks are allowed to exhibit non-binary opacities. It is shown that any 4D light field emitted by a dual-stacked LCD is the tensor product of two 2D masks. Thus, any pair of 1D masks only achieves a rank-1 approximation of a 2D light field. Temporal multiplexing of masks is shown to achieve higher-rank approximations. Non-negative matrix factorization (NMF) minimizes the weighted Euclidean distance between a target light field and that emitted by the display. Simulations and experiments characterize the resulting content-adaptive parallax barriers for low-rank light field approximation.
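The central claim above, that a dual-stacked LCD emits a rank-1 light field (the product of its two masks) and that time-multiplexing T mask pairs yields a rank-T non-negative approximation, can be illustrated numerically. This is a toy sketch of that idea using standard multiplicative-update NMF on a random 2D target; it is not the paper's optimization code, and the target, sizes, and iteration count are assumptions.

```python
# Toy numerical sketch (assumptions throughout, not the paper's code):
# one instant of a dual-stacked LCD emits the outer product of its two
# 1D masks (rank 1); averaging T frames sums T rank-1 terms, a rank-T
# non-negative approximation fit here with multiplicative-update NMF.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))          # toy 2D "light field" L[i, j]

def nmf_rank_t(L, T, iters=500):
    """Factor L ~= F @ B with non-negative F (n x T) and B (T x m)."""
    n, m = L.shape
    F = rng.random((n, T)) + 0.1     # columns: T front-mask frames
    B = rng.random((T, m)) + 0.1     # rows:    T back-mask frames
    for _ in range(iters):
        F *= (L @ B.T) / (F @ B @ B.T + 1e-9)   # multiplicative updates
        B *= (F.T @ L) / (F.T @ F @ B + 1e-9)   # keep factors non-negative
    return F, B

for T in (1, 2, 4):
    F, B = nmf_rank_t(target, T)
    err = np.linalg.norm(target - F @ B) / np.linalg.norm(target)
    print(f"rank {T}: relative error {err:.3f}")
```

Running this shows the approximation error shrinking as more time-multiplexed frames are allowed, which is the brightness/refresh-rate trade the abstract describes.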
Article
Full-text available
Autostereoscopic displays provide 3D perception without the need for special glasses or other head gear. Three basic technologies exist to make such displays: spatial multiplex, multi-projector and time-sequential. These can be used to make two types of useful device: two-view, head-tracked displays; and multi-view displays. The former tend to be single-viewer systems while the latter can support multiple viewers. However, the latter tend to require more processing power because they have more views than the former. Both types can be expected to find uses in their own niches.
Conference Paper
Motion parallax is important for recognizing the depth of a 3D image. In recent years, many 3D display methods that enable parallax images to be seen with the naked eye have been developed. In addition, there has been increasing research on interfaces that enable humans to intuitively interact with and operate 3D objects using their hands. However, realizing 3D object interaction as if the user were actually touching the object in the real world is quite difficult. One reason is that the screen shape in conventional methods is restricted to a flat panel. In addition, it is difficult to achieve a balance between displaying the 3D image and sensing user input. Therefore, we propose a novel full-parallax 3D display system that is suitable for interactive 3D applications. We call this system RePro3D. Our approach is based on retro-reflective projection technology [Inami et al. 2000]. A number of images from a projector array are projected onto a retro-reflective screen. When a user looks at the screen through a half mirror, he or she can view, without glasses, a 3D image that has motion parallax. The screen shape can be chosen depending on the application, and image correction according to the screen shape is not required. Consequently, we can design a touch-sensitive soft screen, a complexly curved screen, or a screen with an automatically moving surface. RePro3D has a sensor function to recognize user input, enabling interactive features such as the operation of 3D objects.
Conference Paper
A novel autostereoscopic display capable of real-time computer interaction, with a refresh rate fast enough for stereoscopic television, has been built at Cambridge University in a joint programme of the Department of Engineering and the Computer Laboratory. The authors discuss the display, the dynamic lens principle and 3D display assembly, image generation, view angles, picture rates, the slot shutter, CRT phosphor, drive signals, viewing distance, image depth, image resolution and picture distortion.
Autostereoscopic display with real-image screen
  • H Kakeya
  • Y Arakawa
Glasses-free three-dimensional display for wide viewing zone: How it works and commercial product spec
  • H Kobayashi
RePro3D: full-parallax 3D display using retro-reflective projection technology
  • ACM SIGGRAPH