Conference Paper

Two-Edge-Resolved 3D Non-Line-of-Sight Imaging: A Fisher Information Equalized Discretization

Article
We introduce an approach for three-dimensional full-colour non-line-of-sight imaging with an ordinary camera that relies on a complementary combination of a new measurement acquisition strategy, scene representation model, and tailored reconstruction method. From an ordinary photograph of a matte line-of-sight surface illuminated by the hidden scene, our approach reconstructs a three-dimensional image of the scene hidden behind an occluding structure by exploiting two orthogonal edges of the structure for transverse resolution along azimuth and elevation angles and an information orthogonal scene representation for accurate range resolution. Prior demonstrations beyond two-dimensional reconstructions used expensive, specialized optical systems to gather information about the hidden scene. Here, we achieve accurate three-dimensional imaging using inexpensive, ubiquitous hardware, without requiring a calibration image. Thus, our system may find use in indoor situations like reconnaissance and search-and-rescue.
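The approach described above ultimately rests on a linear forward model: the photograph depends linearly on the hidden-scene intensities through a light-transport matrix determined by the occluder geometry. A minimal sketch of such a regularized linear inversion, with a stand-in random matrix and illustrative dimensions (not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: 20 hidden-scene patches, 200 photo pixels.
n_scene, n_pix = 20, 200

# Stand-in light-transport matrix. In the paper the columns would be the
# penumbra patterns cast by each hidden-scene patch; a random matrix
# suffices to illustrate the inversion.
A = rng.random((n_pix, n_scene))

x_true = rng.random(n_scene)                        # hidden-scene intensities
y = A @ x_true + 1e-4 * rng.standard_normal(n_pix)  # noisy photograph

# Ridge-regularized least squares: argmin ||A x - y||^2 + lam ||x||^2
lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print(np.max(np.abs(x_hat - x_true)))  # small: the scene is recovered
```

With a physically derived transport matrix, the same solve recovers the hidden scene from a single photograph.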
Article
The ability to form reconstructions beyond line-of-sight view could be transformative in a variety of fields, including search and rescue, autonomous vehicle navigation, and reconnaissance. Most existing active non-line-of-sight (NLOS) imaging methods use data collection steps in which a pulsed laser is directed at several points on a relay surface, one at a time. The prevailing approaches include raster scanning of a rectangular grid on a vertical wall opposite the volume of interest to generate a collection of confocal measurements. These and a recent method that uses a horizontal relay surface are inherently limited by the need for laser scanning. Methods that avoid laser scanning to operate in a snapshot mode are limited to treating the hidden scene of interest as one or two point targets. In this work, based on more complete optical response modeling yet still without multiple illumination positions, we demonstrate accurate reconstructions of foreground objects while also introducing the capability of mapping the stationary scenery behind moving objects. The ability to count, localize, and characterize the sizes of hidden objects, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.
Article
In traditional optics education, shadows are often regarded as a mere triviality, namely as silhouettes of obstacles to the propagation of light. However, by examining a series of shadow phenomena from an embedded perspective, we challenge this view and demonstrate how in general both the shape of object and light source have significant impact on the resulting soft shadow images. Through experimental and mathematical analysis of the imaging properties of inverse objects, we develop a generalized concept of shadow images as complementary phenomena. Shadow images are instructive examples of optical convolution and provide an opportunity to learn about the power of embedded perspective for the study of optical phenomena in the classroom. Additionally, we introduce the less known phenomenon of the bright shadow.
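The convolution view of soft shadows can be illustrated numerically. Under a small-angle geometric model, the penumbra on a screen is, up to scale factors, the convolution of the source's intensity profile with the mask's transmission function; the arrays below are illustrative:

```python
import numpy as np

# Illustrative 1-D model: on a screen, the soft shadow behind a mask is
# (up to geometric scale factors) the convolution of the source's
# intensity profile with the mask's transmission function.
source = np.array([0.0, 1.0, 1.0, 1.0, 0.0])  # extended (non-point) source
aperture = np.zeros(41)
aperture[15:26] = 1.0                          # slit in an opaque mask

shadow = np.convolve(aperture, source, mode="same")
shadow /= shadow.max()

# A point source (a delta function) reproduces a sharp silhouette...
sharp = np.convolve(aperture, np.array([1.0]), mode="same")

# ...while the extended source blurs each edge into a penumbra.
print(shadow[13:17])  # 0, 1/3, 2/3, 1: a gradual ramp across the edge
```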
Article
Non-line-of-sight (NLoS) imaging is an important challenge in many fields ranging from autonomous vehicles and smart cities to defense applications. Several recent works in optics and acoustics tackle the challenge of imaging targets hidden from view (e.g. placed around a corner) by measuring time-of-flight information using active SONAR/LiDAR techniques, effectively mapping the Green functions (impulse responses) from several controlled sources to an array of detectors. Here, leveraging passive correlations-based imaging techniques (also termed ’acoustic daylight imaging’), we study the possibility of acoustic NLoS target localization around a corner without the use of controlled active sources. We demonstrate localization and tracking of a human subject hidden around a corner in a reverberating room using Green functions retrieved from correlations of broadband uncontrolled noise sources recorded by multiple detectors. Our results demonstrate that for NLoS localization controlled active sources can be replaced by passive detectors as long as a sufficiently broadband noise is present in the scene.
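The correlation principle at the heart of this approach can be sketched with two synthetic recordings; the sensor names, the delay value, and the pure-delay propagation model are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(1)

# Broadband noise from an uncontrolled source, recorded by two passive
# sensors; sensor B sees it with 25 samples of extra propagation delay
# (np.roll is an idealization: pure delay, no reverberation or loss).
noise = rng.standard_normal(4000)
delay = 25
sensor_a = noise
sensor_b = np.roll(noise, delay)

# Cross-correlating the two passive recordings retrieves the travel-time
# difference, the essence of correlation-based Green-function retrieval.
xcorr = np.correlate(sensor_b, sensor_a, mode="full")
lag = np.argmax(xcorr) - (len(sensor_a) - 1)
print(lag)  # 25
```

The broader the noise bandwidth, the sharper the correlation peak, which is why the paper requires sufficiently broadband noise in the scene.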
Article
At THz frequencies, many building materials exhibit mirror-like reflectivity, greatly facilitating the 3D spatial location estimate of non-line-of-sight objects. Using a custom THz measurement setup that employs a high sensitivity room temperature THz sensor, we measure the spatial and angular components of the radiation from hidden objects scattered from rough walls. The three-dimensional location of a thermally elevated object can then be determined using this “light field” information together with a refocusing algorithm. We experimentally demonstrate accurate location estimates of human-like NLOS objects in realistic situations.
Article
We investigate the use of plenoptic data for locating non-line-of-sight (NLOS) objects from a scattered light signature. Using Fourier analysis, the resolution limits of the depth and transversal location estimates are derived from fundamental considerations on scattering physics and measurement noise. Based on the refocusing algorithm developed in the computer vision field, we derive an alternative formulation of the projection slice theorem in a form directly connecting the light field and a full spatial frequency spectrum including both depth and transversal dimensions. Using this alternative formulation, we propose an efficient spatial frequency filtering method for location estimation that is defined on a newly introduced mixed space frequency plane and achieves the theoretically limited depth resolution. A comparison with experimental results is reported.
Article
Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations—a plan view plus heights—and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.
Article
We present a methodology for recovering the perspective imagery of a non-line-of-sight scene based on plenoptic observations of indirect photons scattered from a homogeneous surface. Our framework segregates the visual contents observed along the scattering surface into angular and spatial components. Given the reflectance characteristics of the scatterer, we show that the former can be deduced from scattering measurements employing diversity in angle at individual surface points, whereas the latter can be deduced from captured images of the scatterer based on prior knowledge of occlusions within the scene. We then combine the visual contents from both components into a plenoptic modality capable of imaging at higher resolutions than what is allowed by the angular information content and discriminating against extraneous signals in complex scenes that spatial information struggles to discern. We demonstrate the efficacy of this approach by reconstructing the imagery of test scenes from both synthetic and measured data.
Article
Non-line-of-sight imaging allows objects to be observed when partially or fully occluded from direct view, by analysing indirect diffuse reflections off a secondary relay surface. Despite many potential applications [1–9], existing methods lack practical usability because of limitations including the assumption of single scattering only, ideal diffuse reflectance and lack of occlusions within the hidden scene. By contrast, line-of-sight imaging systems do not impose any assumptions about the imaged scene, despite relying on the mathematically simple processes of linear diffractive wave propagation. Here we show that the problem of non-line-of-sight imaging can also be formulated as one of diffractive wave propagation, by introducing a virtual wave field that we term the phasor field. Non-line-of-sight scenes can be imaged from raw time-of-flight data by applying the mathematical operators that model wave propagation in a conventional line-of-sight imaging system. Our method yields a new class of imaging algorithms that mimic the capabilities of line-of-sight cameras. To demonstrate our technique, we derive three imaging algorithms, modelled after three different line-of-sight systems. These algorithms rely on solving a wave diffraction integral, namely the Rayleigh–Sommerfeld diffraction integral. Fast solutions to Rayleigh–Sommerfeld diffraction and its approximations are readily available, benefiting our method. We demonstrate non-line-of-sight imaging of complex scenes with strong multiple scattering and ambient light, arbitrary materials, large depth range and occlusions. Our method handles these challenging cases without explicitly inverting a light-transport model. We believe that our approach will help to unlock the potential of non-line-of-sight imaging and promote the development of relevant applications not restricted to laboratory conditions.
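The central idea, treating the relay-wall data as a virtual wave field and refocusing it with a Rayleigh-Sommerfeld-style kernel, can be sketched in a toy monochromatic 1-D geometry. All values below are illustrative; the paper's algorithms operate on full time-resolved measurements:

```python
import numpy as np

# Toy 1-D demo: a hidden point source produces a virtual monochromatic
# wavefront on the relay aperture; backpropagating it with the conjugate
# kernel exp(-ikr)/r refocuses onto the source position.
wavelength = 0.05
k = 2 * np.pi / wavelength

xs = np.linspace(-0.5, 0.5, 101)  # aperture samples on the relay wall (z = 0)
src_x, src_z = 0.12, 0.8          # hidden source position (x, z)

r_src = np.sqrt((xs - src_x) ** 2 + src_z**2)
field = np.exp(1j * k * r_src) / r_src  # virtual wavefront on the aperture

# Backpropagate to candidate points on the plane z = src_z.
x_grid = np.linspace(-0.5, 0.5, 201)
r = np.sqrt((x_grid[:, None] - xs[None, :]) ** 2 + src_z**2)
image = np.abs((np.exp(-1j * k * r) / r * field[None, :]).sum(axis=1))

print(x_grid[np.argmax(image)])  # peaks near the true source x = 0.12
```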
Article
Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope—that is, perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to the times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems [1–12]. Here we introduce a two-dimensional computational periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and scene are outside the line of sight of the camera, without requiring controlled or time-varying illumination. Such recovery is based on the visible penumbra of the opaque object having a linear dependence on the hidden scene that can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation and detecting hidden adversaries.
Article
How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
Article
Active non-line-of-sight imaging systems are of growing interest for diverse applications. The most commonly proposed approaches to date rely on exploiting time-resolved measurements, i.e., measuring the time it takes for short light pulses to transit the scene. This typically requires expensive, specialized, ultrafast lasers and detectors that must be carefully calibrated. We develop an alternative approach that exploits the valuable role that natural occluders in a scene play in enabling accurate and practical image formation in such settings without such hardware complexity. In particular, we demonstrate that the presence of occluders in the hidden scene can obviate the need for collecting time-resolved measurements, and develop an accompanying analysis for such systems and their generalizations. Ultimately, the results suggest the potential to develop increasingly sophisticated future systems that are able to identify and exploit diverse structural features of the environment to reconstruct scenes hidden from view.
Article
The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm×40 cm×40 cm of hidden space.
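The time-of-flight principle behind such a range camera can be sketched as backprojection: each measured round-trip time confines the hidden point to a shell about the corresponding wall point, and the shells intersect at the true location. A toy 2-D confocal version with illustrative geometry and shell width:

```python
import numpy as np

# Toy confocal backprojection: each round-trip time constrains the hidden
# point to a spherical shell; summing soft shells over scan positions
# focuses on the true location.
c = 1.0                          # speed of light, normalized units
wall = np.linspace(-1, 1, 41)    # scan positions on the visible wall (z = 0)
hx, hz = 0.3, 0.7                # hidden point (x, z)

dist = np.sqrt((wall - hx) ** 2 + hz**2)
tof = 2 * dist / c               # simulated round-trip times (no noise)

# Backproject onto an (x, z) grid behind the wall.
xg, zg = np.meshgrid(np.linspace(-1, 1, 81), np.linspace(0.1, 1.5, 71),
                     indexing="ij")
heat = np.zeros_like(xg)
for w, t in zip(wall, tof):
    r = np.sqrt((xg - w) ** 2 + zg**2)
    heat += np.exp(-((2 * r / c - t) ** 2) / (2 * 0.01**2))  # soft shell

i, j = np.unravel_index(np.argmax(heat), heat.shape)
print(xg[i, j], zg[i, j])  # near (0.3, 0.7)
```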
Article
Passive non-line-of-sight (NLOS) imaging has drawn great attention in recent years. However, all existing methods are in common limited to simple hidden scenes, low-quality reconstruction, and small-scale datasets. In this paper, we propose NLOS-OT, a novel passive NLOS imaging framework based on manifold embedding and optimal transport, to reconstruct high-quality complicated hidden scenes. NLOS-OT converts the high-dimensional reconstruction task to a low-dimensional manifold mapping through optimal transport, alleviating the ill-posedness in passive NLOS imaging. Besides, we create the first large-scale passive NLOS imaging dataset, NLOS-Passive, which includes 50 groups and more than 3,200,000 images. NLOS-Passive collects target images with different distributions and their corresponding observed projections under various conditions, which can be used to evaluate the performance of passive NLOS imaging algorithms. It is shown that the proposed NLOS-OT framework achieves much better performance than the state-of-the-art methods on NLOS-Passive. We believe that the NLOS-OT framework together with the NLOS-Passive dataset is a big step and can inspire many ideas towards the development of learning-based passive NLOS imaging. Codes and dataset are publicly available ( https://github.com/ruixv/NLOS-OT ).
Article
We show that walls, and other obstructions with edges, can be exploited as naturally-occurring 'cameras' that reveal the hidden scenes beyond them. In particular, we demonstrate methods for using the subtle spatio-temporal radiance variations that arise on the ground at the base of a wall's edge to construct a one-dimensional video of the hidden scene behind the wall. The resulting technique can be used for a variety of applications in diverse physical settings. From standard RGB video recordings, we use edge cameras to recover 1-D videos that reveal the number and trajectories of people moving in an occluded scene. We further show that adjacent wall edges, such as those that arise in the case of an open doorway, yield a stereo camera from which the 2-D location of hidden, moving objects can be recovered. We demonstrate our technique in a number of indoor and outdoor environments involving varied floor surfaces and illumination conditions.
Article
We recover a video of the motion taking place in a hidden scene by observing changes in indirect illumination in a nearby uncalibrated visible region. We solve this problem by factoring the observed video into a matrix product between the unknown hidden scene video and an unknown light transport matrix. This task is extremely ill-posed as any non-negative factorization will satisfy the data. Inspired by recent work on the Deep Image Prior, we parameterize the factor matrices using randomly initialized convolutional neural networks trained in a one-off manner, and show that this results in decompositions that reflect the true motion in the hidden scene.
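The ill-posedness noted above is easy to demonstrate: any invertible mixing of the two factors reproduces the observations exactly, which is why a structural prior such as the Deep Image Prior is needed to pin down the true motion. A sketch with stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(2)

# The observed video factors as obs = T @ scene with both factors unknown.
# Any invertible M gives an equally valid pair (T @ M, M^-1 @ scene).
T = rng.random((30, 5))       # light-transport matrix (unknown in practice)
scene = rng.random((5, 100))  # hidden-scene video: 5 'pixels' x 100 frames
obs = T @ scene               # what the camera actually records

M = rng.random((5, 5)) + 5 * np.eye(5)  # some well-conditioned mixing
T2 = T @ M
scene2 = np.linalg.solve(M, scene)

print(np.allclose(T2 @ scene2, obs))  # True: same data, different 'motion'
```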
Article
Passive non-line-of-sight imaging methods are often faster and stealthier than their active counterparts, requiring less complex and costly equipment. However, many of these methods exploit motion of an occluder or the hidden scene, or require knowledge or calibration of complicated occluders. The edge of a wall is a known and ubiquitous occluding structure that may be used as an aperture to image the region hidden behind it. Light from around the corner is cast onto the floor forming a fan-like penumbra rather than a sharp shadow. Subtle variations in the penumbra contain a remarkable amount of information about the hidden scene. Previous work has leveraged the vertical nature of the edge to demonstrate 1D (in angle measured around the corner) reconstructions of moving and stationary hidden scenery from as little as a single photograph of the penumbra. In this work, we introduce a second reconstruction dimension: range measured from the edge. We derive a new forward model, accounting for radial falloff, and propose two inversion algorithms to form 2D reconstructions from a single photograph of the penumbra. Performances of both algorithms are demonstrated on experimental data corresponding to several different hidden scene configurations. A Cramer-Rao bound analysis further demonstrates the feasibility (and utility) of the 2D corner camera.
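The Cramér-Rao analysis mentioned above reduces, for a linear forward model in white Gaussian noise, to inverting the Fisher information matrix A^T A / sigma^2. A sketch with a stand-in measurement matrix (in the corner-camera setting, the columns of A would come from the penumbra forward model):

```python
import numpy as np

rng = np.random.default_rng(3)

# For y = A x + w with white Gaussian noise of variance sigma^2, the
# Fisher information matrix is A^T A / sigma^2, and the Cramer-Rao bound
# on the variance of any unbiased estimator of x_i is the i-th diagonal
# entry of its inverse.
sigma = 0.1
A = rng.random((50, 4))

fim = A.T @ A / sigma**2           # Fisher information matrix
crb = np.diag(np.linalg.inv(fim))  # per-parameter variance lower bounds

print(crb)
```

Comparing these per-parameter bounds across different scene discretizations is the kind of analysis that motivates Fisher-information-equalized binning.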
Conference Paper
We introduce the new non-line-of-sight imaging problem of imaging behind an occluder. The behind-an-occluder problem can be solved if the hidden space is flanked by opposing visible surfaces. We illuminate one surface and observe light that scatters off of the opposing surface after traveling through the hidden space. Hidden objects attenuate light that passes through the hidden space, leaving an observable signature that can be used to reconstruct their shape. Our method uses a simple capture setup—we use an eye-safe laser pointer as a light source and off-the-shelf RGB or RGB-D cameras to estimate the geometry of relay surfaces and observe two-bounce light. We analyze the photometric and geometric challenges of this new imaging problem, and develop a robust method that produces high-quality 3D reconstructions in uncontrolled settings where relay surfaces may be non-planar.
Conference Paper
We show that multi-path analysis using images from a time-of-flight (ToF) camera provides a tantalizing opportunity to infer about 3D geometry of not only visible but hidden parts of a scene. We provide a novel framework for reconstructing scene geometry from a single viewpoint using a camera that captures a 3D time-image I(x, y, t) for each pixel. We propose a framework that uses the time-image and transient reasoning to expose scene properties that may be beyond the reach of traditional computer vision. We corroborate our theory with free space hardware experiments using a femtosecond laser and an ultrafast photo detector array. The ability to compute the geometry of hidden elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.
Article
Conventional imaging uses steady-state illumination and light sensing with focusing optics; variations of the light field with time are not exploited. We develop a signal processing framework for estimating the reflectance of a Lambertian planar surface in a known position using omnidirectional, time-varying illumination and unfocused, time-resolved sensing in place of traditional optical elements such as lenses and mirrors. Our model associates time sampling of the intensity of light incident at each sensor with a linear functional of the reflectance. The discrete-time samples are processed to obtain regularized estimates of the reflectance. Improving on previous work, using nonimpulsive, bandlimited light sources instead of impulsive illumination significantly improves signal-to-noise ratio (SNR) and reconstruction quality. Our simulations suggest that practical diffuse imaging applications may be realized with commercially-available temporal light intensity modulators and sensors used in standard optical communication systems.
Non-line-of-sight snapshots and background mapping with an active corner camera
  • Sheila Seidel
  • Hoover Rueda-Chacón
  • Iris Cusini
  • Federica Villa
  • Franco Zappa
  • Christopher Yu
  • Vivek K. Goyal
Exploiting occlusion in non-line-of-sight active imaging
  • Christos Thrampoulidis
  • Gal Shulkind
  • Feihu Xu
  • William T. Freeman
  • Jeffrey H. Shapiro
  • Antonio Torralba