Computational time-resolved imaging, single-photon sensing, and non-line-of-sight imaging

David Lindell, Matthew O'Toole, Srinivasa Narasimhan, Ramesh Raskar

Nature Reviews | Physics
Abstract | Emerging single-photon-sensitive sensors produce picosecond-accurate time-stamped photon counts. Applying advanced inverse methods to process these data has resulted in unprecedented imaging capabilities, such as non-line-of-sight (NLOS) imaging. Rather than imaging photons that travel along direct paths from a source to an object and back to the detector, NLOS methods analyse photons that travel along indirect light paths, scattered from multiple surfaces, to estimate 3D images of scenes outside the direct line of sight of a camera, hidden by a wall or other obstacles. We review the transient imaging techniques that underlie many NLOS imaging approaches, discuss methods for reconstructing hidden scenes from time-resolved measurements, describe some other methods for NLOS imaging that do not require transient imaging and discuss the future of 'seeing around corners'.
The ability to image objects outside the
direct line of sight of a camera would enable
applications in robotic vision, remote
sensing, medical imaging, autonomous
driving and many other domains. For
example, the ability to see hidden obstacles
could enable autonomous vehicles to avoid
collisions, drive more efficiently and plan
driving actions further in advance. Present-day 3D imaging systems commonly used in automotive sensing, such as light detection and ranging (LiDAR), measure the time it takes a light pulse to travel along a direct path from a source to a visible object and back to a sensor. Non-line-of-sight (NLOS) imaging goes one step further by analysing light scattered from multiple surfaces along indirect paths, with the goal of revealing the 3D shape and visual appearance of objects outside the direct line of sight1,2 ( ).
NLOS imaging poses several challenges.
One challenge is that only a few of the
many recorded photons carry the
information necessary to estimate hidden
objects. Whereas the photon count of light directly reflected from a single scattering point falls off in proportion to the inverse of the squared distance, the signal strength of light scattered from multiple surfaces decreases several orders of
magnitude faster. Robustly detecting and time-stamping the few indirectly scattered photons in the presence of the much brighter signal returning directly from the visible scene requires single-photon-sensitive detectors with a high dynamic range or with gating
capabilities. A second challenge is that the
inverse problem of estimating 3D shape
and appearance of hidden objects from
intensity measurements alone is ill-posed.
Solving the NLOS problem robustly requires advanced imaging systems capable of picosecond-accurate time-resolved measurement, mathematical priors on the imaged scenes, or other unconventional approaches. A third
challenge is that the inverse problems
associated with NLOS imaging are
extremely large. Developing efficient
algorithms to compute solutions in
reasonable times and with memory
resources available on a single computer is
crucial to make this emerging imaging
modality practical.
Over the past 8 years, various approaches
addressing the NLOS problem have been
proposed. Some of these focus on advanced
measurement systems, using femtosecond and picosecond time-resolved detectors2–5, interferometry6,7, acoustic systems8, passive imaging systems9–11 or thermal imaging12,13. Others explore models
of light transport that make certain
assumptions on the reflectance or other
properties of the hidden scenes. At the
convergence of physics, signal processing,
optics and electronics, NLOS imaging is an
interdisciplinary challenge that has seen
much progress. Nevertheless,
continued effort in both theory and
experimental systems is necessary to make
the idea of seeing around corners practical
‘in the wild’.
Time-resolved imaging systems that use pulsed light sources along with single-photon detectors are some of the most
promising candidates for practical solutions
in NLOS imaging. The measurement process
of time-resolved NLOS imaging systems
can be understood from an example scene in
which a pulsed laser with a pulse width in,
for example, the range 100 fs to 100 ps,
illuminates a wall that acts as a relay surface
at one point ( ). The light reaching the
wall subsequently scatters into the hidden
region where it re-scatters off any hidden
objects before returning to the wall where
the time-resolved indirect light transport is
measured. Individual areas on the object
scatter back spherical waves, which upon
intersecting the wall give rise to ellipsoids
that expand outwards in time (shown
schematically in ). It is these time-varying ellipsoids that contain all the information required to reconstruct a full 3D image of the hidden scene. The key requirement for time-resolved NLOS approaches is the temporal resolution of the detector, which must be high enough to freeze light in motion14 ( ). Light travels ~3 cm in 100 ps, which determines the desired temporal resolution of the imaging system, because this dictates the achievable transverse and axial resolution of reconstructed 3D images.
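As a quick numerical check of the statement above, the range blur of a time-resolved system is simply the distance light travels within one timing interval; a minimal sketch, using only the speed of light:

```python
C = 299_792_458.0  # speed of light in m/s

def range_blur(temporal_resolution_s: float) -> float:
    """Distance light travels within one timing interval; this sets the
    achievable spatial resolution of a time-resolved imaging system."""
    return C * temporal_resolution_s

blur_100ps = range_blur(100e-12)  # ~0.03 m, the ~3 cm quoted in the text
blur_50ps = range_blur(50e-12)    # ~1.5 cm
```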
In this Perspective, we discuss the emerging field of NLOS imaging and aim at making it
accessible to the reader by categorizing existing approaches by the types of measurement
systems they use and their algorithmic approaches. We first discuss technologies that enable
imaging at the speed of light: that is, detectors with femtosecond or picosecond accuracy.
We then discuss time-resolved NLOS imaging approaches that build on these technologies.
Finally, we overview alternative methods for
NLOS imaging and discuss possible future
directions of the field.
Imaging at the speed of light
The concept of freezing light in motion, sometimes referred to as 'light-in-flight' or 'transient' imaging, is not specific to NLOS imaging14. Several techniques for light-in-flight imaging have been proposed,
starting in the 1960s when nonlinear optical
gating techniques were first used to create
an ultrafast shutter. Doing so extended the
basic concept of the mechanical shutter used
in many high-speed cameras to that of a
shutter that is activated by light itself.
Another ingenious approach that effectively paved the way for true light-in-flight imaging was developed in the 1970s and relies on standard holographic techniques15, modified so that the reference field is a laser pulse that is spatially extended and hits the photographic plate at an angle16–19. The result is a hologram in
which different transverse locations on the
exposed photographic plate correspond to
different times in the scene, owing to the
different arrival times of the tilted reference
pulse. Viewing the photographic plate at
different lateral positions provides an image
at different times with resolutions of the
order of picoseconds or even less. A related
technique that also relies on interference of
the light reflected from a scene or object
with a reference field is based on a
generalization of optical coherence
tomography. Reconstructions of transient
light scenes with very high spatial resolution
(tens of micrometres) with 15 trillion frames
per second are obtained through detection of
interference fringes as the interferometer
delay is varied20. Despite the success of
these and related approaches, their
application has been limited to scenes that
are relatively simple.
Transient imaging using time-of-flight cameras provides a 3D image of a scene that can also be applied to NLOS21–23 and offers the distinct advantage of being low budget, with commercial time-of-flight cameras costing around US$100. These cameras illuminate the scene with a sinusoidally modulated (typically 10–100 MHz or
higher) light beam. The return signal is
demodulated against a reference sine wave
from which a phase delay is extracted that is
directly related to the time of flight and
hence to the propagation distance within the
scene
(see 14,24 for an overview).
Higher temporal resolution and better
light sensitivity, both key parameters for
NLOS imaging, can be obtained with more
complex and expensive cameras. For
example, full 3D NLOS imaging was first
demonstrated with a streak camera, which
enabled precise reconstruction of a small
mannequin2 ( ). These cameras rely on a
photocathode to convert the incoming
photons into electrons. The electrons can
then be ‘streaked’ by a time-varying electric
field, thereby mapping time onto transverse
position. The streaked electrons are detected
on a standard charge-coupled device (CCD)
camera after reconversion back into photons
on a phosphor screen. The use of one spatial
dimension for the temporal streaking implies
that these cameras can only see one line of
the scene at a time, a limitation that can be
offset for NLOS imaging by scanning the
illumination laser spot2,25. Techniques have
been implemented that make it possible to fully open the input slit and, by computational fusion with data from a CCD, obtain a full 2D image without any need for scanning26–28. Interestingly, these full-imaging approaches have not yet been applied to NLOS imaging.
An alternative approach to transient
imaging is based on the use of intensified
CCD cameras (iCCD). iCCDs rely on a
microchannel plate that is electronically
gated so that electrons generated by an input
photocathode are amplified only for a short
gate time before being reconverted back to
light on a phosphor screen and detected on
a CCD or complementary metal–oxide–semiconductor (CMOS) camera. Typical
gate times are of the order of nanoseconds
but can be as short as 100 ps, or even less.
Like all of the imaging techniques reviewed
here, iCCDs can also be used for NLOS
imaging29.
Fig. 1 | Light scattered from the relay wall extends into the obscured region and indirectly illuminates hidden objects, which in turn scatter the light back to the wall, where it is recorded by a time-resolved detector, typically a single-photon avalanche diode (SPAD) sensor. b | Schematic of the spherical waves scattered from the object back to the wall. c | Schematic of the temporal trace of photon counts observed at a given pixel on the wall. The peaks correspond to the scattered spherical waves expanding outwards with time.
These and later techniques applied to
light- in- flight imaging have sufficient
precision to observe distortions in the final
video due to the finite speed of light, such
as apparently inverted motion of refracted
waves from a bottle or apparent
superluminal motion of light pulses30,31. A 100-ps-gate iCCD has been used to record the apparent time reversal of events occurring during light propagation: the intersection of a plane wave and a wall travels at speed c/sinθ (where θ is the intersection angle between the plane wave and the wall)
and is therefore always superluminal. The
transient imaging of the scattering of light
from this intersection plane on the wall
reveals an apparent motion in the direction
opposite to that actually followed by the
light pulse32, in much the same way that a
piece of music played by a speaker moving
faster than the speed of sound is heard
backwards33.
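The c/sinθ relation above is easy to verify numerically; this sketch simply evaluates the formula and shows that the sweep speed never drops below c:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def intersection_speed(theta_rad: float) -> float:
    """Apparent speed at which the intersection line of a plane wave and a
    wall sweeps along the wall: c / sin(theta), always >= c."""
    return C / math.sin(theta_rad)

v_30deg = intersection_speed(math.radians(30.0))   # exactly 2c
v_grazing = intersection_speed(math.radians(5.0))  # far faster than c
```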
Moving beyond the first 3D NLOS
imaging based on streak cameras2, work
ensued to improve on some of the
limitations encountered in these
measurements that required several hours of
data acquisition. There was particular
emphasis on improving acquisition speed
(with the goal of video frame-rate imaging),
light sensitivity (aiming to extend the
observation area to entire rooms and observe
human-sized objects), portability (for
deploying the technology in the real world)
and cost (ideally, there would be a technology that does all the above with similar costs to a time-of-flight camera).
Single-photon avalanche diodes (SPADs) are semiconductor structures similar to a photodiode but with a large bias voltage, which results in carrier multiplication: the absorption of a single photon causes an avalanche breakdown, leading to a large current signal that can be detected and processed by external electronics. Time-to-digital converters measure the time between the emission of an illumination pulse and the detection of an associated returned photon on the SPAD. A time-correlated single-photon counter is then used to form a histogram of photon
arrival times34. SPADs achieve single-photon sensitivity with photon detection efficiencies of up to 40% and exceptionally low dark count rates of 1–10 photons per second in the visible spectrum. After the detection of a photon, the detector is blind for a hold-off period (dead time) of tens to hundreds of nanoseconds, thus limiting the achievable maximum count rate. The histogram of photon arrival times gives a precise measurement of the temporal profile of the light pulse, as long as the measurement is performed in a photon-sparse regime, that is, a regime in which the probability of more than one photon hitting the detector during the dead time (an effect referred to as pile-up) is substantially less than one. Accounting for the SPAD dead time, working in the photon-sparse regime limits the maximum count rate that avoids pile-up distortion to the order of 1–10 MHz.
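The histogramming and the pile-up-free count-rate bound described above can be sketched as follows; the pulse centre, width and dead time are illustrative assumptions:

```python
import random

def tcspc_histogram(arrival_times_s, bin_width_s, n_bins):
    """Accumulate time-stamped photon detections into a histogram of arrival
    times, as a time-correlated single-photon counter does."""
    hist = [0] * n_bins
    for t in arrival_times_s:
        b = int(t / bin_width_s)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

def max_sparse_count_rate(dead_time_s: float) -> float:
    """Rough pile-up-free limit: at most about one detection per dead time."""
    return 1.0 / dead_time_s

# Illustrative return pulse: Gaussian arrivals centred at 10 ns, sigma = 1 ns,
# binned at 100 ps. The histogram reproduces the pulse's temporal profile.
random.seed(0)
times = [random.gauss(10e-9, 1e-9) for _ in range(10_000)]
hist = tcspc_histogram(times, 100e-12, 256)
peak_bin = max(range(len(hist)), key=hist.__getitem__)  # close to bin 100

rate_limit = max_sparse_count_rate(100e-9)  # 100 ns dead time -> 10 MHz
```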
SPAD detectors are available in both single-pixel and arrayed (that is, camera) format, at both visible35–46 and infrared wavelengths47–49. SPAD cameras have been used for light-in-flight imaging, for which the single-photon sensitivity enabled
the camera to capture a light pulse
propagating in free space, with photons
collected on the camera originating from
Rayleigh scattering in air, as opposed to
scattering from a surface or enhanced
scattering in a diffusive medium50 ( ). The 32 × 32 pixel SPAD camera had a temporal resolution of about 50 ps, corresponding to an equivalent rate of 20 billion frames per second. Although not as fast as some of the techniques discussed above, which can attain more than a trillion frames per second, this frame rate is still sufficient to freeze light in motion with a blur of only 1.5 cm.
This minor loss of temporal resolution comes with several benefits. The cameras are
compact, are straightforward to use (the camera is based on standard CMOS technology, is
commercially available and is small enough to be integrated into a smartphone), have high
data acquisition rates (NLOS data acquisition has been
demonstrated with sub-second timescales)51 and, with interference filters at the specific laser illumination wavelength, can also be deployed outdoors and in daylight conditions3,52. Video frame-rate acquisition of transient images using SPADs has been achieved53, as well as in more standard LiDAR configurations deployed outdoors over kilometre distances54.

The first application of SPAD array sensors to NLOS imaging was in a simpler configuration in which only the position of the target was assessed, rather than its full 3D shape. This simplification allowed acquisition and processing times of the order of 1 second for a moving target, both in a small-scale laboratory set-up55 and also for detecting people behind a corner on larger scales (more than 50 m distance from the detector)52. Single-pixel gated SPADs56 and line arrays57 with a scanning laser spot have also been used to acquire full 3D scenes and are currently some of the preferred approaches for NLOS imaging, with most set-ups over the past few years using SPADs either in single-pixel or array format.

The temporal resolution actually required from the detector depends on factors that include the illumination pulse length and the task at hand. For example, for transient imaging, such as capturing a light pulse in flight, there is no need to use a detector with temporal resolution shorter than the pulse length. For 100-ps or longer pulses, this can readily be achieved with the techniques described above. For femtosecond pulses, such as those available from standard femtosecond oscillators, the current resolution of detectors, limited to 10 or more picoseconds, will unavoidably result in temporal blur of the pulse of order 0.3–1 cm, compared with the 30 μm of a 100-fs pulse. However, when considering NLOS imaging, the detector's temporal resolution directly affects both the transverse and depth resolution of the 3D image reconstruction, as discussed below.

At present, most SPAD arrays are developed for LiDAR imaging. Looking to the future, NLOS applications require improvements in temporal resolution, better fill factors, the ability to gate out direct light from the relay surface, and a more flexible way to read out photon time stamps from those SPAD pixels that see a photon. There is therefore a need for SPAD arrays specifically designed with NLOS applications in mind.

Time-of-flight NLOS imaging
Image formation model
A time-resolved detector, such as a SPAD, measures the incident photon flux as a function of time, relative to an emitted light pulse. The detector is therefore used to record the temporal impulse response of a scene, including direct and global illumination, at sampling positions x′,y′ on a visible surface ( ), resulting in a 3D space–time volume that is referred to as the transient image, τ. As discussed in the previous section, a transient image contains both directly reflected photons and photons that travel along indirect light paths. The direct illumination (that is, light emitted by the source and scattered back to the detector from an object) contains all information necessary to recover the shape and reflectance of visible parts of the scene. Recovering such information is commonly done for 3D imaging and LiDAR. For NLOS imaging, the direct light is typically not considered because it does not contain useful information on the hidden scene. It can be readily removed from measurements, for example by using the fact that it arrives earlier than multiple-surface reflected photons, and can therefore be gated out.
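The time-gating step described above can be sketched as follows; the bin width, wall distance and toy histogram values are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light in m/s

def gate_direct(transient, bin_width_s, wall_distance_m, guard_bins=2):
    """Zero out the early histogram bins that can only contain the direct
    return from the relay wall; indirect three-bounce photons always travel
    farther and therefore arrive later."""
    direct_bin = int(2.0 * wall_distance_m / C / bin_width_s)
    gated = list(transient)
    for b in range(min(direct_bin + guard_bins + 1, len(gated))):
        gated[b] = 0.0
    return gated

# Toy transient: a strong direct peak (bin 10) and a weak indirect peak (bin 40).
bin_w = 100e-12
tau = [0.0] * 64
tau[10] = 1000.0  # direct return from the relay wall
tau[40] = 3.0     # much weaker indirect return from a hidden object
wall_d = C * 10 * bin_w / 2.0  # wall distance that places the direct peak in bin 10
gated = gate_direct(tau, bin_w, wall_d)
```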
Fig. 2 | First experimental demonstration of ‘looking around corners’. A mannequin behind a
corner (panel a) is recovered from time-resolved measurements using unfiltered (panel b) and
filtered (panel c) back-projection algorithms. Adapted from ref. 2, Springer Nature Limited.
Fig. 3 | Demonstration of the capability of recording light in flight at picosecond timescales for a
pulse of light propagating between three mirrors. Such time- resolved measurements of light
transport form the basis of many non- line- of- sight imaging techniques. The laser light first hits
the small circular mirror on the right and is directed towards the field of view of the single- photon
avalanche diode (SPAD) camera, as indicated by the arrow in the first image. The field of view (FOV)
is represented by dashed rectangles and corresponds to a region of 35 × 35 cm². In the successive frames the laser pulse is imaged at increasing times, indicated in each frame, before exiting the FOV in the last frame. Adapted from ref. 50, CC BY 4.0.
The image formation model for the time-resolved indirect light transport of a confocal NLOS system3 (that is, one in which both the laser illumination and the subsequent detection are at the same point x′,y′ on the visible surface) can be formulated as

τ(x′, y′, t) = ∭ [g ρ(x, y, z)/r⁴] δ(2√((x′−x)² + (y′−y)² + z²) − tc) dx dy dz,   (1)

where the terms are defined below. Each measurement in this confocal configuration integrates over spherical surfaces in the hidden scene. More general non-confocal configurations are also common, for which the detector samples the time-resolved indirect light transport at one point on the wall while the laser directly illuminates a different point on the visible surface2,4. The laser point or the detection point can then be scanned.
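A discrete sketch of this confocal forward model, under the linearized assumption g = 1 discussed in the text; the scatterer position, bin width and histogram length are illustrative choices:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def confocal_transient(points, scan_xy, bin_width_s, n_bins):
    """Discrete forward model: each hidden scatterer (x, y, z, rho) adds a
    count at time t = 2r/c, attenuated by 1/r**4 (g = 1, the linearized
    assumption)."""
    xs, ys = scan_xy
    tau = [0.0] * n_bins
    for x, y, z, rho in points:
        r = math.sqrt((xs - x) ** 2 + (ys - y) ** 2 + z ** 2)
        b = int(2.0 * r / C / bin_width_s)
        if 0 <= b < n_bins:
            tau[b] += rho / r ** 4
    return tau

# One unit-reflectance scatterer 1 m into the hidden volume, probed from the
# scan point directly above it: a single peak at a 2 m round trip (~6.67 ns).
tau = confocal_transient([(0.0, 0.0, 1.0, 1.0)], (0.0, 0.0), 10e-12, 1024)
```

Evaluating this model over a grid of scan points and voxels yields, column by column, the measurement matrix of the linearized system discussed below.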
Various approaches to solving both the linearized and the nonlinear NLOS problems are discussed below. The linearized problem reduces to approximating or solving the large linear equation system τ = Aρ, where τ represents the discretized transient measurements, ρ contains the unknown reflectance (albedo) values of the hidden scene, and A is a matrix describing the indirect time-resolved light transport.

Inverse methods
Heuristic solutions. Heuristic solutions for
estimating the shape and reflectance of the
hidden volume are popular. One of the most
intuitive of these approaches is to relate the
measured times of the first-returning
indirect photons to the convex hull of the
hidden object or scene58. Alternatively,
simple parametric planar models can be
fitted to represent the hidden scene59.
Another area still in its infancy is the use of
active capture methods, which shape
illumination and detection to optimize
capture based on the anticipated content of
the scene. Spatial refocusing after the first
scattering surface can be controlled using
spatial light modulators, and the focused
spot can be scanned across the scene60.
Temporal focusing uses an illumination pulse that is shaped in space and time to create a focused pulse at an area in the hidden scene61. These
techniques can improve the signal-to-
noise ratio and resolution for the obtained
reconstruction.
Back-projection methods. Back-projection methods are some of the most popular methods for NLOS image reconstruction from transient measurements ( ). They approximate the hidden volume ρ as Aᵀτ and optionally apply a filtering or other post-processing step to this result ( ).
Similar strategies are standard practice for
solving large- scale inverse problems, for
example in medical imaging. Indeed, the
inverse problem of confocal NLOS scanning
approaches is closely related to the spherical Radon transform62, whereas the general non-confocal scanning approach is similar to the elliptical Radon transform63. Filtered back-projection methods are standard solutions to these inverse problems. Both computational time and memory requirements of these Radon transforms are tractable even for large-scale inverse problems. Hence, several
variants of back-projection algorithms have been explored for NLOS imaging2,64–67, but when applied to NLOS imaging these algorithms have a computational complexity of O(N⁵) for N voxels. Like limited-baseline tomography problems68, NLOS problems are typically ill-posed inverse problems because the acquired measurements usually do not sample all Fourier coefficients. In microscopy and medical imaging, this is known as the 'missing cone' problem. To estimate these missing components, the inverse method must incorporate statistical priors to fill in these parts using iterative solvers.
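Unfiltered back-projection (the adjoint operation ρ ≈ Aᵀτ) can be sketched for a toy confocal scene; the scan points, voxels and single-scatterer scene are illustrative assumptions, and no filtering step is applied:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def backproject(transients, scan_points, voxels, bin_width_s):
    """Delay-and-sum: for every scan point, add the histogram value at each
    voxel's round-trip bin to that voxel (the adjoint of the linear model)."""
    vol = [0.0] * len(voxels)
    for tau, (xs, ys) in zip(transients, scan_points):
        for vi, (x, y, z) in enumerate(voxels):
            r = math.sqrt((xs - x) ** 2 + (ys - y) ** 2 + z ** 2)
            b = int(2.0 * r / C / bin_width_s)
            if 0 <= b < len(tau):
                vol[vi] += tau[b]
    return vol

def forward(point, scan_xy, bin_width_s, n_bins):
    """Toy confocal forward model for a single unit-reflectance scatterer."""
    x, y, z = point
    xs, ys = scan_xy
    tau = [0.0] * n_bins
    r = math.sqrt((xs - x) ** 2 + (ys - y) ** 2 + z ** 2)
    tau[int(2.0 * r / C / bin_width_s)] += 1.0 / r ** 4
    return tau

# Simulate two scan points observing a scatterer at (0, 0, 1), then compare the
# back-projected energy at the true voxel against an off-scene voxel.
scans = [(0.0, 0.0), (0.3, 0.0)]
bw, nb = 10e-12, 2048
taus = [forward((0.0, 0.0, 1.0), s, bw, nb) for s in scans]
voxels = [(0.0, 0.0, 1.0), (0.5, 0.5, 1.5)]
vol = backproject(taus, scans, voxels, bw)  # true voxel accumulates more energy
```

Run over a full voxel grid, this delay-and-sum loop is exactly the O(N⁵) procedure mentioned above; filtered variants apply a high-pass filter to the result afterwards.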
Linear inverse methods. Linear inverse methods have been proposed to solve the convex
optimization problem of estimating ρ from τ. Several of these approaches aim to use
iterative optimization methods to solve this problem64,69,70, but such approaches are typically
very slow. The light-cone transform ( ) was proposed as a closed-form solution to the linear inverse problem; it efficiently solves the exact linear inverse problem with a computational complexity of O(N³ log N) by assuming a smoothness prior on the reconstructed volume3. An implementation of this method on graphics processing units has achieved real-time reconstruction rates71.
Inverse light transport with partial occlusions, surfaces and normals. This class of methods has received much attention in recent research, because some of the simplifying assumptions of the image formation model (Eq. 1) can be lifted by solving the nonlinear problem rather than a linearized approximation. For example, several time-resolved methods have included partial occlusions within the hidden scene in the image formation model72–74. Interestingly, it has been shown that occlusions and shadows in the hidden scene can also be exploited to facilitate passive NLOS approaches that do not require time-resolved imaging systems9,10,75,76. However, the associated inverse problems are much more ill-posed than they are for active imaging, and the proposed algorithms often make restrictive assumptions. A few recent approaches have also incorporated hidden surface normals into the image formation model72,77, which can further help improve reconstruction quality. Finally, an emerging research direction is to reconstruct hidden surfaces, rather than volumes, directly from the transient measurements77–80. High-resolution volumes are memory-inefficient data structures and can quickly exceed available computational resources. Therefore, in practice, a trade-off between the level of detail of a reconstructed volume and its memory requirements may have to be made.
In Eq. 1, ρ is the reflectance of a point in the hidden scene and the Dirac delta function δ relates the time of flight t to the distance function r = √((x′−x)² + (y′−y)² + z²) = tc/2. Here, c is the speed of light and x, y, z are the spatial coordinates of the hidden volume.
For convenience, we assume that the
sampling locations x',y' are located on the
plane z = 0 and that the laser pulse is
infinitesimally short, and we only consider
indirect light transport that bounced
precisely three times after emission by a
light source and before being detected: off a
visible surface within the line of sight, then
off a hidden surface outside the line of sight,
and finally, once more off the visible
surface. The function g absorbs
miscellaneous time-independent attenuation
effects that depend on the hidden surface
normals, reflectance properties of the hidden
scene, visibility of a hidden point from some
sampling point x',y' and several other
factors. In the more general non-confocal configuration, in which the laser and detection points are scanned independently of each other, measurements integrate along elliptical surfaces.
Moreover, higher-order bounces of indirect
light transport could also be considered to
model indirect reflections of light within the
hidden scene, although these become
increasingly difficult to measure.
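The elliptical integration surfaces of the non-confocal configuration follow from the two hidden-scene path legs; in this sketch the laser spot, sensing spot and test points are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def indirect_path_time(laser_spot, hidden_point, sensing_spot):
    """Time of flight for the two hidden-scene legs: laser spot -> hidden
    point -> sensing spot. All points are (x, y, z) coordinates."""
    return (math.dist(laser_spot, hidden_point)
            + math.dist(hidden_point, sensing_spot)) / C

# Two different hidden points on the same ellipsoid (with foci at the laser
# and sensing spots) produce identical arrival times and are therefore
# indistinguishable from a single measurement.
l_spot = (0.0, 0.0, 0.0)
s_spot = (1.0, 0.0, 0.0)
t1 = indirect_path_time(l_spot, (0.5, 0.0, 1.0), s_spot)
t2 = indirect_path_time(l_spot, (0.5, 1.0, 0.0), s_spot)
```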
This image formation model is at the core
of most NLOS imaging approaches. The
effects modelled by g make this a nonlinear
image formation model. Several approaches
work with a linearized approximation of Eq. 1, for which g = 1. This linear approximation
is easier to invert than the nonlinear model,
but it makes several additional assumptions
about the light transport in the hidden scene,
such as that light scatters isotropically and
no occlusions occur between different scene
parts outside the line of sight. Indeed, line-of-sight imaging problems are made nonlinear in a similar fashion if surface normals, bidirectional reflectance distribution functions and occlusions are included in the model. This is why line-of-sight imaging systems also typically operate with linearized transport models.
Surface representations have the potential to represent finer geometric detail with fewer computational resources. It remains unclear, however, what the 'best' representation for general NLOS imaging is.
Wave optics models. These models, as opposed to the geometric optics model outlined above, have recently been explored for transient imaging configurations with time-resolved detectors and pulsed light sources4,5,81–84 ( ). In
these methods, the light transport in the
hidden scene is modelled using the time-dependent wave equation or other models
from physical optics. A similar concept was
also applied to NLOS data captured in the
Fourier domain by an amplitude-modulated
continuous- wave light source21.
The algorithms in this category do not
necessarily try to solve the inverse problem
of estimating the hidden geometry directly,
unlike most of the methods discussed above.
Rather, the transient image is treated as a
virtual wave field and propagated
backwards in time to a specific time instant.
The geometry estimation problem then
becomes that of relating the hidden
geometry to specific properties of the
temporally evolving wave field. As in a line-of-sight camera, the problem is thus
divided into a linear operator that estimates
the wave in the hidden scene (that is, the
image) and a nonlinear problem of
estimating properties such as geometry and
bidirectional
reflectance distribution functions from the
image.
There are several benefits of a wave
optics model for the NLOS problem. First,
some of these approaches have been
experimentally shown to be more robust to
different types of reflectance properties of
the hidden surfaces. Glossy, specular, diffuse or retro-reflective materials can all be treated with the same method, whereas
geometric optics approaches must either
know and model the reflectance properties a
priori or estimate them along with the
hidden geometry. Second, wave models
make it easier to draw the connection
between NLOS imaging and related work in
areas such as radar, seismic imaging,
ultrasonic imaging and other established
fields. For example, range migration techniques, including frequency–wavenumber (f–k) migration, originally developed in the seismic imaging community85,86, and later adapted to synthetic aperture sonar87,88, ultrasound imaging89 and synthetic aperture radar90, result in some of the fastest and most robust NLOS imaging techniques5. The phase
information of the light wave used in these
experiments is not measured or required.
What is used instead is the phase and
wavefront of an intensity wave riding on the
optical carrier wave. The phase of this wave
is related to the time of arrival of the signal
photons, not to their optical phase. The
phase of the light wave is typically not
Fig. 4 | NLOS reconstructions of a hidden room-sized scene. a,b | One approach to non-line-of-sight (NLOS) imaging is to capture time-resolved measurements sampled across a visible surface and reconstruct the 3D shape and reflectance of the hidden scene. A disco ball produces the bright dots seen in the measurements of indirect light transport (panel a), and other diffuse and glossy objects produce the streaks. c | Of the methods for reconstructing shape and reflectance from these measurements, filtered back-projection is conceptually one of the simpler methods; it involves a delay-and-sum (that is, back-projection) operation on the time-resolved measurements, followed by a heuristic high-pass filter on the result. d | The light-cone transform is a fast reconstruction algorithm that produces more accurate reconstructions in less time than other approaches, but it requires the hidden objects to be either diffuse or highly reflective. e | NLOS imaging with frequency–wavenumber (f–k) migration is both fast and versatile. The wave-based nature of this inverse method is unique in being robust to objects with diverse and complex reflectance properties, such as the glossy dragon, the diffuse statue and the reflective disco ball shown in this scene. All volumes are rendered as maximum-intensity projections. Adapted with permission from ref. 5, Association for Computing Machinery.
Fig. 5 | Reconstructions of a large scene using the phasor-field virtual wave approach. Data are collected with a single-pixel single-photon avalanche diode (SPAD), using point scanning to emulate a large detector array. a | The hidden scene. b | Reconstructions. The exposure time per scanned point and the total data collection time are shown under each image in panel b. The entire scan involves 24,000 points. The scene is approximately 2 m wide and 3 m deep. Adapted from ref. 4, Springer Nature Limited.
accessible with time-resolved NLOS imaging systems. The time-of-flight information of indirect light transport must instead be used to estimate object shape, which makes the associated inverse problems different.
Data-driven approaches. Data-driven
approaches are emerging as a tool for NLOS
reconstructions. Neural networks can
reconstruct hidden scenes from steady-state
data captured with a continuous light source
and a conventional camera91,92. However,
the practical application of neural networks to
time-of-flight data faces the difficulty of
generating sufficient training data. One
approach could be to generate data
numerically from a known forward
model. Recently, training data were
experimentally collected using actual
people, and these subsequently allowed
NLOS classification of a small set of
individuals and of their positions93.
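One way to generate such synthetic training data numerically is with a simple, known forward model. The sketch below assumes isotropic point scatterers and a confocal scan, and ignores occlusions, surface normals and detector noise (all names are illustrative):

```python
import numpy as np

def simulate_transient(points, albedo, xs, ys, n_bins, c=3e8, dt=4e-12):
    """Toy confocal forward model: isotropic point scatterers, no occlusions.

    points: (P, 3) hidden scatterer positions (relay wall at z = 0)
    albedo: (P,) per-scatterer reflectance
    xs, ys: (S,) confocal scan-point coordinates on the wall
    Returns (S, n_bins) noiseless transient histograms.
    """
    hist = np.zeros((len(xs), n_bins))
    for s in range(len(xs)):
        d = 2 * np.sqrt((points[:, 0] - xs[s]) ** 2
                        + (points[:, 1] - ys[s]) ** 2
                        + points[:, 2] ** 2)
        bins = (d / (c * dt)).astype(int)
        flux = albedo / (d / 2) ** 4          # 1/r^4 confocal intensity fall-off
        for b, f in zip(bins, flux):
            if b < n_bins:
                hist[s, b] += f
    return hist
```

A training set could then pair randomized scatterer configurations with `np.random.poisson`-sampled versions of these histograms to mimic photon-counting statistics.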
NLOS tracking. NLOS tracking of objects
and people with time-resolved imaging
systems is also an active area of
research3,52,55,94,95. The tracking problem is
substantially simpler than reconstructing a
full hidden 3D volume, which makes it
computationally more efficient to
implement. These NLOS tracking
approaches pave the way for future research
that goes beyond hidden shape
reconstruction and that could aim at
classification93, object detection, target
identification or other inverse problems that
build on transient light transport.
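To give a flavour of why tracking is simpler than full reconstruction: a single hidden scatterer can be localized from a handful of photon times of arrival alone. A minimal sketch, assuming a confocal scan of a planar relay wall and noiseless peak times (all names are illustrative):

```python
import numpy as np

def localize(wall_pts, toa, c=3e8):
    """Locate one hidden scatterer from confocal times of arrival.

    wall_pts: (S, 2) scan points on the planar relay wall z = 0
              (S >= 3, not collinear)
    toa:      (S,) round-trip photon times of arrival (seconds)
    Returns the estimated (x, y, z) position, with z > 0 in front of the wall.
    """
    r = c * np.asarray(toa) / 2                  # one-way ranges
    # |p - w_i|^2 = r_i^2 with w_i,z = 0 is linear in (x, y, |p|^2)
    A = np.column_stack([-2 * wall_pts, np.ones(len(r))])
    b = r ** 2 - np.sum(wall_pts ** 2, axis=1)
    (x, y, s), *_ = np.linalg.lstsq(A, b, rcond=None)
    z = np.sqrt(max(s - x ** 2 - y ** 2, 0.0))   # choose the hidden-volume side
    return np.array([x, y, z])
```

Solving this small least-squares problem per frame is orders of magnitude cheaper than inverting the full transient measurement, which is why tracking can run in real time.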
NLOS imaging without a relay wall. Most
existing NLOS approaches require the
imaging system to scan a large area on a
visible surface, on which the indirect light
paths of hidden objects are sampled. In
many applications, however, optical access
to a large scanning area may not be
available. Inverse methods have been
derived that exploit scene motion to
simultaneously estimate both the shape and
trajectory of a hidden object from transient
images96. This problem is far more
challenging and ill-posed than conventional
NLOS imaging because the light transport is
only measured along a single optical path,
but it may further extend the application
space of NLOS imaging techniques.
Resolution limits
The resolving power of conventional,
diffraction-limited imaging systems is
fundamentally limited by the numerical
aperture of the optics and the wavelength at
which they operate97. Time-resolved NLOS
imaging also obeys fundamental resolution
limits. These are primarily defined by two
factors: the area on the visible surface over
which the time-resolved indirect light
transport of the hidden scene is recorded,
and the temporal resolution of the imaging
system. The first factor, the scanning area,
is analogous to the numerical aperture of a
conventional imaging system: the larger the
scanning area or numerical aperture, the
better the transverse resolution. The second
factor, temporal resolution, is somewhat
analogous to the wavelength that limits the
resolution of conventional systems. Together,
these two characteristics of an NLOS imaging
system define both the transverse and axial
resolution of a hidden volume, which can be
estimated unambiguously, that is, without
the use of statistical priors.
Formally, the resolution of an NLOS
system is defined as the minimum resolvable
distance of two scatterers. These two
scattering points are resolvable in a hidden
3D space only if the measurements of their
indirect reflections are resolvable in time.
Assuming that the temporal resolution of
the system is given by the full-width at
half-maximum (FWHM) of its temporal
impulse response, the transverse and axial
resolutions are

Δx ≥ (c √(w² + z²) / (2w)) × FWHM,
Δz ≥ (c × FWHM) / 2.        (2)

Here, Δx and Δz are the minimum
resolvable distances between the two
scatterers in the transverse and axial
dimensions, respectively; c is the speed of
light; z is the distance of the point scatterers
from the visible surface; and the scanning
area has a size of 2w × 2w. These resolution
limits were derived for the confocal
scanning configuration3. For non-confocal
scanning configurations, the transverse
resolution theoretically decreases by a factor
of 2. Other works have also used signal
processing techniques98, linear systems
approaches99 or feature visibility100 to bound
localization and photometric error in NLOS
imaging scenarios.
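These confocal-scan resolution bounds are straightforward to evaluate numerically. A small sketch (names and the example parameters are illustrative):

```python
import math

C = 2.998e8  # speed of light (m/s)

def nlos_resolution(fwhm_s, w_m, z_m):
    """Confocal NLOS resolution limits for a 2w x 2w scan aperture.

    fwhm_s: temporal impulse-response FWHM (seconds)
    w_m:    half-width of the scanning area (metres)
    z_m:    depth of the scatterers behind the relay wall (metres)
    Returns (transverse, axial) minimum resolvable distances in metres.
    """
    dx = C * math.sqrt(w_m ** 2 + z_m ** 2) / (2 * w_m) * fwhm_s
    dz = C * fwhm_s / 2
    return dx, dz

# Example: 70 ps system response, 1 m x 1 m scan (w = 0.5 m), object 1 m deep
dx, dz = nlos_resolution(70e-12, 0.5, 1.0)
```

Note how the transverse bound degrades with depth z while the axial bound depends only on the temporal response, mirroring the numerical-aperture and wavelength analogies above.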
Other NLOS imaging approaches
It is worth mentioning that there are other
techniques that do not require transient light
imaging capability.
Steady-state systems use a continuous,
spatially confined light source and a slow
conventional camera or detector to detect
spatial variations in the returned light. In these
systems, the integration times of the detector
are long enough that the time of flight of
the light can be considered infinite, and what
is detected is always a steady-state scene
response. For example, the location of a single
hidden object can be estimated using a
shortwave infrared light source and
camera101. An intriguing modification of the
steady-state approach is to use occlusions in
the scene, such as edges, to provide
additional spatial information, and to rely on
motion and differential measurements to
eliminate problems with background light.
In suitable scenes, these methods can
provide detailed information about objects
in the scene using inexpensive, passive
visible-light cameras and natural ambient
light sources9–11,73,76,92,102. In interferometric
approaches, the scene is illuminated with a
coherent light source, and interference
patterns in the returned light are analysed.
For example, the spatial speckle of the
returned light can be collected and analysed
to reconstruct 2D NLOS images7. This
method exploits the memory effect, which
preserves angular information in the
interaction with thin scatterers. Reliance on
this effect limits existing demonstrations to
imaging very small objects, covering a solid
angle of no more than several degrees when
viewed from the wall. This limitation could
probably be improved by incorporating
more information, such as speckle patterns
from multiple coherent light sources. Spatial
correlations within the reflected light from
an observation wall can also be used to
directly retrieve information of a hidden
scene, made up of active yet incoherent light
sources103, for example. Extending this
concept to the temporal domain (that is,
tracking the temporal correlations within the
reflected beam) enables a time-of-flight
approach with an impressive 10-fs
resolution11. We have also already
mentioned adaptive shaping of the
illuminating laser beam that can transform
the wall into a mirror by using an input
spatial phase on the beam that compensates
for scattering from the first surface. This
makes it possible to scan a focused spot
across the scene and retrieve image
information from the reflected light intensity
during the scan60. Finally, deep learning
techniques have recently been demonstrated
to provide a useful framework to solve
challenging inverse correlography problems
arising in interferometric NLOS
approaches104.
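In the memory-effect methods above, reconstruction typically reduces to phase retrieval from the autocorrelation of the camera-side speckle image, which approximates the hidden object's autocorrelation. The autocorrelation itself is cheap to compute via the Wiener–Khinchin theorem (a sketch with illustrative names; the phase-retrieval step is not shown):

```python
import numpy as np

def autocorrelation(img):
    """Mean-subtracted 2D autocorrelation via the Wiener-Khinchin theorem."""
    F = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(F) ** 2).real
    return np.fft.fftshift(ac)   # zero lag moved to the array centre
```

In practice this would be applied to the recorded speckle pattern, after which an iterative phase-retrieval algorithm estimates the hidden object from its autocorrelation.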
Another group of interferometric
methods is based on illuminating the hidden
scene with a pulsed coherent source via the
relay surface and interfering the returning
light with a delayed local oscillator light
beam derived from the same coherent
illumination source. This process can be
thought of as a coherent time gating method
that produces data that can be treated
similarly to data from other time-resolved
detectors.

Table 1 | Methods and requirements for the most common reconstruction techniques

Reconstruction method               | Light source                      | Detector                    | Refs
Detection or localization           | High-repetition/single-shot laser | SPAD or APD                 | 55,107
Backpropagation                     | High-repetition laser             | Streak camera, SPAD array   | 2,56
Light-cone transform, f–k migration | High-repetition laser             | SPAD array                  | 3,5
Virtual/phasor field                | High-repetition laser             | SPAD or SPAD array          | 4,82–84
Steady-state, occlusions, coherence | CW laser, ambient light           | Standard CMOS camera, APD   | 7,10,101,102
Machine learning                    | Pulsed or continuous-wave laser   | SPADs, standard CMOS camera | 91–93,104

More details and references are provided in the main text. APD, avalanche photodiode; CMOS, complementary metal–oxide–semiconductor; f–k, frequency–wavenumber; SPAD, single-photon avalanche diode.

An example of this used a set-up similar to a time-domain optical coherence
tomography system20. Interference is used in
this case as a coherence gate to determine
the time of flight of the light through the
scene. The need for an adjustable optical
delay line complicates this set-up. Another
approach uses interference between the
speckle patterns created by an NLOS object;
the reconstruction is obtained by combining
the results from different illumination
frequencies. This procedure has the same
effect as using a short pulse but eliminates
the need for a delay line. Other efforts into
coherent NLOS imaging include speckle
interferometry to detect motion105,106.
Conclusions and future directions
LiDAR systems are emerging as a standard imaging
modality in autonomous driving, robotics,
remote sensing and defence. The same
detectors, avalanche photodiodes (APDs)
and SPADs, are also increasingly used in
consumer electronics, fluorescence lifetime
microscopy and positron emission
tomography. SPADs in particular are an
ideal platform for extending LiDAR to
NLOS imaging because they address two
primary challenges: detecting a
few, indirectly reflected photons among
many, and time-stamping the photon time of
arrival with high accuracy.
The ability to image objects outside the
direct line of sight is likely to be most useful
for applications that already use LiDAR
systems. For instance, self-driving cars
could sense obstacles beyond the next bend
or in front of the car ahead, and could more
safely navigate around them. Eventually,
NLOS imaging could become a software
upgrade in existing or future LiDAR
systems. For this reason, we believe that
such time-resolved NLOS imaging systems
are one of the most promising directions in
this emerging research area.
There are multiple approaches and options
for NLOS imaging, even when
restricted to time-of-flight techniques.
The main techniques discussed here are
summarized in Table 1 and Fig. 6, together
with their hardware requirements (based on
present-day implementations). Each
approach has its own advantages, and these
need to be weighed when considering a
particular application. For example, some
NLOS LiDAR applications for the
automotive industry may not require full 3D
reconstruction of a scene but instead will
benefit from a much simpler approach
geared towards locating the position of a
hidden object and identifying its nature
(such as human, car or bicycle). Compared
with alternative methods to image occluded
spaces, such as transmitted or reflected
radar, X-ray transmission, reflected
acoustic imaging or the placement of mobile
cameras or mirrors, optical NLOS imaging
has the potential to work in real time,
particularly at large detector-to-scene
(that is, ‘stand-off’) distances, albeit with
targets limited to distances of 2–3 m
behind the obstacle. This kind of task
becomes even more favourable when the
hidden object is moving, in which case
subtraction of the background is
straightforward and has already been
demonstrated to work at stand-off distances
of 50 m or more in daylight. Recent
reports indicate stand-off distances of 1.4
km. Simple range-finding from behind an
obstacle can also be achieved with a single-
shot measurement if APDs rather than
SPADs are used, as APDs can collect
multiple photons from a single, high-
energy return signal107.
However, there are scenarios in which
full 3D imaging is indeed desired, for
example in reconnaissance missions or in
situations for which 3D information of an
otherwise inaccessible area is needed.
Examples that we have encountered range
from identification of suitable underground
cave sites for future manned planet missions
to decommissioning
Fig. 6 | Main detector technologies classified
based on spatial and temporal resolution.
Steady-state detector technologies (that is,
using non-time-resolving detectors such as
charge-coupled device (CCD) or
complementary metal–oxide–semiconductor
(CMOS) cameras) are shown in blue, to
differentiate them from time-of-flight
technologies. Some detector formats are linear
(1D), as indicated in the figure. References in
square brackets indicate example uses of each
technology. APD, avalanche photodiode; iCCD,
intensified CCD; PMT, photomultiplier tube;
SPAD, single-photon avalanche diode.